import pointblank as pb
import polars as pl

tbl = pl.DataFrame(
    {
        "a": [4, 7, 2, 8],
        "b": [5, None, 1, None],
    }
)

pb.preview(tbl)
Validate.col_vals_not_null

Validate.col_vals_not_null(
    columns,
    pre=None,
    segments=None,
    thresholds=None,
    actions=None,
    brief=None,
    active=True,
)
Validate whether values in a column are not NULL.
The col_vals_not_null() validation method checks whether column values in a table are not NULL. This validation will operate over the number of test units that is equal to the number of rows in the table.
Parameters
columns : str | list[str] | Column | ColumnSelector | ColumnSelectorNarwhals
    A single column or a list of columns to validate. Can also use col() with column selectors to specify one or more columns. If multiple columns are supplied or resolved, there will be a separate validation step generated for each column.

pre : Callable | None = None
    An optional preprocessing function or lambda to apply to the data table during interrogation. This function should take a table as input and return a modified table. Have a look at the Preprocessing section for more information on how to use this argument.

segments : SegmentSpec | None = None
    An optional directive on segmentation, which serves to split a validation step into multiple (one step per segment). Can be a single column name, a tuple that specifies a column name and its corresponding values to segment on, or a combination of both (provided as a list). Read the Segmentation section for usage information.

thresholds : int | float | bool | tuple | dict | Thresholds = None
    Set threshold failure levels for reporting and reacting to exceedances of the levels. The thresholds are set at the step level and will override any global thresholds set in Validate(thresholds=...). The default is None, which means that no thresholds will be set locally and global thresholds (if any) will take effect. Look at the Thresholds section for information on how to set threshold levels.

actions : Actions | None = None
    Optional actions to take when the validation step(s) meet or exceed any set threshold levels. If provided, the Actions class should be used to define the actions.

brief : str | bool | None = None
    An optional brief description of the validation step that will be displayed in the reporting table. You can use templating elements like "{step}" to insert the step number, or "{auto}" to include an automatically generated brief. If True, the entire brief will be automatically generated. If None (the default) then there won't be a brief.

active : bool = True
    A boolean value indicating whether the validation step should be active. Using False will make the validation step inactive (still reporting its presence and keeping indexes for the steps unchanged).

Returns
Validate
    The Validate object with the added validation step.
Preprocessing
The pre=
argument allows for a preprocessing function or lambda to be applied to the data table during interrogation. This function should take a table as input and return a modified table. This is useful for performing any necessary transformations or filtering on the data before the validation step is applied.
The preprocessing function can be any callable that takes a table as input and returns a modified table. For example, you could use a lambda function to filter the table based on certain criteria or to apply a transformation to the data. Note that you can refer to a column via columns=
that is expected to be present in the transformed table, but may not exist in the table before preprocessing. Regarding the lifetime of the transformed table, it only exists during the validation step and is not stored in the Validate
object or used in subsequent validation steps.
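As a minimal sketch (reusing pb, pl, and tbl from the code at the top of this page), a lambda passed to pre= could derive a column that only exists in the transformed table; the derived column c here is purely illustrative and not part of tbl itself:

validation = (
    pb.Validate(data=tbl)
    # `c` exists only in the table returned by the `pre=` callable,
    # not in `tbl` itself
    .col_vals_not_null(
        columns="c",
        pre=lambda df: df.with_columns(c=pl.col("a") + pl.col("b")),
    )
    .interrogate()
)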
Segmentation
The segments=
argument allows for the segmentation of a validation step into multiple segments. This is useful for applying the same validation step to different subsets of the data. The segmentation can be done based on a single column or specific fields within a column.
Providing a single column name will result in a separate validation step for each unique value in that column. For example, if you have a column called "region"
with values "North"
, "South"
, and "East"
, the validation step will be applied separately to each region.
Alternatively, you can provide a tuple that specifies a column name and its corresponding values to segment on. For example, if you have a column called "date"
and you want to segment on only specific dates, you can provide a tuple like ("date", ["2023-01-01", "2023-01-02"])
. Any other values in the column will be disregarded (i.e., no validation steps will be created for them).
A list with a combination of column names and tuples can be provided as well. This allows for more complex segmentation scenarios. The following inputs are all valid:
segments=["region", ("date", ["2023-01-01", "2023-01-02"])]
: segments on unique values in the"region"
column and specific dates in the"date"
columnsegments=["region", "date"]
: segments on unique values in the"region"
and"date"
columns
The segmentation is performed during interrogation, and the resulting validation steps will be numbered sequentially. Each segment will have its own validation step, and the results will be reported separately. This allows for a more granular analysis of the data and helps identify issues within specific segments.
Importantly, the segmentation process will be performed after any preprocessing of the data table. Because of this, one can conceivably use the pre=
argument to generate a column that can be used for segmentation. For example, you could create a new column called "segment"
through use of pre=
and then use that column for segmentation.
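A rough sketch of segmentation follows (the sales_tbl table and its region and amount columns are hypothetical; pb and pl are the imports from the top of this page):

sales_tbl = pl.DataFrame(
    {
        "region": ["North", "South", "East", "North"],
        "amount": [10.5, None, 3.2, 7.7],
    }
)

validation = (
    pb.Validate(data=sales_tbl)
    # One validation step is generated per unique value in `region`
    .col_vals_not_null(columns="amount", segments="region")
    .interrogate()
)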
Thresholds
The thresholds=
parameter is used to set the failure-condition levels for the validation step. If they are set here at the step level, these thresholds will override any thresholds set at the global level in Validate(thresholds=...)
.
There are three threshold levels: ‘warning’, ‘error’, and ‘critical’. The threshold values can either be set as a proportion of failing test units among all test units (a value between 0 and 1), or as the absolute number of failing test units (an integer that’s 1 or greater).
Thresholds can be defined using one of these input schemes:
- use the Thresholds class (the most direct way to create thresholds)
- provide a tuple of 1-3 values, where position 0 is the ‘warning’ level, position 1 is the ‘error’ level, and position 2 is the ‘critical’ level
- create a dictionary of 1-3 value entries; the valid keys are ‘warning’, ‘error’, and ‘critical’
- a single integer/float value denoting the absolute number or fraction of failing test units for the ‘warning’ level only
If the number of failing test units exceeds a set threshold, the validation step will be marked as ‘warning’, ‘error’, or ‘critical’. Not all threshold levels need to be set; you’re free to set any combination of them.
Aside from reporting failure conditions, thresholds can be used to determine the actions to take for each level of failure (using the actions=
parameter).
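The input schemes listed above can be sketched as follows (the specific threshold values are arbitrary, and tbl is the table defined at the top of this page):

# Equivalent ways to express step-level thresholds
thresholds = pb.Thresholds(warning=1, error=2, critical=0.15)  # Thresholds class
# thresholds = (1, 2, 0.15)                                    # tuple: warning, error, critical
# thresholds = {"warning": 1, "error": 2, "critical": 0.15}    # dict with named levels
# thresholds = 1                                                # 'warning' level only

validation = (
    pb.Validate(data=tbl)
    .col_vals_not_null(columns="b", thresholds=thresholds)
    .interrogate()
)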
Examples
For the examples here, we’ll use a simple Polars DataFrame with two numeric columns (a and b); the table is created and previewed with pb.preview() at the top of this page.
Let’s validate that none of the values in column a
are Null values. We’ll determine if this validation had any failing test units (there are four test units, one for each row).
validation = (
    pb.Validate(data=tbl)
    .col_vals_not_null(columns="a")
    .interrogate()
)

validation
(Validation report table: step 1, col_vals_not_null(), 4 test units, 4 passing (1.00), 0 failing (0.00), no warning/error/critical threshold states set.)
Printing the validation
object shows the validation table in an HTML viewing environment. The validation table shows the single entry that corresponds to the validation step created by using col_vals_not_null()
. All test units passed, and there are no failing test units.
Now, let’s use that same set of values for a validation on column b.
validation = (
    pb.Validate(data=tbl)
    .col_vals_not_null(columns="b")
    .interrogate()
)

validation
The validation table reports two failing test units. The specific failing cases are for the two Null values in column b.
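As a follow-up sketch (not part of the original example), a step-level threshold could be attached so that those two failing test units put the step into the ‘warning’ state:

validation = (
    pb.Validate(data=tbl)
    # A single integer sets only the 'warning' threshold (here: one failing test unit)
    .col_vals_not_null(columns="b", thresholds=1)
    .interrogate()
)

validation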