Validation Methods
Pointblank provides a comprehensive suite of validation methods to verify different aspects of your data. Each method creates a validation step that becomes part of your validation plan.
These validation methods cover everything from checking column values against thresholds to validating the table structure and detecting duplicates. Combined into validation steps, they form the foundation of your data quality workflow.
Pointblank provides over 25 validation methods to handle diverse data quality requirements. These are grouped into four main categories:
- Column Value Validations
- Row-based Validations
- Table Structure Validations
- AI-Powered Validations
Within each of these categories, we’ll walk through several examples showing how each validation method creates steps in your validation plan.
And we’ll use the small_table dataset for all of our examples; it’s a small Polars table with 13 rows and 8 columns.
Validation Methods to Validation Steps
In Pointblank, validation methods become validation steps when you add them to a validation plan. Each method creates a distinct step that performs a specific check on your data.
Here’s a simple example showing how three validation methods create three validation steps:
import pointblank as pb
(
pb.Validate(data=pb.load_dataset(dataset="small_table", tbl_type="polars"))
# Step 1: Check that values in column `a` are greater than 2 ---
.col_vals_gt(columns="a", value=2, brief="Values in 'a' must exceed 2.")
# Step 2: Check that column 'date' exists in the table ---
.col_exists(columns="date", brief="Column 'date' must exist.")
# Step 3: Check that the table has exactly 13 rows ---
.row_count_match(count=13, brief="Table should have exactly 13 rows.")
.interrogate()
)

Each validation method produces one step in the validation report above. When combined, these steps form a complete validation plan that systematically checks different aspects of your data quality.
Common Arguments
Most validation methods in Pointblank share a set of common arguments that provide consistency and flexibility across different validation types:
- columns=: specifies which column(s) to validate (used in column-based validations)
- pre=: allows data transformation before validation
- segments=: enables validation across different data subsets
- thresholds=: sets acceptable failure thresholds
- actions=: defines actions to take when validations fail
- brief=: provides a description of what the validation is checking
- active=: determines if the validation step should be executed (default is True)
- na_pass=: controls how missing values are handled (only for column value validation methods)
For column validation methods, the na_pass= parameter determines whether missing values (Null/None/NA) should pass validation (this parameter is covered in a later section).
These arguments follow a consistent pattern across validation methods, so you don’t need to memorize different parameter sets for each function. This systematic approach makes Pointblank more intuitive to work with as you build increasingly complex validation plans.
We’ll cover most of these common arguments in their own dedicated sections later in the User Guide, as some of them represent a deeper topic worthy of focused attention.
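To give a sense of how these arguments combine, here’s a small sketch that attaches a brief=, sets failure thresholds=, and defines a step that is registered but not executed via active=False (the threshold levels and comparison values below are arbitrary, chosen only for illustration):
(
    pb.Validate(data=pb.load_dataset(dataset="small_table", tbl_type="polars"))
    .col_vals_gt(
        columns="d",
        value=100,
        thresholds=pb.Thresholds(warning=0.1, error=0.25),
        brief="Most values in 'd' should exceed 100.",
    )
    # This step is defined in the plan but skipped at interrogation time
    .col_vals_lt(columns="a", value=10, active=False)
    .interrogate()
)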
1. Column Value Validations
These methods check individual values within columns against specific criteria:
- Comparison checks (col_vals_gt(), col_vals_lt(), etc.) for comparing values to thresholds or other columns
- Range checks (col_vals_between(), col_vals_outside()) for verifying that values fall within or outside specific ranges
- Set membership checks (col_vals_in_set(), col_vals_not_in_set()) for validating values against predefined sets
- Null value checks (col_vals_null(), col_vals_not_null()) for testing presence or absence of null values
- Pattern matching checks (col_vals_regex(), col_vals_within_spec()) for validating text patterns with regular expressions or against standard specifications
- Trending value checks (col_vals_increasing(), col_vals_decreasing()) for verifying that values increase or decrease as you move down the rows
- Custom expression checks (col_vals_expr()) for complex validations using custom expressions
Now let’s look at some key examples from select categories of column value validations.
Comparison Checks
Let’s start with a simple example of how col_vals_gt() might be used to check if the values in a column are greater than a specified value.
(
pb.Validate(data=pb.load_dataset(dataset="small_table", tbl_type="polars"))
.col_vals_gt(columns="a", value=5)
.interrogate()
)

If you’re checking data in a column that contains Null/None/NA values and you’d like to disregard those values (i.e., let them pass validation), you can use na_pass=True. The following example checks values in column c of small_table, which contains two None values:
(
pb.Validate(data=pb.load_dataset(dataset="small_table", tbl_type="polars"))
.col_vals_le(columns="c", value=10, na_pass=True)
.interrogate()
)

[Validation report: 1 step, col_vals_le(), 13/13 test units passed]
In the above validation table, we see that all test units passed. If we didn’t use na_pass=True there would be 2 failing test units, one for each None value in the c column.
It’s possible to check column values against values in an adjacent column. To do this, supply the value= argument with the column name wrapped in the col() helper function. Here’s an example of that:
(
pb.Validate(data=pb.load_dataset(dataset="small_table", tbl_type="polars"))
.col_vals_lt(columns="a", value=pb.col("c"))
.interrogate()
)

This validation checks that values in column a are less than values in column c.
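The range and set membership checks listed earlier follow the same pattern. As a quick sketch, the following checks that values in column d fall within an illustrative numeric range and that values in column f belong to a fixed set of levels (the bounds on d are arbitrary; the levels for f are assumed to be "low", "mid", and "high"):
(
    pb.Validate(data=pb.load_dataset(dataset="small_table", tbl_type="polars"))
    # Range check: values in `d` should fall between the two bounds
    .col_vals_between(columns="d", left=0, right=10000)
    # Set membership check: values in `f` should be one of the expected levels
    .col_vals_in_set(columns="f", set=["low", "mid", "high"])
    .interrogate()
)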
Checking for Missing Values
A very common thing to validate is that there are no Null/NA/missing values in a column. The col_vals_not_null() method checks that every value in a column is non-missing:
(
pb.Validate(data=pb.load_dataset(dataset="small_table", tbl_type="polars"))
.col_vals_not_null(columns="a")
.interrogate()
)

[Validation report: 1 step, col_vals_not_null(), 13/13 test units passed]
Column a has no missing values, and the validation above confirms this.
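The complementary col_vals_null() method asserts the opposite: that values in a column are missing. This can be useful for placeholder columns that are expected to be entirely empty. Here’s a minimal sketch with made-up data (the not_yet_recorded column is hypothetical):
import polars as pl

# A hypothetical table where one column is expected to contain only missing values
placeholder_data = pl.DataFrame({
    "id": [1, 2, 3],
    "not_yet_recorded": [None, None, None],
})

(
    pb.Validate(data=placeholder_data)
    .col_vals_null(columns="not_yet_recorded")
    .interrogate()
)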
Checking Strings with Regexes
A regular expression (regex) validation via the col_vals_regex() validation method checks if values in a column match a specified pattern. Here’s an example with two validation steps, each checking text values in a column:
(
pb.Validate(data=pb.load_dataset(dataset="small_table", tbl_type="polars"))
.col_vals_regex(columns="b", pattern=r"^\d-[a-z]{3}-\d{3}$")
.col_vals_regex(columns="f", pattern=r"high|low|mid")
.interrogate()
)

[Validation report: 2 steps, both col_vals_regex(), 13/13 test units passed in each]
Checking Strings Against Specifications
The col_vals_within_spec() method validates column values against common data specifications like email addresses, URLs, postal codes, credit card numbers, ISBNs, VINs, and IBANs. This is particularly useful when you need to validate that text data conforms to standard formats:
import polars as pl
# Create a sample table with various data types
sample_data = pl.DataFrame({
"isbn": ["978-0-306-40615-7", "0-306-40615-2", "invalid"],
"email": ["test@example.com", "user@domain.co.uk", "not-an-email"],
"zip": ["12345", "90210", "invalid"]
})
(
pb.Validate(data=sample_data)
.col_vals_within_spec(columns="isbn", spec="isbn")
.col_vals_within_spec(columns="email", spec="email")
.col_vals_within_spec(columns="zip", spec="postal_code[US]")
.interrogate()
)

Checking for Trending Values
The col_vals_increasing() and col_vals_decreasing() validation methods check whether column values are increasing or decreasing as you move down the rows. These are useful for validating time series data, sequential identifiers, or any data where you expect monotonic trends:
import polars as pl
# Create a sample table with increasing and decreasing values
trend_data = pl.DataFrame({
"id": [1, 2, 3, 4, 5],
"temperature": [20, 22, 25, 28, 30],
"countdown": [100, 80, 60, 40, 20]
})
(
pb.Validate(data=trend_data)
.col_vals_increasing(columns="id")
.col_vals_increasing(columns="temperature")
.col_vals_decreasing(columns="countdown")
.interrogate()
)

[Validation report: 3 steps, two col_vals_increasing() and one col_vals_decreasing(), 5/5 test units passed in each]
The allow_stationary= parameter lets you control whether consecutive identical values should pass validation. By default, stationary values (e.g., [1, 2, 2, 3]) will fail the increasing check, but setting allow_stationary=True will allow them to pass.
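To see that parameter in action, here’s a small sketch using made-up data where one value repeats; the step passes only because allow_stationary=True is set:
import polars as pl

# Made-up readings with one repeated consecutive value (non-strictly increasing)
readings = pl.DataFrame({"reading": [1, 2, 2, 3, 5]})

(
    pb.Validate(data=readings)
    .col_vals_increasing(columns="reading", allow_stationary=True)
    .interrogate()
)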
Handling Missing Values with na_pass=
When validating columns containing Null/None/NA values, you can control how these missing values are treated with the na_pass= parameter:
(
pb.Validate(data=pb.load_dataset(dataset="small_table", tbl_type="polars"))
.col_vals_le(columns="c", value=10, na_pass=True)
.interrogate()
)

[Validation report: 1 step, col_vals_le(), 13/13 test units passed]
In the above example, column c contains two None values, but all test units pass because we set na_pass=True. Without this setting, those two values would fail the validation.
In summary, na_pass= works like this:
- na_pass=True: missing values pass validation regardless of the condition being tested
- na_pass=False (the default): missing values fail validation
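To see the default behavior by contrast, here’s the same check on column c without na_pass=; the two None values in that column now count as failing test units:
(
    pb.Validate(data=pb.load_dataset(dataset="small_table", tbl_type="polars"))
    .col_vals_le(columns="c", value=10)  # na_pass=False by default, so the two None values fail
    .interrogate()
)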
2. Row-based Validations
Row-based validations focus on examining properties that span across entire rows rather than individual columns. These are essential for detecting issues that can’t be found by looking at columns in isolation:
- rows_distinct(): ensures no duplicate rows exist in the table
- rows_complete(): verifies that no rows contain any missing values
These row-level validations are particularly valuable for ensuring data integrity and completeness at the record level, which is crucial for many analytical and operational data applications.
Checking Row Distinctness
Here’s an example where we check for duplicate rows with rows_distinct():
(
pb.Validate(data=pb.load_dataset(dataset="small_table", tbl_type="polars"))
.rows_distinct()
.interrogate()
)

We can also adapt the rows_distinct() check to use a single column or a subset of columns. To do that, we need to use the columns_subset= parameter.
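Here’s a sketch of that usage, checking for duplicates using only column b (the particular subset chosen here is just for illustration):
(
    pb.Validate(data=pb.load_dataset(dataset="small_table", tbl_type="polars"))
    # Rows count as duplicates when they share the same value in column `b`
    .rows_distinct(columns_subset=["b"])
    .interrogate()
)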
Checking Row Completeness
Another important validation is checking for complete rows: rows that have no missing values across all columns or a specified subset of columns. The rows_complete() validation method performs this check.
Here’s an example checking if all rows in the table are complete (have no missing values in any column):
(
pb.Validate(data=pb.load_dataset(dataset="small_table", tbl_type="polars"))
.rows_complete()
.interrogate()
)

As the report indicates, there are some incomplete rows in the table.
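Like rows_distinct(), this check can be narrowed to a subset of columns with columns_subset=. Here’s a sketch that only requires completeness in columns a and b (the subset is chosen just for illustration):
(
    pb.Validate(data=pb.load_dataset(dataset="small_table", tbl_type="polars"))
    # Only columns `a` and `b` need to be free of missing values for a row to count as complete
    .rows_complete(columns_subset=["a", "b"])
    .interrogate()
)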
3. Table Structure Validations
Table structure validations ensure that the overall architecture of your data meets expectations. These structural checks form a foundation for more detailed data quality assessments:
- col_exists(): verifies a column exists in the table
- col_schema_match(): ensures the table matches a defined schema
- col_count_match(): confirms the table has the expected number of columns
- row_count_match(): verifies the table has the expected number of rows
- tbl_match(): validates that the target table matches a comparison table
These structural validations provide essential checks on the fundamental organization of your data tables, ensuring they have the expected dimensions and components needed for reliable data analysis.
Checking Column Presence
If you need to check for the presence of individual columns, the Validate.col_exists() validation method is useful. In this example, we check whether the date column is present in the table:
(
pb.Validate(data=pb.load_dataset(dataset="small_table", tbl_type="polars"))
.col_exists(columns="date")
.interrogate()
)

[Validation report: 1 step, col_exists(), 1/1 test unit passed]
That column is present, so the single test unit of this validation step is a passing one.
Checking the Table Schema
For deeper checks of table structure, a schema validation can be performed with the col_schema_match() validation method, where the goal is to check whether the structure of a table matches an expected schema. To define an expected table schema, we need to use the Schema class. Here is a simple example that (1) prepares a schema consisting of column names and (2) uses that schema object in a col_schema_match() validation step:
schema = pb.Schema(columns=["date_time", "date", "a", "b", "c", "d", "e", "f"])
(
pb.Validate(data=pb.load_dataset(dataset="small_table", tbl_type="polars"))
.col_schema_match(schema=schema)
.interrogate()
)

[Validation report: 1 step, col_schema_match(), 1/1 test unit passed]
The col_schema_match() validation step will only have a single test unit (signifying pass or fail). We can see in the above validation report that the column schema validation passed.
More often, a schema will be defined using column names and column types. We can do that by using a list of tuples in the columns= parameter of Schema. Here’s an example of that approach in action:
schema = pb.Schema(
columns=[
("date_time", "Datetime(time_unit='us', time_zone=None)"),
("date", "Date"),
("a", "Int64"),
("b", "String"),
("c", "Int64"),
("d", "Float64"),
("e", "Boolean"),
("f", "String"),
]
)
(
pb.Validate(data=pb.load_dataset(dataset="small_table", tbl_type="polars"))
.col_schema_match(schema=schema)
.interrogate()
)

[Validation report: 1 step, col_schema_match(), 1/1 test unit passed]
The col_schema_match() validation method has several boolean parameters that control how stringent the checks are:
- complete=: requires exact column matching (all expected columns must exist, no extra columns allowed)
- in_order=: enforces that columns appear in the same order as defined in the schema
- case_sensitive_colnames=: column names must match with exact letter case
- case_sensitive_dtypes=: data type strings must match with exact letter case
These parameters all default to True, providing strict schema validation. Setting any to False relaxes the validation requirements, making the checks more flexible when exact matching isn’t necessary or practical for your use case.
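As a sketch of relaxing those defaults, here’s the same schema check configured to ignore column order and letter case in the data type strings (reusing the schema object defined above):
(
    pb.Validate(data=pb.load_dataset(dataset="small_table", tbl_type="polars"))
    .col_schema_match(
        schema=schema,
        in_order=False,               # columns may appear in any order
        case_sensitive_dtypes=False,  # e.g., "int64" would match "Int64"
    )
    .interrogate()
)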
Comparing Tables with tbl_match()
The tbl_match() validation method provides a comprehensive way to verify that two tables are identical. It performs a progressive series of checks, from least to most stringent:
- Column count match
- Row count match
- Schema match (loose - case-insensitive, any order)
- Schema match (order - columns in correct order)
- Schema match (exact - case-sensitive, correct order)
- Data match (cell-by-cell comparison)
This progressive approach helps identify exactly where tables differ. Here’s an example comparing the small_table dataset with itself:
(
pb.Validate(data=pb.load_dataset(dataset="small_table", tbl_type="polars"))
.tbl_match(tbl_compare=pb.load_dataset(dataset="small_table", tbl_type="polars"))
.interrogate()
)

[Validation report: 1 step, tbl_match(), 1/1 test unit passed]
This validation method is especially useful for:
- Verifying that data transformations preserve expected properties
- Comparing production data against a golden dataset
- Ensuring data consistency across different environments
- Validating that imported data matches source data
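For instance, comparing against a deliberately altered copy of the table should fail the single test unit of this step. Here’s a sketch where the comparison table has column f dropped (the alteration is purely for illustration):
(
    pb.Validate(data=pb.load_dataset(dataset="small_table", tbl_type="polars"))
    # The comparison table is missing column `f`, so the match should fail
    .tbl_match(
        tbl_compare=pb.load_dataset(dataset="small_table", tbl_type="polars").drop("f")
    )
    .interrogate()
)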
Checking Counts of Rows and Columns
Row and column count validations check the number of rows and columns in a table.
The row_count_match() method checks whether the number of rows in a table matches a specified count.
(
pb.Validate(data=pb.load_dataset(dataset="small_table", tbl_type="polars"))
.row_count_match(count=13)
.interrogate()
)

[Validation report: 1 step, row_count_match(), 1/1 test unit passed]
The col_count_match() validation method checks if the number of columns in a table matches a specified count.
(
pb.Validate(data=pb.load_dataset(dataset="small_table", tbl_type="polars"))
.col_count_match(count=8)
.interrogate()
)

[Validation report: 1 step, col_count_match(), 1/1 test unit passed]
Expectations on column and row counts can be useful in certain situations and they align nicely with schema checks.
4. AI-Powered Validations
AI-powered validations use Large Language Models (LLMs) to validate data based on natural language criteria. This opens up new possibilities for complex validation rules that are difficult to express with traditional programmatic methods.
Validating with Natural Language Prompts
The prompt() validation method allows you to describe validation criteria in plain language. The LLM interprets your prompt and evaluates each row, producing pass/fail results just like other Pointblank validation methods.
This is particularly useful for:
- Semantic checks (e.g., “descriptions should mention a product name”)
- Context-dependent validation (e.g., “prices should be reasonable for the product category”)
- Subjective quality assessments (e.g., “comments should be professional and constructive”)
- Complex rules that would require extensive regex patterns or custom functions
Here’s a simple example that validates whether text descriptions contain specific information:
import polars as pl
# Create sample data with product descriptions
products = pl.DataFrame({
"product": ["Widget A", "Gadget B", "Tool C"],
"description": [
"High-quality widget made in USA",
"Innovative gadget with warranty",
"Professional tool"
],
"price": [29.99, 49.99, 19.99]
})
# Validate that descriptions mention quality or features
(
pb.Validate(data=products)
.prompt(
prompt="Each description should mention either quality, features, or warranty",
columns_subset=["description"],
model="anthropic:claude-sonnet-4-5"
)
.interrogate()
)

The columns_subset= parameter lets you specify which columns to include in the validation, improving performance and reducing API costs by only sending relevant data to the LLM.
Note: To use prompt(), you need to have the appropriate API credentials configured for your chosen LLM provider (Anthropic, OpenAI, Ollama, or AWS Bedrock).
Conclusion
In this article, we’ve explored the various types of validation methods that Pointblank offers for ensuring data quality. These methods provide a framework for validating column values, checking row properties, verifying table structures, and even using AI for complex semantic validations. By combining these validation methods into comprehensive plans, you can systematically test your data against business rules and quality expectations. And this all helps to ensure your data remains reliable and trustworthy.