Usage of pg_yregress
Despite being inspired by pg_regress, pg_yregress is not in any way compatible with it, as it has a different workflow and execution model.
pg_yregress is currently developed as part of Omnigres but can be easily built independently of Omnigres ("out of tree").
This tool uses YAML to describe tests. Let's start with a simple specification.
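A minimal specification might look like the following sketch; the top-level `tests` key and exact layout are inferred from later examples in this document:

```yaml
tests:
- name: simple select
  query: select 1
```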
The above specification will test the `select 1` query by executing it and ensuring it was successful. The test will be executed against a managed instance of Postgres.
Running pg_yregress against this file will produce output adhering to TAP (Test Anything Protocol) for human or machine consumption. As the tool evolves, we might add other ways to get this information.
The above test can be further simplified. There is a reduced syntax for checking whether a query is successful without naming it. You can even drop the `query` key and simply write the query as a test:
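A sketch of the progressively reduced forms, under the same assumed top-level layout:

```yaml
tests:
# named form
- name: simple select
  query: select 1
# reduced form: a query without a name
- query: select 1
# reduced further: the query itself as the test
- select 1
```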
Nothing very interesting so far. Now, let's amend this test to check the result of this query. For a moment, let's assume we don't know what results are to be returned, and specify an empty result set.
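A sketch of the amended test, with an empty result set as a deliberately wrong expectation:

```yaml
tests:
- name: simple select
  query: select 1
  results: []   # an empty result set we know to be wrong
```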
Re-running the tool will output something different:
As you can see, it shows what the test specification should contain in order to pass. You can also observe that pg_yregress exited with a non-zero error code.
For better visibility into changes, YAML-specific diff tools such as dyff can be of use. To make it easier to use these tools, pg_yregress takes an additional optional argument naming a file to which it will write the updated specification instead of stdout.
Copying these results back into the original specification will make pg_yregress return zero again (thus signaling that the specification executed as expected).
Each `query` item is executed within an individual transaction that is rolled back at the end to ensure it does not interfere with other items.
Handling JSON and JSONB
pg_yregress supports JSON types:

- If a supplied query parameter is a mapping or a sequence, it will be automatically converted to a JSON string.
- If a result value is of a JSON type, it will be converted to a YAML value.
```yaml
- name: json and jsonb params
  query: select $1::json as json, $2::jsonb as jsonb
  params:
  - hello: 1
  - hello: 2
  results:
  - json:
      hello: 1
    jsonb:
      hello: 2
- name: json and jsonb results
  query: select json_build_object('hello', 1), jsonb_build_object('hello', 2)
  results:
  - json_build_object:
      hello: 1
    jsonb_build_object:
      hello: 2
```
Testing for failures
You can simply test that a certain query will fail:
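A sketch of such a test; whether `error` accepts a bare boolean here is inferred from the surrounding prose:

```yaml
tests:
- name: failing query
  query: select 1/0
  error: true
```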
The above will succeed, since we have set `error` to `true`.
But how can we test against a specific error message? This can be done by setting `error` to a more specific value:
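For example, `division by zero` is the message Postgres produces for the query above, so the test might look like:

```yaml
tests:
- name: division by zero
  query: select 1/0
  error: division by zero
```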
The above will pass as this is the error this test fails with.
Multiple forms of the `error` report are supported. When `error` is passed as a scalar value, the error message will be compared with the provided one.
Full error form
In this form, both severity and message can be specified.
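A sketch of the full form, using the `severity` and `message` keys mentioned above:

```yaml
- name: division by zero
  query: select 1/0
  error:
    severity: ERROR
    message: division by zero
```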
A test can be marked negative, meaning it will fail if the test itself passes. This is useful when testing scenarios where something specific should not happen.
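One plausible shape for such a test; the `negative` key name is an assumption inferred from the prose:

```yaml
tests:
# contrived: expects a row that should not exist; marked negative,
# this test passes only because the expectation fails
- name: no such relation exists
  query: select relname from pg_class where relname = 'no_such_table'
  results:
  - relname: no_such_table
  negative: true
```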
The example is slightly contrived, as we could test the assumption in the query itself, but at times it is easier or clearer to specify this as a "negative test".
Some tests involve more than one query, and we need to check more than just the final result, so simply executing all statements and queries delimited by semicolons wouldn't be great.
For this use case, instead of using `query`, one can use `steps`:
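A sketch of a multi-step test, built from keys shown elsewhere in this document:

```yaml
tests:
- name: multiple steps
  steps:
  - query: create table t (i int)
  - query: insert into t values (1)
  - query: select i from t
    results:
    - i: 1
```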
Each `steps` item is executed within an individual transaction and is rolled back at the end to ensure it does not interfere with other items. Within the steps, every item is not wrapped into its own transaction, and the results of each step are visible in the next step.
There are cases when a number of tests don't need to be executed in the same transaction (the way multi-step tests are) but nevertheless form a logical group: for example, testing different aspects of a feature, or different inputs to the same function. For this, one can group them:
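One plausible shape for such a group; nesting a `tests` sequence under a named item is an assumption, and `feature_flag` is a hypothetical function used only for illustration:

```yaml
tests:
- name: my feature
  tests:
  - query: select feature_flag()        # hypothetical function
  - query: select feature_flag(true)    # hypothetical function
```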
By default, all tests are rolled back to ensure a clean environment. However, in some cases, tests need to commit (for example, to test deferred constraints). When this is necessary, the `commit` property of a test should be set to `true`:
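A sketch of a committing test, assuming a table `t` created during instance initialization:

```yaml
tests:
- name: commits a row
  query: insert into t values (1)
  commit: true
```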
This can also be used for multi-step tests. If any of the steps are committed but the multi-step test itself isn't, the uncommitted steps will be rolled back.
One can also check their tests for notices:
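One plausible shape for such a test; the `notices` key holding the expected sequence is an assumption inferred from the prose:

```yaml
tests:
- name: notice check
  query: |
    do $$
    begin
      raise notice 'hello';
    end
    $$
  notices:
  - hello
```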
In a `steps`-based test, one can also check the accumulated sequence of notices (although testing individual `query` steps is still possible):
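A sketch of accumulated notices across steps; placing `notices` at the level of the whole test is an assumption:

```yaml
tests:
- name: notices across steps
  steps:
  - query: do $$ begin raise notice 'step one'; end $$
  - query: do $$ begin raise notice 'step two'; end $$
  notices:
  - step one
  - step two
```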
Sometimes there's a need to test the binary encoding of types. pg_yregress allows this to be done by manipulating the `binary` property of the test. Binary encodings are written using hexadecimal notation prefixed by `0x`.
This will return results as binary:
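A sketch of binary results; the exact value accepted by `binary` here (`true` vs. something like `results`) is an assumption, and `0x01` is the Postgres binary wire encoding of boolean `true`:

```yaml
tests:
- name: binary results
  query: select true as value
  binary: true
  results:
  - value: 0x01
```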
And this will return results as characters but take parameters as binary:
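A sketch of binary parameters with textual results; the `params` value for `binary` is an assumption inferred from the prose:

```yaml
tests:
- name: binary params
  query: select $1::bool as value
  binary: params   # assumed value selecting binary parameters only
  params:
  - 0x01
  results:
  - value: true
```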
If a test is not meant to be executed, one can use the `skip` directive to suppress its execution. Given a boolean scalar: if it is true, the test will be skipped; if it is false, it will not be skipped. If any other scalar is given, it will be used as the reason for skipping the test.
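A sketch of both forms; `not_yet_implemented` is a hypothetical function used only for illustration:

```yaml
tests:
- name: not ready yet
  skip: true
- name: skipped with a reason
  skip: blocked on an upstream fix
  query: select not_yet_implemented()   # hypothetical function
```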
Skipped tests don't need to contain a valid instruction.
If a skipped test is meant to be executed, but shouldn't fail the execution of the test suite in case it fails, the `todo` directive can be used instead of `skip`.
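A sketch of a `todo` test; `currently_broken` is a hypothetical function used only for illustration:

```yaml
tests:
- name: known failure
  todo: true
  query: select currently_broken()   # hypothetical function
```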
Sometimes it is useful to reset a connection to the database to test certain behaviors (for example, ensuring that functionality works across different backend instances). For this, the `reset` property can be set to `true`:
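A sketch of a connection reset; whether the reset happens before or after the marked test is an assumption:

```yaml
tests:
- name: first backend
  query: select pg_backend_pid()
- name: runs on a fresh backend
  reset: true
  query: select pg_backend_pid()
```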
Tests may have one or more instances they run on. By default, pg_yregress will provision one. However, if you want to configure the instance, or add more than one, you can use the `instances` configuration, which is a mapping of names to configuration dictionaries:
```yaml
instances:
  configured:
    # Can be configured with a mapping
    config:
      log_connections: yes
  configured_1:
    # Can be configured with a string using `postgresql.conf` format
    config: |
      log_connections = yes
  default:
    init:
    # Executes a sequence of queries
    - create extension my_extension
    # One instance may be specified as default
    default: yes
  other:
    init:
    - alter system set config_param = '...'
    # Initialization may require restarting the instance
    - restart: true
```
Each test will run on the default instance, unless the `instance` property is specified, referencing an instance by name.
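A sketch of referencing a named instance, reusing the `other` instance from the configuration above:

```yaml
tests:
- name: runs on the other instance
  instance: other
  query: show config_param   # placeholder parameter from the instance config above
```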
You can also configure an instance with a custom `pg_hba.conf` file.
This is useful when tests impose special authentication requirements.
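One plausible shape for this; the `hba` key name inside the instance configuration is an assumption:

```yaml
instances:
  trusting:
    hba: |
      local all all           trust
      host  all all 127.0.0.1/32 trust
```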
Single instance configuration
In case only one instance is necessary, but it needs to be configured, instead of using `instances` and naming the default instance, one can use the `instance` key instead:
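A sketch of a single-instance configuration, reusing the keys from the `instances` example:

```yaml
instance:
  config:
    log_connections: yes
  init:
  - create extension my_extension
```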
pg_yregress manages Postgres instances itself: it provisions the database and its configuration, and starts and stops processes. However, it can also be used to run tests against other instances of Postgres operated outside of its own workflow. This can be used for testing functionality or data patterns in an environment that pg_yregress does not manage. In order to use it, one has to pass one of the following options:
| Option | Short | Description |
|---|---|---|
| `--host` | `-h` | Host to connect to. |
| `--port` | `-p` | Port to connect to. |
| `--username` | `-U` | Username. Defaults to the current username. |
| `--dbname` | `-d` | Database name. Defaults to the username. |
| `--password` | `-W` | Force a password prompt before connecting to the database. |
| `--no-password` | `-w` | Never issue a password prompt. |
For example, this will attempt to connect to a local Postgres instance, using `omnigres` as both the database name and the username, prompting for a password:
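A sketch of such an invocation; the exact argument order and the specification filename are assumptions:

```shell
pg_yregress -U omnigres -d omnigres -W tests.yml
```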
The following options are not available for unmanaged instances and will make pg_yregress terminate early with a corresponding error message.
Configuring test suite
In certain cases, it may be useful to pass some configuration information to the test suite itself. While it is generally recommended to avoid this, sometimes it's the right answer. All test suites receive an implicit `env` mapping at the root that contains a mapping of all environment variables. Using YAML Path (YPath) notation, one can retrieve configuration specified through environment variables:
Named test suites
A test suite (the YAML file) can be given a human-readable name using the `name` key:
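A sketch of a named suite; placing `name` at the root of the file, next to `tests`, is an assumption:

```yaml
name: My extension test suite
tests:
- select 1
```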