skpro.utils.estimator_checks.check_estimator#

skpro.utils.estimator_checks.check_estimator(estimator, raise_exceptions=False, tests_to_run=None, fixtures_to_run=None, verbose=True, tests_to_exclude=None, fixtures_to_exclude=None)[source]#

Run all tests on a single estimator.

Tests that are run on estimator:

  • all tests in test_all_estimators

  • all interface compatibility tests from the module of the estimator’s scitype, for example, test_all_regressors if estimator is a regressor

Parameters:
estimator : estimator class or estimator instance
raise_exceptions : bool, optional, default=False

whether to return exceptions/failures in the results dict, or raise them

  • if False: returns exceptions in returned results dict

  • if True: raises exceptions as they occur

tests_to_run : str or list of str, optional. Default = run all tests.

Names (test/function name strings) of tests to run. Sub-sets the tests that are run to the tests given here.

fixtures_to_run : str or list of str, optional. Default = run all tests.

pytest test-fixture combination codes specifying which test-fixture combinations to run. Sub-sets the tests and fixtures to run to the list given here. If both tests_to_run and fixtures_to_run are provided, runs the union, i.e., all test-fixture combinations for tests in tests_to_run, plus all test-fixture combinations in fixtures_to_run.

verbose : bool, optional, default=True

Whether to print an informative summary of the tests run.

tests_to_exclude : str or list of str, names of tests to exclude. Default = None.

Removes tests that should not be run; applied after subsetting via tests_to_run.

fixtures_to_exclude : str or list of str, fixtures to exclude. Default = None.

Removes test-fixture combinations that should not be run; applied after subsetting via fixtures_to_run.

Returns:
results : dict of results of the tests on the estimator

Keys are test/fixture strings, identical to those in pytest, e.g., test[fixture]. Entries are the string "PASSED" if the test passed, or the exception raised if the test did not pass. Returned only if all tests pass, or if raise_exceptions=False.

Raises:
if raise_exceptions=True,
raises any exception produced by the tests directly

Examples

>>> from skpro.regression.residual import ResidualDouble
>>> from skpro.utils import check_estimator

Running all tests for the ResidualDouble class; this uses all instances from get_test_params and compatible scenarios

>>> results = check_estimator(ResidualDouble)
All tests PASSED!

Running all tests for a specific ResidualDouble instance; this uses the instance that is passed and compatible scenarios

>>> from sklearn.linear_model import LinearRegression
>>> results = check_estimator(ResidualDouble(LinearRegression()))
All tests PASSED!

Running a specific test (all fixtures) for ResidualDouble

>>> results = check_estimator(ResidualDouble, tests_to_run="test_clone")
All tests PASSED!
>>> results
{'test_clone[ResidualDouble-0]': 'PASSED', 'test_clone[ResidualDouble-1]': 'PASSED'}

Running one specific test-fixture-combination for ResidualDouble

>>> check_estimator(
...    ResidualDouble, fixtures_to_run="test_clone[ResidualDouble-1]"
... )
All tests PASSED!
{'test_clone[ResidualDouble-1]': 'PASSED'}