ColumnMapExpectation
- class great_expectations.expectations.expectation.ColumnMapExpectation(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None)
- Base class for ColumnMapExpectations.
- ColumnMapExpectations are evaluated for a column and ask a yes/no question about every row in that column. Based on the result, they then calculate the percentage of rows that gave a positive answer. If the percentage is high enough, the Expectation considers the data valid.
- ColumnMapExpectations must implement a _validate(…) method containing logic for determining whether the Expectation is successfully validated.
- ColumnMapExpectations may optionally provide an implementation of validate_configuration, which should raise an error if the configuration will not be usable for the Expectation. By default, the validate_configuration method raises an error if column is missing from the configuration. (A minimal subclass sketch follows the parameter list below.)
- Raises
- InvalidExpectationConfigurationError – If column is missing from configuration. 
- Parameters
- domain_keys (tuple) – A tuple of the keys used to determine the domain of the expectation. 
- success_keys (tuple) – A tuple of the keys used to determine the success of the expectation. 
- default_kwarg_values (optional[dict]) – A dictionary that will be used to fill unspecified kwargs from the Expectation Configuration.
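In practice, a custom ColumnMapExpectation usually delegates the per-row yes/no question to a ColumnMapMetricProvider and inherits the pass-percentage logic from this base class. The following is a minimal sketch of that pattern; the metric name column_values.equal_three and both class names are illustrative, not part of the library.

```python
from great_expectations.execution_engine import PandasExecutionEngine
from great_expectations.expectations.expectation import ColumnMapExpectation
from great_expectations.expectations.metrics import (
    ColumnMapMetricProvider,
    column_condition_partial,
)


class ColumnValuesEqualThree(ColumnMapMetricProvider):
    # Metric that answers the row-level yes/no question.
    condition_metric_name = "column_values.equal_three"  # illustrative name

    @column_condition_partial(engine=PandasExecutionEngine)
    def _pandas(cls, column, **kwargs):
        # Return a boolean Series: True for rows that pass.
        return column == 3


class ExpectColumnValuesToEqualThree(ColumnMapExpectation):
    """Expect each value in the column to equal 3 (illustrative example)."""

    # Point the Expectation at the map metric defined above.
    map_metric = "column_values.equal_three"
    # "mostly" sets the fraction of rows that must pass for overall success.
    success_keys = ("mostly",)
```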
 
 - domain_type: ClassVar = 'column'
 - get_success_kwargs(configuration: Optional[great_expectations.core.expectation_configuration.ExpectationConfiguration] = None) → Dict[str, Any]
- Retrieve the success kwargs.
- Parameters
- configuration – The ExpectationConfiguration that contains the kwargs. If no configuration arg is provided, the success kwargs from the configuration attribute of the Expectation instance will be returned. 
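A short usage sketch, assuming the built-in ExpectColumnValuesToBeBetween; the column name and kwargs are illustrative.

```python
from great_expectations.core.expectation_configuration import ExpectationConfiguration
from great_expectations.expectations.core import ExpectColumnValuesToBeBetween

config = ExpectationConfiguration(
    expectation_type="expect_column_values_to_be_between",
    kwargs={"column": "passenger_count", "min_value": 0, "max_value": 6, "mostly": 0.95},
)
expectation = ExpectColumnValuesToBeBetween(config)

# No argument: the kwargs come from the instance's own configuration.
# Only the kwargs relevant to success (those named in success_keys, filled
# out with defaults) are returned, e.g. min_value, max_value, and mostly.
print(expectation.get_success_kwargs())
```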
 
 - print_diagnostic_checklist(diagnostics: Optional[great_expectations.core.expectation_diagnostics.expectation_diagnostics.ExpectationDiagnostics] = None, show_failed_tests: bool = False, backends: Optional[List[str]] = None, show_debug_messages: bool = False) → str
- Runs self.run_diagnostics and generates a diagnostic checklist.
- The output of this method is a thin wrapper around ExpectationDiagnostics.generate_checklist(). This method is experimental. (A usage sketch follows the parameter list below.)
- Parameters
- diagnostics (optional[ExpectationDiagnostics]) – If diagnostics are not provided, diagnostics will be run on self.
- show_failed_tests (bool) – If true, failing tests will be printed. 
- backends – A list of backends to pass to run_diagnostics.
- show_debug_messages (bool) – If true, create a debug logger and pass it to run_diagnostics.
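The typical development-loop usage places a call at the bottom of the custom Expectation's module (a rough sketch, reusing the illustrative class from the earlier example):

```python
if __name__ == "__main__":
    # Prints a checklist showing which parts of the Expectation
    # (examples, metrics, renderers, backend tests, ...) are complete,
    # and returns the same checklist as a string.
    ExpectColumnValuesToEqualThree().print_diagnostic_checklist(
        show_failed_tests=True  # also show any failing sample tests
    )
```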
 
 
 - run_diagnostics(raise_exceptions_for_backends: bool = False, ignore_suppress: bool = False, ignore_only_for: bool = False, for_gallery: bool = False, debug_logger: Optional[logging.Logger] = None, only_consider_these_backends: Optional[List[str]] = None, context: Optional[AbstractDataContext] = None) → ExpectationDiagnostics
- Produce a diagnostic report about this Expectation. (A usage sketch follows the Returns section below.)
- The current uses for this method’s output are populating the Public Expectation Gallery from the JSON structure and enabling a fast dev loop for developing new Expectations, where contributors can quickly check the completeness of their Expectations.
- The contents of the report are captured in the ExpectationDiagnostics dataclass. You can see some examples in test_expectation_diagnostics.py.
- Some components of the diagnostic report (e.g. description, examples, library_metadata) can be introspected directly from the Expectation class. Other components (e.g. metrics, renderers, executions) are at least partly dependent on instantiating, validating, and/or executing the Expectation class. For these kinds of components, at least one test case with include_in_gallery=True must be present in the examples to produce the metrics, renderers, and execution engines parts of the report. This is because get_validation_dependencies requires expectation_config as an argument.
- If errors are encountered while running the diagnostics, they are assumed to be due to incompleteness of the Expectation’s implementation (e.g., declaring a dependency on Metrics that do not exist). These errors are added under the “errors” key in the report.
- Parameters
- raise_exceptions_for_backends – If True, raise an exception when a backend fails to connect.
- ignore_suppress – If True, ignore the suppress_test_for list on Expectation sample tests.
- ignore_only_for – If True, ignore the only_for list on Expectation sample tests.
- for_gallery – If True, create empty arrays to use as examples for the Expectation Diagnostics.
- debug_logger (optional[logging.Logger]) – Logger object to send debug messages to.
- only_consider_these_backends (optional[List[str]]) – If provided, run diagnostics against only these backends.
- context (optional[AbstractDataContext]) – Instance of any child of “AbstractDataContext” class. 
 
- Returns
- An Expectation Diagnostics report object 
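A rough usage sketch, again reusing the illustrative Expectation from above; the backend name is an assumption, and the argument can be omitted to test every available backend.

```python
# Run diagnostics, restricted to the pandas backend to keep the dev loop fast.
diagnostics = ExpectColumnValuesToEqualThree().run_diagnostics(
    only_consider_these_backends=["pandas"],
)

# Problems found while running the diagnostics (missing metrics, failing
# examples, ...) are collected under the report's errors.
for error in diagnostics.errors:
    print(error)
```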
 
 - validate(validator: Validator, configuration: Optional[ExpectationConfiguration] = None, evaluation_parameters: Optional[dict] = None, interactive_evaluation: bool = True, data_context: Optional[AbstractDataContext] = None, runtime_configuration: Optional[dict] = None) → ExpectationValidationResult
- Validates the expectation against the provided data. (A usage sketch follows the Returns section below.)
- Parameters
- validator – A Validator object that can be used to create Expectations, validate Expectations, and get Metrics for Expectations. 
- configuration – Defines the parameters and name of a specific expectation. 
- evaluation_parameters – Dictionary of dynamic values used during Validation of an Expectation. 
- interactive_evaluation – Setting the interactive_evaluation flag on a DataAsset makes it possible to declare and store Expectations without immediately evaluating them.
- data_context – An instance of a GX DataContext. 
- runtime_configuration – The runtime configuration for the Expectation. 
 
- Returns
- An ExpectationValidationResult object
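A hedged usage sketch: it assumes a Validator named validator already exists (for example, obtained from a Data Context via get_validator for the batch you want to check), and the expectation type, column, and kwargs are illustrative.

```python
from great_expectations.core.expectation_configuration import ExpectationConfiguration
from great_expectations.expectations.core import ExpectColumnValuesToNotBeNull

# `validator` is assumed to already exist for the batch under test.
config = ExpectationConfiguration(
    expectation_type="expect_column_values_to_not_be_null",
    kwargs={"column": "passenger_count", "mostly": 0.95},
)
result = ExpectColumnValuesToNotBeNull(config).validate(validator=validator)

print(result.success)  # overall pass/fail for the batch
print(result.result)   # element counts, unexpected percentages, etc.
```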