tfma.default_eval_shared_model
View source on GitHub: https://github.com/tensorflow/model-analysis/blob/v0.46.0/tensorflow_model_analysis/api/model_eval_lib.py#L364-L497
Returns a default EvalSharedModel.
```python
tfma.default_eval_shared_model(
    eval_saved_model_path: str,
    add_metrics_callbacks: Optional[List[types.AddMetricsCallbackType]] = None,
    include_default_metrics: Optional[bool] = True,
    example_weight_key: Optional[Union[str, Dict[str, str]]] = None,
    additional_fetches: Optional[List[str]] = None,
    blacklist_feature_fetches: Optional[List[str]] = None,
    tags: Optional[List[str]] = None,
    model_name: str = '',
    eval_config: Optional[tfma.EvalConfig] = None,
    custom_model_loader: Optional[tfma.types.ModelLoader] = None,
    rubber_stamp: Optional[bool] = False,
    resource_hints: Optional[Dict[str, Any]] = None,
    backend_config: Optional[Any] = None
) -> tfma.types.EvalSharedModel
```
Used in the notebooks

Used in the guide:
- Using Counterfactual Logit Pairing with Keras (https://www.tensorflow.org/responsible_ai/model_remediation/counterfactual/guide/counterfactual_keras)

Used in the tutorials:
- TensorFlow Model Analysis (https://www.tensorflow.org/tfx/tutorials/model_analysis/tfma_basic)
- Introduction to Fairness Indicators (https://www.tensorflow.org/responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_Example_Colab)
- TensorFlow Constrained Optimization Example Using CelebA Dataset (https://www.tensorflow.org/responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study)
| Args | |
|---|---|
| `eval_saved_model_path` | Path to the EvalSavedModel. |
| `add_metrics_callbacks` | Optional list of callbacks for adding additional metrics to the graph (see EvalSharedModel for more information on how to configure additional metrics). Metrics for example count and example weights are added automatically. Only used if an EvalSavedModel is used. |
| `include_default_metrics` | DEPRECATED. Use `eval_config.options.include_default_metrics`. |
| `example_weight_key` | DEPRECATED. Use `eval_config.model_specs.example_weight_key` or `eval_config.model_specs.example_weight_keys`. |
| `additional_fetches` | Optional prefixes of additional tensors stored in `signature_def.inputs` that should be fetched at prediction time. The "features" and "labels" tensors are handled automatically and should not be included. Only used if an EvalSavedModel is used. |
| `blacklist_feature_fetches` | Optional list of tensor names in the features dictionary to exclude from the fetches request. Useful when features are large (e.g. images) and could otherwise lead to excessive memory use if stored. Only used if an EvalSavedModel is used. |
| `tags` | Optional model tags (e.g. 'serve' for serving or 'eval' for EvalSavedModel). |
| `model_name` | Optional name of the model being created (should match `ModelSpecs.name`). Provide a name only when multiple models are being evaluated. |
| `eval_config` | Eval config. |
| `custom_model_loader` | Optional custom model loader for non-TF models. |
| `rubber_stamp` | True when this is a first run without a baseline model even though a baseline is configured; in that case, diff thresholds are ignored. |
| `resource_hints` | The Beam resource hints to apply to the PTransform that runs inference for this model. |
| `backend_config` | Optional configuration of the backend running model inference with some prediction extractors. |
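As a hedged sketch of typical usage: the call below builds a minimal `tfma.EvalConfig` and passes it alongside a model path. The path, label key, and tag values are placeholders for illustration, not values taken from this page.

```python
import tensorflow_model_analysis as tfma

# Minimal eval config; 'label' is a hypothetical label key for this sketch.
eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='label')],
    slicing_specs=[tfma.SlicingSpec()],  # overall (unsliced) metrics
)

eval_shared_model = tfma.default_eval_shared_model(
    eval_saved_model_path='/path/to/saved_model',  # placeholder path
    eval_config=eval_config,
    tags=['serve'],  # use 'eval' for a legacy EvalSavedModel
)
```

The returned `EvalSharedModel` is then typically passed to `tfma.run_model_analysis` (or to the extract/evaluate PTransforms in a Beam pipeline) together with the same `eval_config`.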
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2024-04-26 UTC.