tfma.metrics.MultiLabelConfusionMatrixPlot
Multi-label confusion matrix.
Inherits From: Metric
tfma.metrics.MultiLabelConfusionMatrixPlot(
thresholds: Optional[List[float]] = None,
num_thresholds: Optional[int] = None,
name: str = MULTI_LABEL_CONFUSION_MATRIX_PLOT_NAME
)
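A typical way to use this plot is to include it in a metrics spec (a minimal configuration sketch assuming a standard TFMA setup; the rest of the evaluation config is elided):

```python
import tensorflow_model_analysis as tfma

# Build metrics specs that include the multi-label confusion matrix plot
# at a single 0.5 threshold.
metrics_specs = tfma.metrics.specs_from_metrics([
    tfma.metrics.MultiLabelConfusionMatrixPlot(thresholds=[0.5]),
])
```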
For each actual class (positive label), a confusion matrix is computed for each
class based on the associated predicted values, such that:

  TP = positive_prediction_class_label & positive_prediction
  TN = negative_prediction_class_label & negative_prediction
  FP = negative_prediction_class_label & positive_prediction
  FN = positive_prediction_class_label & negative_prediction
For example, given classes 0 and 1 and a given threshold, the following
matrices will be computed:

Actual: class_0, Predicted: class_0
  TP = is_class_0 & is_class_0 & predict_class_0
  TN = is_class_0 & not_class_0 & predict_not_class_0
  FN = is_class_0 & is_class_0 & predict_not_class_0
  FP = is_class_0 & not_class_0 & predict_class_0

Actual: class_0, Predicted: class_1
  TP = is_class_0 & is_class_1 & predict_class_1
  TN = is_class_0 & not_class_1 & predict_not_class_1
  FN = is_class_0 & is_class_1 & predict_not_class_1
  FP = is_class_0 & not_class_1 & predict_class_1

Actual: class_1, Predicted: class_0
  TP = is_class_1 & is_class_0 & predict_class_0
  TN = is_class_1 & not_class_0 & predict_not_class_0
  FN = is_class_1 & is_class_0 & predict_not_class_0
  FP = is_class_1 & not_class_0 & predict_class_0

Actual: class_1, Predicted: class_1
  TP = is_class_1 & is_class_1 & predict_class_1
  TN = is_class_1 & not_class_1 & predict_not_class_1
  FN = is_class_1 & is_class_1 & predict_not_class_1
  FP = is_class_1 & not_class_1 & predict_class_1
Note that unlike the multi-class confusion matrix, the inputs are assumed to
be multi-label, so the predictions need not sum to 1.0 and multiple classes
can be true at the same time.
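The cell definitions above can be illustrated with a small NumPy sketch. This is only an illustration of the logic, not TFMA's implementation; the labels, predictions, and threshold below are made-up example values:

```python
import numpy as np

# One multi-label example with two classes: both classes are actually true,
# but only class_0 is predicted above the 0.5 threshold.
labels = np.array([1, 1])            # is_class_0, is_class_1
predictions = np.array([0.8, 0.3])   # per-class prediction scores
threshold = 0.5
predicted = predictions > threshold  # predict_class_0, predict_class_1

# Cells for the (Actual: class_0, Predicted: class_1) matrix: every term is
# conditioned on the actual class (class_0) being true.
actual, pred_class = 0, 1
tp = labels[actual] & labels[pred_class] & predicted[pred_class]
tn = labels[actual] & (1 - labels[pred_class]) & ~predicted[pred_class]
fn = labels[actual] & labels[pred_class] & ~predicted[pred_class]
fp = labels[actual] & (1 - labels[pred_class]) & predicted[pred_class]
# class_1 is true but predicted negative while class_0 is actual,
# so this example lands in the FN cell.
```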
Args:

  `thresholds`: Optional thresholds. Only one of `thresholds` or
    `num_thresholds` should be used. If both are unset, [0.5] is assumed.
  `num_thresholds`: Number of thresholds to use. The thresholds are evenly
    spaced between 0.0 and 1.0, inclusive of both boundaries (e.g. to
    configure the thresholds as [0.0, 0.25, 0.5, 0.75, 1.0], set this
    parameter to 5). Only one of `thresholds` or `num_thresholds` should be
    used.
  `name`: Metric name.
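As a quick check of the `num_thresholds` behavior described above, the evenly spaced, boundary-inclusive grid can be reproduced with `numpy.linspace` (an illustration, not TFMA code):

```python
import numpy as np

num_thresholds = 5
# Evenly spaced thresholds including both 0.0 and 1.0.
thresholds = np.linspace(0.0, 1.0, num_thresholds)
print(thresholds.tolist())  # [0.0, 0.25, 0.5, 0.75, 1.0]
```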
Attributes:

  `compute_confidence_interval`: Whether to compute confidence intervals for
    this metric. Note that this may not completely remove the computational
    overhead involved in computing a given metric. This is only respected by
    the jackknife confidence interval method.
Methods
computations
View source
computations(
    eval_config: Optional[tfma.EvalConfig] = None,
    schema: Optional[schema_pb2.Schema] = None,
    model_names: Optional[List[str]] = None,
    output_names: Optional[List[str]] = None,
    sub_keys: Optional[List[Optional[SubKey]]] = None,
    aggregation_type: Optional[AggregationType] = None,
    class_weights: Optional[Dict[int, float]] = None,
    example_weighted: bool = False,
    query_key: Optional[str] = None
) -> tfma.metrics.MetricComputations
Creates the computations associated with the metric.
from_config
View source
@classmethod
from_config(
config: Dict[str, Any]
) -> 'Metric'
get_config
View source
get_config() -> Dict[str, Any]
Returns serializable config.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2024-04-26 UTC.