tfma.metrics.TrueNegatives
Calculates the number of true negatives.
Inherits From: Metric
    tfma.metrics.TrueNegatives(
        thresholds: Optional[Union[float, List[float]]] = None,
        name: Optional[str] = None,
        top_k: Optional[int] = None,
        class_id: Optional[int] = None
    )
If `sample_weight` is given, calculates the sum of the weights of true negatives. If `sample_weight` is `None`, weights default to 1. Use a `sample_weight` of 0 to mask values.
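A minimal plain-Python sketch of the weighting semantics above (this is illustrative, not the tfma implementation; the function name is hypothetical):

```python
def true_negatives(labels, predictions, threshold=0.5, sample_weight=None):
    """Sum the weights of examples that are negative and predicted negative."""
    if sample_weight is None:
        sample_weight = [1.0] * len(labels)  # weights default to 1
    total = 0.0
    for y, p, w in zip(labels, predictions, sample_weight):
        predicted_positive = p > threshold  # above the threshold is true
        if y == 0 and not predicted_positive:
            total += w  # a weight of 0 masks the example
    return total

print(true_negatives([0, 0, 1, 0], [0.2, 0.7, 0.9, 0.1]))  # 2.0
print(true_negatives([0, 0, 1, 0], [0.2, 0.7, 0.9, 0.1],
                     sample_weight=[0.5, 1.0, 1.0, 0.0]))  # 0.5
```

With unit weights the value is the plain true-negative count; masking the last example with weight 0 and down-weighting the first to 0.5 yields 0.5.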
Args

`thresholds`
: (Optional) Defaults to [0.5]. A float value or a Python list/tuple of float threshold values in [0, 1]. A threshold is compared with prediction values to determine the truth value of predictions (i.e., above the threshold is `true`, below is `false`). One metric value is generated for each threshold value.

`name`
: (Optional) Metric name.

`top_k`
: (Optional) Used with a multi-class model to specify that the top-k values should be used to compute the confusion matrix. The net effect is that the non-top-k values are set to -inf and the matrix is then constructed from the average TP, FP, TN, FN across the classes. When `top_k` is used, `metrics_specs.binarize` settings must not be present. Only one of `class_id` or `top_k` should be configured. When `top_k` is set, the default thresholds are [float('-inf')].

`class_id`
: (Optional) Used with a multi-class model to specify which class to compute the confusion matrix for. When `class_id` is used, `metrics_specs.binarize` settings must not be present. Only one of `class_id` or `top_k` should be configured.
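The "one metric value per threshold" behavior of `thresholds` can be illustrated with a small plain-Python sketch (not the tfma implementation; ties at the threshold are counted as negative here, since the docs only state that values above the threshold are positive):

```python
def tn_at_thresholds(labels, predictions, thresholds):
    """True-negative count at each threshold; p > t counts as a positive prediction."""
    return {t: sum(1 for y, p in zip(labels, predictions) if y == 0 and p <= t)
            for t in thresholds}

print(tn_at_thresholds([0, 0, 0, 1, 1],
                       [0.1, 0.4, 0.8, 0.35, 0.9],
                       [0.25, 0.5, 0.75]))  # {0.25: 1, 0.5: 2, 0.75: 2}
```

Raising the threshold can only keep or grow the true-negative count, since more predictions fall at or below it.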
Attributes

`compute_confidence_interval`
: Whether to compute confidence intervals for this metric. Note that this may not completely remove the computational overhead involved in computing a given metric. This is only respected by the jackknife confidence interval method.
Methods
computations
    computations(
        eval_config: Optional[tfma.EvalConfig] = None,
        schema: Optional[schema_pb2.Schema] = None,
        model_names: Optional[List[str]] = None,
        output_names: Optional[List[str]] = None,
        sub_keys: Optional[List[Optional[SubKey]]] = None,
        aggregation_type: Optional[AggregationType] = None,
        class_weights: Optional[Dict[int, float]] = None,
        example_weighted: bool = False,
        query_key: Optional[str] = None
    ) -> tfma.metrics.MetricComputations
Creates computations associated with metric.
from_config
    @classmethod
    from_config(
        config: Dict[str, Any]
    ) -> 'Metric'
Creates a `Metric` instance from its config dictionary.
get_config
    get_config() -> Dict[str, Any]
Returns serializable config.
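The `get_config` / `from_config` pair is the usual serialize-and-rebuild round trip. A generic sketch of the pattern (plain Python with a hypothetical stand-in class, not the tfma internals):

```python
class Metric:
    """Hypothetical stand-in illustrating the config round trip."""

    def __init__(self, thresholds=None, name=None):
        self.thresholds = thresholds
        self.name = name

    def get_config(self):
        # Serializable dict of constructor arguments.
        return {'thresholds': self.thresholds, 'name': self.name}

    @classmethod
    def from_config(cls, config):
        # Rebuild an equivalent metric from its config dict.
        return cls(**config)

m = Metric(thresholds=[0.25, 0.5], name='true_negatives')
clone = Metric.from_config(m.get_config())
print(clone.get_config() == m.get_config())  # True
```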
result
    result(
        tp: float, tn: float, fp: float, fn: float
    ) -> float
Computes the metric value from the TP, TN, FP, and FN values.
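Since this class calculates the number of true negatives, a `result` function of the shape above reduces to returning the `tn` count. A trivial sketch (not the tfma source):

```python
def result(tp: float, tn: float, fp: float, fn: float) -> float:
    # For TrueNegatives, the metric value is the true-negative count itself.
    return tn

print(result(tp=3.0, tn=7.0, fp=1.0, fn=2.0))  # 7.0
```

Sibling metrics derived from the same four counts (precision, recall, specificity, and so on) differ only in this final function.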
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2024-04-26 UTC.