tf.keras.metrics.FBetaScore
Computes F-Beta score.
Inherits From: Metric
tf.keras.metrics.FBetaScore(
average=None,
beta=1.0,
threshold=None,
name='fbeta_score',
dtype=None
)
Formula:

b2 = beta ** 2
f_beta_score = (1 + b2) * (precision * recall) / (precision * b2 + recall)

This is the weighted harmonic mean of precision and recall. Its output range is `[0, 1]`. It works for both multi-class and multi-label classification.
Args

`average`: Type of averaging to be performed across per-class results in the multi-class case. Acceptable values are `None`, `"micro"`, `"macro"`, and `"weighted"`. Defaults to `None`.
- `None`: no averaging is performed; `result()` returns the score for each class.
- `"micro"`: compute metrics globally by counting the total true positives, false negatives, and false positives.
- `"macro"`: compute metrics for each label and return their unweighted mean. This does not take label imbalance into account.
- `"weighted"`: compute metrics for each label and return their average weighted by support (the number of true instances for each label). This alters `"macro"` to account for label imbalance, and can result in a score that is not between precision and recall.

`beta`: Determines the weight given to recall in the harmonic mean between precision and recall (see the formula above). Defaults to `1`.

`threshold`: Elements of `y_pred` greater than `threshold` are converted to 1, and the rest to 0. If `threshold` is `None`, the argmax of `y_pred` is converted to 1, and the rest to 0.

`name`: Optional. String name of the metric instance.

`dtype`: Optional. Data type of the metric result.
Returns

F-Beta Score: float.
Example:

import numpy as np
import keras

metric = keras.metrics.FBetaScore(beta=2.0, threshold=0.5)
y_true = np.array([[1, 1, 1],
                   [1, 0, 0],
                   [1, 1, 0]], np.int32)
y_pred = np.array([[0.2, 0.6, 0.7],
                   [0.2, 0.6, 0.6],
                   [0.6, 0.8, 0.0]], np.float32)
metric.update_state(y_true, y_pred)
result = metric.result()
result
# [0.3846154 , 0.90909094, 0.8333334 ]
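As a further, hedged sketch (reusing `y_true` and `y_pred` from above), the `average` argument collapses the per-class scores into a scalar, and `threshold=None` switches to argmax binarization; the approximate values in the comments follow from applying the formula to these inputs:

micro = keras.metrics.FBetaScore(average="micro", beta=2.0, threshold=0.5)
micro.update_state(y_true, y_pred)
micro.result()  # scalar from global TP/FP/FN counts, approx. 0.6667 here

macro = keras.metrics.FBetaScore(average="macro", beta=2.0, threshold=0.5)
macro.update_state(y_true, y_pred)
macro.result()  # unweighted mean of the per-class scores, approx. 0.7090 here

argmax_metric = keras.metrics.FBetaScore(beta=2.0, threshold=None)
# With threshold=None, only the argmax of each row of y_pred is treated as 1.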
Attributes

`dtype`

`variables`
Methods
add_variable
add_variable(
shape, initializer, dtype=None, aggregation='sum', name=None
)
add_weight
add_weight(
shape=(), initializer=None, dtype=None, name=None
)
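These two methods are mainly used when subclassing `Metric`. A minimal sketch of a custom metric that tracks its state through `add_variable`; the `BinaryTruePositives` class and its variable name below are illustrative, not part of this API:

import keras

class BinaryTruePositives(keras.metrics.Metric):
    # Illustrative subclass: counts true positives in a single scalar variable.
    def __init__(self, name="binary_true_positives", **kwargs):
        super().__init__(name=name, **kwargs)
        self.true_positives = self.add_variable(
            shape=(), initializer="zeros", name="true_positives"
        )

    def update_state(self, y_true, y_pred, sample_weight=None):
        matches = keras.ops.logical_and(
            keras.ops.equal(y_true, 1), keras.ops.equal(y_pred, 1)
        )
        self.true_positives.assign_add(
            keras.ops.sum(keras.ops.cast(matches, self.dtype))
        )

    def result(self):
        return self.true_positives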
from_config
@classmethod
from_config(
config
)
get_config
get_config()
Returns the serializable config of the metric.
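A minimal round-trip sketch combining `get_config()` with the `from_config()` classmethod documented above:

metric = keras.metrics.FBetaScore(beta=2.0, threshold=0.5)
config = metric.get_config()  # serializable dict of constructor arguments
restored = keras.metrics.FBetaScore.from_config(config)  # equivalent fresh instance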
reset_state
reset_state()
Reset all of the metric state variables.
This function is called between epochs/steps,
when a metric is evaluated during training.
result
result()
Compute the current metric value.
Returns

A scalar tensor, or a dictionary of scalar tensors.
stateless_reset_state
stateless_reset_state()
stateless_result
stateless_result(
metric_variables
)
stateless_update_state
stateless_update_state(
metric_variables, *args, **kwargs
)
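The three `stateless_*` methods operate on an explicit list of variable values and return new values instead of mutating the metric, which suits functional (e.g. JAX-style) training loops. A minimal sketch, assuming `y_true` and `y_pred` from the example above:

metric = keras.metrics.FBetaScore(beta=2.0, threshold=0.5)
metric_vars = metric.stateless_reset_state()  # fresh variable values; the metric itself is untouched
metric_vars = metric.stateless_update_state(metric_vars, y_true, y_pred)  # returns updated values
value = metric.stateless_result(metric_vars)  # result computed from the supplied values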
update_state
update_state(
y_true, y_pred, sample_weight=None
)
Accumulate statistics for the metric.
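Together with `result()` and `reset_state()`, this supports the usual stateful evaluation loop. A sketch with hypothetical two-class batch data:

import numpy as np
import keras

metric = keras.metrics.FBetaScore(average="macro", beta=2.0, threshold=0.5)

# Hypothetical (y_true, y_pred) batches.
batches = [
    (np.array([[1, 0], [0, 1]], np.int32),
     np.array([[0.9, 0.2], [0.3, 0.8]], np.float32)),
    (np.array([[1, 1], [0, 0]], np.int32),
     np.array([[0.7, 0.6], [0.4, 0.1]], np.float32)),
]

for y_true_batch, y_pred_batch in batches:
    metric.update_state(y_true_batch, y_pred_batch)  # accumulates TP/FP/FN across batches
epoch_score = metric.result()  # F-beta over everything seen since the last reset
metric.reset_state()           # clear state, e.g. between epochs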
__call__
__call__(
*args, **kwargs
)
Call self as a function.