# tf.keras.metrics.LogCoshError
Computes the logarithm of the hyperbolic cosine of the prediction error.
Inherits From: `MeanMetricWrapper`, `Mean`, `Metric`
```python
tf.keras.metrics.LogCoshError(
    name='logcosh', dtype=None
)
```
#### Formula:

```
error = y_pred - y_true
logcosh = mean(log((exp(error) + exp(-error)) / 2), axis=-1)
```
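The formula can be checked with a small pure-Python sketch (not the Keras implementation; the function name and use of the standard-library `math` module are this example's own):

```python
import math

def log_cosh_error(y_true, y_pred):
    # log(cosh(error)) per element, averaged over the last axis,
    # then averaged over samples. Note: math.cosh overflows for very
    # large errors; production implementations typically use a
    # numerically stable rewrite of log(cosh(x)).
    per_sample = []
    for t_row, p_row in zip(y_true, y_pred):
        errs = [math.log(math.cosh(p - t)) for t, p in zip(t_row, p_row)]
        per_sample.append(sum(errs) / len(errs))
    return sum(per_sample) / len(per_sample)

print(log_cosh_error([[0, 1], [0, 0]], [[1, 1], [0, 0]]))  # ≈ 0.10844523
```

This reproduces the value in the example below: only one element has a nonzero error (1.0), so the result is `log(cosh(1)) / 2 / 2 ≈ 0.1084452`.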
| Args | |
|---------|------------------------------------------------|
| `name` | (Optional) string name of the metric instance. |
| `dtype` | (Optional) data type of the metric result. |
#### Example:
```python
>>> m = keras.metrics.LogCoshError()
>>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])
>>> m.result()
0.10844523
>>> m.reset_state()
>>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],
...                sample_weight=[1, 0])
>>> m.result()
0.21689045
```
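The weighted result follows from the weighted mean: with `sample_weight=[1, 0]` only the first sample contributes, so the metric reduces to that sample's mean log-cosh. A sketch of the arithmetic (plain Python, not Keras internals):

```python
import math

# Per-sample log-cosh values from the example above: the first sample
# has one error of 1.0 across two elements, the second has none.
per_sample = [math.log(math.cosh(1)) / 2, 0.0]
weights = [1, 0]

# Weighted mean: weight each sample's value, divide by the weight sum.
weighted = sum(v * w for v, w in zip(per_sample, weights)) / sum(weights)
print(weighted)  # ≈ 0.21689045
```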
Usage with `compile()` API:

```python
model.compile(optimizer='sgd',
              loss='mse',
              metrics=[keras.metrics.LogCoshError()])
```
| Attributes | |
|-------------|---|
| `dtype` | |
| `variables` | |
## Methods

### `add_variable`

```python
add_variable(
    shape, initializer, dtype=None, aggregation='sum', name=None
)
```

### `add_weight`

```python
add_weight(
    shape=(), initializer=None, dtype=None, name=None
)
```
### `from_config`

```python
@classmethod
from_config(
    config
)
```

### `get_config`

```python
get_config()
```
Return the serializable config of the metric.
### `reset_state`

```python
reset_state()
```
Reset all of the metric state variables.
This function is called between epochs/steps,
when a metric is evaluated during training.
### `result`

```python
result()
```

Compute the current metric value.

| Returns |
|---|
| A scalar tensor, or a dictionary of scalar tensors. |
### `stateless_reset_state`

```python
stateless_reset_state()
```

### `stateless_result`

```python
stateless_result(
    metric_variables
)
```

### `stateless_update_state`

```python
stateless_update_state(
    metric_variables, *args, **kwargs
)
```
### `update_state`

```python
update_state(
    y_true, y_pred, sample_weight=None
)
```
Accumulate statistics for the metric.
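`update_state` accumulates statistics across batches and `result()` reduces them to a single value. A minimal pure-Python sketch of that streaming-mean mechanism (the class and attribute names here are illustrative, not the Keras internals):

```python
# Hypothetical streaming mean: keeps a weighted running total and a
# weight count, so result() can be queried at any point during training.
class RunningMean:
    def __init__(self):
        self.total = 0.0
        self.count = 0.0

    def update_state(self, values, sample_weight=None):
        weights = sample_weight or [1.0] * len(values)
        self.total += sum(v * w for v, w in zip(values, weights))
        self.count += sum(weights)

    def result(self):
        return self.total / self.count if self.count else 0.0

m = RunningMean()
m.update_state([0.2, 0.4])
m.update_state([0.6])
print(m.result())  # mean of 0.2, 0.4, 0.6
```

This is why `reset_state()` is needed between epochs: without it, the running total would keep mixing old and new batches.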
### `__call__`

```python
__call__(
    *args, **kwargs
)
```
Call self as a function.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Some content is licensed under the numpy license.
Last updated 2024-06-07 UTC.