tf.AggregationMethod
A class listing aggregation methods used to combine gradients.
Computing partial derivatives can require aggregating gradient
contributions. This class lists the various methods that can
be used to combine gradients in the graph.
The following aggregation methods are part of the stable API for
aggregating gradients:

- `ADD_N`: All of the gradient terms are summed as part of one operation using the "AddN" op (see `tf.add_n`). This method has the property that all gradients must be ready and buffered separately in memory before any aggregation is performed.
- `DEFAULT`: The system-chosen default aggregation method.
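As a minimal sketch of requesting a stable method explicitly, the snippet below passes `ADD_N` to `tf.gradients` inside a `tf.function`, mirroring the structure of the example later on this page; the particular values (`3.0`, `x * x`) are illustrative, not from the surrounding text.

```python
import tensorflow as tf

@tf.function
def add_n_example():
  x = tf.constant(3.0)
  y = x * x
  z = y + y  # y contributes two gradient terms, combined with the AddN op
  return tf.gradients(z, [x, y],
      aggregation_method=tf.AggregationMethod.ADD_N)

# z = 2*y and y = x**2, so dz/dy = 2 and dz/dx = 4*x = 12 at x = 3.
grads = add_n_example()
```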
The following aggregation methods are experimental and may not
be supported in future releases:

- `EXPERIMENTAL_TREE`: Gradient terms are summed in pairs using the "AddN" op. This method of summing gradients may reduce performance, but it can improve memory utilization because the gradients can be released earlier.
- `EXPERIMENTAL_ACCUMULATE_N`: Same as `EXPERIMENTAL_TREE`.
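The memory trade-off between the two strategies can be sketched in plain Python (this is an illustration of the idea, not TensorFlow's internals): summing all terms in one call requires every term to be buffered at once, while a pairwise "tree" reduction lets earlier buffers be released as intermediate sums replace their inputs.

```python
def sum_all_at_once(terms):
    # One operation over all terms: every term must be live in memory
    # before any aggregation happens (the ADD_N-style behavior).
    return sum(terms)

def tree_sum(terms):
    # Reduce terms in pairs; each pass halves the number of live values,
    # which is why the tree scheme can free gradient buffers earlier.
    while len(terms) > 1:
        terms = [sum(terms[i:i + 2]) for i in range(0, len(terms), 2)]
    return terms[0]
```

Both reductions produce the same total; only the lifetime of the intermediate buffers differs.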
Example usage when computing gradient:

```python
@tf.function
def example():
  x = tf.constant(1.0)
  y = x * 2.0
  z = y + y + y + y
  return tf.gradients(z, [x, y],
      aggregation_method=tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N)

example()
# [<tf.Tensor: shape=(), dtype=float32, numpy=8.0>,
#  <tf.Tensor: shape=(), dtype=float32, numpy=4.0>]
```
| Class Variables | |
|---|---|
| `ADD_N` | `0` |
| `DEFAULT` | `0` |
| `EXPERIMENTAL_ACCUMULATE_N` | `2` |
| `EXPERIMENTAL_TREE` | `1` |
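The class variables are plain integers with the values listed in the table above, so they can be compared directly; in particular, `DEFAULT` and `ADD_N` share the value `0`.

```python
import tensorflow as tf

# Values taken from the class-variables table: DEFAULT and ADD_N are both 0.
assert tf.AggregationMethod.DEFAULT == tf.AggregationMethod.ADD_N == 0
assert tf.AggregationMethod.EXPERIMENTAL_TREE == 1
assert tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N == 2
```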
Last updated 2024-04-26 UTC.