    # Explicitly pass shape and type
    tf.math.accumulate_n(
        [a, b, a], shape=[2, 2], tensor_dtype=tf.int32).numpy()
    array([[ 7,  4],
           [ 6, 14]], dtype=int32)
See Also:
tf.reduce_sum(inputs, axis=0) - This performs the same mathematical
operation, but tf.add_n may be more efficient because it sums the
tensors directly. reduce_sum, on the other hand, calls
tf.convert_to_tensor on the list of tensors, unnecessarily stacking
them into a single tensor before summing.
tf.add_n - This is another Python wrapper for the same Op. It has
nearly identical functionality.
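The trade-off described above can be checked directly. A minimal sketch (assuming TensorFlow 2.x in eager mode) showing that the direct sum and the stack-then-reduce path produce the same result:

```python
import tensorflow as tf

a = tf.constant([[1, 2], [3, 4]])
b = tf.constant([[5, 0], [0, 6]])

# Sums the tensors directly, with no intermediate stacking.
direct = tf.math.add_n([a, b, a])

# Converts the list into a single stacked (3, 2, 2) tensor first,
# then reduces along axis 0 -- same values, extra intermediate tensor.
stacked = tf.reduce_sum([a, b, a], axis=0)

print(direct.numpy())
assert (direct.numpy() == stacked.numpy()).all()
```

Both calls yield `[[7, 4], [6, 14]]`; the difference is only in how the intermediate work is materialized.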
Args
- `inputs`: A list of `Tensor` objects, each with the same shape and type.
- `shape`: Expected shape of elements of `inputs` (optional). Also controls
  the output shape of this op, which may affect type inference in other ops.
  A value of `None` means "infer the input shape from the shapes in `inputs`".
- `tensor_dtype`: Expected data type of `inputs` (optional). A value of
  `None` means "infer the input dtype from `inputs[0]`".
- `name`: A name for the operation (optional).
Returns
A Tensor of the same shape and type as the elements of inputs.
Raises
ValueError
If the elements of inputs don't all have the same shape and dtype, or
if the shape cannot be inferred.
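For illustration, a minimal sketch of that failure mode (assuming TensorFlow 2.x in eager mode): passing tensors with incompatible shapes raises `ValueError` before any summation runs.

```python
import tensorflow as tf

a = tf.constant([[1, 2], [3, 4]])  # shape (2, 2)
c = tf.constant([1, 2, 3])         # shape (3,) -- incompatible

try:
    tf.math.accumulate_n([a, c])
except ValueError as err:
    print("raised ValueError:", err)
```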
Note: The input must be a list or tuple. This function does not handle
`IndexedSlices`.

Deprecated: `tf.math.accumulate_n` is deprecated and will be removed in a
future version. Use `tf.math.add_n` instead.

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.16.1/tensorflow/python/ops/math_ops.py#L3976-L4059)

Last updated 2024-04-26 UTC.