tf.tensor_scatter_nd_sub
Subtracts sparse `updates` from an existing tensor according to `indices`.
tf.tensor_scatter_nd_sub(
tensor: Annotated[Any, TV_TensorScatterSub_T],
indices: Annotated[Any, TV_TensorScatterSub_Tindices],
updates: Annotated[Any, TV_TensorScatterSub_T],
name=None
) -> Annotated[Any, TV_TensorScatterSub_T]
Used in the notebooks: Introduction to tensor slicing (https://www.tensorflow.org/guide/tensor_slicing)
This operation creates a new tensor by subtracting sparse `updates` from the passed-in `tensor`. It is very similar to `tf.scatter_nd_sub`, except that the updates are subtracted from an existing tensor (as opposed to a variable). If the memory for the existing tensor cannot be re-used, a copy is made and updated.
`indices` is an integer tensor containing indices into the input `tensor`. The last dimension of `indices` can be at most the rank of `tensor`:

indices.shape[-1] <= tensor.shape.rank

The last dimension of `indices` corresponds to indices into elements (if `indices.shape[-1] == tensor.shape.rank`) or slices (if `indices.shape[-1] < tensor.shape.rank`) along dimension `indices.shape[-1]` of `tensor`. `updates` is a tensor with shape:

indices.shape[:-1] + tensor.shape[indices.shape[-1]:]
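As a quick sanity check, this shape rule can be reproduced with plain Python tuples. The shapes below are illustrative (they match the slice example later on this page):

```python
# Illustrative shapes: two slice indices into a rank-3 tensor.
indices_shape = (2, 1)     # indices.shape
tensor_shape = (4, 4, 4)   # shape of the tensor being updated

# The index depth decides whether updates are elements or slices.
index_depth = indices_shape[-1]
updates_shape = indices_shape[:-1] + tensor_shape[index_depth:]
print(updates_shape)  # (2, 4, 4)
```

Because `index_depth` is 1 (less than the rank, 3), each index selects a whole `(4, 4)` slice, so `updates` must supply one such slice per index.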
The simplest form of `tf.tensor_scatter_nd_sub` is to subtract individual elements from a tensor by index. For example, say we want to subtract 4 scattered values from a rank-1 tensor with 8 elements.

In Python, this scatter subtract operation would look like this:
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
tensor = tf.ones([8], dtype=tf.int32)
updated = tf.tensor_scatter_nd_sub(tensor, indices, updates)
print(updated)
The resulting tensor would look like this:
[1, -10, 1, -9, -8, 1, 1, -11]
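For readers without TensorFlow at hand, the element-wise case can be emulated in plain Python. The helper `scatter_nd_sub_py` below is an illustrative sketch, not part of the TensorFlow API:

```python
def scatter_nd_sub_py(tensor, indices, updates):
    # Illustrative emulation of the element-wise case:
    # subtract each update at its (rank-1) index.
    out = list(tensor)
    for (i,), upd in zip(indices, updates):
        out[i] -= upd
    return out

tensor = [1] * 8                    # like tf.ones([8], dtype=tf.int32)
indices = [[4], [3], [1], [7]]
updates = [9, 10, 11, 12]
result = scatter_nd_sub_py(tensor, indices, updates)
print(result)  # [1, -10, 1, -9, -8, 1, 1, -11]
```

Each index row names one element position, and the update at the same row is subtracted from it; untouched positions keep their original value of 1.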
We can also subtract entire slices of a higher-rank tensor all at once. For example, we can subtract two slices in the first dimension of a rank-3 tensor, using two matrices of new values.

In Python, this scatter subtract operation would look like this:
indices = tf.constant([[0], [2]])
updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],
[7, 7, 7, 7], [8, 8, 8, 8]],
[[5, 5, 5, 5], [6, 6, 6, 6],
[7, 7, 7, 7], [8, 8, 8, 8]]])
tensor = tf.ones([4, 4, 4], dtype=tf.int32)
updated = tf.tensor_scatter_nd_sub(tensor, indices, updates)
print(updated)
The resulting tensor would look like this:
[[[-4, -4, -4, -4], [-5, -5, -5, -5], [-6, -6, -6, -6], [-7, -7, -7, -7]],
[[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]],
[[-4, -4, -4, -4], [-5, -5, -5, -5], [-6, -6, -6, -6], [-7, -7, -7, -7]],
[[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]]
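The slice case can be emulated the same way in plain Python. `scatter_slices_sub_py` below is again an illustrative sketch, handling only indices into the first dimension:

```python
def scatter_slices_sub_py(tensor, indices, updates):
    # Illustrative emulation of the slice case: each index row picks
    # a whole slice along the first dimension to subtract from.
    out = [[[v for v in row] for row in mat] for mat in tensor]  # deep copy
    for (i,), upd in zip(indices, updates):
        out[i] = [[a - b for a, b in zip(row_out, row_upd)]
                  for row_out, row_upd in zip(out[i], upd)]
    return out

tensor = [[[1] * 4 for _ in range(4)] for _ in range(4)]  # like tf.ones([4, 4, 4])
updates_slice = [[5] * 4, [6] * 4, [7] * 4, [8] * 4]
result = scatter_slices_sub_py(tensor, [[0], [2]], [updates_slice, updates_slice])
print(result[0])  # [[-4, -4, -4, -4], [-5, -5, -5, -5], [-6, -6, -6, -6], [-7, -7, -7, -7]]
print(result[1])  # [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]
```

Because each index has depth 1 against a rank-3 tensor, every update is a full `(4, 4)` matrix subtracted from the matching slice, while slices 1 and 3 are left untouched.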
Note that on CPU, if an out-of-bound index is found, an error is returned. On GPU, if an out-of-bound index is found, the index is ignored.
Args:
  `tensor`: A `Tensor`. Tensor to copy/update.
  `indices`: A `Tensor`. Must be one of the following types: `int32`, `int64`. Index tensor.
  `updates`: A `Tensor`. Must have the same type as `tensor`. Updates to scatter into output.
  `name`: A name for the operation (optional).

Returns:
  A `Tensor`. Has the same type as `tensor`.
Last updated 2024-04-26 UTC.