Currently only one dimension of the full variable can be sliced, and the
full variable can be reconstructed by the concatenation of the returned
list along that dimension.
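For illustration, here is a minimal sketch of that behavior, assuming TF1-style
graph execution (the function is deprecated in favor of tf.get_variable with a
partitioner set): a [10, 20] variable is split into 3 slices along dimension 0
and reconstructed with tf.concat.

    import tensorflow as tf

    tf.compat.v1.disable_eager_execution()  # this API targets graph mode

    # Split a [10, 20] variable into 3 slices along dimension 0.
    slices = tf.compat.v1.create_partitioned_variables(
        shape=[10, 20],
        slicing=[3, 1],  # 3 partitions in dim 0, dim 1 left whole
        initializer=tf.compat.v1.zeros_initializer(),
        name="partitioned_example")

    # Concatenating the returned list along the sliced dimension
    # reconstructs the full variable.
    full = tf.concat(slices, axis=0)

    with tf.compat.v1.Session() as sess:
        sess.run(tf.compat.v1.global_variables_initializer())
        print(sess.run(tf.shape(full)))  # expected: [10 20]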
Args
shape
List of integers. The shape of the full variable.
slicing
List of integers. How to partition the variable.
Must be of the same length as shape. Each value
indicates how many slices to create in the corresponding
dimension. Presently only one of the values can be more than 1;
that is, the variable can only be sliced along one dimension.
For convenience, the requested number of partitions does not have to
divide the corresponding dimension evenly. If it does not, the
shapes of the partitions are incremented by 1 starting from partition
0 until all slack is absorbed (see the sketch after this argument list).
The adjustment rules may change in the future, but as you can save/restore
these variables with different slicing specifications, this should not be
a problem.
initializer
A Tensor of shape shape or a variable initializer
function. If a function, it will be called once for each slice,
passing the shape and data type of the slice as parameters. The
function must return a tensor with the same shape as the slice.
dtype
Type of the variables. Ignored if initializer is a Tensor.
trainable
If True, also adds all the variables to the graph collection
GraphKeys.TRAINABLE_VARIABLES.
collections
List of graph collections keys to add the variables to.
Defaults to [GraphKeys.GLOBAL_VARIABLES].
name
Optional name for the full variable. Defaults to
"PartitionedVariable" and gets uniquified automatically.
reuse
Boolean or None; if True and name is set, previously created
variables are reused; if False, new variables are created; if None,
the reuse setting is inherited from the parent scope.
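To illustrate the slack-absorption rule for slicing and the per-slice calls
made to a function initializer, here is a sketch under the same graph-mode
assumption (the name logging_initializer and the expected shapes are
illustrative): 10 rows split over 3 partitions should yield slices of 4, 3,
and 3 rows.

    import tensorflow as tf

    tf.compat.v1.disable_eager_execution()

    seen_shapes = []

    def logging_initializer(shape, dtype=None, partition_info=None):
        # Called once per slice with that slice's shape and dtype;
        # must return a tensor of exactly that shape.
        seen_shapes.append(list(shape))
        if dtype is None:
            dtype = tf.float32
        return tf.zeros(shape, dtype=dtype)

    tf.compat.v1.create_partitioned_variables(
        shape=[10, 20],
        slicing=[3, 1],
        initializer=logging_initializer,
        name="uneven_example")

    # 10 rows / 3 partitions leaves 1 row of slack, absorbed starting
    # from partition 0.
    print(seen_shapes)  # expected: [[4, 20], [3, 20], [3, 20]]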
[null,null,["Last updated 2024-04-26 UTC."],[],[],null,["# tf.compat.v1.create_partitioned_variables\n\n\u003cbr /\u003e\n\n|-----------------------------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.16.1/tensorflow/python/ops/partitioned_variables.py#L275-L347) |\n\nCreate a list of partitioned variables according to the given `slicing`. (deprecated) \n\n tf.compat.v1.create_partitioned_variables(\n shape,\n slicing,\n initializer,\n dtype=../../../tf/dtypes#float32,\n trainable=True,\n collections=None,\n name=None,\n reuse=None\n )\n\n| **Deprecated:** THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use `tf.get_variable` with a partitioner set.\n\nCurrently only one dimension of the full variable can be sliced, and the\nfull variable can be reconstructed by the concatenation of the returned\nlist along that dimension.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ---- ||\n|---------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `shape` | List of integers. The shape of the full variable. |\n| `slicing` | List of integers. How to partition the variable. Must be of the same length as `shape`. Each value indicate how many slices to create in the corresponding dimension. Presently only one of the values can be more than 1; that is, the variable can only be sliced along one dimension. \u003cbr /\u003e For convenience, The requested number of partitions does not have to divide the corresponding dimension evenly. If it does not, the shapes of the partitions are incremented by 1 starting from partition 0 until all slack is absorbed. The adjustment rules may change in the future, but as you can save/restore these variables with different slicing specifications this should not be a problem. |\n| `initializer` | A `Tensor` of shape `shape` or a variable initializer function. If a function, it will be called once for each slice, passing the shape and data type of the slice as parameters. The function must return a tensor with the same shape as the slice. |\n| `dtype` | Type of the variables. Ignored if `initializer` is a `Tensor`. |\n| `trainable` | If True also add all the variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES`. |\n| `collections` | List of graph collections keys to add the variables to. Defaults to `[GraphKeys.GLOBAL_VARIABLES]`. |\n| `name` | Optional name for the full variable. Defaults to `\"PartitionedVariable\"` and gets uniquified automatically. |\n| `reuse` | Boolean or `None`; if `True` and name is set, it would reuse previously created variables. if `False` it will create new variables. if `None`, it would inherit the parent scope reuse. 
|\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Returns ------- ||\n|---|---|\n| A list of Variables corresponding to the slicing. ||\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Raises ------ ||\n|--------------|---------------------------------------|\n| `ValueError` | If any of the arguments is malformed. |\n\n\u003cbr /\u003e"]]
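A sketch of the malformed-argument case, under the same graph-mode
assumption: requesting more than one sliced dimension should be rejected,
since only one entry of slicing may exceed 1.

    import tensorflow as tf

    tf.compat.v1.disable_eager_execution()

    try:
        tf.compat.v1.create_partitioned_variables(
            shape=[10, 20],
            slicing=[2, 2],  # two sliced dimensions -> malformed
            initializer=tf.compat.v1.zeros_initializer())
    except ValueError as err:
        print("rejected:", err)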