Use this when your inputs are sparse, but you want to convert them to a dense
representation (e.g., to feed to a DNN).
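Conceptually, the column maps each example's sparse categorical IDs to rows of a dense embedding matrix and combines them ('mean' by default) into one dense vector per example. A minimal pure-Python sketch of that lookup-and-combine step (the table values and IDs below are made up for illustration, not taken from TensorFlow):

```python
# Hypothetical 4-row embedding table with dimension 2 (values are made up).
embedding_table = [
    [1.0, 2.0],
    [3.0, 4.0],
    [5.0, 6.0],
    [7.0, 8.0],
]

def embed_example(sparse_ids, table):
    """Look up each ID's embedding row and combine with 'mean' (the default)."""
    rows = [table[i] for i in sparse_ids]
    dim = len(table[0])
    return [sum(r[d] for r in rows) / len(rows) for d in range(dim)]

# One example containing two categorical IDs -> one dense vector.
dense = embed_example([0, 2], embedding_table)  # mean of rows 0 and 2
```

The real column learns the table as a trainable variable; this sketch only shows the shape of the sparse-to-dense conversion.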
Args
categorical_column
A CategoricalColumn created by a
categorical_column_with_* function. This column produces the sparse IDs
that are inputs to the embedding lookup.
dimension
An integer specifying the dimension of the embedding; must be > 0.
combiner
A string specifying how to reduce if there are multiple entries in
a single row. Currently 'mean', 'sqrtn' and 'sum' are supported, with
'mean' the default. 'sqrtn' often achieves good accuracy, in particular
with bag-of-words columns. Each of these can be thought of as an
example-level normalization on the column. For more information, see
tf.nn.embedding_lookup_sparse.
initializer
A variable initializer function to be used in embedding
variable initialization. If not specified, defaults to
truncated_normal_initializer with mean 0.0 and standard deviation
1/sqrt(dimension).
ckpt_to_load_from
String representing checkpoint name/pattern from which to
restore column weights. Required if tensor_name_in_ckpt is not None.
tensor_name_in_ckpt
Name of the Tensor in ckpt_to_load_from from which
to restore the column weights. Required if ckpt_to_load_from is not
None.
max_norm
If not None, embedding values are l2-normalized to this value.
trainable
Whether or not the embedding is trainable. Default is True.
use_safe_embedding_lookup
If True, uses safe_embedding_lookup_sparse
instead of embedding_lookup_sparse, which ensures there are no empty
rows and that all weights and ids are positive, at the cost of extra
compute. This only applies to rank-2 (NxM) input tensors. Defaults to
True; consider turning it off if these checks are not needed. Note that
empty rows do not trigger an error, though the corresponding output may
be 0 or omitted.
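The three combiners differ only in how the per-row sum is normalized: 'sum' leaves it as-is, 'mean' divides by the number of rows, and 'sqrtn' divides by its square root. A pure-Python sketch of that reduction (the row values are made up for illustration):

```python
import math

def combine(rows, combiner):
    """Reduce several embedding rows for one example, per the combiner arg.

    'sum' adds the rows elementwise, 'mean' divides that sum by the number
    of rows, and 'sqrtn' divides it by sqrt(number of rows).
    """
    dim = len(rows[0])
    totals = [sum(r[d] for r in rows) for d in range(dim)]
    if combiner == "sum":
        return totals
    if combiner == "mean":
        return [t / len(rows) for t in totals]
    if combiner == "sqrtn":
        return [t / math.sqrt(len(rows)) for t in totals]
    raise ValueError(f"unsupported combiner: {combiner!r}")

rows = [[2.0, 4.0], [6.0, 8.0]]  # two looked-up embeddings (made-up values)
```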
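max_norm acts as an L2-norm ceiling on each embedding vector: a vector whose norm exceeds the limit is scaled down to that norm, and shorter vectors pass through unchanged. A small sketch of that clipping (assumed behavior, with made-up values):

```python
import math

def clip_by_l2_norm(vec, max_norm):
    """Scale vec down if its L2 norm exceeds max_norm; otherwise return it unchanged."""
    norm = math.sqrt(sum(x * x for x in vec))
    if norm <= max_norm:
        return list(vec)
    return [x * max_norm / norm for x in vec]

clipped = clip_by_l2_norm([3.0, 4.0], 1.0)  # norm 5.0 -> rescaled to norm 1.0
```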
Returns
DenseColumn that converts from sparse input.
Raises
ValueError
if dimension is not > 0.
ValueError
if exactly one of ckpt_to_load_from and tensor_name_in_ckpt
is specified (they must be given together).
ValueError
if initializer is specified and is not callable.
RuntimeError
if eager execution is enabled.
tf.feature_column.embedding_column (deprecated)

Warning: tf.feature_column is not recommended for new code. Instead, feature preprocessing can be done directly using either Keras preprocessing layers or the one-stop utility tf.keras.utils.FeatureSpace built on top of them. See the migration guide (https://tensorflow.org/guide/migrate) for details.

Compat alias for migration: tf.compat.v1.feature_column.embedding_column

Source: https://github.com/tensorflow/tensorflow/blob/v2.16.1/tensorflow/python/feature_column/feature_column_v2.py#L435-L515

tf.feature_column.embedding_column(
    categorical_column,
    dimension,
    combiner='mean',
    initializer=None,
    ckpt_to_load_from=None,
    tensor_name_in_ckpt=None,
    max_norm=None,
    trainable=True,
    use_safe_embedding_lookup=True
)

Last updated 2024-04-26 UTC.