# tf.tpu.experimental.embedding.QuantizationConfig

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.16.1/tensorflow/python/tpu/tpu_embedding_v2_utils.py#L923-L984)

Settings for simulated quantization of the TPU embedding table.

#### View aliases

**Compat aliases for migration**

See the
[Migration guide](https://www.tensorflow.org/guide/migrate) for
more details.

[`tf.compat.v1.tpu.experimental.embedding.QuantizationConfig`](https://www.tensorflow.org/api_docs/python/tf/tpu/experimental/embedding/QuantizationConfig)

    tf.tpu.experimental.embedding.QuantizationConfig(
        num_buckets: int, lower: float, upper: float
    )

When simulated quantization is enabled, the results of the embedding lookup
are clipped and quantized according to the settings here before the combiner
is applied.

For example, to quantize `input` the following is done:

    if input < lower:
        input = lower
    if input > upper:
        input = upper
    quantum = (upper - lower) / (num_buckets - 1)
    input = math.floor((input - lower) / quantum + 0.5) * quantum + lower

See tensorflow/core/protobuf/tpu/optimization_parameters.proto for more
details.

**Note:** This does not change the storage type of the embedding table, which
remains float32, as does the saved variable in the checkpoint. You will have
to quantize the variable manually, typically with the same algorithm and
settings as above.

| Args ||
|---------------|---------------------------------------------------------|
| `num_buckets` | The number of quantization buckets; must be at least 2. |
| `lower`       | The lower bound for the quantization range.             |
| `upper`       | The upper bound for the quantization range.             |

| Raises ||
|--------------|----------------------------------|
| `ValueError` | If `num_buckets` is less than 2. |

Last updated 2024-04-26 UTC.
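The clip-and-quantize step described above can be sketched as a small plain-Python helper. This is only an illustration of the formula, not TensorFlow code; the function name `simulate_quantization` is hypothetical:

```python
import math

def simulate_quantization(value, num_buckets, lower, upper):
    """Clip `value` to [lower, upper], then snap it to one of
    `num_buckets` evenly spaced levels between the two bounds."""
    if num_buckets < 2:
        raise ValueError("num_buckets must be at least 2.")
    value = min(max(value, lower), upper)          # clip to the range
    quantum = (upper - lower) / (num_buckets - 1)  # spacing between levels
    return math.floor((value - lower) / quantum + 0.5) * quantum + lower

# With num_buckets=5 over [0.0, 1.0] the levels are 0.0, 0.25, 0.5, 0.75, 1.0.
print(simulate_quantization(0.30, 5, 0.0, 1.0))  # -> 0.25 (nearest level)
print(simulate_quantization(1.70, 5, 0.0, 1.0))  # -> 1.0 (clipped to upper)
```

The `floor(x + 0.5)` idiom rounds to the nearest bucket index, so each input lands on the closest of the `num_buckets` representable levels.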