Module: tf.quantization
Public API for tf._api.v2.quantization namespace
Modules
experimental module: Public API for tf._api.v2.quantization.experimental namespace
Functions
dequantize(...): Dequantize the 'input' tensor into a float or bfloat16 Tensor.
fake_quant_with_min_max_args(...): Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same shape and type.
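To make the summaries above concrete, here is a minimal pure-Python sketch of the fake-quantization arithmetic: clamp to [min, max], snap to one of 2^num_bits levels, map back to float. This is illustrative only, not the TF kernel; the real op additionally "nudges" min/max so that 0.0 is exactly representable, which is omitted here.

```python
# Simplified fake quantization (illustrative sketch, not the TF kernel).
# Values are clamped to [min_val, max_val], snapped to the nearest of
# 2**num_bits quantization levels, then mapped back to float.

def fake_quant(x, min_val=-6.0, max_val=6.0, num_bits=8):
    levels = 2 ** num_bits - 1               # 255 steps for 8 bits
    scale = (max_val - min_val) / levels     # width of one quantization step
    clamped = min(max(x, min_val), max_val)  # clamp into representable range
    q = round((clamped - min_val) / scale)   # nearest integer level
    return q * scale + min_val               # back to float

y = fake_quant(0.3)  # 0.3 snapped to the nearest of 256 levels in [-6, 6]
```

Values outside the range are clamped to the endpoints, which is also why the corresponding gradient op zeroes gradients there.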
fake_quant_with_min_max_args_gradient(...): Compute gradients for a FakeQuantWithMinMaxArgs operation.
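Per the op's documentation, the backpropagated gradient passes through wherever the input fell inside [min, max] and is zeroed where the input was clamped (a straight-through estimator). A toy scalar version of that rule:

```python
# Toy scalar version of the FakeQuantWithMinMaxArgs gradient rule:
# pass the upstream gradient where the input was inside [min, max],
# zero it where the input was clamped.

def fake_quant_grad(upstream_grad, x, min_val=-6.0, max_val=6.0):
    return upstream_grad if min_val <= x <= max_val else 0.0

g_inside = fake_quant_grad(1.0, 0.5)    # input in range: gradient flows
g_outside = fake_quant_grad(1.0, 42.0)  # input clamped: gradient is zero
```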
fake_quant_with_min_max_vars(...): Fake-quantize the 'inputs' tensor of type float via global float scalars.
fake_quant_with_min_max_vars_gradient(...): Compute gradients for a FakeQuantWithMinMaxVars operation.
fake_quant_with_min_max_vars_per_channel(...): Fake-quantize the 'inputs' tensor of type float via per-channel floats.
fake_quant_with_min_max_vars_per_channel_gradient(...): Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation.
quantize(...): Quantize the 'input' tensor of type float to 'output' tensor of type 'T'.
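As an illustration of the quantize/dequantize arithmetic, here is a hedged pure-Python sketch of a MIN_COMBINED-style mapping for an unsigned 8-bit target. The real ops are tensor-valued, support several modes, and also return the min/max ranges actually used; this sketch shows only the scalar mapping.

```python
# Illustrative MIN_COMBINED-style mapping of a float to an unsigned
# 8-bit value and back. Not the TF kernel: the real ops operate on
# tensors and return the output min/max ranges alongside the data.

def quantize_u8(x, min_range, max_range):
    type_range = 255.0  # numeric range of an 8-bit unsigned target
    scaled = (x - min_range) * type_range / (max_range - min_range)
    return int(round(min(max(scaled, 0.0), type_range)))

def dequantize_u8(q, min_range, max_range):
    type_range = 255.0
    return min_range + q * (max_range - min_range) / type_range

q = quantize_u8(0.5, 0.0, 1.0)
x = dequantize_u8(q, 0.0, 1.0)  # recovers 0.5 up to quantization error
```

The round trip loses at most one quantization step of precision, which is what quantize_and_dequantize exposes as a single op for simulating quantization effects in float graphs.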
quantize_and_dequantize(...): Quantizes then dequantizes a tensor. (deprecated)
quantize_and_dequantize_v2(...): Quantizes then dequantizes a tensor.
quantized_concat(...): Concatenates quantized tensors along one dimension.
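Quantized tensors with different min/max ranges cannot simply be concatenated bit-for-bit; the op takes the per-input ranges and produces a combined output range. A toy sketch of that idea (an assumption-laden simplification, not the TF implementation): dequantize each input, take the union of the ranges, and requantize into it.

```python
# Toy requantize-and-concat for uint8-style values (illustrative only).
# Each input is a (values, min_range, max_range) triple; the output
# range is the union of the input ranges, as quantized_concat's
# output_min/output_max suggest.

def requantize_concat(tensors_with_ranges):
    out_min = min(t[1] for t in tensors_with_ranges)
    out_max = max(t[2] for t in tensors_with_ranges)
    out = []
    for values, lo, hi in tensors_with_ranges:
        for q in values:
            x = lo + q * (hi - lo) / 255.0  # dequantize in the input range
            out.append(int(round((x - out_min) * 255.0 / (out_max - out_min))))
    return out, out_min, out_max
```

Note that requantizing into a wider combined range coarsens the effective resolution of inputs that had narrower ranges.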
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Some content is licensed under the numpy license.
Last updated 2024-04-26 UTC.