tf.lite.experimental.QuantizationDebugger
Debugger for Quantized TensorFlow Lite debug mode models.
tf.lite.experimental.QuantizationDebugger(
quant_debug_model_path: Optional[str] = None,
quant_debug_model_content: Optional[bytes] = None,
float_model_path: Optional[str] = None,
float_model_content: Optional[bytes] = None,
debug_dataset: Optional[Callable[[], Iterable[Sequence[np.ndarray]]]] = None,
debug_options: Optional[tf.lite.experimental.QuantizationDebugOptions] = None,
converter: Optional[TFLiteConverter] = None
) -> None
Used in the notebooks: [Inspecting Quantization Errors with Quantization Debugger](https://www.tensorflow.org/lite/performance/quantization_debugger)
This debugger can run TensorFlow Lite models converted with debug ops and
collect debug information. It calculates statistics from user-defined
post-processing functions as well as default ones.
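To illustrate the kind of per-layer statistic involved, here is a minimal NumPy sketch of a quantization-error metric. The function and metric names are illustrative, not the debugger's exact defaults, though RMSE divided by the quantization scale is one of the metrics the debugger reports:

```python
import numpy as np

def layer_error_stats(float_out: np.ndarray, quant_out: np.ndarray,
                      scale: float) -> dict:
    """Illustrative per-layer quantization error statistics.

    Compares a layer's float output with its dequantized output, similar
    in spirit to the per-layer metrics QuantizationDebugger reports.
    """
    err = quant_out - float_out
    return {
        "mean_error": float(err.mean()),
        "max_abs_error": float(np.abs(err).max()),
        # RMSE divided by the quantization scale: values near 1/sqrt(12)
        # (~0.289) suggest the error is dominated by rounding alone.
        "rmse_over_scale": float(np.sqrt((err ** 2).mean()) / scale),
    }

# Hypothetical layer outputs for demonstration.
rng = np.random.default_rng(0)
f = rng.normal(size=(1, 16)).astype(np.float32)
q = f + rng.uniform(-0.005, 0.005, size=f.shape).astype(np.float32)
stats = layer_error_stats(f, q, scale=0.01)
```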
Args:

| Argument | Description |
|---|---|
| `quant_debug_model_path` | Path to the quantized debug TFLite model file. |
| `quant_debug_model_content` | Content of the quantized debug TFLite model. |
| `float_model_path` | Path to the float TFLite model file. |
| `float_model_content` | Content of the float TFLite model. |
| `debug_dataset` | A factory function that returns a dataset generator, used to generate input samples (lists of `np.ndarray`) for the model. The generated elements must have the same types and shapes as the model's inputs. |
| `debug_options` | Debug options to debug the given model. |
| `converter` | Optional; use a converter instead of a pre-quantized model. |
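For example, a `debug_dataset` factory might look like the following sketch. The single float32 input of shape `(1, 28, 28, 1)` and the sample count are assumptions for illustration; a real factory must match the actual model's input signature:

```python
import numpy as np

def representative_dataset():
    """Yields input samples matching the model's input signature.

    Each element is a list of np.ndarray, one per model input; the
    single-input shape (1, 28, 28, 1) here is an assumed example.
    """
    rng = np.random.default_rng(42)
    for _ in range(16):
        yield [rng.random((1, 28, 28, 1), dtype=np.float32)]

# The factory itself (not its output) is what gets passed:
# debugger = tf.lite.experimental.QuantizationDebugger(
#     ..., debug_dataset=representative_dataset)
samples = list(representative_dataset())
```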
Raises:
  `ValueError`: If the debugger could not be created.
Methods
get_debug_quantized_model
[View source](https://github.com/tensorflow/tensorflow/blob/v2.16.1/tensorflow/lite/tools/optimize/debugging/python/debugger.py#L261-L273)
get_debug_quantized_model() -> bytes
Returns an instrumented quantized model.
Converts the quantized model with the initialized converter and returns the
model bytes. The model is instrumented with numeric verification operations
and should only be used for debugging.
Returns:
  Model bytes for the instrumented quantized model.

Raises:
  `ValueError`: If a converter was not passed to the debugger.
get_nondebug_quantized_model
[View source](https://github.com/tensorflow/tensorflow/blob/v2.16.1/tensorflow/lite/tools/optimize/debugging/python/debugger.py#L247-L259)
get_nondebug_quantized_model() -> bytes
Returns a non-instrumented quantized model.
Converts the quantized model with the initialized converter and returns the
model bytes. The model is not instrumented with numeric verification
operations.
Returns:
  Model bytes for the non-instrumented quantized model.

Raises:
  `ValueError`: If a converter was not passed to the debugger.
layer_statistics_dump
[View source](https://github.com/tensorflow/tensorflow/blob/v2.16.1/tensorflow/lite/tools/optimize/debugging/python/debugger.py#L524-L549)
layer_statistics_dump(
file: IO[str]
) -> None
Dumps layer statistics into the given file in CSV format.
Args:
  `file`: A file or file-like object to write to.
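The dumped CSV can be inspected with standard tooling. Below is a sketch using the stdlib `csv` module on a hand-made sample; the column names in the sample are illustrative, not the exact dump schema:

```python
import csv
import io

# A hand-made CSV standing in for layer_statistics_dump output;
# the column names here are illustrative placeholders.
sample_csv = io.StringIO(
    "op_name,tensor_idx,mean_error,max_abs_error\n"
    "CONV_2D,3,0.001,0.01\n"
    "FULLY_CONNECTED,7,0.004,0.09\n"
)

rows = list(csv.DictReader(sample_csv))
# Sort layers by worst absolute error to find quantization hot spots.
worst = sorted(rows, key=lambda r: float(r["max_abs_error"]), reverse=True)
```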
run
[View source](https://github.com/tensorflow/tensorflow/blob/v2.16.1/tensorflow/lite/tools/optimize/debugging/python/debugger.py#L326-L330)
run() -> None
Runs models and gets metrics.
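Putting the pieces together, a typical session might look like the following minimal sketch. The tiny model and random calibration data are placeholders purely for demonstration; a real workflow would use the actual model and representative data:

```python
import numpy as np
import tensorflow as tf

class TinyModel(tf.Module):
    """Placeholder model purely for demonstration."""

    def __init__(self):
        super().__init__()
        self.w = tf.Variable(
            np.random.default_rng(0).normal(size=(4, 2)).astype(np.float32))

    @tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
    def __call__(self, x):
        return tf.nn.relu(x @ self.w)

model = TinyModel()
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [model.__call__.get_concrete_function()], model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def calibration_gen():
    rng = np.random.default_rng(1)
    for _ in range(8):
        yield [rng.random((1, 4), dtype=np.float32)]

converter.representative_dataset = calibration_gen

# When given a converter, the debugger performs the conversion itself.
debugger = tf.lite.experimental.QuantizationDebugger(
    converter=converter, debug_dataset=calibration_gen)
debugger.run()

with open("layer_stats.csv", "w") as f:
    debugger.layer_statistics_dump(f)
```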
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Some content is licensed under the numpy license.
Last updated 2024-04-26 UTC.