[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-06-07 UTC."],[],[],null,["# tf.keras.backend.set_floatx\n\n\u003cbr /\u003e\n\n|--------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/keras-team/keras/tree/v3.3.3/keras/src/backend/config.py#L37-L71) |\n\nSet the default float dtype.\n\n#### View aliases\n\n\n**Main aliases**\n\n[`tf.keras.config.set_floatx`](https://www.tensorflow.org/api_docs/python/tf/keras/backend/set_floatx)\n\n\u003cbr /\u003e\n\n tf.keras.backend.set_floatx(\n value\n )\n\n| **Note:** It is not recommended to set this to `\"float16\"` for training, as this will likely cause numeric stability issues. Instead, mixed precision, which leverages a mix of `float16` and `float32`. It can be configured by calling `keras.mixed_precision.set_dtype_policy('mixed_float16')`.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ---- ||\n|---------|-----------------------------------------------------------------|\n| `value` | String; `'bfloat16'`, `'float16'`, `'float32'`, or `'float64'`. |\n\n\u003cbr /\u003e\n\n#### Examples:\n\n keras.config.floatx()\n 'float32'\n\n keras.config.set_floatx('float64')\n keras.config.floatx()\n 'float64'\n\n # Set it back to float32\n keras.config.set_floatx('float32')\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Raises ------ ||\n|--------------|---------------------------|\n| `ValueError` | In case of invalid value. |\n\n\u003cbr /\u003e"]]