---
layout: default
title: PyTorch ExecuTorch
body-class: announcement
background-class: announcement-background
permalink: /executorch-overview
---
End-to-end solution for enabling on-device inference capabilities across mobile and edge devices
IMPORTANT NOTE: This is a preview version of ExecuTorch and should be used for testing and evaluation purposes only. It is not recommended for use in production settings. We welcome any feedback, suggestions, and bug reports from the community to help us improve the technology.
ExecuTorch is an end-to-end solution for enabling on-device inference capabilities across mobile and edge devices, including wearables, embedded devices, and microcontrollers. It is part of the PyTorch Edge ecosystem and enables efficient deployment of PyTorch models to edge devices. The key value propositions of ExecuTorch are portability, developer productivity, and performance.
We are excited to see how the community leverages our all-new on-device AI stack. You can learn more about the key components of ExecuTorch and its architecture, how it works, and explore the documentation and detailed tutorials.
Supporting on-device AI presents unique challenges: diverse hardware, critical power requirements, low or no internet connectivity, and real-time processing needs. These constraints have historically prevented or slowed the creation of scalable and performant on-device AI solutions. We designed ExecuTorch, backed by industry leaders like Meta, Arm, Apple, and Qualcomm, to be highly portable and to deliver superior developer productivity without sacrificing performance.
PyTorch Mobile uses TorchScript to allow PyTorch models to run on devices with limited resources. ExecuTorch has a significantly smaller memory size and a dynamic memory footprint, resulting in superior performance compared to PyTorch Mobile. ExecuTorch also does not rely on TorchScript; instead, it leverages the PyTorch 2 compiler and export functionality for on-device execution of PyTorch models.
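As a concrete illustration of that export path, the sketch below shows how a PyTorch module might be captured with `torch.export` and lowered to an ExecuTorch program for the on-device runtime. It follows ExecuTorch's documented Python flow; the exact module paths and method names (`executorch.exir.to_edge`, `to_executorch`, the `.pte` serialization) may differ between preview releases, so treat this as a sketch rather than a fixed API.

```python
import torch
from torch.export import export
from executorch.exir import to_edge

# A toy model standing in for any exportable PyTorch module.
class Model(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.relu(x)

model = Model().eval()
example_inputs = (torch.randn(1, 3, 224, 224),)

# Capture the model graph with the PyTorch 2 export path (no TorchScript involved).
exported_program = export(model, example_inputs)

# Lower to the Edge dialect, then compile to an ExecuTorch program.
edge_program = to_edge(exported_program)
executorch_program = edge_program.to_executorch()

# Serialize to a .pte file that the on-device ExecuTorch runtime can load.
with open("model.pte", "wb") as f:
    f.write(executorch_program.buffer)
```

The resulting `.pte` file is what ships to the device; the ExecuTorch runtime loads and executes it directly, which is why no TorchScript interpreter is needed on-device.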