executorch by pytorch

Description: On-device AI across mobile, embedded and edge for PyTorch


Summary Information

Updated 2 hours ago
Added to GitGenius on January 31st, 2026
Created on February 25th, 2022
Open Issues/Pull Requests: 1,652 (+2)
Number of forks: 851
Total Stargazers: 4,310 (-1)
Total Subscribers: 77 (+0)

Detailed Description

ExecuTorch is a PyTorch project that enables efficient, performant execution of PyTorch models across a wide range of hardware, including mobile devices, embedded systems, and other resource-constrained environments. It bridges the gap between model development in PyTorch and deployment on diverse hardware by providing tools for model optimization, compilation, and runtime execution.

Its core functionality rests on several key components. First, ExecuTorch offers a model compiler that transforms PyTorch models into optimized representations for specific target devices. Compilation applies techniques such as graph optimization, operator fusion, and quantization to reduce model size, memory footprint, and computational cost. The compiler's modular architecture allows custom backends and optimizations tailored to particular hardware to be plugged in.
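To make the quantization step concrete, here is a minimal, self-contained sketch of affine int8 quantization, the general technique the paragraph refers to. It is pure Python for illustration only; it is not ExecuTorch's API, and the function names are hypothetical.

```python
def quantize_params(values, qmin=-128, qmax=127):
    """Derive a scale and zero-point mapping the observed float range to int8."""
    lo = min(min(values), 0.0)  # the range must include zero so 0.0 maps exactly
    hi = max(max(values), 0.0)
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    return scale, zero_point

def quantize(values, scale, zero_point, qmin=-128, qmax=127):
    """Map floats to clamped int8 codes."""
    return [max(qmin, min(qmax, round(v / scale + zero_point))) for v in values]

def dequantize(qvalues, scale, zero_point):
    """Recover approximate floats from int8 codes."""
    return [(q - zero_point) * scale for q in qvalues]

weights = [-1.2, 0.0, 0.5, 2.3]
scale, zp = quantize_params(weights)
q = quantize(weights, scale, zp)      # int8 codes: 4x smaller than float32
restored = dequantize(q, scale, zp)   # each value is within one scale step of the original
```

Storing `q` plus a single `(scale, zero_point)` pair is what shrinks the model: the payload drops from 32 bits per weight to 8, at the cost of a bounded rounding error per value.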

Second, ExecuTorch provides a runtime that executes the compiled models. The runtime is designed to be lightweight and portable, so it can be deployed on devices with limited resources. It supports execution on CPUs, GPUs, and specialized accelerators, is optimized for low latency and high throughput, and includes facilities for managing memory, handling data transfers, and interacting with the underlying hardware.
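The execution model such a lightweight runtime implies can be sketched as an interpreter over a flat instruction list with buffer slots planned ahead of time, so no allocation happens during inference. This is a hypothetical illustration of the idea, not ExecuTorch's actual runtime; all names are invented.

```python
# Tiny op table standing in for a runtime's operator kernels.
OPS = {
    "mul": lambda a, b: [x * y for x, y in zip(a, b)],
    "add": lambda a, b: [x + y for x, y in zip(a, b)],
    "relu": lambda a: [max(0.0, x) for x in a],
}

def run(program, buffers):
    """Execute instructions of the form (op, input_slots, output_slot)."""
    for op, in_slots, out_slot in program:
        args = [buffers[i] for i in in_slots]
        buffers[out_slot] = OPS[op](*args)
    return buffers[program[-1][2]]  # the result lives in the last output slot

# A "compiled" program computing relu(x * w + b); slot 3 is scratch space
# that the planner reuses across instructions, as memory planning might.
program = [
    ("mul", (0, 1), 3),
    ("add", (3, 2), 3),
    ("relu", (3,), 3),
]
buffers = [[1.0, -2.0], [0.5, 3.0], [0.1, 0.2], None]  # x, w, b, scratch
out = run(program, buffers)  # [0.6, 0.0]
```

Because the buffer layout is fixed at compile time, the interpreter loop does no allocation, which is the property that makes this style of runtime suitable for memory-constrained devices.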

Third, ExecuTorch includes tools for model analysis and profiling. These tools help developers understand the performance characteristics of their models, identify bottlenecks, and guide optimization efforts. They report operator execution times, memory usage, and other relevant metrics, information that is crucial for making informed decisions about model architecture, optimization strategy, and hardware selection.
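The kind of operator-level profiling described can be sketched as wrapping each op call, accumulating wall-clock time per operator, and ranking operators by total cost. This is an illustrative stand-in, not ExecuTorch's profiler API; the `Profiler` class is hypothetical.

```python
import time
from collections import defaultdict

class Profiler:
    """Accumulate per-operator call counts and wall-clock totals."""

    def __init__(self):
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    def timed(self, name, fn, *args):
        start = time.perf_counter()
        result = fn(*args)
        self.totals[name] += time.perf_counter() - start
        self.counts[name] += 1
        return result

    def report(self):
        # Operators sorted by cumulative time, most expensive first.
        return sorted(self.totals.items(), key=lambda kv: kv[1], reverse=True)

prof = Profiler()
data = list(range(10_000))
prof.timed("square", lambda xs: [x * x for x in xs], data)
prof.timed("sum", sum, data)
for name, total in prof.report():
    print(f"{name}: {prof.counts[name]} call(s), {total * 1e3:.3f} ms")
```

Ranking by cumulative rather than per-call time matters: an operator that is cheap per invocation but runs thousands of times can still be the bottleneck.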

The project also emphasizes portability and extensibility. ExecuTorch supports a variety of hardware platforms through backends, which provide the interfaces needed to drive specific hardware such as CPUs, GPUs, and specialized accelerators. Its modular design lets developers add support for new platforms by writing custom backends, and ExecuTorch integrates with existing PyTorch workflows so that developers can move from model development to deployment without leaving the ecosystem.
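A pluggable backend scheme of the kind this modular design implies can be sketched as a registry where each backend declares which operators it can handle and dispatch picks the first capable one. This is a hypothetical illustration; the class and function names are invented, not ExecuTorch's interfaces.

```python
class Backend:
    """Interface a hardware backend would implement."""
    name = "generic"

    def supports(self, op):
        return False

    def execute(self, op, *args):
        raise NotImplementedError

class CpuBackend(Backend):
    """Reference backend handling a couple of elementwise ops on the CPU."""
    name = "cpu"

    def supports(self, op):
        return op in {"add", "mul"}

    def execute(self, op, a, b):
        fn = {"add": lambda x, y: x + y, "mul": lambda x, y: x * y}[op]
        return [fn(x, y) for x, y in zip(a, b)]

REGISTRY = []

def register(backend):
    REGISTRY.append(backend)

def dispatch(op, *args):
    for backend in REGISTRY:  # first capable backend wins
        if backend.supports(op):
            return backend.execute(op, *args)
    raise RuntimeError(f"no backend supports {op!r}")

register(CpuBackend())
result = dispatch("add", [1, 2], [3, 4])  # [4, 6]
```

Adding support for a new accelerator then means writing one more `Backend` subclass and registering it; nothing else in the pipeline changes, which is the essence of the extensibility claim.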

In essence, ExecuTorch lets developers deploy PyTorch models efficiently on a wide range of hardware. By streamlining model optimization, compilation, and runtime execution, it enables performant, resource-efficient AI applications for diverse use cases, particularly in edge computing and embedded systems. Its focus on portability, extensibility, and integration with the PyTorch ecosystem makes it a valuable tool for anyone deploying PyTorch models on resource-constrained devices.


Repository Details
