Optimum ExecuTorch enables efficient deployment of transformer models using Meta's ExecuTorch framework. It provides:
- 🔄 Easy conversion of Hugging Face models to ExecuTorch format
- ⚡ Optimized inference with hardware-specific optimizations
- 🤝 Seamless integration with Hugging Face Transformers
- 📱 Efficient deployment on various devices
Install conda on your machine. Then, create a virtual environment to manage the dependencies.
```bash
conda create -n optimum-executorch python=3.11
conda activate optimum-executorch
```
```bash
git clone https://github.com/huggingface/optimum-executorch.git
cd optimum-executorch
pip install '.[dev]'
```
- 🔜 Install from PyPI coming soon...
To access every available optimization and experiment with the newest features, run:
```bash
python install_dev.py
```
This script will install `executorch`, `torch`, `torchao`, `transformers`, etc. from nightly builds or from source to access the latest models and optimizations.

To leave an existing ExecuTorch installation untouched, run `install_dev.py` with `--skip_override_torch` to prevent it from being overwritten.
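For example, to pull the nightly dependencies while keeping your existing ExecuTorch/PyTorch installation in place, using the flag described above:

```bash
python install_dev.py --skip_override_torch
```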
There are two ways to use Optimum ExecuTorch:
```python
from optimum.executorch import ExecuTorchModelForCausalLM
from transformers import AutoTokenizer

# Load and export the model on-the-fly
model_id = "HuggingFaceTB/SmolLM2-135M-Instruct"
model = ExecuTorchModelForCausalLM.from_pretrained(
    model_id,
    recipe="xnnpack",
    attn_implementation="custom_sdpa",  # Use custom SDPA implementation for better performance
    use_custom_kv_cache=True,  # Use custom KV cache for better performance
    **{"qlinear": True, "qembedding": True},  # Quantize linear and embedding layers
)

# Generate text right away
tokenizer = AutoTokenizer.from_pretrained(model_id)
generated_text = model.text_generation(
    tokenizer=tokenizer,
    prompt="Once upon a time",
    max_seq_len=128,
)
print(generated_text)
```
Note: If an ExecuTorch model is already cached on the Hugging Face Hub, the API will automatically skip the export step and load the cached `.pte` file. To test this, replace the `model_id` in the example above with `"executorch-community/SmolLM2-135M"`, where the `.pte` file is pre-cached. Additionally, the `.pte` file can be directly associated with the eager model, as demonstrated in this example.
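As a minimal sketch of that cached path (model id taken from the note above), loading should skip the export step entirely:

```python
from optimum.executorch import ExecuTorchModelForCausalLM

# The Hub repo already contains a pre-exported .pte, so no on-the-fly export is needed
model = ExecuTorchModelForCausalLM.from_pretrained("executorch-community/SmolLM2-135M")
```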
Use the CLI tool to convert your model to ExecuTorch format:
```bash
optimum-cli export executorch \
    --model "HuggingFaceTB/SmolLM2-135M-Instruct" \
    --task "text-generation" \
    --recipe "xnnpack" \
    --use_custom_sdpa \
    --use_custom_kv_cache \
    --qlinear \
    --qembedding \
    --output_dir="hf_smollm2"
```
Explore the various export options by running `optimum-cli export executorch --help`.
Use the exported model for text generation:
```python
from optimum.executorch import ExecuTorchModelForCausalLM
from transformers import AutoTokenizer

# Load the exported model
model = ExecuTorchModelForCausalLM.from_pretrained("./hf_smollm2")

# Initialize tokenizer and generate text
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")
generated_text = model.text_generation(
    tokenizer=tokenizer,
    prompt="Once upon a time",
    max_seq_len=128,
)
print(generated_text)
```
To perform on-device inference, you can use ExecuTorch's sample runner or the example iOS/Android applications. For detailed instructions, refer to the ExecuTorch Sample Runner guide.
Supported using custom SDPA with Hugging Face Transformers, boosting performance by 3x compared to default SDPA, based on tests with `HuggingFaceTB/SmolLM2-135M`.

Supported using a custom KV cache that performs in-place cache updates, boosting performance by 2.5x compared to the default static KV cache, based on tests with `HuggingFaceTB/SmolLM2-135M`.
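To verify these numbers on your own hardware, a rough sketch is to load the same checkpoint twice, once with both optimizations and once without (option names taken from the Python example above), and time generation with identical prompts:

```python
from optimum.executorch import ExecuTorchModelForCausalLM

model_id = "HuggingFaceTB/SmolLM2-135M-Instruct"

# Optimized path: custom SDPA + in-place KV cache updates
optimized = ExecuTorchModelForCausalLM.from_pretrained(
    model_id,
    recipe="xnnpack",
    attn_implementation="custom_sdpa",
    use_custom_kv_cache=True,
)

# Baseline path: omit both options to fall back to the default SDPA and static KV cache
baseline = ExecuTorchModelForCausalLM.from_pretrained(
    model_id,
    recipe="xnnpack",
)
```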
Currently, Optimum-ExecuTorch supports the XNNPACK Backend with custom SDPA for efficient execution on mobile CPUs.

For a comprehensive overview of all backends supported by ExecuTorch, please refer to the ExecuTorch Backend Overview.
We currently support Post-Training Quantization (PTQ) for linear layers using int8 dynamic per-token activations and int4 grouped per-channel weights (aka `8da4w`), as well as int8 channelwise embedding quantization.
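Both quantization schemes are enabled through the export flags shown earlier. A sketch of an export that applies only the quantization options (the output directory name is just illustrative):

```bash
optimum-cli export executorch \
    --model "HuggingFaceTB/SmolLM2-135M-Instruct" \
    --task "text-generation" \
    --recipe "xnnpack" \
    --qlinear \
    --qembedding \
    --output_dir="hf_smollm2_8da4w"
```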
Batch prefill is now supported, making the time to first generated token up to 20x faster by allowing prompt tokens to be processed simultaneously.
The following models have been successfully tested with ExecuTorch. For details on the specific optimizations supported and how to use them for each model, please consult their respective test files in the `tests/models/` directory.
We currently support a wide range of popular transformer models, including encoder-only, decoder-only, and encoder-decoder architectures, as well as models specialized for tasks such as text generation, translation, summarization, and mask prediction. These models reflect current trends and popularity across the Hugging Face community:
- Albert: `albert-base-v2` and its variants
- Bert: Google's `bert-base-uncased` and its variants
- Distilbert: `distilbert-base-uncased` and its variants
- Eurobert: `EuroBERT-210m` and its variants
- Roberta: FacebookAI's `xlm-roberta-base` and its variants
- Gemma: `Gemma-2b` and its variants
- Gemma2: `Gemma-2-2b` and its variants
- Gemma3: `Gemma-3-1b` and its variants (requires `transformers >= 4.52.0`)
- Llama: `Llama-3.2-1B` and its variants
- Qwen2: `Qwen2.5-0.5B` and its variants
- Qwen3: `Qwen3-0.6B`, `Qwen3-Embedding-0.6B` and other variants
- Olmo: `OLMo-1B-hf` and its variants
- Phi4: `Phi-4-mini-instruct` and its variants
- Smollm: 🤗 `SmolLM2-135M` and its variants
- Smollm3: 🤗 `SmolLM3-3B` and its variants
- T5: Google's `T5` and its variants
- Cvt: Convolutional Vision Transformer
- Deit: Distilled Data-efficient Image Transformer (base-sized)
- Dit: Document Image Transformer (base-sized)
- EfficientNet: EfficientNet (b0-b7 sized)
- Focalnet: FocalNet (tiny-sized)
- Mobilevit: Apple's MobileViT xx-small
- Mobilevit2: Apple's MobileViTv2
- Pvt: Pyramid Vision Transformer (tiny-sized)
- Swin: Swin Transformer (tiny-sized)
- Whisper: OpenAI's `Whisper` and its variants
📌 Note: This list is continuously expanding; more models will be added as support grows.
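For models beyond the text-generation examples above, the same CLI is used with a different task. As a sketch for an encoder-only BERT checkpoint, assuming the standard `fill-mask` task name applies here (check the model's file under `tests/models/` for the exact options exercised):

```bash
optimum-cli export executorch \
    --model "google-bert/bert-base-uncased" \
    --task "fill-mask" \
    --recipe "xnnpack" \
    --output_dir="bert_xnnpack"
```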
The following benchmarks show decode performance (tokens/sec) across Android and iOS devices for popular compact LLMs.
| Model | Samsung Galaxy S22 5G (Android 13) | Samsung Galaxy S22 Ultra 5G (Android 14) | iPhone 15 (iOS 18.0) | iPhone 15 Plus (iOS 17.4.1) | iPhone 15 Pro (iOS 18.4.1) |
|---|---|---|---|---|---|
| SmolLM2-135M | 202.28 | 202.61 | 7.47 | 6.43 | 29.64 |
| Qwen3-0.6B | 59.16 | 56.49 | 7.05 | 5.48 | 17.99 |
| google/gemma-3-1b-it | 25.07 | 23.89 | 21.51 | 21.33 | 17.8 |
| Llama-3.2-1B | 44.91 | 37.39 | 11.04 | 8.93 | 25.78 |
| OLMo-1B | 44.98 | 38.22 | 14.49 | 8.72 | 20.24 |
📊 View Live Benchmarks: Explore comprehensive performance data, compare models across devices, and track performance trends over time on the ExecuTorch Benchmark Dashboard.
Performance measured with custom SDPA, KV-cache optimization, and 8da4w quantization. Results may vary based on device conditions and prompt characteristics.
Check our ExecuTorch GitHub repo directly for:
- More backends and performance optimization options
- Deployment guides for Android, iOS, and embedded devices
- Additional examples and benchmarks
We love your input! We want to make contributing to Optimum ExecuTorch as easy and transparent as possible:
- Report bugs through GitHub Issues

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.