Llama 4 Scout 17B-16E
Llama 4 Scout 17B-16E is a multimodal model that uses the Mixture-of-Experts (MoE) architecture and early fusion, delivering state-of-the-art results for its size class.
Managed API (MaaS) specifications
| Model ID | llama-4-scout-17b-16e-instruct-maas |
|---|---|
| Launch stage | GA |
| Supported inputs & outputs | |
| Capabilities | |
| Usage types | |
| Knowledge cutoff date | August 2024 |
| Versions | |
| Supported regions | |
| Model availability | |
| ML processing | |
| Quota limits | us-east5: |
| Pricing | See Pricing. |
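As a sketch of how a managed (MaaS) model like this is typically called, the snippet below builds a request for the OpenAI-compatible chat completions endpoint that Vertex AI exposes for Llama models. The project ID is a placeholder, the `meta/` publisher prefix on the model name is an assumption based on other Llama MaaS models, and authentication (an OAuth bearer token) is left out; treat it as a minimal sketch, not the definitive client code.

```python
# Minimal sketch of a MaaS chat completions request for Vertex AI.
# PROJECT_ID is a placeholder; the "meta/" prefix is assumed, not confirmed
# by this page. An actual call also needs an OAuth bearer token.
REGION = "us-east5"        # region listed under quota limits above
PROJECT_ID = "my-project"  # placeholder


def build_endpoint(project_id: str, region: str) -> str:
    """Build the OpenAI-compatible chat completions URL for Vertex AI MaaS."""
    return (
        f"https://{region}-aiplatform.googleapis.com/v1/"
        f"projects/{project_id}/locations/{region}/"
        "endpoints/openapi/chat/completions"
    )


def build_payload(prompt: str) -> dict:
    """Build a minimal chat completions request body for this model."""
    return {
        "model": "meta/llama-4-scout-17b-16e-instruct-maas",
        "messages": [{"role": "user", "content": prompt}],
    }


# A real request would POST build_payload(...) as JSON to
# build_endpoint(PROJECT_ID, REGION) with an Authorization: Bearer header,
# e.g. via the `requests` library or the `openai` client with a custom base_url.
```

Keeping the URL and payload construction separate from the HTTP call makes the request shape easy to inspect before sending anything.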
Deploy as a self-deployed model
To self-deploy the model, navigate to the Llama 4 Scout 17B-16E model card in the Model Garden console and click Deploy model. For more information about deploying and using partner models, see Deploy a partner model and make prediction requests.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2026-02-19 UTC.