Llama 4 Maverick 17B-128E

Llama 4 Maverick 17B-128E is Llama 4's largest and most capable model. It uses the Mixture-of-Experts (MoE) architecture and early fusion to provide coding, reasoning, and image capabilities.

Managed API (MaaS) specifications


Model ID: llama-4-maverick-17b-128e-instruct-maas
Launch stage: GA
Supported inputs & outputs
  • Inputs:
    Text, Code, Images
  • Outputs:
    Text
Capabilities
Usage types
Knowledge cutoff date: August 2024
Versions
  • llama-4-maverick-17b-128e-instruct-maas
    • Launch stage: GA
    • Release date: April 29, 2025
Supported regions

Model availability

  • United States
    • us-east5

ML processing

  • United States
    • Multi-region
Quota limits

us-east5:

  • Max output tokens: 8,192
  • Context length (tokens): 524,288

Pricing: See Pricing.
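The managed (MaaS) specification above can be exercised through an OpenAI-compatible chat-completions surface. The sketch below only builds a request URL and body, without sending anything; the endpoint URL pattern, the `openapi/chat/completions` path, the `PROJECT_ID` placeholder, and the exact form of the model name in the payload are assumptions not confirmed by this page — consult the Vertex AI documentation for the authoritative request format.

```python
# Sketch: construct a chat-completions request for the Llama 4 Maverick MaaS
# model in us-east5, clamping output to the region's 8,192-token quota.
# PROJECT_ID is a placeholder; the URL pattern is an assumption based on
# Vertex AI's general MaaS conventions. Actual requests may also require a
# publisher prefix on the model name (e.g. "meta/").

PROJECT_ID = "your-project-id"  # placeholder; substitute your own project
REGION = "us-east5"             # the supported MaaS region listed above
MODEL_ID = "llama-4-maverick-17b-128e-instruct-maas"

MAX_OUTPUT_TOKENS = 8192        # us-east5 quota from the table above


def build_request(prompt: str, max_tokens: int = 1024) -> tuple[str, dict]:
    """Return the (assumed) endpoint URL and an OpenAI-style request body."""
    url = (
        f"https://{REGION}-aiplatform.googleapis.com/v1/"
        f"projects/{PROJECT_ID}/locations/{REGION}/endpoints/openapi/"
        "chat/completions"
    )
    body = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        # Never ask for more output than the regional quota allows.
        "max_tokens": min(max_tokens, MAX_OUTPUT_TOKENS),
    }
    return url, body


url, body = build_request("Summarize MoE routing in two sentences.",
                          max_tokens=20_000)
print(url)
print(body["max_tokens"])  # clamped to the 8,192-token quota
```

An authenticated client (for example, the `openai` SDK pointed at this base URL with a Google Cloud access token) could then POST this body; that wiring is omitted here since the spec sheet does not cover authentication.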

Deploy as a self-deployed model

To self-deploy the model, navigate to the Llama 4 Maverick 17B-128E model card in the Model Garden console and click Deploy model. For more information about deploying and using partner models, see Deploy a partner model and make prediction requests.

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2026-02-19 UTC.