AI Inference
Scale and Serve Generative AI, Fast.
NVIDIA Dynamo is an open-source modular inference framework for serving generative AI models in distributed environments. It enables seamless scaling of inference workloads across large GPU fleets with dynamic resource scheduling, intelligent request routing, optimized memory management, and accelerated data transfer.
When serving the open-source DeepSeek-R1 671B reasoning model on NVIDIA GB200 NVL72, NVIDIA Dynamo increased the number of requests served by up to 30x, making it the ideal solution for AI factories looking to run at the lowest possible cost to maximize token revenue generation.
NVIDIA Dynamo supports all major AI inference backends and features large language model (LLM)-specific optimizations, such as disaggregated serving, to accelerate and scale AI reasoning models at the lowest cost and with the highest efficiency. It will be supported as part of NVIDIA AI Enterprise in a future release.
Distributed inference is the process of running AI model inference across multiple computing devices or nodes to maximize throughput by parallelizing computations.
This approach enables efficient scaling for large-scale AI applications, such as generative AI, by distributing workloads across GPUs or cloud infrastructure. Distributed inference improves overall performance and resource utilization by allowing users to optimize latency and throughput for the unique requirements of each workload.
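To make that concrete, the short Python sketch below shows one simple form of distributed inference: round-robin dispatch of requests across a small GPU fleet, with one model replica per device. The fleet size and the run_on_gpu function are illustrative placeholders, not part of NVIDIA Dynamo's API; in a real deployment each worker would call an engine such as vLLM or TensorRT-LLM.

```python
# Illustrative sketch only: round-robin dispatch of inference requests across
# a small GPU fleet. `run_on_gpu` stands in for a real engine call; it is not
# a Dynamo API.
from concurrent.futures import ThreadPoolExecutor
from itertools import cycle

NUM_GPUS = 4  # assumed fleet size for illustration

def run_on_gpu(gpu_id: int, prompt: str) -> str:
    # Placeholder: in a real deployment this would call the model replica
    # pinned to CUDA device `gpu_id`.
    return f"[gpu {gpu_id}] completion for {prompt!r}"

def serve_batch(prompts):
    # Spread requests over the fleet so throughput scales with device count.
    assignments = list(zip(cycle(range(NUM_GPUS)), prompts))
    with ThreadPoolExecutor(max_workers=NUM_GPUS) as pool:
        futures = [pool.submit(run_on_gpu, gpu, p) for gpu, p in assignments]
        return [f.result() for f in futures]

if __name__ == "__main__":
    print(serve_batch([f"prompt {i}" for i in range(8)]))
```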
Separates LLM context (prefill) and generation (decode) phases across distinct GPUs, enabling tailored model parallelism and independent GPU allocation to increase requests served per GPU.
Monitors GPU capacity in distributed inference environments and dynamically allocates GPU workers across context and generation phases to resolve bottlenecks and optimize performance.
Routes inference traffic efficiently, minimizing costly recomputation of repeat or overlapping requests to preserve compute resources while ensuring balanced load distribution across large GPU fleets (a simplified routing sketch follows below).
Accelerates data movement in distributed inference settings while simplifying transfer complexities across diverse hardware, including GPUs, CPUs, networks, and storage.
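As a rough illustration of the routing idea, the sketch below sends each request to the worker that already holds the longest matching cached prefix, falling back to the least-loaded worker, so overlapping prompts avoid recomputing their prefill. The Worker class and scoring heuristic are simplified assumptions for illustration, not NVIDIA Dynamo's actual smart router.

```python
# Illustrative sketch of prefix-aware request routing, not Dynamo's router.
from dataclasses import dataclass, field

@dataclass
class Worker:
    name: str
    cached_prefixes: set = field(default_factory=set)
    active_requests: int = 0

def shared_prefix_len(a: str, b: str) -> int:
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def route(prompt: str, workers: list) -> Worker:
    # Prefer cache hits (avoid recomputing the prefill), break ties by load.
    def score(w: Worker):
        best_hit = max((shared_prefix_len(prompt, p) for p in w.cached_prefixes), default=0)
        return (-best_hit, w.active_requests)
    chosen = min(workers, key=score)
    chosen.active_requests += 1
    chosen.cached_prefixes.add(prompt)
    return chosen

workers = [Worker("gpu-0"), Worker("gpu-1")]
route("You are a helpful assistant. Summarize:", workers)
# The second request shares a long prefix, so it lands on the same worker
# and reuses that worker's cached context.
print(route("You are a helpful assistant. Translate:", workers).name)
```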
Streamline and automate GPU cluster setup with prebuilt, easy-to-deploy tools, and enable dynamic autoscaling with real-time, LLM-specific metrics, avoiding over- or under-provisioning of GPU resources.
Leverage advanced LLM inference serving optimizations like disaggregated serving to increase the number of inference requests served without compromising user experience.
Open and modular design allows you to easily pick and choose the inference-serving components that suit your unique needs, ensuring compatibility with your existing AI stack and avoiding costly migration projects.
NVIDIA Dynamo’s support for all major frameworks—including TensorRT-LLM, vLLM, SGLang, PyTorch, and more—ensures your ability to quickly deploy new generative AI models, regardless of their backend.
NVIDIA Dynamo is fully open source, giving you complete transparency and flexibility. Deploy NVIDIA Dynamo, contribute to its growth, and seamlessly integrate it into your existing stack.
Check it out on GitHub and join the community!
For individuals looking to access the Triton Inference Server open-source code for development.
For individuals looking to access free Triton Inference Server containers for development.
Access NVIDIA-hosted infrastructure and guided hands-on labs that include step-by-step instructions and examples, available for free on NVIDIA LaunchPad.
Get a free license to try NVIDIA AI Enterprise in production for 90 days using your existing infrastructure.
Find out how you can drive innovation with NVIDIA Dynamo.
Reasoning models generate more tokens to solve complex problems, increasing inference costs. NVIDIA Dynamo optimizes these models with features like disaggregated serving. This approach separates the prefill and decode computational phases onto distinct GPUs, allowing AI inference teams to optimize each phase independently. The result is better resource utilization, more queries served per GPU, and lower inference costs.
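As a rough sketch of the concept (not Dynamo's implementation), the example below models prefill and decode as two separate workers that hand off a KV-cache handle instead of recomputing the prompt; all function names and data shapes are simplified assumptions.

```python
# Minimal sketch of disaggregated serving under assumed, simplified interfaces.

def prefill(prompt_tokens: list) -> dict:
    # Runs the compute-bound prompt phase once and returns a KV-cache handle.
    # In a real system this executes on a GPU pool sized for prefill.
    return {"kv_cache": list(prompt_tokens), "next_token": 101}

def decode(kv_cache_handle: dict, max_new_tokens: int) -> list:
    # Runs the memory-bandwidth-bound generation phase, typically on a
    # separate GPU pool with its own parallelism settings.
    token = kv_cache_handle["next_token"]
    out = []
    for _ in range(max_new_tokens):
        out.append(token)
        token += 1  # stand-in for sampling the next token
    return out

def serve(prompt_tokens: list) -> list:
    # The KV cache produced by prefill is transferred to the decode worker
    # (over NVLink/RDMA in practice) instead of being recomputed there.
    handle = prefill(prompt_tokens)
    return decode(handle, max_new_tokens=8)

print(serve([1, 2, 3]))
```

Because the two phases run on distinct GPU pools, each pool can use the model parallelism and batch sizes that suit its workload, which is where the extra requests per GPU come from.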
As AI models grow too large to fit on a single node, serving them efficiently becomes a challenge. Distributed inference requires splitting models across multiple nodes, which adds complexity in orchestration, scaling, and communication. Ensuring these nodes function as a cohesive unit—especially under dynamic workloads—demands careful management. NVIDIA Dynamo simplifies this by providing prebuilt capabilities on Kubernetes, seamlessly handling scheduling, scaling, and serving so you can focus on deploying AI—not managing infrastructure.
AI agents rely on multiple models—LLMs, retrieval systems, and specialized tools—working in sync in real time. Scaling these agents is a complex challenge, requiring intelligent GPU scheduling, efficient KV cache management, and ultra-low-latency communication to maintain responsiveness.
NVIDIA Dynamo streamlines this process with a built-in intelligent GPU planner, smart router, and low-latency communication library, making AI agent scaling seamless and efficient.
Code generation often requires iterative refinement to adjust prompts, clarify requirements, or debug outputs based on the model’s responses. This back-and-forth necessitates context re-computation with each user turn, increasing inference costs. NVIDIA Dynamo optimizes this process by enabling context reuse and offloading to cost-effective memory, minimizing expensive re-computation and reducing overall inference costs.
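The sketch below illustrates that idea under assumed names and a toy storage backend: cache the conversation's KV state, offload it to a cheaper memory tier between turns, and restore it for the next turn instead of recomputing the prompt context. It is not NVIDIA Dynamo's KV cache manager.

```python
# Illustrative multi-turn context reuse with offload to a cheaper tier.
import os
import pickle
import tempfile
from typing import Optional

class KVCacheStore:
    """Toy offload store: session_id -> KV state persisted off the GPU."""

    def __init__(self) -> None:
        self.offloaded = {}  # session_id -> file path on the host/SSD tier

    def offload(self, session_id: str, kv_state: dict) -> None:
        # Move the cache out of (expensive) GPU memory between user turns.
        path = os.path.join(tempfile.gettempdir(), f"{session_id}.kv")
        with open(path, "wb") as f:
            pickle.dump(kv_state, f)
        self.offloaded[session_id] = path

    def restore(self, session_id: str) -> Optional[dict]:
        # Reload the cached context for the next turn, skipping prefill recompute.
        path = self.offloaded.get(session_id)
        if path is None:
            return None
        with open(path, "rb") as f:
            return pickle.load(f)

store = KVCacheStore()
store.offload("session-42", {"prompt_tokens": [1, 2, 3], "layers": 32})
print(store.restore("session-42"))  # reuse instead of recomputing the context
```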
“Scaling advanced AI models requires sophisticated multi-GPU scheduling, seamless coordination and low-latency communication libraries that transfer reasoning contexts seamlessly across memory and storage. We expect NVIDIA Dynamo will help us deliver a premier user experience to our enterprise customers.” Saurabh Baji, Senior Vice President of Engineering at Cohere
"Handling hundreds of millions of requests monthly, we rely on NVIDIA’s GPUs and inference software to deliver the performance, reliability, and scale our business and users demand, "We'll look forward to leveraging NVIDIA Dynamo with its enhanced distributed serving capabilities to drive even more inference serving efficiencies and meet the compute demands of new AI reasoning models."Denis Yarats, CTO of Perplexity AI.
“Scaling reasoning models cost-effectively requires new advanced inference techniques, including disaggregated serving and context-aware routing. Together AI provides industry leading performance using our proprietary inference engine. The openness and modularity of NVIDIA Dynamo will allow us to seamlessly plug its components into our engine to serve more requests while optimizing resource utilization—maximizing our accelerated computing investment." Ce Zhang, CTO of Together AI.
Read about the latest inference updates and announcements for NVIDIA Dynamo Inference Server.
Read technical walkthroughs on how to get started with inference.
Get tips and best practices for deploying, running, and scaling AI models for inference for generative AI, LLMs, recommender systems, computer vision, and more.
Learn how to serve LLMs efficiently with step-by-step instructions. We’ll cover how to easily deploy an LLM across multiple backends and compare their performance, as well as how to fine-tune deployment configurations for optimal performance.
Learn what AI inference is, how it fits into your enterprise’s AI deployment strategy, the key challenges in deploying enterprise-grade AI use cases, why a full-stack AI inference solution is needed to address those challenges, the main components of a full-stack platform, and how to deploy your first AI inference solution.
Explore how the NVIDIA AI inferencing platform seamlessly integrates with leading cloud service providers, simplifying deployment and expediting the launch of LLM-powered AI use cases.
New to NVIDIA Dynamo and want to deploy your model quickly? Make use of this quick-start guide to begin your NVIDIA Dynamo journey.
Getting started with NVIDIA Dynamo can lead to many questions. Explore this repository to familiarize yourself with NVIDIA Dynamo’s features and find guides and examples that can help ease migration.
In hands-on labs, experience fast and scalable AI using NVIDIA Dynamo. You’ll be able to immediately unlock the benefits of NVIDIA’s accelerated computing infrastructure and scale your AI workloads.
NVIDIA Dynamo Inference Server simplifies the deployment of AI models at scale in production, letting teams deploy trained AI models from any framework, from local storage or a cloud platform, on any GPU- or CPU-based infrastructure.
This video showcases deploying the Stable Diffusion pipeline available through the Hugging Face Diffusers library. We use NVIDIA Dynamo Inference Server to deploy and run the pipeline.
NVIDIA Dynamo is an open-source inference solution that standardizes model deployment and enables fast and scalable AI in production. Because of its many features, a natural question to ask is, where do I begin? Watch to find out.
Download on GitHub and join the community!
Explore everything you need to start developing with NVIDIA Dynamo, including the latest documentation, tutorials, technical blogs, and more.
Talk to an NVIDIA product specialist about moving from pilot to production with the security, API stability, and support of NVIDIA AI Enterprise.
Read the Press Release | Read the Tech Blog