NVIDIA-NeMo

Search code, repositories, users, issues, pull requests...

Provide feedback

We read every piece of feedback, and take your input very seriously.

Saved searches

Use saved searches to filter your results more quickly

Sign up
Appearance settings
@NVIDIA-NeMo

NVIDIA-NeMo

NeMo Framework is NVIDIA's GPU-accelerated, end-to-end training framework for large language models (LLMs), multimodal models, and speech models. It enables seamless scaling of training workloads (both pretraining and post-training) from a single GPU to thousand-node clusters, for both 🤗 Hugging Face/PyTorch and Megatron models. This GitHub organization hosts a suite of libraries and recipe collections that help users train models end to end.

NeMo Framework is also a part of the NVIDIA NeMo software suite for managing the AI agent lifecycle.

Latest 📣 announcements and 🗣️ discussions

🐳 NeMo AutoModel

🔬 NeMo RL

💬 NeMo Speech

More to come; stay tuned!

Getting Started

| | Installation | Checkpoint Conversion HF<>Megatron | LLM example recipes and scripts | VLM example recipes and scripts |
|---|---|---|---|---|
| Under 1,000 GPUs | NeMo AutoModel, NeMo RL | Not needed | Pretraining, SFT, LoRA, DPO, GRPO | SFT, LoRA, GRPO |
| Over 1,000 GPUs | NeMo Megatron-Bridge, NeMo RL | Conversion | Pretraining, SFT, and LoRA; DPO with `megatron_cfg`; GRPO with `megatron_cfg` | SFT, LoRA; GRPO with Megatron config |
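At its core, the HF<>Megatron checkpoint conversion mentioned above is a mapping between two state-dict key naming schemes (real converters such as Megatron-Bridge also reshard and reshape tensors across parallel ranks). The sketch below illustrates the idea only; every key name and rename rule here is a made-up example, not the actual NeMo mapping.

```python
# Toy sketch of checkpoint-key translation between naming schemes.
# All rules and key names are illustrative assumptions, not the real
# Megatron-Bridge conversion table.

# Hypothetical (HF substring -> Megatron substring) rename rules,
# applied in order.
RULES = [
    ("model.layers.", "decoder.layers."),
    ("self_attn.q_proj", "self_attention.linear_q"),
    ("mlp.up_proj", "mlp.linear_fc1"),
]

def hf_to_megatron(key: str) -> str:
    """Translate one HF-style key by applying each rename rule in order."""
    for src, dst in RULES:
        key = key.replace(src, dst)
    return key

def convert_state_dict(hf_state: dict) -> dict:
    """Return a new dict with Megatron-style keys; values are unchanged."""
    return {hf_to_megatron(k): v for k, v in hf_state.items()}

if __name__ == "__main__":
    hf = {"model.layers.0.self_attn.q_proj.weight": "tensor0"}
    print(convert_state_dict(hf))
    # {'decoder.layers.0.self_attention.linear_q.weight': 'tensor0'}
```

Under 1,000 GPUs no conversion is needed because NeMo AutoModel and NeMo RL can train Hugging Face checkpoints directly in their native layout.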

Repo organization under NeMo Framework

Summary of key functionalities and container strategy of each repo

Visit the individual repos to find out more 🔍, raise 🐛, contribute ✍️ and participate in discussion forums 🗣️!

| Repo | Key Functionality & Documentation Link | Training Loop | Training Backends | Inference Backends | Model Coverage | Container |
|---|---|---|---|---|---|---|
| NeMo Megatron-Bridge | Pretraining, LoRA, SFT | PyT native loop | Megatron-core | NA | LLM & VLM | NeMo Framework Container |
| NeMo AutoModel | Pretraining, LoRA, SFT | PyT native loop | PyTorch | NA | LLM, VLM, Omni, VFM | NeMo AutoModel Container |
| Previous NeMo (will be repurposed to focus on Speech) | Pretraining, SFT | PyTorch Lightning loop | Megatron-core & PyTorch | RIVA | Speech | NA |
| NeMo RL | SFT, RL | PyT native loop | Megatron-core & PyTorch | vLLM | LLM, VLM | NeMo RL Container |
| NeMo Gym | RL environments; integrates with RL frameworks | NA | NA | NA | NA | NeMo RL Container (WIP) |
| NeMo Aligner (deprecated) | SFT, RL | PyT Lightning loop | Megatron-core | TRTLLM | LLM | NA |
| NeMo Curator | Data curation | NA | NA | NA | Agnostic | NeMo Curator Container |
| NeMo Evaluator | Model evaluation | NA | NA | NA | Agnostic | NeMo Framework Container |
| NeMo Export-Deploy | Export to production | NA | NA | vLLM, TRT, TRTLLM, ONNX | Agnostic | NeMo Framework Container |
| NeMo Run | Experiment launcher | NA | NA | NA | Agnostic | NeMo Framework Container |
| NeMo Guardrails | Guardrail model responses | NA | NA | NA | NA | NA |
| NeMo Skills | Reference pipeline for SDG & eval | NA | NA | NA | Agnostic | NA |
| NeMo Emerging Optimizers | Collection of optimizers | NA | Agnostic | NA | NA | NA |
| NeMo DFM (WIP) | Diffusion foundation model training | PyT native loop | Megatron-core and PyTorch | PyTorch | VFM, Diffusion | TBD |
| NeMotron | Developer asset hub for Nemotron models | NA | NA | NA | Nemotron models | NA |
| NeMo Data-designer | Synthetic data generation toolkit | NA | NA | NA | NA | NA |
Table 1. NeMo Framework Repos

Diagram illustration of repos under NeMo Framework (WIP)


Figure 1. NeMo Framework Repo Overview

Background motivation and historical context

The NeMo GitHub org and its repo collection were created to address the following problems:

  • Need for composability: The previous NeMo repo is monolithic and encompasses too many things, making it hard for users to find what they need; container size is also an issue. Breaking the monolithic repo into a series of function-focused repos facilitates code discovery.
  • Need for customizability: The previous NeMo uses PyTorch Lightning as the default trainer loop, which provides some out-of-the-box functionality but makes customization hard. NeMo Megatron-Bridge, NeMo AutoModel, and NeMo RL have adopted a PyTorch-native custom loop to improve flexibility and ease of use for developers.
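The "native custom loop" point is about ownership: the user writes the forward pass, loss, backward step, and optimizer update directly, rather than handing them to a Trainer abstraction. The dependency-free toy below shows the shape of such a loop with a hand-derived gradient on a one-parameter model; it is an illustration of the pattern only, not NeMo code.

```python
# Minimal sketch of a "native" training loop: the user owns every step.
# Toy model y = w * x, squared-error loss, manual gradient descent.

def train(data, lr=0.1, epochs=100):
    """Fit y = w * x by plain gradient descent on squared error."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:                 # iterate the "dataloader"
            pred = w * x                  # forward pass
            grad = 2 * (pred - y) * x     # backward: d(loss)/dw
            w -= lr * grad                # optimizer step
    return w

if __name__ == "__main__":
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x
    print(train(data))  # converges to ~2.0
```

Because every step is explicit, swapping in a custom loss, gradient accumulation, or a different update rule is a local code change rather than a framework callback or plugin.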

License

Apache 2.0 licensed with third-party attributions documented in each repository.

Pinned

  1. Curator (Public): Scalable data preprocessing and curation toolkit for LLMs. Python, 1.2k stars, 191 forks.
  2. RL (Public): Scalable toolkit for efficient model reinforcement. Python, 1k stars, 173 forks.
  3. Automodel (Public): PyTorch Distributed native training library for LLMs/VLMs with out-of-the-box Hugging Face support. Python, 190 stars, 25 forks.
  4. Megatron-Bridge (Public): Hugging Face conversion and training library for Megatron-based models. Python, 227 stars, 72 forks.
  5. Guardrails (Public): Open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems. Python, 5.3k stars, 563 forks.
  6. Gym (Public): Build RL environments for LLM training. Python, 67 stars, 3 forks.

