

Join us next Thursday, December 11th at Modular's Los Altos offices for a Modular Meetup going inside the MAX platform!

Modular Platform

A unified platform for AI development and deployment, including MAX 🧑‍🚀 and Mojo 🔥.

The Modular Platform is an open and fully-integrated suite of AI libraries and tools that accelerates model serving and scales GenAI deployments. It abstracts away hardware complexity so you can run the most popular open models with industry-leading GPU and CPU performance without any code changes.

Get started

You don't need to clone this repo.

You can install Modular as a pip or conda package and then start an OpenAI-compatible endpoint with a model of your choice.

To get started with the Modular Platform and serve a model using the MAX framework, see the quickstart guide.
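For example, the install-and-serve flow can be sketched as below. This is a sketch, not a substitute for the quickstart: it assumes a Python virtual environment is active, and the model path is only an illustration, so substitute any model you want to serve (and check the quickstart if the CLI flags differ in your release):

```shell
# Install the Modular Platform into the active environment
# (a conda package is also available).
pip install modular

# Start an OpenAI-compatible endpoint on localhost:8000.
# The --model-path value here is just an example model.
max serve --model-path google/gemma-3-27b-it
```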

Note

Nightly vs. stable releases: If you cloned the repo and want a stable release, run `git checkout modular/vX.X` to match the version. The `main` branch tracks nightly builds, while the `stable` branch matches the latest released version.

After your model endpoint is up and running, you can start sending the model inference requests using our OpenAI-compatible REST API.

Try running hundreds of other models from our model repository.
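Because the endpoint speaks the OpenAI chat-completions protocol, a request can be sketched with nothing but the Python standard library. This assumes the server is listening on localhost:8000 (the default port used in the container example below); the model name must match whatever you served and is only an example here:

```python
import json
from urllib import request

# OpenAI-compatible chat-completions request body. The "model" value
# must match the model you passed when starting the endpoint.
payload = {
    "model": "google/gemma-3-27b-it",
    "messages": [
        {"role": "user", "content": "Write a haiku about GPUs."}
    ],
    "max_tokens": 128,
}

# The path follows the OpenAI convention: /v1/chat/completions.
req = request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# With a running endpoint, this returns a JSON body whose
# choices[0]["message"]["content"] holds the completion:
# resp = json.loads(request.urlopen(req).read())
print(req.full_url)
```

Any OpenAI client library can be pointed at the same base URL (`http://localhost:8000/v1`) instead of hand-building requests.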

Deploy our container

The MAX container is our Kubernetes-compatible Docker container for convenient deployment, which uses the MAX framework's built-in inference server. We have separate containers for NVIDIA and AMD GPU environments, and a unified container that works with both.

For example, you can start a container for an NVIDIA GPU with this command:

```shell
docker run --gpus=1 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    -p 8000:8000 \
    modular/max-nvidia-full:latest \
    --model-path google/gemma-3-27b-it
```

For more information, see our MAX container docs or the Modular Docker Hub repository.

About the repo

We're constantly open-sourcing more of the Modular Platform, and you can find all of it in here. As of May 2025, this repo includes over 450,000 lines of code from over 6,000 contributors, providing developers with production-grade reference implementations and tools to extend the Modular Platform with new algorithms, operations, and hardware targets. It is quite likely the world's largest repository of open source CPU and GPU kernels!


This repo has two major branches: `main`, which tracks nightly builds, and `stable`, which matches the latest released version.

News & Announcements

[2025/11] Modular Platform 25.7 provides a fully open MAX Python API, expanded hardware support for NVIDIA Grace superchips, an improved Mojo GPU programming experience, and much more.

[2025/11] We met with the community at PyTorch 2025 + the LLVM Developers' Meeting to solicit community input into how the Modular Platform can reduce fragmentation and provide a unified AI stack.

[2025/09] [Modular raises $250M][funding] to scale AI's unified compute layer, bringing total funding to $380M at a $1.6B valuation.

[2025/09] Modular Platform 25.6 delivers a unified compute layer spanning from laptops to datacenter GPUs, with industry-leading throughput on NVIDIA Blackwell (B200) and AMD MI355X.

[2025/08] Modular Platform 25.5 introduces Large Scale Batch Inference through a partnership with SF Compute, plus the open source launch of the MAX Graph API and more.

[2025/08] We hosted our Los Altos Meetup featuring talks from Chris Lattner on democratizing AI compute and Inworld AI on production voice AI.

[2025/06] AMD partnership announced: the Modular Platform is now generally available across AMD's MI300 and MI325 GPU portfolio.

[2025/06] Modular Hack Weekend brought developers together to build custom kernels, model architectures, and PyTorch custom ops with Mojo and MAX.

[2025/05] Over 100 engineers gathered at AGI House for our first GPU Kernel Hackathon, featuring talks from Modular and Anthropic engineers.


Community & Events

We host regular meetups, hackathons, and community calls. Join us!

| Channel | Link |
| --- | --- |
| 💬 Discord | discord.gg/modular |
| 💬 Forum | forum.modular.com |
| 📅 Meetup Group | meetup.com/modular-meetup-group |
| 🎥 Community Meetings | Recordings on YouTube |

Upcoming events will be posted on our Meetup page and Discord.

Contribute

Thanks for your interest in contributing to this repository!

We accept contributions to the Mojo standard library, MAX AI kernels, code examples, and Mojo docs, but currently not to any other parts of the repository.

Please see the Contribution Guide for instructions.

We also welcome your bug reports. If you have a bug, please file an issue here.

Contact us

If you'd like to chat with the team and other community members, please send a message to our Discord channel or our forum board.

License

This repository and its contributions are licensed under the Apache License v2.0 with LLVM Exceptions (see the LLVM License). Modular, MAX, and Mojo usage and distribution are licensed under the Modular Community License.

Third party licenses

You are entirely responsible for checking and validating the licenses of third parties (e.g., Hugging Face) for related software and libraries that are downloaded.

Thanks to our contributors

