# gpt-oss

Try gpt-oss · Guides · Model card · OpenAI blog
Download gpt-oss-120b and gpt-oss-20b on Hugging Face
Welcome to the gpt-oss series, OpenAI's open-weight models designed for powerful reasoning, agentic tasks, and versatile developer use cases.
We're releasing two flavors of these open models:
- gpt-oss-120b — for production, general purpose, high reasoning use cases that fit into a single 80GB GPU (like NVIDIA H100 or AMD MI300X) (117B parameters with 5.1B active parameters)
- gpt-oss-20b — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters)
Both models were trained using our harmony response format and should only be used with this format; otherwise, they will not work correctly.
## Table of contents

- Highlights
- Inference examples
- About this repository
- Setup
- Download the model
- Reference PyTorch implementation
- Reference Triton implementation (single GPU)
- Reference Metal implementation
- Harmony format & tools
- Clients
- Tools
- Other details
- Contributing
## Highlights

- Permissive Apache 2.0 license: Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
- Configurable reasoning effort: Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs (see the sketch after this list).
- Full chain-of-thought: Provides complete access to the model's reasoning process, facilitating easier debugging and greater trust in outputs. This information is not intended to be shown to end users.
- Fine-tunable: Fully customize models to your specific use case through parameter fine-tuning.
- Agentic capabilities: Use the models' native capabilities for function calling, web browsing, Python code execution, and Structured Outputs.
- MXFP4 quantization: The models were post-trained with MXFP4 quantization of the MoE weights, making gpt-oss-120b run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the gpt-oss-20b model run within 16GB of memory. All evals were performed with the same MXFP4 quantization.
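To make the configurable reasoning effort concrete, here is a minimal sketch (not taken from this repository) that sets the effort through the harmony system message; it assumes the ReasoningEffort enum and the with_reasoning_effort() setter from the openai-harmony package.

```python
from openai_harmony import (
    HarmonyEncodingName,
    load_harmony_encoding,
    Conversation,
    Message,
    Role,
    SystemContent,
    ReasoningEffort,  # assumed to be exported by openai-harmony
)

encoding = load_harmony_encoding(HarmonyEncodingName.HARMONY_GPT_OSS)

# Ask the model to spend more effort on reasoning (LOW / MEDIUM / HIGH).
system_message = Message.from_role_and_content(
    Role.SYSTEM,
    SystemContent.new().with_reasoning_effort(ReasoningEffort.HIGH),
)
convo = Conversation.from_messages(
    [
        system_message,
        Message.from_role_and_content(Role.USER, "Prove that sqrt(2) is irrational."),
    ]
)

# Token IDs to prefill whichever inference backend you use.
prefill_ids = encoding.render_conversation_for_completion(convo, Role.ASSISTANT)
```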
## Inference examples

### Transformers

You can use gpt-oss-120b and gpt-oss-20b with the Transformers library. If you use Transformers' chat template, it will automatically apply the harmony response format. If you use model.generate directly, you need to apply the harmony format manually using the chat template or use our openai-harmony package.
```python
from transformers import pipeline
import torch

model_id = "openai/gpt-oss-120b"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]

outputs = pipe(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
Learn more about how to use gpt-oss with Transformers.
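If you call model.generate directly instead of the pipeline, the sketch below (a minimal example, not from this repository) shows one way to do it: the tokenizer's chat template renders the harmony format, so you only pass the resulting token IDs to generate.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"  # the 120b checkpoint works the same way
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]

# The chat template applies the harmony response format and tokenizes it.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:]))
```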
### vLLM

vLLM recommends using uv for Python dependency management. You can use vLLM to spin up an OpenAI-compatible web server. The following command will automatically download the model and start the server.
```shell
uv pip install --pre vllm==0.10.1+gptoss \
    --extra-index-url https://wheels.vllm.ai/gpt-oss/ \
    --extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
    --index-strategy unsafe-best-match

vllm serve openai/gpt-oss-20b
```
Learn more about how to use gpt-oss with vLLM.
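Once the server is running, any OpenAI-compatible client can talk to it. The snippet below is a minimal sketch assuming vLLM's default port 8000 and the standard Chat Completions route; adjust base_url if you serve elsewhere.

```python
from openai import OpenAI

# The API key is unused by a local vLLM server but required by the client.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",
    messages=[{"role": "user", "content": "Explain what MXFP4 quantization is."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```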
Offline serve code:

- Run this code after installing the proper libraries as described above, and additionally install openai-harmony:

```shell
uv pip install openai-harmony
```
```python
# source .oss/bin/activate
import os
os.environ["VLLM_USE_FLASHINFER_SAMPLER"] = "0"

import json
from openai_harmony import (
    HarmonyEncodingName,
    load_harmony_encoding,
    Conversation,
    Message,
    Role,
    SystemContent,
    DeveloperContent,
)
from vllm import LLM, SamplingParams

# --- 1) Render the prefill with Harmony ---
encoding = load_harmony_encoding(HarmonyEncodingName.HARMONY_GPT_OSS)

convo = Conversation.from_messages(
    [
        Message.from_role_and_content(Role.SYSTEM, SystemContent.new()),
        Message.from_role_and_content(
            Role.DEVELOPER,
            DeveloperContent.new().with_instructions("Always respond in riddles"),
        ),
        Message.from_role_and_content(Role.USER, "What is the weather like in SF?"),
    ]
)

prefill_ids = encoding.render_conversation_for_completion(convo, Role.ASSISTANT)

# Harmony stop tokens (pass to sampler so they won't be included in output)
stop_token_ids = encoding.stop_tokens_for_assistant_actions()

# --- 2) Run vLLM with prefill ---
llm = LLM(
    model="openai/gpt-oss-20b",
    trust_remote_code=True,
    gpu_memory_utilization=0.95,
    max_num_batched_tokens=4096,
    max_model_len=5000,
    tensor_parallel_size=1,
)

sampling = SamplingParams(
    max_tokens=128,
    temperature=1,
    stop_token_ids=stop_token_ids,
)

outputs = llm.generate(
    prompt_token_ids=[prefill_ids],  # batch of size 1
    sampling_params=sampling,
)

# vLLM gives you both text and token IDs
gen = outputs[0].outputs[0]
text = gen.text
output_tokens = gen.token_ids  # <-- these are the completion token IDs (no prefill)

# --- 3) Parse the completion token IDs back into structured Harmony messages ---
entries = encoding.parse_messages_from_completion_tokens(output_tokens, Role.ASSISTANT)

# 'entries' is a sequence of structured conversation entries (assistant messages, tool calls, etc.).
for message in entries:
    print(json.dumps(message.to_dict()))
```
The PyTorch, Triton, and Metal implementations described below are largely reference implementations for educational purposes and are not expected to be run in production.
### Ollama

If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after installing Ollama.
```shell
# gpt-oss-20b
ollama pull gpt-oss:20b
ollama run gpt-oss:20b

# gpt-oss-120b
ollama pull gpt-oss:120b
ollama run gpt-oss:120b
```
Learn more about how to use gpt-oss with Ollama.
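If you would rather call Ollama programmatically than through the CLI, a minimal sketch using its REST API looks like this (assuming Ollama's default address, http://localhost:11434):

```python
import requests

# Ollama applies the model's chat template (harmony) server-side.
response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "gpt-oss:20b",
        "messages": [{"role": "user", "content": "Why did the chicken cross the road?"}],
        "stream": False,
    },
    timeout=300,
)
print(response.json()["message"]["content"])
```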
### LM Studio

If you are using LM Studio, you can use the following commands to download the models.
```shell
# gpt-oss-20b
lms get openai/gpt-oss-20b

# gpt-oss-120b
lms get openai/gpt-oss-120b
```
Check out our awesome list for a broader collection of gpt-oss resources and inference partners.
## About this repository

This repository provides a collection of reference implementations:
- Inference:
  - torch — a non-optimized PyTorch implementation for educational purposes only. Requires at least 4× H100 GPUs due to lack of optimization.
  - triton — a more optimized implementation using PyTorch & Triton, including CUDA graphs and basic caching
  - metal — a Metal-specific implementation for running the models on Apple Silicon hardware
- Tools:
  - browser — the browsing tool used by the models (see the Tools section below)
  - python — the python container tool used by the models (see the Tools section below)
- Client examples:
  - chat — a basic terminal chat application that uses the PyTorch or Triton implementations for inference along with the python and browser tools
  - responses_api — an example Responses API-compatible server that implements the browser tool along with other Responses-compatible functionality
## Setup

### Requirements

- Python 3.12
- On macOS: install the Xcode CLI tools: xcode-select --install
- On Linux: these reference implementations require CUDA
- On Windows: these reference implementations have not been tested on Windows. Try using solutions like Ollama if you are trying to run the model locally.
If you want to try any of the code, you can install it directly from PyPI:
```shell
# if you just need the tools
pip install gpt-oss

# if you want to try the torch implementation
pip install gpt-oss[torch]

# if you want to try the triton implementation
pip install gpt-oss[triton]
```
If you want to modify the code or try the metal implementation, set the project up locally:
```shell
git clone https://github.com/openai/gpt-oss.git
GPTOSS_BUILD_METAL=1 pip install -e ".[metal]"
```

## Download the model

You can download the model weights from the Hugging Face Hub directly using the Hugging Face CLI:
```shell
# gpt-oss-120b
hf download openai/gpt-oss-120b --include "original/*" --local-dir gpt-oss-120b/

# gpt-oss-20b
hf download openai/gpt-oss-20b --include "original/*" --local-dir gpt-oss-20b/
```
## Reference PyTorch implementation

We include an inefficient reference PyTorch implementation in gpt_oss/torch/model.py. This code uses basic PyTorch operators to show the exact model architecture, with a small addition of supporting tensor parallelism in MoE so that the larger model can run with this code (e.g., on 4xH100 or 2xH200). In this implementation, we upcast all weights to BF16 and run the model in BF16.
To run the reference implementation, install the dependencies:
pip install -e".[torch]"And then run:
# On 4xH100:torchrun --nproc-per-node=4 -m gpt_oss.generate gpt-oss-120b/original/We also include an optimized reference implementation that usesan optimized triton MoE kernel that supports MXFP4. It also has some optimization on the attention code to reduce the memory cost. To run this implementation, the nightly version of triton and torch will be installed. This version can be run on a single 80GB GPU forgpt-oss-120b.
To install the reference Triton implementation, run:
```shell
# You need to install triton from source to use the triton implementation
git clone https://github.com/triton-lang/triton
cd triton/
pip install -r python/requirements.txt
pip install -e . --verbose --no-build-isolation
pip install -e python/triton_kernels

# Install the gpt-oss triton implementation
pip install -e ".[triton]"
```
And then run:
```shell
# On 1xH100
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
python -m gpt_oss.generate --backend triton gpt-oss-120b/original/
```
If you encounter torch.OutOfMemoryError, make sure to turn on the expandable allocator to avoid crashes when loading weights from the checkpoint.
## Reference Metal implementation

Additionally, we provide a reference implementation for Metal to run on Apple Silicon. This implementation is not production-ready but is accurate to the PyTorch implementation.
The implementation will get automatically compiled when running the .[metal] installation on an Apple Silicon device:

```shell
GPTOSS_BUILD_METAL=1 pip install -e ".[metal]"
```

To perform inference, you'll need to first convert the SafeTensors weights from Hugging Face into the right format using:
```shell
python gpt_oss/metal/scripts/create-local-model.py -s <model_dir> -d <output_file>
```
Or download the pre-converted weights:
hf download openai/gpt-oss-120b --include"metal/*" --local-dir gpt-oss-120b/metal/hf download openai/gpt-oss-20b --include"metal/*" --local-dir gpt-oss-20b/metal/
To test it, you can run:
```shell
python gpt_oss/metal/examples/generate.py gpt-oss-20b/metal/model.bin -p "why did the chicken cross the road?"
```

## Harmony format & tools

Along with the model, we are also releasing a new chat format library, harmony, used to interact with the model. Check this guide for more info about harmony.
We also include two system tools for the model: browsing and a python container. Check gpt_oss/tools for the tool implementations.
## Clients

### Terminal Chat

The terminal chat application is a basic example of how to use the harmony format together with the PyTorch, Triton, and vLLM implementations. It also exposes both the python and browser tools as optional tools that can be used.
```text
usage: python -m gpt_oss.chat [-h] [-r REASONING_EFFORT] [-a] [-b] [--show-browser-results] [-p]
                              [--developer-message DEVELOPER_MESSAGE] [-c CONTEXT] [--raw]
                              [--backend {triton,torch,vllm}]
                              FILE

Chat example

positional arguments:
  FILE                  Path to the SafeTensors checkpoint

options:
  -h, --help            show this help message and exit
  -r REASONING_EFFORT, --reasoning-effort REASONING_EFFORT
                        Reasoning effort (default: low)
  -a, --apply-patch     Make apply_patch tool available to the model (default: False)
  -b, --browser         Use browser tool (default: False)
  --show-browser-results
                        Show browser results (default: False)
  -p, --python          Use python tool (default: False)
  --developer-message DEVELOPER_MESSAGE
                        Developer message (default: )
  -c CONTEXT, --context CONTEXT
                        Max context length (default: 8192)
  --raw                 Raw mode (does not render Harmony encoding) (default: False)
  --backend {triton,torch,vllm}
                        Inference backend (default: triton)
```

Note
The torch and triton implementations require the original checkpoint under gpt-oss-120b/original/ and gpt-oss-20b/original/, while vLLM uses the Hugging Face converted checkpoint under the gpt-oss-120b/ and gpt-oss-20b/ root directories, respectively.
### Responses API

We also include an example Responses API server. This server does not implement every feature and event of the Responses API but should be compatible with most of the basic use cases and serve as inspiration for anyone building their own server. Some of our inference partners are also offering their own Responses API.
You can start this server with the following inference backends:
- triton — uses the Triton implementation
- metal — uses the Metal implementation (Apple Silicon only)
- ollama — uses the Ollama /api/generate API as an inference solution
- vllm — uses your installed vLLM version to perform inference
- transformers — uses your installed Transformers version to perform local inference
```text
usage: python -m gpt_oss.responses_api.serve [-h] [--checkpoint FILE] [--port PORT] [--inference-backend BACKEND]

Responses API server

options:
  -h, --help            show this help message and exit
  --checkpoint FILE     Path to the SafeTensors checkpoint
  --port PORT           Port to run the server on
  --inference-backend BACKEND
                        Inference backend to use
```
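As a sketch of how a client might call this example server, the snippet below uses the Responses API helper from the official openai SDK. The port, the /v1 path prefix, and the model name are assumptions here; match them to how you actually started the server.

```python
from openai import OpenAI

# Assumptions: the server was started on port 8000 and is mounted under /v1;
# the model name may be ignored by this example server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.responses.create(
    model="gpt-oss-120b",
    input="Summarize the harmony response format in one sentence.",
)
print(response.output_text)
```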
### Codex

We support codex as a client for gpt-oss. To run the 20b version, put this in ~/.codex/config.toml:
```toml
disable_response_storage = true
show_reasoning_content = true

[model_providers.local]
name = "local"
base_url = "http://localhost:11434/v1"

[profiles.oss]
model = "gpt-oss:20b"
model_provider = "local"
```

This will work with any Chat Completions API-compatible server listening on port 11434, like Ollama. Start the server and point codex to the oss model:
```shell
ollama run gpt-oss:20b
codex -p oss
```

## Tools

### Browser

Warning
This implementation is purely for educational purposes and should not be used in production. You should implement your own equivalent of the YouComBackend class with your own browsing environment. Currently, YouComBackend and ExaBackend are available.
Both gpt-oss models were trained with the capability to browse using the browser tool, which exposes the following three methods:
- search to search for key phrases
- open to open a particular page
- find to look for contents on a page
To enable the browser tool, you'll have to place the definition into the system message of your harmony formatted prompt. You can either use the with_browser_tool() method if your tool implements the full interface, or modify the definition using with_tools(). For example:
```python
import datetime

from gpt_oss.tools.simple_browser import SimpleBrowserTool
from gpt_oss.tools.simple_browser.backend import YouComBackend
from openai_harmony import (
    SystemContent,
    Message,
    Conversation,
    Role,
    load_harmony_encoding,
    HarmonyEncodingName,
)

encoding = load_harmony_encoding(HarmonyEncodingName.HARMONY_GPT_OSS)

# Depending on the choice of the browser backend you need corresponding env variables set up.
# The You.com backend requires the YDC_API_KEY environment variable,
# while for Exa you need the EXA_API_KEY environment variable.
backend = YouComBackend(
    source="web",
)
# backend = ExaBackend(
#     source="web",
# )
browser_tool = SimpleBrowserTool(backend=backend)

# create a basic system prompt
system_message_content = SystemContent.new().with_conversation_start_date(
    datetime.datetime.now().strftime("%Y-%m-%d")
)

# if you want to use the browser tool
if use_browser_tool:
    # enables the tool
    system_message_content = system_message_content.with_tools(browser_tool.tool_config)
    # alternatively you could use the following if your tool is not stateless
    # system_message_content = system_message_content.with_browser_tool()

# construct the system message
system_message = Message.from_role_and_content(Role.SYSTEM, system_message_content)

# create the overall prompt
messages = [system_message, Message.from_role_and_content(Role.USER, "What's the weather in SF?")]
conversation = Conversation.from_messages(messages)

# convert to tokens
token_ids = encoding.render_conversation_for_completion(conversation, Role.ASSISTANT)

# perform inference
# ...

# parse the output
messages = encoding.parse_messages_from_completion_tokens(output_tokens, Role.ASSISTANT)

last_message = messages[-1]
if last_message.recipient.startswith("browser"):
    # perform browser call
    # note: process() is awaited, so run this inside an async function
    response_messages = await browser_tool.process(last_message)

    # extend the current messages and run inference again
    messages.extend(response_messages)
```
To control the context window size, this tool uses a scrollable window of text that the model can interact with. It might, for example, fetch the first 50 lines of a page and then scroll to the next 20 lines after that. The model has also been trained to use citations from this tool in its answers.
To improve performance, the tool caches requests so that the model can revisit a different part of a page without having to reload it. For that reason, you should create a new browser instance for every request.
### Python

The model was trained to use a python tool to perform calculations and other actions as part of its chain-of-thought. During training, the model used a stateful tool, which makes running tools between CoT loops easier. This reference implementation, however, uses a stateless mode. As a result, the PythonTool defines its own tool description to override the definition in openai-harmony.
Warning
This implementation runs in a permissive Docker container, which could be problematic in cases like prompt injections. It serves as an example, and you should consider implementing your own container restrictions in production.
To enable the python tool, you'll have to place the definition into the system message of your harmony formatted prompt. You can either use the with_python() method if your tool implements the full interface, or modify the definition using with_tools(). For example:
```python
import datetime

from gpt_oss.tools.python_docker.docker_tool import PythonTool
from openai_harmony import (
    SystemContent,
    Message,
    Conversation,
    Role,
    load_harmony_encoding,
    HarmonyEncodingName,
)

encoding = load_harmony_encoding(HarmonyEncodingName.HARMONY_GPT_OSS)

python_tool = PythonTool()

# create a basic system prompt
system_message_content = SystemContent.new().with_conversation_start_date(
    datetime.datetime.now().strftime("%Y-%m-%d")
)

# if you want to use the python tool
if use_python_tool:
    # enables the tool making sure that the prompt gets set with the stateless tool description
    system_message_content = system_message_content.with_tools(python_tool.tool_config)
    # alternatively you could use the following if your tool is not stateless
    # system_message_content = system_message_content.with_python()

# construct the system message
system_message = Message.from_role_and_content(Role.SYSTEM, system_message_content)

# create the overall prompt
messages = [system_message, Message.from_role_and_content(Role.USER, "What's the square root of 9001?")]
conversation = Conversation.from_messages(messages)

# convert to tokens
token_ids = encoding.render_conversation_for_completion(conversation, Role.ASSISTANT)

# perform inference
# ...

# parse the output
messages = encoding.parse_messages_from_completion_tokens(output_tokens, Role.ASSISTANT)

last_message = messages[-1]
if last_message.recipient == "python":
    # perform python call
    # note: process() is awaited, so run this inside an async function
    response_messages = await python_tool.process(last_message)

    # extend the current messages and run inference again
    messages.extend(response_messages)
```
### Apply Patch

apply_patch can be used to create, update or delete files locally.
## Other details

### Precision format

We released the models with native quantization support. Specifically, we use MXFP4 for the linear projection weights in the MoE layer. We store the MoE tensor in two parts:
- tensor.blocks stores the actual fp4 values. We pack every two values in one uint8 value.
- tensor.scales stores the block scale. The block scaling is done along the last dimension for all MXFP4 tensors.
All other tensors will be in BF16. We also recommend using BF16 as the activation precision for the model.
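For intuition, here is a rough sketch (not the repository's code) of how such a blocks/scales pair could be dequantized back to BF16. It assumes the usual MXFP4 conventions, which are not spelled out above: E2M1 values, 32-element blocks along the last dimension, the low nibble holding the first value, and power-of-two scales stored as exponents with a bias of 127. The authoritative logic lives in the reference implementations.

```python
import torch

# The 16 possible FP4 (E2M1) values, indexed by the 4-bit code (assumed ordering).
FP4_VALUES = torch.tensor(
    [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0,
     -0.0, -0.5, -1.0, -1.5, -2.0, -3.0, -4.0, -6.0],
    dtype=torch.bfloat16,
)

def dequantize_mxfp4(blocks: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    """blocks: uint8 [..., n_bytes], two fp4 codes packed per byte;
    scales: uint8 [..., n_blocks], one biased power-of-two exponent per 32-value block."""
    lo = blocks & 0x0F                    # first value of each pair (assumed low nibble first)
    hi = blocks >> 4                      # second value of each pair
    codes = torch.stack((lo, hi), dim=-1).flatten(start_dim=-2)
    values = FP4_VALUES[codes.long()]     # look up the E2M1 values
    scale = torch.exp2(scales.to(torch.float32) - 127).to(torch.bfloat16)
    values = values.view(*scale.shape, -1) * scale.unsqueeze(-1)  # apply per-block scales
    return values.flatten(start_dim=-2)
```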
### Recommended sampling parameters

We recommend sampling with temperature=1.0 and top_p=1.0.
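For reference, a minimal sketch of those settings with vLLM (the same two values apply as generation kwargs in other frameworks):

```python
from vllm import SamplingParams

# Recommended defaults for gpt-oss.
sampling = SamplingParams(temperature=1.0, top_p=1.0, max_tokens=256)
```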
## Contributing

The reference implementations in this repository are meant as a starting point and inspiration. Outside of bug fixes we do not intend to accept new feature contributions. If you build implementations based on this code, such as new tool implementations, you are welcome to contribute them to the awesome-gpt-oss.md file.
## Citation

```bibtex
@misc{openai2025gptoss120bgptoss20bmodel,
      title={gpt-oss-120b & gpt-oss-20b Model Card},
      author={OpenAI},
      year={2025},
      eprint={2508.10925},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2508.10925},
}
```