| o3 | |
|---|---|
| Developer | OpenAI |
| Initial release | April 16, 2025 |
| Predecessor | OpenAI o1 |
| Type | Reflective generative pre-trained transformer |
OpenAI o3 is a reflective generative pre-trained transformer (GPT) model developed by OpenAI as a successor to OpenAI o1 for ChatGPT. It is designed to devote additional deliberation time when addressing questions that require step-by-step logical reasoning.[1][2] On January 31, 2025, OpenAI released a smaller model, o3-mini,[3] followed on April 16 by o3 and o4-mini.[4]
The OpenAI o3 model was announced on December 20, 2024. It was called "o3" rather than "o2" to avoid a trademark conflict with the mobile carrier brand named O2.[1] OpenAI invited safety and security researchers to apply for early access to these models until January 10, 2025.[5] As with o1, there are two different models: o3 and o3-mini.[3]
On January 31, 2025, OpenAI released o3-mini to all ChatGPT users (including free-tier) and some API users. OpenAI describes o3-mini as a "specialized alternative" to o1 for "technical domains requiring precision and speed".[6] o3-mini features three reasoning effort levels: low, medium, and high. The free version uses medium. The variant using more compute is called o3-mini-high, and is available to paid subscribers.[3][7] Subscribers to ChatGPT's Pro tier have unlimited access to both o3-mini and o3-mini-high.[6]
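In the API, the effort level is selected per request rather than by model name. A minimal sketch, assuming the `reasoning_effort` parameter that OpenAI's Chat Completions API exposes for o-series models (the `build_request` helper itself is hypothetical, and no network call is made here):

```python
def build_request(question: str, effort: str = "medium") -> dict:
    """Assemble request parameters for an o3-mini call (not sent here)."""
    # The API accepts "low", "medium", or "high" for o-series models.
    if effort not in {"low", "medium", "high"}:
        raise ValueError(f"unknown reasoning effort: {effort}")
    return {
        "model": "o3-mini",
        "reasoning_effort": effort,
        "messages": [{"role": "user", "content": question}],
    }

# "o3-mini-high" in ChatGPT corresponds to the same model with effort "high".
params = build_request("Prove that sqrt(2) is irrational.", effort="high")
```

These parameters would be passed to the SDK's chat-completion call; the sketch only shows how the three effort levels map onto a single request field.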
On February 2, OpenAI launched OpenAI Deep Research, a ChatGPT service using a version of o3 that produces comprehensive reports within 5 to 30 minutes, based on web searches.[8]
On February 6, in response to pressure from rivals like DeepSeek R1, OpenAI announced an update aimed at enhancing the transparency of the thought process in its o3-mini model.[9]
On February 12, OpenAI increased the o3-mini-high rate limit for ChatGPT Plus subscribers to 50 requests per day (from 50 requests per week) and added file and image upload support.[10]
On April 16, 2025, OpenAI released o3 and o4-mini, the successor to o3-mini.[4]
On June 10, OpenAI released o3-pro, which the company claims is its most capable model yet.[11] OpenAI stated: "We recommend using it for challenging questions where reliability matters more than speed, and waiting a few minutes is worth the tradeoff".[12]
Reinforcement learning was used to teach o3 to "think" before generating answers, using what OpenAI refers to as a "private chain of thought".[13] This approach enables the model to plan ahead and reason through tasks, performing a series of intermediate reasoning steps to assist in solving the problem, at the cost of additional computing power and increased latency of responses.[14]
o3 demonstrates significantly better performance than o1 on complex tasks, including coding, mathematics, and science.[1] OpenAI reported that o3 achieved a score of 87.7% on the GPQA Diamond benchmark, which contains expert-level science questions not publicly available online.[15]
On SWE-bench Verified, a software engineering benchmark assessing the ability to solve real GitHub issues, o3 scored 71.7%, compared to 48.9% for o1. On Codeforces, o3 reached an Elo rating of 2727, whereas o1 scored 1891.[15]
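To put the Codeforces gap in perspective, the standard Elo expected-score formula (general to Elo ratings, not specific to OpenAI's evaluation) implies that a rating difference of 836 points makes the higher-rated side an overwhelming favorite:

```python
def elo_expected_score(r_a: float, r_b: float) -> float:
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

# o3 (2727) vs o1 (1891): a gap of 836 rating points.
p = elo_expected_score(2727, 1891)  # about 0.99 under the Elo model
```

Under the Elo model, a 400-point gap corresponds to roughly 10-to-1 odds, so 836 points puts o3's expected score against o1 above 99%.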
On the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI) benchmark, which evaluates an AI's ability to handle novel reasoning and skill-acquisition problems, o3 attained three times the accuracy of o1.[1][16]
According to OpenAI's January 2025 report on o3-mini, adjusting "reasoning effort" significantly affects performance, especially for STEM tasks. Moving from low to high reasoning effort raises accuracy on OpenAI's AIME 2024 (different from the MathArena AIME benchmark), GPQA Diamond, and Codeforces, typically by 10–30%. With high effort, o3-mini (high) achieved 87.3% on AIME 2024, 79.7% on GPQA Diamond, 2130 Elo on Codeforces, and 49.3 on SWE-bench Verified.[6]