OpenAI Codex refers to two AI-assisted software development tools released by OpenAI. They translate natural language into code, a type of technology described by artificial intelligence researchers as an AI agent.[1]
On August 10, 2021, OpenAI announced Codex, a code autocompletion tool available in select IDEs such as Visual Studio Code and Neovim. It was a modified, production version of GPT-3,[2] fine-tuned on gigabytes of source code in a dozen programming languages. It was the original model powering GitHub Copilot.[3]
On April 16, 2025, OpenAI published Codex CLI, an AI agent harness that runs locally on a user's computer, to GitHub under an Apache 2.0 license.[4][5] They also announced a language model, codex-mini-latest, available only behind an API. It was a fine-tuned version of o4-mini, specifically trained for use in Codex CLI.[6]
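Since the model is exposed only through the API, a request to it would resemble the following (a minimal sketch assuming the official openai Python SDK and its Responses endpoint; the prompt text is illustrative):

```python
from openai import OpenAI

# The client reads the OPENAI_API_KEY environment variable by default.
client = OpenAI()

response = client.responses.create(
    model="codex-mini-latest",
    input="Write a Python function that reverses a singly linked list.",
)
print(response.output_text)
```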
On May 16, 2025, OpenAI announced the launch of a research preview of a distinct tool with a similar purpose, also named Codex, based on a fine-tuned version of OpenAI o3.[7] It is a software agent that performs tasks in computer programming, including writing features, answering codebase questions, running tests, and proposing pull requests for review. It has two versions: one that runs in a virtual machine in the cloud, and one in which the agent runs in the cloud but performs actions on a local machine connected via API (similar in operation to Cursor or Claude Code). It is available to ChatGPT Pro, Enterprise, Team, and Plus users.[8][9]
Based on GPT-3, a neural network trained on text, Codex was additionally trained on 159 gigabytes of Python code from 54 million GitHub repositories.[10][11] A typical use case of Codex is for a user to type a comment, such as "// compute the moving average of an array for a given window size", then use the AI to suggest a block of code that satisfies that comment prompt.[12] OpenAI stated that Codex can complete approximately 37% of requests and is meant to make human programming faster rather than to replace it. According to OpenAI's blog, Codex excels most at "mapping... simple problems to existing code", which they describe as "probably the least fun part of programming".[13][14] Fast.ai co-founder Jeremy Howard stated that "Codex is a way of getting code written without having to write as much code", and that "it is not always correct, but it is just close enough".[15] According to a paper by OpenAI researchers, when Codex attempted each test case 100 times, it generated working solutions for 70.2% of prompts.[16]
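For illustration, a comment prompt of that kind might be completed along the following lines (a hand-written sketch of a plausible suggestion, not actual Codex output):

```python
# compute the moving average of an array for a given window size
def moving_average(values, window_size):
    if window_size <= 0 or window_size > len(values):
        raise ValueError("window size must be between 1 and len(values)")
    return [
        sum(values[i : i + window_size]) / window_size
        for i in range(len(values) - window_size + 1)
    ]
```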
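The 70.2% figure reflects the pass@k methodology used in that paper, in which n samples are generated per problem and a problem counts as solved if any of k sampled attempts passes the unit tests. A sketch of the unbiased estimator the paper describes:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    # n: samples generated per problem; c: samples that passed the tests;
    # k: attempt budget. Estimates the probability that at least one of
    # k randomly chosen samples passes.
    if n - c < k:
        return 1.0  # every size-k subset must contain a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)
```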
OpenAI claims that Codex can create code in over a dozen programming languages, including Go, JavaScript, Perl, PHP, Ruby, Shell, Swift, and TypeScript, though it is most effective in Python.[3] According to VentureBeat, demonstrations uploaded by OpenAI showed impressive coreference resolution capabilities. The demonstrators were able to create a browser game in JavaScript and generate data science charts using matplotlib.[14]
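A hedged illustration of the comment-driven, coreference-heavy prompting style shown in such demonstrations (the prompts, data, and labels here are invented for this sketch, not demo output):

```python
import numpy as np
import matplotlib.pyplot as plt

# plot the sine of x from 0 to 10
x = np.linspace(0, 10, 200)
plt.plot(x, np.sin(x), label="sin(x)")

# now add its cosine to the same figure and label both curves
# ("its" requires the model to resolve the reference back to x)
plt.plot(x, np.cos(x), label="cos(x)")
plt.legend()
plt.show()
```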
OpenAI showed that Codex can interface with services and apps such as Mailchimp, Microsoft Word, Spotify, and Google Calendar.[14][17]
The Codex-1 model is trained to detect requests for malware, exploits, or policy-violating content, and returns a refusal citing the relevant policy clause. Its execution container has no outbound internet access and only whitelisted dependencies, which is intended to reduce the blast radius of any bad code.[18]
OpenAI demonstrations showcased flaws such as inefficient code and one-off quirks in code samples.[14] In an interview with The Verge, OpenAI chief technology officer Greg Brockman said that "sometimes [Codex] doesn't quite know exactly what you're asking" and that it can require some trial and error.[17] OpenAI researchers found that Codex struggles with multi-step prompts, often failing or yielding counter-intuitive behavior. They also raised several safety issues, such as over-reliance by novice programmers, biases inherited from the training data, and security risks from vulnerable generated code.[16]
VentureBeat stated that because Codex[19] is trained on public data, it could be vulnerable to "data poisoning" via intentional uploads of malicious code.[14] According to a study by researchers from New York University, approximately 40% of the code generated by GitHub Copilot (which uses Codex) in scenarios relevant to high-risk CWEs (Common Weakness Enumerations) included glitches or other exploitable design flaws.[20]
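For illustration, a hypothetical example of the kind of CWE-relevant flaw such studies flag (here CWE-89, SQL injection; not actual Copilot or Codex output):

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # Vulnerable pattern: user input interpolated directly into the SQL
    # string, so a crafted name can alter the query's logic.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # Safer pattern: a parameterized query; the input is never parsed as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```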
The Free Software Foundation expressed concerns that code snippets generated by Copilot and Codex could violate copyright, in particular the condition of the GPL that requires derivative works to be licensed under equivalent terms.[21] Issues they raised include whether training on public repositories falls under fair use, how developers could discover infringing generated code, whether trained machine learning models could be considered modifiable source code or a compilation of the training data, and whether machine learning models could themselves be copyrighted, and by whom.[21][22] An internal GitHub study found that approximately 0.1% of generated code contained direct copies from the training data. In one example, the model reproduced training-data code implementing the fast inverse square root algorithm, including its comments and an incorrect copyright notice.[12]
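For reference, the algorithm in question approximates the reciprocal square root by reinterpreting a float's bits as an integer; a Python adaptation is sketched below (the code reproduced in the incident was the original C version):

```python
import struct

def fast_inverse_sqrt(x: float) -> float:
    # Reinterpret the 32-bit float's bits as an unsigned integer.
    i = struct.unpack("<I", struct.pack("<f", x))[0]
    i = 0x5F3759DF - (i >> 1)  # the famous "magic constant" step
    y = struct.unpack("<f", struct.pack("<I", i))[0]
    return y * (1.5 - 0.5 * x * y * y)  # one Newton-Raphson refinement
```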
In response, OpenAI stated that "legal uncertainty on the copyright implications of training AI systems imposes substantial costs on AI developers and so should be authoritatively resolved."[12]
The copyright issues with Codex have been compared to the Authors Guild, Inc. v. Google, Inc. court case, in which judges ruled that Google Books's use of text snippets from millions of scanned books constituted fair use.[12][23]