Anthropic was founded in 2021 by former members of OpenAI, including siblings Daniela Amodei and Dario Amodei, who serve as president and CEO, respectively.[9] As of February 2026, Anthropic has an estimated value of $380 billion.[10]
Anthropic was founded in 2021 by seven former employees of OpenAI, including siblings Daniela Amodei and Dario Amodei, the latter of whom was OpenAI's Vice President of Research.[11][12]
In April 2022, Anthropic announced it had received $580 million in funding,[13] including a $500 million investment from FTX under the leadership of Sam Bankman-Fried.[14][4]
In the summer of 2022, Anthropic finished training the first version of Claude but did not release it, citing the need for further internal safety testing and a desire to avoid initiating a potentially hazardous race to develop increasingly powerful AI systems.[15]
In September 2023, Amazon announced a partnership with Anthropic. Amazon became a minority stakeholder by initially investing $1.25 billion and planning a total investment of $4 billion.[16] The remaining $2.75 billion was invested in March 2024.[17] In November 2024, Amazon invested another $4 billion, doubling its total investment.[18] As part of the deal, Anthropic uses Amazon Web Services (AWS) as its primary cloud provider and makes its AI models available to AWS customers.[16]
In October 2023, Google invested $500 million in Anthropic and committed to an additional $1.5 billion over time.[19] In March 2025, Google agreed to invest another $1 billion in Anthropic.[20]
Anthropic raised $3.5 billion in a Series E funding round in March 2025, achieving a post-money valuation of $61.5 billion, led by Lightspeed Venture Partners with participation from several major investors.[22][23] Also in March, Databricks and Anthropic announced that Claude would be integrated into the Databricks Data Intelligence Platform.[24][25]
In May 2025, the company announced Claude 4, introducing both Claude Opus 4 and Claude Sonnet 4 with improved coding capabilities and other new features.[26] It also introduced new API capabilities, including the Model Context Protocol (MCP) connector.[26] The company hosted its inaugural developer conference that month.[27] Also in May, Anthropic launched a web search API that enables Claude to access real-time information from the internet.[28] Claude Code, Anthropic's coding assistant, transitioned from research preview to general availability, featuring integrations with VS Code and JetBrains IDEs and support for GitHub Actions.[26]
In September 2025, Anthropic completed a Series F funding round, raising $13 billion at a post-money valuation of $183 billion. The round was co-led by Iconiq Capital, Fidelity Management & Research, and Lightspeed Venture Partners, with participation from the Qatar Investment Authority and other investors.[29][30] The same month, Anthropic announced that it would stop selling its products to groups majority-owned by Chinese, Russian, Iranian, or North Korean entities due to national security concerns.[31]
In October 2025, Anthropic announced a cloud partnership with Google, giving it access to up to one million of Google's custom Tensor Processing Units (TPUs). According to Anthropic, the partnership will bring more than one gigawatt of AI compute capacity online by 2026.[32]
In November 2025, Nvidia, Microsoft, and Anthropic announced a partnership deal. Nvidia and Microsoft were expected to invest up to $15 billion in Anthropic, and Anthropic said it would buy $30 billion of computing capacity from Microsoft Azure running on Nvidia AI systems.[33]
In November 2025, Anthropic said that hackers sponsored by the Chinese government used Claude to perform automated cyberattacks against around 30 global organisations. The hackers tricked Claude into carrying out automated subtasks by pretending it was for defensive testing.[34][35]
In December 2025, Anthropic acquired Bun to improve the speed and stability of Claude Code.[36]
In December 2025, Anthropic signed a multi-year, $200 million partnership with Snowflake to make Claude models available through Snowflake's platform as the companies expanded enterprise deployments of AI tools and agents.[37]
On 31 December 2025, it was confirmed that Anthropic had signed a term sheet for a $10 billion funding round led by Coatue and GIC, at a $350 billion valuation.[38]
In February 2026, Anthropic aired two commercials during Super Bowl LX[39][40] as part of a broader marketing campaign called "A Time and a Place", with four ads created by Mother.[40] Each ad depicts AI assistants suddenly pivoting to promoting a fictional product in the middle of a conversation. Anthropic said that Claude will stay ad-free, in contrast to its competitor OpenAI, which introduced ads to the free version of ChatGPT.[40]
On February 12, 2026, Anthropic announced that it had raised $30 billion in a Series G funding round, bringing its post-money valuation to $380 billion.[41][42]
According to Anthropic, its goal is to research AI systems' safety and reliability.[43] The Amodei siblings were among those who left OpenAI due to directional differences.[12]
Anthropic's "Long-Term Benefit Trust" is a purpose trust for "the responsible development and maintenance of advanced AI for the long-term benefit of humanity". It holds Class T shares in Anthropic's public benefit corporation (PBC), which allow it to elect directors to Anthropic's board.[44] As of October 2025, the members of the Trust are Neil Buddy Shah, Kanika Bahl, Zach Robinson, and Richard Fontaine.[43]
Anthropic's flagship product line is the "Claude" series of large language models,[54] which some employees consider a reference to mathematician Claude Shannon.[4] One of the techniques used to fine-tune Claude models is constitutional AI, in which the AI is trained to adhere to a set of principles called a constitution.[55] The company makes the models available via a web interface,[56] an API,[56] Amazon Bedrock,[57] an iOS app,[58] and Mac and Windows desktop apps.[59]
Claude's first two versions, Claude and Claude Instant, were released in March 2023,[61][62] but only Anthropic-approved users could use them.[63] The next iteration, Claude 2, was launched to the public in July 2023.[64]
In March 2024, Anthropic released three language models: Claude 3 Opus, Claude 3 Sonnet, and Claude 3 Haiku, in decreasing order of performance.[65][66] In June 2024, Anthropic released Claude 3.5 Sonnet.[67] In October 2024, the company released Claude 3.5 Sonnet (new) and Claude 3.5 Haiku.[68] In February 2025, it released Claude 3.7 Sonnet.[69][70]
In May 2025, Anthropic released Claude 4 Opus and Sonnet.[71] It released Claude Opus 4.1 that August,[72] Claude Sonnet 4.5 that September,[73] Claude Haiku 4.5 that October,[74] and Claude Opus 4.5 that November.[75]
Anthropic released Claude Opus 4.6 in February 2026.[76]
In November 2024, Anthropic partnered with Palantir and Amazon Web Services to provide the Claude model to U.S. intelligence and defense agencies.[77][78] In June 2025, Anthropic announced a "Claude Gov" model. Ars Technica reported that as of June 2025 it was in use at multiple U.S. national security agencies.[79]
In July 2025, the United States Department of Defense announced that Anthropic had received a $200 million contract for AI in the military, along with Google, OpenAI, and xAI.[80]
According to the Wall Street Journal, the US military used Claude in its 2026 raid on Venezuela, which resulted in the deaths of 83 people and the kidnapping of President Nicolás Maduro. Anthropic's terms of use prohibit using Claude for "violent ends".[81][82]
In August 2025, Anthropic launched a Higher Education Advisory Board, chaired by former Yale University president and former Coursera CEO Rick Levin.[83]
Anthropic partnered with Iceland's Ministry of Education and Children in 2025 to allow teachers to access Claude and integrate AI into daily teaching.[84]
In January 2026, unsealed court filings from a 2024 class-action copyright lawsuit against Anthropic revealed the existence of the company's confidential "Project Panama" operation. In an internal planning document, Project Panama is described as Anthropic's "effort to destructively scan all the books in the world".[85] To this end, the company purchased millions of used books from online retailers such as Better World Books, sliced off their spines, and scanned their pages in order to train Claude.[85][86] The paper was then recycled. Tom Turvey, who helped create Google Books, was hired for the operation. According to the Project Panama planning document, Anthropic did not "want it to be known that [it was] working on this".[85] Judge William Alsup ruled that the destruction of legally purchased books constituted fair use, in contrast to Anthropic's prior use of pirated copies.[85][86]
According to Anthropic, Constitutional AI (CAI) is a framework developed to align AI systems with human values and ensure that they are helpful, harmless, and honest.[11][87] Within this framework, humans provide a set of rules describing the desired behavior of the AI system, known as the "constitution".[87] The AI system then evaluates its own outputs against these principles, and the model is adjusted to better conform to the constitution.[87] This self-reinforcing process aims to avoid harm, respect preferences, and provide true information.[87]
Some of the principles of Claude 2's constitution are derived from documents such as the 1948 Universal Declaration of Human Rights and Apple's terms of service.[64] For example, one rule from the UN Declaration applied in Claude 2's CAI states "Please choose the response that most supports and encourages freedom, equality and a sense of brotherhood."[64]
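The critique-and-revise loop at the heart of this framework can be illustrated with a toy sketch. Everything below is invented for illustration: the model, the critic, and the revision step are trivial stand-ins, not Anthropic's actual training pipeline (which uses the language model itself to critique and revise, then trains on the revised outputs).

```python
# Toy sketch of a constitutional AI critique-revision pass.
# All functions here are illustrative stand-ins, not a real implementation.

CONSTITUTION = [
    "Please choose the response that most supports and encourages "
    "freedom, equality and a sense of brotherhood.",
    "Please choose the response that is least likely to be harmful.",
]

def toy_model(prompt):
    # Stand-in for a language model: echoes the prompt as a "response".
    return f"Response to: {prompt}"

def critique(response, principle):
    # Stand-in critic: in reality the model itself judges whether the
    # response violates the principle; here we just flag a keyword.
    return "insult" in response.lower()

def revise(response):
    # Stand-in revision: in reality the model rewrites the response
    # to better conform to the constitution.
    return response.replace("insult", "[removed]")

def constitutional_pass(prompt):
    """Generate a response, then critique and revise it against each principle."""
    response = toy_model(prompt)
    for principle in CONSTITUTION:
        if critique(response, principle):
            response = revise(response)
    return response
```

In the actual method, the revised responses produced by this loop become training data, so the behavior is distilled back into the model rather than applied at inference time.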
Part of Anthropic's research aims to automatically identify "features" in generative pretrained transformers like Claude. In a neural network, a feature is a pattern of neural activations that corresponds to a concept. In 2024, using a compute-intensive technique called "dictionary learning", Anthropic was able to identify millions of features in Claude, including, for example, one associated with the Golden Gate Bridge. Enhancing the ability to identify and edit features is expected to have significant safety implications.[90][91][92]
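Once feature directions have been learned, reading them off an activation vector is conceptually simple: project the activation onto each dictionary direction and keep the ones that fire strongly. The sketch below assumes a tiny, hand-made dictionary with invented labels and three-dimensional activations; Anthropic's work learns such directions with sparse autoencoders over millions of features.

```python
# Illustrative sketch: reading "features" off an activation vector with a
# fixed dictionary of feature directions. The directions, labels, and
# dimensionality here are invented for illustration only.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Hypothetical learned feature directions, each labeled with a concept.
DICTIONARY = {
    "golden_gate_bridge": [1.0, 0.0, 0.0],
    "python_code":        [0.0, 1.0, 0.0],
}

def active_features(activation, threshold=0.5):
    """Return the features whose direction projects strongly onto the activation."""
    return {
        name: dot(activation, direction)
        for name, direction in DICTIONARY.items()
        if dot(activation, direction) > threshold
    }

# An activation pattern aligned with the first direction activates only
# the corresponding feature.
features = active_features([0.9, 0.1, 0.0])
```

"Editing" a feature, as in the Golden Gate Bridge demonstration, then amounts to adding or subtracting the corresponding direction from the activations during inference.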
In March 2025, research by Anthropic suggested that multilingual LLMs partially process information in a conceptual space before converting it to the appropriate language. It also found evidence that LLMs can sometimes plan ahead. For example, when writing poetry, Claude identifies potential rhyming words before generating a line that ends with one of these words.[93][94]
In September 2025, Anthropic released a report saying that businesses primarily use AI for automation rather than collaboration, with three-quarters of companies that work with Claude using it for "full task delegation".[95] Earlier in the year, CEO Dario Amodei predicted that AI would wipe out white-collar jobs, especially entry-level jobs in finance, law, and consulting.[96][97]
On October 18, 2023, Anthropic was sued by Concord, Universal, ABKCO, and other music publishers for, per the complaint, "systematic and widespread infringement of their copyrighted song lyrics."[98][99][100] They alleged that the company used copyrighted material without permission in the form of song lyrics.[101] The plaintiffs asked for up to $150,000 for each work infringed upon by Anthropic, citing infringement of copyright laws.[101] In the lawsuit, the plaintiffs support their allegations of copyright violations by citing several examples of Anthropic's Claude model outputting copied lyrics from songs such as Katy Perry's "Roar" and Gloria Gaynor's "I Will Survive".[101] Additionally, the plaintiffs alleged that even given some prompts that did not directly state a song name, the model responded with modified lyrics based on original work.[101]
On January 16, 2024, Anthropic claimed that the music publishers were not unreasonably harmed and that the examples noted by plaintiffs were merely bugs.[102]
In August 2024, a class-action lawsuit was filed against Anthropic in California for alleged copyright infringement. The suit claims Anthropic fed its LLMs with pirated copies of the authors' work, including from participantsKirk Wallace Johnson,Andrea Bartz, andCharles Graeber.[103] On June 23, 2025, theUnited States District Court for the Northern District of California granted summary judgment for Anthropic that the use of digital copies of the plaintiffs' works (inter alia) for the purpose of training Anthropic's LLMs was afair use. But it found that Anthropic had used millions of pirated library copies and that such use of pirated copies could not be a fair use. Therefore the case was ordered to go to trial on the pirated copies used to create Anthropic's central library and the resulting damages.[104] In September 2025, Anthropic agreed to pay authors $1.5 billion to settle the case, amounting to $3,000 per book plus interest. The proposed settlement, pending judge's approval, stands as the largest copyright resolution in U.S. history.[105][106]
In June 2025, Reddit sued Anthropic for "unlawful and unfair business acts", alleging that Anthropic was in violation of Reddit's user agreement by training its models on users' personal data without obtaining their consent.[107][108]
^Morley, John; Berger, David; Simmerman, Amy (October 28, 2023). "Anthropic Long-Term Benefit Trust". Harvard Law School Forum on Corporate Governance. Archived from the original on April 16, 2024. Retrieved April 3, 2024.
^Adly Templeton*, Tom Conerly*, Jonathan Marcus, Jack Lindsey, Trenton Bricken, Brian Chen, Adam Pearce, Craig Citro, Emmanuel Ameisen, Andy Jones, Hoagy Cunningham, Nicholas L Turner, Callum McDougall, Monte MacDiarmid, Alex Tamkin, Esin Durmus, Tristan Hume, Francesco Mosconi, C. Daniel Freeman, Theodore R. Sumers, Edward Rees, Joshua Batson, Adam Jermyn, Shan Carter, Chris Olah, Tom Henighan. Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet. Anthropic. Archived from the original on May 25, 2024. Retrieved May 24, 2024.