
AI Models

Our comprehensive database of over 3200 models tracks key factors driving machine learning progress.

Last updated November 28, 2025

Trusted by leaders at OpenAI, DeepMind, and governments worldwide.
Need deeper insights? Our team offers custom research and advisory services.
Book a consultation

Data insights

Selected insights from this dataset.

See all our insights

The training compute of notable AI models has been doubling roughly every six months

Since 2010, the training compute used to create AI models has been growing at a rate of 4.4x per year. Most of this growth comes from increased spending, although improvements in hardware have also played a role.

Learn more
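The headline doubling time follows directly from the growth rate. A minimal sketch (plain Python, no external data) converting a yearly growth factor into a doubling time in months:

```python
import math

def doubling_time_months(annual_growth_factor):
    """Months needed for a quantity to double at the given yearly growth factor."""
    return 12 * math.log(2) / math.log(annual_growth_factor)

# Training compute growing at 4.4x per year:
print(round(doubling_time_months(4.4), 1))  # -> 5.6, i.e. roughly every six months
```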

Training compute growth is driven by larger clusters, longer training, and better hardware

Since 2018, the most significant driver of compute scaling across frontier models has likely been an increase in the quantity of hardware used in training clusters. A shift towards longer training runs and increases in hardware performance have also been important.

These trends are closely linked to a massive surge in investment. AI development budgets have been expanding by around 2-3x per year, enabling vast training and inference clusters and ever-larger models.

Learn more

The power required to train frontier AI models is doubling annually

Training frontier models requires a large and growing amount of power for GPUs, servers, cooling and other equipment. This is driven by an increase in GPU count; power draw per GPU is also growing, but at only a few percent per year.

Training compute has grown even faster — around 4x/year. However, hardware efficiency (a 12x improvement in the last ten years), the adoption of lower precision formats (an 8x improvement) and longer training runs (a 4x increase) account for a roughly 2x/year decrease in power requirements relative to training compute.

Our methodology for calculating or estimating a model’s power draw during training can be found here.

Learn more
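As a sanity check on the figures above, the three efficiency gains multiply to a 384x improvement over ten years, or roughly 1.8x per year, consistent with power growing about 2x/year more slowly than the ~4x/year growth in training compute. A minimal sketch:

```python
# Ten-year efficiency gains cited above:
hardware_efficiency = 12  # improvement in hardware energy efficiency
low_precision = 8         # adoption of lower-precision number formats
longer_runs = 4           # longer runs spread the same compute over more time

total_gain = hardware_efficiency * low_precision * longer_runs  # 384x over ten years
per_year = total_gain ** (1 / 10)
print(round(per_year, 2))  # -> 1.81, i.e. roughly 2x/year
```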

Training compute costs are doubling roughly every nine months for the largest AI models

Spending on training large-scale ML models is growing at a rate of 2.4x per year. The most advanced models now cost hundreds of millions of dollars, with expenses measured by amortizing cluster costs over the training period. About half of this spending is on GPUs, with the remainder on other hardware and energy.

Learn more

Over 30 AI models have been trained at the scale of GPT-4

The largest AI models today are trained with over 10^25 floating-point operations (FLOP) of compute. The first model trained at this scale was GPT-4, released in March 2023. As of June 2025, we have identified over 30 publicly announced AI models from different AI developers that we believe to be over the 10^25 FLOP training compute threshold.

Training a model of this scale costs tens of millions of dollars with current hardware. Despite the high cost, we expect a proliferation of such models—we saw an average of roughly two models over this threshold announced every month during 2024. Models trained at this scale will be subject to additional requirements under the EU AI Act, coming into force in August 2025.

Learn more
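The two compute thresholds discussed on this page (10^25 FLOP for GPT-4 scale, 10^23 FLOP for large-scale) lend themselves to a simple classifier. A sketch using hypothetical model names and FLOP values, not taken from the database:

```python
GPT4_SCALE = 1e25    # threshold first crossed by GPT-4; relevant under the EU AI Act
LARGE_SCALE = 1e23   # static "large-scale" threshold used in some regulatory frameworks

# Hypothetical entries for illustration only:
models = {"model-a": 2.1e25, "model-b": 8.0e23, "model-c": 5.0e22}

def tier(flop):
    if flop >= GPT4_SCALE:
        return "GPT-4 scale"
    if flop >= LARGE_SCALE:
        return "large-scale"
    return "below large-scale"

for name, flop in models.items():
    print(f"{name}: {tier(flop)}")
```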

FAQ

What is a notable model?

A notable model meets any of the following criteria: (i) state-of-the-art improvement on a recognized benchmark; (ii) highly cited (over 1000 citations); (iii) historical relevance; (iv) significant use.

How was the AI Models dataset created?

The dataset was originally created for the report “Compute Trends Across Three Eras of Machine Learning” and has continually grown and expanded since then.

What are notable, frontier, and large-scale models?

We flag models as notable if they advanced the state of the art, achieved many citations in an academic publication, had over a million monthly users, were highly significant historically, or were developed at a cost of over one million dollars. You can learn more about these notability criteria by reading our AI Models Documentation.

Frontier models are models that were in the top 10 by training compute at the time of their release, a threshold that grows over time as larger models are developed.

Large-scale models are models that were trained with over 10^23 FLOP of compute, which is a static threshold that is used in some AI regulatory frameworks.

Why does the number of models in the database differ from the number of results in the explorer?

The explorer only shows models where we have estimates to visualize, e.g. for training compute, parameter count, or dataset size. While we do our best to collect as much information as possible about the models in our databases, this process is limited by the amount of publicly available information from companies, labs, researchers, and other organizations. Further details about coverage can be found in the Records section of the documentation.
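In pandas terms, the explorer's behavior corresponds to dropping rows whose estimate columns are empty. A sketch on a toy frame (the column name `training_compute_flop` is an assumption; check the CSV header for the real names):

```python
import pandas as pd

# Toy frame mimicking the database schema; the column names are assumptions.
df = pd.DataFrame({
    "model": ["A", "B", "C"],
    "training_compute_flop": [1e24, None, 3e22],
})

# Only rows with an actual estimate would appear in a training-compute plot:
plottable = df.dropna(subset=["training_compute_flop"])
print(len(plottable), "of", len(df), "models have a compute estimate")
```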

How is the data licensed?

Epoch AI’s data is free to use, distribute, and reproduce provided the source and authors are credited under the Creative Commons Attribution license. Complete citations can be found here.

How do you estimate details like training compute?

Where possible, we collect details such as training compute directly from publications. Otherwise, we estimate details from information such as model architecture and training data, or training hardware and duration. The documentation describes these approaches further. Per-entry notes on the estimation process can be found within the database.

How accurate is the data?

Records are labeled based on the uncertainty of their training compute, parameter count, and dataset size. “Confident” records are accurate within a factor of 3x, “Likely” records within a factor of 10x, and “Speculative” records within a factor of 30x, larger or smaller. Further details are available in the documentation. If you spot a mistake, please report it to data@epochai.org.
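These confidence labels translate into multiplicative error bars. A small sketch turning a point estimate plus its label into a plausible range (the factors come from the answer above):

```python
UNCERTAINTY_FACTOR = {"Confident": 3, "Likely": 10, "Speculative": 30}

def plausible_range(estimate, label):
    """Return (low, high) bounds implied by a record's confidence label."""
    f = UNCERTAINTY_FACTOR[label]
    return estimate / f, estimate * f

low, high = plausible_range(1e24, "Likely")
print(f"{low:.0e} to {high:.0e}")  # a "Likely" 1e24 FLOP estimate spans 1e+23 to 1e+25
```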

What are the question marks in some plots?

Models with the “Speculative” confidence level are indicated with a small question mark icon on the graph, to alert users not to treat this data as very precise. In some cases, numbers may be based on partial information about training hardware, reported benchmark scores, or leaked sources. In other cases, developers provide information that is consistent with a wide range of values, such as “months” of training time, or “trillions” of data points.

How up-to-date is the data?

The dataset is kept up-to-date by monitoring a variety of sources, including academic publications, press releases, and online news. An automated search process identifies newly released models each week using the Google Search API, and this is supplemented by models identified manually by Epoch staff.

The field of machine learning is highly active with frequent new releases, so there will inevitably be some models that have not yet been added. Generally, major models should be added within two weeks of their release, and others are added periodically during literature reviews. If you notice a missing model, you can notify us at data@epochai.org.

How can I access this data?

Download the data in CSV format.
Explore the data using our interactive tools.
View the data directly in a table format.

Who can I contact with questions or comments about the data?

Feedback and questions can be directed to the data group at data@epochai.org.

Documentation

Models in this dataset have been collected from various sources, including literature reviews, Papers With Code, historical accounts, highly-cited publications, proceedings of top conferences, and suggestions from individuals. The list of models is non-exhaustive, but aims to cover most models that were state-of-the-art when released, have over 1000 citations, one million monthly active users, or an equivalent level of historical significance. Additional information about our approach to measuring parameter counts, dataset size, and training compute can be found in the accompanying documentation.

Read the complete documentation

Use this work

Licensing

Epoch AI's data is free to use, distribute, and reproduce provided the source and authors are credited under the Creative Commons Attribution license.

Citation

Epoch AI, ‘Data on AI Models’. Published online at epoch.ai. Retrieved from ‘https://epoch.ai/data/ai-models’ [online resource]. Accessed.

BibTeX Citation

@misc{EpochAIModels2025,
  title = {Data on AI Models},
  author = {{Epoch AI}},
  year = {2025},
  month = {07},
  url = {https://epoch.ai/data/ai-models},
  note = {Accessed:}
}

Python Import

import pandas as pd

data_url = "https://epoch.ai/data/all_ai_models.csv"
models_df = pd.read_csv(data_url)

Download this data

Notable AI Models

CSV, Updated November 26, 2025

Large-Scale AI Models

CSV, Updated November 10, 2025

Frontier Models

CSV, Updated November 28, 2025

All Models

CSV, Updated November 28, 2025

Related work

Compute Trends Across Three Eras of Machine Learning

Will We Run Out of Data? Limits of LLM Scaling Based on Human-Generated Data

Announcing Epoch AI’s Updated Parameter, Compute and Data Trends Database

Explore other databases

Data on AI Benchmarking

Our database of benchmark results, featuring the performance of leading AI models on challenging tasks. It includes results from benchmarks evaluated internally by Epoch AI as well as data collected from external sources. Explore trends across time, by benchmark, or by model.

Updated November 29, 2025

Data on GPU clusters

Our database of over 500 GPU clusters and supercomputers tracks large hardware facilities, including those used for AI training and inference.

Updated November 22, 2025

Collaborate with us

We're proud to partner with select stakeholders on projects aligned with our mission.

© 2025 Epoch AI