Chinchilla (language model)

Language model by DeepMind

Chinchilla is a family of large language models (LLMs) developed by the research team at Google DeepMind, presented in March 2022.[1]

Models


It is named "Chinchilla" because it is a further development of a previous model family named Gopher. Both model families were trained to investigate the scaling laws of large language models.[2]

Chinchilla was claimed to outperform GPT-3. It considerably simplifies downstream use because it requires much less compute for inference and fine-tuning. Based on an analysis of previously trained language models, DeepMind determined that doubling the model size should be matched by doubling the number of training tokens. Chinchilla was trained under this hypothesis: for roughly the same training cost as Gopher, it has 70B parameters (a quarter of Gopher's 280B) and was trained on four times as much data.[3]
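As a rough illustration (a back-of-the-envelope sketch, not a calculation from the cited sources), the training cost of a dense transformer is often approximated as C ≈ 6ND floating-point operations, where N is the parameter count and D the number of training tokens. Plugging in the commonly cited token counts for the two models (about 300 billion for Gopher, about 1.4 trillion for Chinchilla) shows the sense in which they are "similar in cost":

```python
# Rough training-compute comparison using the common approximation
# C ≈ 6 * N * D FLOPs (N = parameters, D = training tokens).
# Token counts below are the widely reported figures, not exact values.

def train_flops(params: float, tokens: float) -> float:
    """Approximate training compute in FLOPs."""
    return 6 * params * tokens

gopher = train_flops(280e9, 300e9)      # ~5.0e23 FLOPs
chinchilla = train_flops(70e9, 1.4e12)  # ~5.9e23 FLOPs

print(f"Gopher:     {gopher:.2e} FLOPs")
print(f"Chinchilla: {chinchilla:.2e} FLOPs")
print(f"ratio: {chinchilla / gopher:.2f}")  # ~1.17, i.e. comparable budgets
```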

Chinchilla has an average accuracy of 67.5% on the Measuring Massive Multitask Language Understanding (MMLU) benchmark, which is 7% higher than Gopher's performance. Chinchilla was still in the testing phase as of January 12, 2023.[4]

Chinchilla contributes to developing an effective training paradigm for large autoregressive language models with limited compute resources. The Chinchilla team recommends doubling the number of training tokens for every doubling of model size, meaning that larger, higher-quality training datasets can lead to better results on downstream tasks, as sketched below.[5][6]
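A minimal sketch of this allocation rule: combining the C ≈ 6ND approximation above with the finding that parameters and tokens should scale in equal proportion gives a compute-optimal split for any budget. The ratio of roughly 20 training tokens per parameter used here is a rule of thumb commonly read off the Chinchilla results, not an exact constant:

```python
import math

def compute_optimal_split(budget_flops: float, tokens_per_param: float = 20.0):
    """Split a training budget C ≈ 6*N*D into a compute-optimal
    parameter count N and token count D, assuming D = tokens_per_param * N."""
    n = math.sqrt(budget_flops / (6 * tokens_per_param))
    d = tokens_per_param * n
    return n, d

# Example: the approximate Chinchilla-scale budget of ~5.9e23 FLOPs
n, d = compute_optimal_split(5.9e23)
print(f"params: {n:.2e}, tokens: {d:.2e}")  # ~7e10 params, ~1.4e12 tokens
```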

It has been used for the Flamingo vision-language model.[7]

Architecture


Both the Gopher family and the Chinchilla family are families of transformer models.

In particular, they are essentially the same as GPT-2, with different sizes and minor modifications. The Gopher family uses RMSNorm instead of LayerNorm, and relative positional encoding rather than absolute positional encoding. The Chinchilla family is the same as the Gopher family, but trained with the AdamW optimizer instead of Adam.
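A minimal NumPy sketch of the RMSNorm operation used by these models, contrasted with LayerNorm: RMSNorm rescales by the root mean square of the activations and omits LayerNorm's mean-centering and bias. Function names here are illustrative, not taken from the papers:

```python
import numpy as np

def layer_norm(x, gain, bias, eps=1e-6):
    """LayerNorm: center by the mean, then scale by the standard deviation."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gain * (x - mean) / np.sqrt(var + eps) + bias

def rms_norm(x, gain, eps=1e-6):
    """RMSNorm: no centering; rescale by the root mean square only."""
    rms = np.sqrt((x ** 2).mean(axis=-1, keepdims=True) + eps)
    return gain * x / rms

d = 512  # internal dimension of the smallest Gopher model (see table below)
x = np.random.randn(4, d)
print(layer_norm(x, np.ones(d), np.zeros(d)).shape)  # (4, 512)
print(rms_norm(x, np.ones(d)).shape)                 # (4, 512)
```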

The Gopher family contains six models of increasing size, from 44 million to 280 billion parameters. The largest model is referred to as "Gopher" by default. Similar naming conventions apply to the Chinchilla family.

Table 1 of [2] shows the entire Gopher family:

Model specifications for the Gopher family

| Parameter count | Layers | Number of heads | Key/value size | Internal dimension | Max learning rate | Batch size |
|---|---|---|---|---|---|---|
| 44M | 8 | 16 | 32 | 512 | 6 × 10⁻⁴ | 0.25M |
| 117M | 12 | 12 | 64 | 768 | 6 × 10⁻⁴ | 0.25M |
| 417M | 12 | 12 | 128 | 1,536 | 2 × 10⁻⁴ | 0.25M |
| 1.4B | 24 | 16 | 128 | 2,048 | 2 × 10⁻⁴ | 0.25M |
| 7.1B | 32 | 32 | 128 | 4,096 | 1.2 × 10⁻⁴ | 2M |
| Gopher 280B | 80 | 128 | 128 | 16,384 | 4 × 10⁻⁵ | 3M → 6M |

Table 4 of [1] compares the 70-billion-parameter Chinchilla with Gopher 280B.

Comparison between Chinchilla and Gopher

| Parameter count | Layers | Number of heads | Key/value size | Internal dimension | Max learning rate | Batch size |
|---|---|---|---|---|---|---|
| Gopher 280B | 80 | 128 | 128 | 16,384 | 4 × 10⁻⁵ | 3M → 6M |
| Chinchilla 70B | 80 | 64 | 128 | 8,192 | 1 × 10⁻⁴ | 1.5M → 3M |
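As a sanity check on these tables (a heuristic estimate, not a figure from the papers), the parameter count of a standard dense transformer is roughly 12 · L · d², where L is the number of layers and d the internal dimension. This recovers the right order of magnitude for both models:

```python
def approx_params(layers: int, d_model: int) -> float:
    """Rough dense-transformer parameter count: ~12 * L * d^2
    (~4*d^2 per layer for attention projections, ~8*d^2 for the
    feed-forward block), ignoring embeddings and biases."""
    return 12 * layers * d_model ** 2

print(f"Gopher 280B:    ~{approx_params(80, 16384):.2e}")  # ~2.6e11
print(f"Chinchilla 70B: ~{approx_params(80, 8192):.2e}")   # ~6.4e10
```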


References

  1. Hoffmann, Jordan; Borgeaud, Sebastian; Mensch, Arthur; Buchatskaya, Elena; Cai, Trevor; Rutherford, Eliza; Casas, Diego de Las; Hendricks, Lisa Anne; Welbl, Johannes; Clark, Aidan; Hennigan, Tom; Noland, Eric; Millican, Katie; Driessche, George van den; Damoc, Bogdan (2022-03-29). "Training Compute-Optimal Large Language Models". arXiv:2203.15556 [cs.CL].
  2. Rae, Jack W.; Borgeaud, Sebastian; Cai, Trevor; Millican, Katie; Hoffmann, Jordan; Song, Francis; Aslanides, John; Henderson, Sarah; Ring, Roman; Young, Susannah; Rutherford, Eliza; Hennigan, Tom; Menick, Jacob; Cassirer, Albin; Powell, Richard (2022-01-21). "Scaling Language Models: Methods, Analysis & Insights from Training Gopher". arXiv:2112.11446 [cs.CL].
  3. Eliaçık, Eray (January 12, 2023). "Chinchilla AI is coming for the GPT-3's throne". Dataconomy. Archived from the original on March 26, 2023.
  4. Hendrycks, Dan (2023-03-14). Measuring Massive Multitask Language Understanding. Archived from the original on 2023-03-15. Retrieved 2023-03-15.
  5. Chaithali, G. (April 9, 2022). "Check Out This DeepMind's New Language Model, Chinchilla (70B Parameters), Which Significantly Outperforms Gopher (280B) and GPT-3 (175B) on a Large Range of Downstream Evaluation Tasks". Archived from the original on March 27, 2023. Retrieved January 15, 2023.
  6. Wali, Kartik (April 12, 2022). "DeepMind launches GPT-3 rival, Chinchilla". Analytics India Magazine. Archived from the original on March 26, 2023. Retrieved January 15, 2023.
  7. Alayrac, Jean-Baptiste; Donahue, Jeff; Luc, Pauline; Miech, Antoine; Barr, Iain; Hasson, Yana; Lenc, Karel; Mensch, Arthur; Millican, Katherine; Reynolds, Malcolm; Ring, Roman; Rutherford, Eliza; Cabi, Serkan; Han, Tengda; Gong, Zhitao (2022-12-06). "Flamingo: a Visual Language Model for Few-Shot Learning". Advances in Neural Information Processing Systems. 35: 23716–23736. arXiv:2204.14198.