EleutherAI

From Wikipedia, the free encyclopedia
Artificial intelligence research collective

EleutherAI
Type of business: Research co-operative
Founded: 3 July 2020 (2020-07-03)[1]
Industry: Artificial intelligence
Products: GPT-Neo, GPT-NeoX, GPT-J, Pythia, The Pile, VQGAN-CLIP
URL: eleuther.ai
EleutherAI (/əˈluːθər/[2]) is a grass-roots non-profit artificial intelligence (AI) research group. The group, considered an open-source version of OpenAI,[3] was formed in a Discord server in July 2020 by Connor Leahy, Sid Black, and Leo Gao[4] to organize a replication of GPT-3. In early 2023, it formally incorporated as the EleutherAI Institute, a non-profit research institute.[5]

History


EleutherAI began as a Discord server on July 7, 2020, under the tentative name "LibreAI" before rebranding to "EleutherAI" later that month,[6] in reference to eleutheria, the Greek word for liberty.[3] Its founding members were Connor Leahy, Leo Gao, and Sid Black. They co-wrote the code for EleutherAI to serve as a collection of open-source AI research, creating a machine learning model similar to GPT-3.[7]

On December 30, 2020, EleutherAI released The Pile, a curated dataset of diverse text for training large language models.[8] While the paper referenced the existence of the GPT-Neo models, the models themselves were not released until March 21, 2021.[9] According to a retrospective written several months later, the authors did not anticipate that "people would care so much about our 'small models.'"[1] On June 9, 2021, EleutherAI followed this up with GPT-J-6B, a six-billion-parameter language model that was again the largest open-source GPT-3-like model in the world.[10] These language models were released under the Apache 2.0 free software license and are considered to have "fueled an entirely new wave of startups".[5]

While EleutherAI initially turned down funding offers, preferring to use Google's TPU Research Cloud Program to source their compute,[11] by early 2021 they had accepted funding from CoreWeave (a small cloud computing company) and SpellML (a cloud infrastructure company) in the form of access to the powerful GPU clusters necessary for large-scale machine learning research. On February 10, 2022, they released GPT-NeoX-20B, a model similar to their prior work but scaled up thanks to the resources CoreWeave provided.[12]

In 2022, many EleutherAI members participated in the BigScience Research Workshop, working on projects including multitask finetuning,[13][14] training BLOOM,[15] and designing evaluation libraries.[15] Engineers at EleutherAI, Stability AI, and NVIDIA joined forces with biologists led by Columbia University and Harvard University[16] to train OpenFold, an open-source replication of DeepMind's AlphaFold2.[17]

In early 2023, EleutherAI incorporated as a non-profit research institute run by Stella Biderman, Curtis Huebner, and Shivanshu Purohit.[5][18] The announcement stated that EleutherAI's shift of focus away from training ever-larger language models was a deliberate push toward work in interpretability, alignment, and scientific research.[18] While EleutherAI is still committed to promoting access to AI technologies, they feel that "there is substantially more interest in training and releasing LLMs than there once was," enabling them to focus on other projects.[19]

In July 2024, an investigation by Proof News found that EleutherAI's The Pile dataset includes subtitles from over 170,000 YouTube videos across more than 48,000 channels. The findings drew criticism and accusations of theft from YouTubers and others who had their work published on the platform.[20][21] As of 2025, Stella Biderman served as executive director, Aviya Skowron as head of policy and ethics, Nora Belrose as head of interpretability, and Quentin Anthony as head of high-performance computing (HPC).[22]

Research


According to their website, EleutherAI is a "decentralized grassroots collective of volunteer researchers, engineers, and developers focused on AI alignment, scaling, and open-source AI research".[23] While they do not sell any of their technologies as products, they publish the results of their research in academic venues, write blog posts detailing their ideas and methodologies, and provide trained models for anyone to use for free.[citation needed]

The Pile

Main article: The Pile (dataset)

The Pile is an 886 GB dataset designed for training large language models. It was originally developed to train EleutherAI's GPT-Neo models but has become widely used to train other models, including Microsoft's Megatron-Turing Natural Language Generation,[24][25] Meta AI's Open Pre-trained Transformers,[26] LLaMA,[27] and Galactica,[28] Stanford University's BioMedLM 2.7B,[29] the Beijing Academy of Artificial Intelligence's Chinese-Transformer-XL,[30] and Yandex's YaLM 100B.[31] Compared to other datasets, the Pile's main distinguishing features are that it is a curated selection of data chosen by researchers at EleutherAI to contain information they thought language models should learn and that it is the only such dataset that is thoroughly documented by the researchers who developed it.[32]

GPT models


EleutherAI's most prominent research relates to its work training open-source large language models inspired by OpenAI's GPT-3.[33] EleutherAI's GPT-style model series spans 125 million, 1.3 billion, 2.7 billion, 6 billion, and 20 billion parameters:

  • GPT-Neo (125M, 1.3B, 2.7B):[34] released in March 2021, it was the largest open-source GPT-3-style language model in the world at the time of release.
  • GPT-J (6B):[35] released in June 2021, it was the largest open-source GPT-3-style language model in the world at the time of release.[36]
  • GPT-NeoX (20B):[37] released in February 2022, it was the largest open-source language model in the world at the time of release.
  • Pythia (13B):[38] While prior models focused on scaling up to close the gap with closed-source models like GPT-3, the Pythia model suite goes in another direction. The Pythia suite was designed to facilitate scientific research on the capabilities of and learning processes in large language models.[38] Featuring 154 partially trained model checkpoints, fully public training data, and the ability to reproduce the exact training order, Pythia enables research on verifiable training,[39] social biases,[38] memorization,[40] and more.[41]

VQGAN-CLIP

An artificial intelligence artwork created with VQGAN-CLIP, a text-to-image model created by EleutherAI
An artificial intelligence artwork created with CLIP-Guided Diffusion, another text-to-image model created by Katherine Crowson of EleutherAI[42][43]

Following the release of DALL-E by OpenAI in January 2021, EleutherAI started working on text-to-image synthesis models. When OpenAI did not release DALL-E publicly, EleutherAI's Katherine Crowson and digital artist Ryan Murdock developed a technique for using CLIP (another model developed by OpenAI) to convert regular image generation models into text-to-image synthesis ones.[44][45][46][47] Building on ideas dating back to Google's DeepDream,[48] they found their first major success combining CLIP with another publicly available model called VQGAN; the resulting model is called VQGAN-CLIP.[49] Crowson released the technology by tweeting notebooks demonstrating the technique that people could run for free without any special equipment.[50][51][52] This work was credited by Stability AI CEO Emad Mostaque as motivating the founding of Stability AI.[53]

Public reception


Praise


EleutherAI's work to democratize GPT-3 won the UNESCO Netexplo Global Innovation Award in 2021[54] and InfoWorld's Best of Open Source Software Award in 2021[55] and 2022,[56] and was nominated for VentureBeat's AI Innovation Award in 2021.[57]

Gary Marcus, a cognitive scientist and noted critic of deep learning companies such as OpenAI and DeepMind,[58] has repeatedly[59][60] praised EleutherAI's dedication to open-source and transparent research.

Maximilian Gahntz, a senior policy researcher at theMozilla Foundation, applauded EleutherAI's efforts to give more researchers the ability to audit and assess AI technology. "If models are open and if data sets are open, that'll enable much more of the critical research that's pointed out many of the flaws and harms associated with generative AI and that's often far too difficult to conduct."[61]

Criticism


Technology journalist Kyle Wiggers has raised concerns about whether EleutherAI is as independent as it claims, or "whether the involvement of commercially motivated ventures like Stability AI and Hugging Face—both of which are backed by substantial venture capital—might influence EleutherAI's research".[62]


References

  1. ^abLeahy, Connor; Hallahan, Eric; Gao, Leo; Biderman, Stella (7 July 2021)."What A Long, Strange Trip It's Been: EleutherAI One Year Retrospective".Archived from the original on 29 August 2023. Retrieved1 March 2023.
  2. ^"Talk with Stella Biderman on The Pile, GPT-Neo and MTG". The Interference Podcast. 2 April 2021. Retrieved26 March 2023.
  3. ^abSmith, Craig (21 March 2022)."EleutherAI: When OpenAI Isn't Open Enough".IEEE Spectrum.IEEE.Archived from the original on 29 August 2023. Retrieved8 August 2023.
  4. ^"About".EleutherAI. Retrieved23 May 2024.
  5. ^abcWiggers, Kyle (2 March 2023)."Stability AI, Hugging Face and Canva back new AI research nonprofit".TechCrunch.Archived from the original on 29 August 2023. Retrieved8 August 2023.
  6. ^Leahy, Connor; Hallahan, Eric; Gao, Leo; Biderman, Stella (7 July 2021)."What A Long, Strange Trip It's Been: EleutherAI One Year Retrospective".EleutherAI Blog.Archived from the original on 29 August 2023. Retrieved14 April 2023.
  7. ^"Stability AI, Hugging Face and Canva back new AI research nonprofit". 2 March 2023.
  8. ^Gao, Leo; Biderman, Stella; Black, Sid; et al. (31 December 2020). The Pile: An 800GB Dataset of Diverse Text for Language Modeling. arXiv:2101.00027.
  9. ^"GPT-3's free alternative GPT-Neo is something to be excited about".VentureBeat. 15 May 2021.Archived from the original on 9 March 2023. Retrieved14 April 2023.
  10. ^"GPT-J-6B: An Introduction to the Largest Open Source GPT Model | Forefront".www.forefront.ai. Archived fromthe original on 9 March 2023. Retrieved1 March 2023.
  11. ^"EleutherAI: When OpenAI Isn't Open Enough".IEEE Spectrum.Archived from the original on 21 March 2023. Retrieved1 March 2023.
  12. ^Black, Sid; Biderman, Stella; Hallahan, Eric; et al. (14 April 2022). "GPT-NeoX-20B: An Open-Source Autoregressive Language Model". arXiv:2204.06745 [cs.CL].
  13. ^Sanh, Victor; et al. (2021). "Multitask Prompted Training Enables Zero-Shot Task Generalization".arXiv:2110.08207 [cs.LG].
  14. ^Muennighoff, Niklas; Wang, Thomas; Sutawika, Lintang; Roberts, Adam; Biderman, Stella; Teven Le Scao; M Saiful Bari; Shen, Sheng; Yong, Zheng-Xin; Schoelkopf, Hailey; Tang, Xiangru; Radev, Dragomir; Alham Fikri Aji; Almubarak, Khalid; Albanie, Samuel; Alyafeai, Zaid; Webson, Albert; Raff, Edward; Raffel, Colin (2022). "Crosslingual Generalization through Multitask Finetuning".arXiv:2211.01786 [cs.CL].
  15. ^abWorkshop, BigScience; et al. (2022). "BLOOM: A 176B-Parameter Open-Access Multilingual Language Model".arXiv:2211.05100 [cs.CL].
  16. ^"Meet OpenFold: Reimplementing AlphaFold2 to Illuminate Its Learning Mechanisms and Generalization". 21 August 2023.
  17. ^"Democratizing AI for Biology with OpenFold".
  18. ^ab"The View from 30,000 Feet: Preface to the Second EleutherAI Retrospective". 2 March 2023.
  19. ^"AI Research Lab Launches Open Source Research Nonprofit".
  20. ^Gilbertson, Annie; Reisner, Alex (16 July 2024)."Apple, Nvidia, Anthropic Used Thousands of Swiped YouTube Videos to Train AI".WIRED. Retrieved18 July 2024.
  21. ^Gilbertson, Annie (16 July 2024)."Apple, Nvidia, Anthropic Used Thousands of Swiped YouTube Videos to Train AI".Proof. Retrieved18 July 2024.
  22. ^"Staff".
  23. ^"EleutherAI Website". EleutherAI.Archived from the original on 2 July 2021. Retrieved1 July 2021.
  24. ^"Microsoft and Nvidia team up to train one of the world's largest language models". 11 October 2021.Archived from the original on 27 March 2023. Retrieved8 March 2023.
  25. ^"AI: Megatron the Transformer, and its related language models". 24 September 2021.Archived from the original on 4 March 2023. Retrieved8 March 2023.
  26. ^Zhang, Susan; Roller, Stephen; Goyal, Naman; Artetxe, Mikel; Chen, Moya; Chen, Shuohui; Dewan, Christopher; Diab, Mona; Li, Xian; Lin, Xi Victoria; Mihaylov, Todor; Ott, Myle; Shleifer, Sam; Shuster, Kurt; Simig, Daniel; Koura, Punit Singh; Sridhar, Anjali; Wang, Tianlu; Zettlemoyer, Luke (21 June 2022). "OPT: Open Pre-trained Transformer Language Models".arXiv:2205.01068 [cs.CL].
  27. ^Touvron, Hugo; Lavril, Thibaut; Izacard, Gautier; Grave, Edouard; Lample, Guillaume; et al. (27 February 2023). "LLaMA: Open and Efficient Foundation Language Models".arXiv:2302.13971 [cs.CL].
  28. ^Taylor, Ross; Kardas, Marcin; Cucurull, Guillem; Scialom, Thomas; Hartshorn, Anthony; Saravia, Elvis; Poulton, Andrew; Kerkez, Viktor; Stojnic, Robert (16 November 2022). "Galactica: A Large Language Model for Science".arXiv:2211.09085 [cs.CL].
  29. ^"Model Card for BioMedLM 2.7B".huggingface.co.Archived from the original on 5 June 2023. Retrieved5 June 2023.
  30. ^Yuan, Sha; Zhao, Hanyu; Du, Zhengxiao; Ding, Ming; Liu, Xiao; Cen, Yukuo; Zou, Xu; Yang, Zhilin; Tang, Jie (2021)."WuDaoCorpora: A super large-scale Chinese corpora for pre-training language models".AI Open.2:65–68.doi:10.1016/j.aiopen.2021.06.001.
  31. ^Grabovskiy, Ilya (2022)."Yandex publishes YaLM 100B, the largest GPT-like neural network in open source" (Press release). Yandex. Retrieved5 June 2023.
  32. ^Khan, Mehtab; Hanna, Alex (2023). "The Subjects and Stages of AI Dataset Development: A Framework for Dataset Accountability".Ohio State Technology Law Journal.19 (2):171–256.hdl:1811/103549.SSRN 4217148.
  33. ^"GPT-3's free alternative GPT-Neo is something to be excited about". 15 May 2021.Archived from the original on 9 March 2023. Retrieved10 March 2023.
  34. ^Andonian, Alex; Biderman, Stella; Black, Sid; Gali, Preetham; Gao, Leo; Hallahan, Eric; Levy-Kramer, Josh; Leahy, Connor; Nestler, Lucas; Parker, Kip; Pieler, Michael; Purohit, Shivanshu; Songz, Tri; Phil, Wang; Weinbach, Samuel (10 March 2023). GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch (Preprint).doi:10.5281/zenodo.5879544.
  35. ^"EleutherAI/gpt-j-6B · Hugging Face".huggingface.co.Archived from the original on 12 March 2023. Retrieved10 March 2023.
  36. ^"GPT-J-6B: An Introduction to the Largest Open Source GPT Model | Forefront".www.forefront.ai. Archived fromthe original on 9 March 2023. Retrieved1 March 2023.
  37. ^Black, Sidney; Biderman, Stella; Hallahan, Eric; et al. (1 May 2022).GPT-NeoX-20B: An Open-Source Autoregressive Language Model. Proceedings of BigScience Episode #5 -- Workshop on Challenges & Perspectives in Creating Large Language Models. pp. 95–136.arXiv:2204.06745.doi:10.18653/v1/2022.bigscience-1.9. Retrieved19 December 2022 – viaAssociation for Computational Linguistics - Anthology.
  38. ^abcBiderman, Stella; Schoelkopf, Hailey; Anthony, Quentin; Bradley, Herbie; O'Brien, Kyle; Hallahan, Eric; Mohammad Aflah Khan; Purohit, Shivanshu; USVSN Sai Prashanth; Raff, Edward; Skowron, Aviya; Sutawika, Lintang; Oskar van der Wal (2023). "Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling".arXiv:2304.01373 [cs.CL].
  39. ^Choi, Dami; Shavit, Yonadav; Duvenaud, David (2023). "Tools for Verifying Neural Models' Training Data".arXiv:2307.00682 [cs.LG].
  40. ^Biderman, Stella; USVSN Sai Prashanth; Sutawika, Lintang; Schoelkopf, Hailey; Anthony, Quentin; Purohit, Shivanshu; Raff, Edward (2023). "Emergent and Predictable Memorization in Large Language Models".arXiv:2304.11158 [cs.CL].
  41. ^Gupta, Kshitij; Thérien, Benjamin; Ibrahim, Adam; Richter, Mats L.; Anthony, Quentin; Belilovsky, Eugene; Rish, Irina; Lesort, Timothée (2023). "Continual Pre-Training of Large Language Models: How to (Re)warm your model?".arXiv:2308.04014 [cs.CL].
  42. ^"CLIP-Guided Diffusion".EleutherAI.Archived from the original on 29 August 2023. Retrieved20 August 2023.
  43. ^"CLIP Guided Diffusion HQ 256x256.ipynb - Colaboratory".Google Colab.Archived from the original on 29 August 2023. Retrieved20 August 2023.
  44. ^MIRANDA, LJ (8 August 2021)."The Illustrated VQGAN".ljvmiranda921.github.io.Archived from the original on 20 March 2023. Retrieved8 March 2023.
  45. ^"Inside The World of Uncanny AI Twitter Art".Nylon. 24 March 2022.Archived from the original on 29 August 2023. Retrieved8 March 2023.
  46. ^"This AI Turns Movie Text Descriptions Into Abstract Posters".Yahoo Life. 20 September 2021.Archived from the original on 27 December 2022. Retrieved8 March 2023.
  47. ^Quach, Katyanna."A man spent a year in jail on a murder charge involving disputed AI evidence. Now the case has been dropped".www.theregister.com.Archived from the original on 8 March 2023. Retrieved8 March 2023.
  48. ^"Alien Dreams: An Emerging Art Scene - ML@B Blog".Alien Dreams: An Emerging Art Scene - ML@B Blog.Archived from the original on 10 March 2023. Retrieved8 March 2023.
  49. ^"VQGAN-CLIP".EleutherAI.Archived from the original on 20 August 2023. Retrieved20 August 2023.
  50. ^"We asked an AI tool to 'paint' images of Australia. Critics say they're good enough to sell".ABC News. 14 July 2021.Archived from the original on 7 March 2023. Retrieved8 March 2023 – via www.abc.net.au.
  51. ^Nataraj, Poornima (28 February 2022)."Online tools to create mind-blowing AI art".Analytics India Magazine.Archived from the original on 8 February 2023. Retrieved8 March 2023.
  52. ^"Meet the Woman Making Viral Portraits of Mental Health on TikTok".www.vice.com. 30 November 2021.Archived from the original on 11 May 2023. Retrieved8 March 2023.
  53. ^@EMostaque (2 March 2023)."Stability AI came out of @AiEleuther and we have been delighted to incubate it as the foundation was set up" (Tweet) – viaTwitter.
  54. ^"UNESCO Netexplo Forum 2021 | UNESCO".Archived from the original on 16 October 2022. Retrieved8 March 2023.
  55. ^Yegulalp, Serdar; Borck, James R.; Heller, Martin; Oliver, Andrew C.; Pointer, Ian; Tyson, Matthew (18 October 2021). "The best open source software of 2021". InfoWorld. Archived from the original on 8 March 2023. Retrieved 8 March 2023.
  56. ^Yegulalp, Serdar; Borck, James R.; Heller, Martin; Oliver, Andrew C.; Pointer, Ian; Sacolick, Isaac; Tyson, Matthew (17 October 2022). "The best open source software of 2022". InfoWorld. Archived from the original on 8 March 2023. Retrieved 8 March 2023.
  57. ^"VentureBeat presents AI Innovation Awards nominees at Transform 2021". 16 July 2021.Archived from the original on 8 March 2023. Retrieved8 March 2023.
  58. ^"What's next for AI: Gary Marcus talks about the journey toward robust artificial intelligence".ZDNET.Archived from the original on 1 March 2023. Retrieved8 March 2023.
  59. ^@GaryMarcus (10 February 2022)."GPT-NeoX-20B, 20 billion parameter large language model made freely available to public, with candid report on strengths, limits, ecological costs, etc" (Tweet) – viaTwitter.
  60. ^@GaryMarcus (19 February 2022)."incredibly important result: "our results raise the question of how much [large language] models actually generalize beyond pretraining data"" (Tweet) – viaTwitter.
  61. ^Chowdhury, Meghmala (29 December 2022)."Will Powerful AI Disrupt Industries Once Thought to be Safe in 2023?".Analytics Insight.Archived from the original on 1 January 2023. Retrieved6 April 2023.
  62. ^Wiggers, Kyle (2 March 2023)."Stability AI, Hugging Face and Canva back new AI research nonprofit".Archived from the original on 7 March 2023. Retrieved8 March 2023.
Retrieved from "https://en.wikipedia.org/w/index.php?title=EleutherAI&oldid=1288377612"