| PaLM | |
|---|---|
| Developer | Google AI |
| Predecessor | LaMDA |
| Successor | Google Gemini |
| Available in | English |
| Type | Large language model |
| Website | ai |
PaLM (Pathways Language Model) is a 540 billion-parameter dense decoder-only transformer-based large language model (LLM) developed by Google AI.[1] Researchers also trained smaller versions of PaLM (with 8 and 62 billion parameters) to test the effects of model scale.[2]
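As a rough illustration of how parameter count scales with depth and width in a dense decoder-only transformer, the sketch below uses the common 12·L·d² rule of thumb. The layer count, model width, and vocabulary size are placeholder values, not PaLM's published hyperparameters, and the estimate ignores PaLM-specific architectural choices.

```python
# Rough, illustrative parameter-count estimate for a dense decoder-only
# transformer. The configuration below is a placeholder, not PaLM's published
# hyperparameters; the 12 * L * d^2 rule of thumb also ignores details such as
# PaLM's feed-forward and attention variants.

def approx_decoder_params(n_layers: int, d_model: int, vocab_size: int) -> int:
    """Estimate parameters of a decoder-only transformer."""
    attention_and_mlp = 12 * n_layers * d_model ** 2  # per-layer weight matrices
    embeddings = vocab_size * d_model                 # token embedding table
    return attention_and_mlp + embeddings

# Hypothetical configuration chosen only to land in the hundreds-of-billions range.
print(f"{approx_decoder_params(n_layers=100, d_model=18_000, vocab_size=256_000):,}")
```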
PaLM is capable of a wide range of tasks, including commonsense reasoning, arithmetic reasoning, joke explanation, code generation, and translation.[2][3][4][5] When combined with chain-of-thought prompting, PaLM achieved significantly better performance on datasets requiring multi-step reasoning, such as word problems and logic-based questions.[1][2]
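The sketch below illustrates the general idea of chain-of-thought prompting: a few-shot exemplar spells out its intermediate reasoning steps before the final answer, encouraging the model to do the same for a new question. The prompt text is illustrative and is not taken from the PaLM paper or API.

```python
# Minimal chain-of-thought prompting sketch: the exemplar shows worked
# reasoning, and the new question is appended for the model to continue.
# The wording here is illustrative only.

exemplar = (
    "Q: A shop has 23 apples and sells 9. How many are left?\n"
    "A: The shop starts with 23 apples. Selling 9 leaves 23 - 9 = 14. "
    "The answer is 14.\n\n"
)

question = "Q: Sam has 5 boxes with 12 pencils each. How many pencils in total?\nA:"

prompt = exemplar + question
print(prompt)  # this string would be sent to the language model
```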
The model was first announced in April 2022 and remained private until March 2023, when Google launched an API for PaLM and several other technologies.[6] The API was initially available to a limited number of developers who joined a waitlist before it was released to the public.[7]
Google and DeepMind developed a version of PaLM 540B (with 540 billion parameters) called Med-PaLM, which is fine-tuned on medical data and outperforms previous models on medical question-answering benchmarks.[8][9] Med-PaLM was the first to obtain a passing score on U.S. medical licensing questions, and in addition to answering both multiple-choice and open-ended questions accurately, it provides reasoning and is able to evaluate its own responses.[10]
Google also extended PaLM using a vision transformer to create PaLM-E, a vision-language model that can be used for robotic manipulation without the need for retraining or fine-tuning.[11][12][13]
In May 2023, Google announced PaLM 2 at the annual Google I/O keynote.[14] PaLM 2 is reported to be a 340 billion-parameter model trained on 3.6 trillion tokens.[15]
In June 2023, Google announced AudioPaLM for speech-to-speech translation, which uses the PaLM-2 architecture and initialization.[16]
PaLM is pre-trained on a high-quality corpus of 780 billion tokens that comprise various natural language tasks and use cases. This dataset includes filtered webpages, books, Wikipedia articles, news articles, source code obtained from open-source repositories on GitHub, and social media conversations.[1][2] It is based on the dataset used to train Google's LaMDA model.[2] The social media conversation portion of the dataset makes up 50% of the corpus, which aids the model in its conversational capabilities.[2]
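A minimal sketch of mixture-weighted sampling over such a corpus is shown below. Only the 50% social-media share is stated above; the remaining weights are hypothetical placeholders, not PaLM's published data mixture.

```python
import random

# Illustrative mixture-weighted sampling over pretraining data sources.
# Only the 0.50 social-media share comes from the description above; the
# other weights are hypothetical placeholders.
mixture = {
    "social_media_conversations": 0.50,
    "filtered_webpages": 0.25,
    "books": 0.10,
    "wikipedia": 0.05,
    "news": 0.05,
    "github_code": 0.05,
}

def sample_source(rng: random.Random) -> str:
    """Pick a data source with probability proportional to its mixture weight."""
    sources, weights = zip(*mixture.items())
    return rng.choices(sources, weights=weights, k=1)[0]

rng = random.Random(0)
print([sample_source(rng) for _ in range(5)])
```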
PaLM 540B was trained over two TPU v4 Pods, with 3,072 TPU v4 chips in each Pod attached to 768 hosts, connected using a combination of model and data parallelism; at the time, this was the largest TPU configuration used for training.[2][17] This allowed for efficient training at scale, using 6,144 chips, and marked a record for the highest training efficiency achieved for LLMs at this scale: a hardware FLOPs utilization of 57.8%.[3]
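Hardware FLOPs utilization is the fraction of the hardware's aggregate peak FLOP/s that a training run actually sustains. The sketch below shows the calculation; the per-chip peak and the achieved throughput are assumed, illustrative figures rather than measurements from the PaLM run.

```python
# Back-of-the-envelope hardware FLOPs utilization (HFU): achieved FLOP/s
# divided by aggregate peak FLOP/s. The peak per-chip figure and the achieved
# throughput below are assumptions for illustration, not reported PaLM numbers.

def hardware_flops_utilization(achieved_flops_per_s: float,
                               num_chips: int,
                               peak_flops_per_chip: float) -> float:
    """Return achieved FLOP/s divided by the aggregate peak FLOP/s."""
    return achieved_flops_per_s / (num_chips * peak_flops_per_chip)

PEAK_TPU_V4 = 275e12                      # assumed bfloat16 peak per chip, FLOP/s
ACHIEVED = 0.578 * 6_144 * PEAK_TPU_V4    # throughput corresponding to 57.8% HFU

print(f"{hardware_flops_utilization(ACHIEVED, 6_144, PEAK_TPU_V4):.1%}")  # 57.8%
```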