The bitter lesson is the observation in artificial intelligence that, in the long run, approaches that scale with available computational power (such as brute-force search or statistical learning from large datasets) tend to outperform ones based on domain-specific understanding, because they are better at taking advantage of Moore's law. The principle was proposed and named in a 2019 essay by Richard Sutton[1] and is now widely accepted.[2][3][4][5][6][7][8]
Sutton gives several examples that illustrate the lesson:

- Computer chess: in 1997, Deep Blue defeated world champion Garry Kasparov using massive brute-force search, to the dismay of researchers who had pursued methods built on human chess knowledge.
- Computer Go: approaches based on search and on learning by self-play eventually surpassed, by a large margin, systems that tried to encode human understanding of the game.
- Speech recognition: statistical methods such as hidden Markov models, and later deep learning, outperformed systems built on human knowledge of words and phonetics.
- Computer vision: early methods searched for edges, generalized cylinders, or handcrafted features such as SIFT, but deep convolutional networks using only learned features perform far better.
Sutton concludes that time is better invested in finding simple scalable solutions that can take advantage of Moore's law, rather than introducing ever-more-complex human insights, and calls this the "bitter lesson". He also cites two general-purpose techniques that have been shown to scale effectively: search and learning. The lesson is considered "bitter" because it is less anthropocentric than many researchers expected, and so they have been slow to accept it.
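To make the contrast concrete, the following minimal Python sketch (an illustration constructed here, not code from Sutton's essay) plays tic-tac-toe by exhaustive minimax search. It encodes only the rules of the game, no human strategic knowledge, and its playing strength is bounded only by the computation spent on search; a handcrafted rule-based player, by contrast, embeds human insight but gains nothing from additional compute.

```python
# Illustrative sketch of "search" as a general-purpose method:
# exhaustive minimax for tic-tac-toe. Only the rules of the game
# are encoded; perfect play emerges from search alone.

from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Value of `board` for X (+1 win, 0 draw, -1 loss),
    assuming both sides play optimally from here on."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0
    nxt = 'O' if player == 'X' else 'X'
    values = [minimax(board[:i] + player + board[i + 1:], nxt)
              for i, cell in enumerate(board) if cell == ' ']
    return max(values) if player == 'X' else min(values)

def best_move(board, player):
    """Pick the move with the best minimax value for `player`."""
    sign = 1 if player == 'X' else -1
    nxt = 'O' if player == 'X' else 'X'
    moves = [i for i, c in enumerate(board) if c == ' ']
    return max(moves, key=lambda i: sign * minimax(
        board[:i] + player + board[i + 1:], nxt))

if __name__ == '__main__':
    empty = ' ' * 9
    # All opening moves draw under perfect play, so any returned
    # square is optimal -- found by search, with zero built-in strategy.
    print(best_move(empty, 'X'))
```

The same procedure generalizes unchanged to larger games; only the compute budget must grow, which is the scaling property the lesson identifies. (For games like chess or Go the exhaustive tree becomes intractable, which is where the second general-purpose technique, learning, takes over.)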
The essay was published on Sutton's website incompleteideas.net in 2019, and has received hundreds of formal citations according to Google Scholar. Some of these provide alternative statements of the principle; for example, the 2022 paper "A Generalist Agent" from Google DeepMind summarized the lesson as:[2]
Historically, generic models that are better at leveraging computation have also tended to overtake more specialized domain-specific approaches, eventually.
Another phrasing of the principle is seen in a Google paper on switch transformers coauthored by Noam Shazeer:[3]
Simple architectures—backed by a generous computational budget, data set size and parameter count—surpass more complicated algorithms.
The principle is further referenced in many other works on artificial intelligence. For example, From Deep Learning to Rational Machines draws a connection to long-standing debates in the field, such as Moravec's paradox and the contrast between neats and scruffies.[9] In "Engineering a Less Artificial Intelligence", the authors concur that "flexible methods so far have always outperformed handcrafted domain knowledge in the long run", while noting that "[w]ithout the right (implicit) assumptions, generalization is impossible".[5] More recently, "The Brain's Bitter Lesson: Scaling Speech Decoding With Self-Supervised Learning" continues Sutton's argument, contending that (as of 2025) the lesson has not been fully learned in the fields of speech recognition and brain data.[6]
Other work has sought to apply the principle and validate it in new domains. For example, the 2022 paper "Beyond the Imitation Game" applies the principle to large language models to conclude that "it is vitally important that we understand their capabilities and limitations" in order to "avoid devoting research resources to problems that are likely to be solved by scale alone".[7] In 2024, "Learning the Bitter Lesson: Empirical Evidence from 20 Years of CVPR Proceedings" examined further evidence from the field of computer vision and pattern recognition, concluding that the previous twenty years of experience in the field show "a strong adherence to the core principles of the 'bitter lesson'".[4] In "Overestimation, Overfitting, and Plasticity in Actor-Critic: the Bitter Lesson of Reinforcement Learning", the authors examine generalization of actor-critic algorithms and find that "general methods that are motivated by stabilization of gradient-based learning significantly outperform RL-specific algorithmic improvements across a variety of environments", noting that this is consistent with the bitter lesson.[8]