Overfitting: Model complexity
Page Summary

- Simpler models often generalize better to new data than complex models, even if they perform slightly worse on training data.
- Occam's Razor favors simpler explanations and models, prioritizing them over more complex ones.
- Regularization techniques help prevent overfitting by penalizing model complexity during training.
- Model training aims to minimize both loss (errors on training data) and complexity for optimal performance on new data.
- Model complexity can be quantified using functions of model weights, such as the L1 and L2 norms used in regularization.
The previous unit introduced the following model, which miscategorized a lot of trees in the test set:
The preceding model contains a lot of complex shapes. Would a simpler model handle new data better? Suppose you replace the complex model with a ridiculously simple model: a straight line.
The simple model generalizes better than the complex model on new data. That is, the simple model made better predictions on the test set than the complex model.
Simplicity has been beating complexity for a long time. In fact, the preference for simplicity dates back to ancient Greece. Centuries later, a fourteenth-century friar named William of Occam formalized the preference for simplicity in a philosophy known as Occam's razor. This philosophy remains an essential underlying principle of many sciences, including machine learning.
Note: Complex models typically outperform simple models on the training set. However, simple models typically outperform complex models on the test set (which is more important).

Exercises: Check your understanding
Regularization
Machine learning models must simultaneously meet two conflicting goals:
- Fit data well.
- Fit data as simply as possible.
One approach to keeping a model simple is to penalize complex models; that is, to force the model to become simpler during training. Penalizing complex models is one form of regularization.
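As a minimal sketch of this idea (the data, learning rate, and penalty strength below are invented for illustration, not taken from the course), a training loop can penalize complexity by adding an L2 penalty term to the gradient, which pulls weights toward zero:

```python
import numpy as np

# Toy data: y is roughly 3x plus noise (values chosen for the demo).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=50)
y = 3.0 * x + rng.normal(scale=0.1, size=50)

def fit(lam, steps=2000, lr=0.1):
    """Fit y = w * x by gradient descent on MSE + lam * w**2."""
    w = 0.0
    for _ in range(steps):
        pred = w * x
        # Gradient of the MSE loss plus gradient of the L2 penalty.
        grad = 2 * np.mean((pred - y) * x) + 2 * lam * w
        w -= lr * grad
    return w

w_plain = fit(lam=0.0)  # minimizes loss only
w_reg = fit(lam=1.0)    # minimizes loss plus a complexity penalty
print(w_plain, w_reg)   # the penalized weight is pulled toward zero
```

With `lam=0.0` the fit recovers a weight near the true slope; with `lam=1.0` the same data produces a smaller weight, because the training procedure now pays a price for complexity.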
A regularization analogy: Suppose every student in a lecture hall had a little buzzer that emitted a sound that annoyed the professor. Students would press the buzzer whenever the professor's lecture became too complicated. Annoyed, the professor would be forced to simplify the lecture. The professor would complain, "When I simplify, I'm not being precise enough." The students would counter with, "The only goal is to explain it simply enough that I understand it." Gradually, the buzzers would train the professor to give an appropriately simple lecture, even if the simpler lecture isn't sufficiently precise.

Loss and complexity
So far, this course has suggested that the only goal when training was to minimize loss; that is:

minimize(loss)
As you've seen, models focused solely on minimizing loss tend to overfit. A better training optimization algorithm minimizes some combination of loss and complexity:

minimize(loss + complexity)
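To make the combined objective concrete, here is a hedged sketch (the data, candidate weight vectors, and lambda value are invented for illustration) that computes loss, complexity, and their weighted sum for two models:

```python
import numpy as np

# Toy data: y is roughly 2x plus noise (values chosen for the demo).
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=40)
y = 2.0 * x + rng.normal(scale=0.2, size=40)

def loss(w):
    """Mean squared error of a polynomial with coefficients w."""
    pred = np.polyval(w, x)
    return np.mean((pred - y) ** 2)

def complexity(w):
    """One way to quantify complexity: sum of squared weights (L2)."""
    return np.sum(np.asarray(w, dtype=float) ** 2)

def objective(w, lam):
    """The combined training objective: loss + lam * complexity."""
    return loss(w) + lam * complexity(w)

simple = [2.0, 0.0]               # a straight line: 2x
complex_ = [5.0, -1.0, 2.0, 0.1]  # a wiggly cubic with large weights
for w in (simple, complex_):
    print(loss(w), complexity(w), objective(w, lam=0.1))
```

Setting `lam` to zero recovers the loss-only objective; any positive `lam` charges the model for the size of its weights on top of its training error.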
Unfortunately, loss and complexity are typically inversely related. As complexity increases, loss decreases. As complexity decreases, loss increases. You should find a reasonable middle ground where the model makes good predictions on both the training data and real-world data. That is, your model should find a reasonable compromise between loss and complexity.
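This inverse relationship is easy to see numerically. The following sketch (data and lambda values invented for illustration) fits ridge regression, a standard L2-regularized linear model, at several regularization strengths: as lambda grows, the complexity of the weights falls while the training loss rises.

```python
import numpy as np

# Toy data: a linear signal in 5 features plus noise (values invented).
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 5))
true_w = np.array([4.0, -3.0, 2.0, 0.0, 0.0])
y = X @ true_w + rng.normal(scale=0.5, size=60)

results = []
for lam in (0.0, 1.0, 10.0, 100.0):
    # Ridge regression closed form: w = (X^T X + lam * I)^-1 X^T y
    w = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)
    train_loss = np.mean((X @ w - y) ** 2)
    comp = np.sum(w ** 2)  # L2 complexity of the weights
    results.append((lam, train_loss, comp))
    print(f"lambda={lam:6.1f}  loss={train_loss:.3f}  complexity={comp:.2f}")
```

Reading down the printed table shows the tradeoff directly: no single lambda minimizes both columns, so choosing lambda means choosing a compromise between fitting the training data and keeping the model simple.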
What is complexity?
You've already seen a few different ways of quantifying loss. How would you quantify complexity? Start your exploration through the following exercise:
Exercise: Check your intuition
Last updated 2025-12-03 UTC.