Machine Learning Pills

Extra #10 - The Regression Playbook Part 2 (code)

David Andrés
May 06, 2026
∙ Paid

As we mentioned in the previous issue, regression is one of the most fundamental problems in machine learning: given some inputs, predict a number.

Part 1 covered the foundations: linear models, trees, forests, and nearest neighbours. Part 2 gets into the heavier machinery.

Issue #129 - The Regression Playbook Part 1 (Apr 26)

Extra #9 - The Regression Playbook Part 1 (code) (Apr 29)

Issue #130 - The Regression Playbook Part 2 (May 3)

The four algorithms here are more powerful and more complex. They can learn shapes that simpler models can’t, but they come with more knobs to tune and more ways to go wrong.

  • A neural network can approximate almost any function given enough neurons, but with too many it memorises the noise instead of the signal.

  • XGBoost consistently tops benchmarks, but its learning rate and tree depth interact in ways that aren’t always obvious.

  • Support vector regression fits the data within a tolerance tube using kernel functions: elegant in theory, famously fiddly to tune in practice.

  • Polynomial regression looks like something new but turns out to be linear regression in disguise.
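The last point is easy to demonstrate: expand the input into polynomial features and run ordinary least squares on them, and that is the whole algorithm. A minimal sketch with made-up data (the true curve and noise level here are arbitrary, chosen only for illustration):

```python
import numpy as np

# Made-up 1-D inputs and targets from a known cubic plus noise
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=50)
y = 0.5 * x**3 - x + rng.normal(scale=0.5, size=50)

# Degree-3 "polynomial regression" is just linear regression
# on the expanded feature matrix [1, x, x^2, x^3].
X = np.vander(x, N=4, increasing=True)   # columns: x^0 .. x^3
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

print(coeffs)  # recovered coefficients, one per polynomial term
```

The model stays linear in its parameters; only the features are nonlinear in x. That is also why it inherits linear regression's strengths (closed-form fit, fast) and weaknesses (wild extrapolation at high degrees).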

This part covers Neural Network Regression, XGBoost, Support Vector Regression, and Polynomial Regression: all trained on the same noisy wave dataset from Part 1, so the comparisons stay honest.
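As a rough sketch of the setup, here is how the four model families can be fitted side by side. The dataset below is a stand-in sine wave with Gaussian noise, not the article's actual dataset, and the hyperparameters are illustrative defaults rather than tuned values:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from xgboost import XGBRegressor

# Stand-in "noisy wave": a sine curve plus Gaussian noise
rng = np.random.default_rng(42)
X = np.sort(rng.uniform(0, 10, size=(200, 1)), axis=0)
y = np.sin(X).ravel() + rng.normal(scale=0.2, size=200)

models = {
    "Neural network": MLPRegressor(hidden_layer_sizes=(64, 64),
                                   max_iter=2000, random_state=0),
    "XGBoost": XGBRegressor(n_estimators=200, learning_rate=0.1,
                            max_depth=3),
    "SVR (RBF)": SVR(kernel="rbf", C=10, epsilon=0.1),
    "Polynomial (deg 7)": make_pipeline(PolynomialFeatures(7),
                                        LinearRegression()),
}

for name, model in models.items():
    model.fit(X, y)
    print(f"{name}: train R^2 = {model.score(X, y):.3f}")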

Keep reading with a 7-day free trial
