Outer Loop Cookbook

Note: This cookbook has some cool ideas, but is outdated. It doesn't use best practices like normalizing inputs. I'll update the examples later.

This is a set of simple examples of automated "outer loops" for machine learning, with a focus on Bayesian Optimization.

These recipes are opinionated: they promote cooking from scratch. Many frameworks and products hide the details of Bayesian Optimization loops, letting you run Bayesian Optimization in just a few lines of code. These recipes instead put all of the details in front of you and expect you to copy-paste boilerplate code and adapt it to your needs. I think this approach is much more empowering. It is inspired by the philosophy of D3, which showed the value of cooking from scratch when creating visualizations; I want to promote that same idea within machine learning.
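To give a flavor of what "cooking from scratch" means here, below is a minimal sketch of a Bayesian optimization outer loop in plain NumPy: fit a Gaussian process posterior to the data so far, maximize an expected-improvement acquisition function (here by simple grid search), evaluate the objective at the chosen point, and repeat. The objective function, kernel length scale, and bounds are made-up placeholders, not recipes from this cookbook, and (per the note above) nothing is normalized.

```python
import math
import numpy as np

def objective(x):
    # Hypothetical 1-D objective to maximize (a placeholder, not from the cookbook).
    return np.sin(3 * x) + x

def rbf_kernel(a, b, length_scale=0.3):
    # Squared-exponential kernel between two sets of 1-D points.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(X, y, X_query, noise=1e-5):
    # Zero-mean GP regression posterior at the query points.
    # (As the note above warns: no input/output normalization here.)
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    K_q = rbf_kernel(X, X_query)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_q.T @ alpha
    v = np.linalg.solve(L, K_q)
    var = 1.0 - np.sum(v ** 2, axis=0)  # RBF prior variance is 1
    return mean, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mean, std, best):
    # Closed-form EI for maximization; math.erf stands in for the normal CDF.
    z = (mean - best) / std
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (mean - best) * cdf + std * pdf

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0, size=3)  # small initial random design
y = objective(X)
grid = np.linspace(0.0, 2.0, 200)  # optimize the acquisition by grid search

# The outer loop: model, decide, evaluate, repeat.
for _ in range(10):
    mean, std = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mean, std, y.max()))]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

print(f"best x={X[np.argmax(y)]:.3f}, best y={y.max():.3f}")
```

Every piece of this loop is visible and editable, which is the point: swapping the kernel, the acquisition function, or the acquisition optimizer is a copy-paste-and-edit away, rather than a framework configuration option.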

Bayesian optimization: How to cook from scratch

Bayesian optimization: Now look what you can do

  • Tick-Tock Bayesian Optimization
  • Include domain knowledge (Aspirational)
  • Use more powerful GP kernels (Aspirational)
  • Use a Bayesian Neural Network instead of a GP (Aspirational)

Engineering

  • Run these loops on a compute cluster (Not yet finished)

See the gpytorch and botorch documentation for more examples. (Though I think botorch's tutorials are too apologetic about making you copy-paste boilerplate code, and they recommend the Ax wrapper instead. I am pro-boilerplate and think botorch should be proud of itself.)