September, 2022. We have just released a second version of our Acme paper, a significant rewrite that includes many more algorithms and an additional focus on batch/offline algorithms. We also give a deeper description of the distributed backbone of Acme. And of course we have open-sourced all of this work here.
April, 2021. We have just released Launchpad, a system for defining and launching distributed programs, particularly tuned towards machine learning applications. Launchpad forms part of the backbone we use for the distributed variants of RL algorithms in Acme.
June, 2020. Along with some great colleagues at DeepMind we’re releasing Acme, an RL framework that we’ve been working on and using for our own research for quite some time. You can check it out here or take a look at our whitepaper!
January, 2019. I have finally gotten around to moving and updating my website. At the moment the content here is incredibly out-of-date, but it's only a matter of time before the rest gets updated! (Thanks to Yannis for forcing me to do this!)
August, 2016. Our paper on Learning to learn by gradient descent by gradient descent was accepted at NIPS 2016. See you in Barcelona!
January, 2016. I have accepted a position as a research scientist at Google DeepMind and am excited to join this coming March!
September, 2015. I spoke at the Gaussian Process Summer School's workshop on global optimization; on the workshop site you can find videos of each talk presented. We also released an updated version of pybo, our code for modular Bayesian optimization.
December, 2014. Along with several colleagues, I presented papers at the BayesOpt workshop on modular Bayesian optimization, a shortened version of our PES paper, PES with unknown constraints, and entropy-based approaches to portfolio construction.