Emtiyaz Khan: Learning Algorithms from Bayesian Principles


In machine learning, new learning algorithms are designed by borrowing ideas from optimization and statistics, followed by extensive empirical effort to make them practical. However, there is a lack of underlying principles to guide this process. I will present a stochastic learning algorithm derived from Bayesian principles. Using this algorithm, we can obtain a range of existing algorithms: from classical methods such as least-squares, Newton’s method, and the Kalman filter to deep-learning algorithms such as RMSprop and Adam. Surprisingly, using the same principles, new algorithms can be naturally obtained even for challenging learning tasks such as online learning, continual learning, and reinforcement learning. This talk will summarize recent work and outline future directions on how this principle can be used to design algorithms that mimic the learning behaviour of living beings.
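To give a flavour of the connection mentioned in the abstract, here is a minimal, hedged sketch of an RMSprop-style update viewed through a Bayesian lens: under a diagonal-Gaussian posterior, the posterior mean plays the role of the weights and a running estimate of the posterior precision acts as the adaptive scaling vector. The function name and constants below are illustrative, not the speaker's exact algorithm.

```python
import math

def rmsprop_step(w, s, grad, lr=0.1, beta=0.9, eps=1e-8):
    """One illustrative update: s tracks squared gradients (a
    precision-like statistic in the Bayesian reading), and w moves
    along the gradient scaled by 1/sqrt(s)."""
    s = beta * s + (1 - beta) * grad ** 2      # precision estimate
    w = w - lr * grad / (math.sqrt(s) + eps)   # preconditioned step
    return w, s

# Toy example: minimise f(w) = (w - 3)^2, whose gradient is 2(w - 3).
w, s = 0.0, 0.0
for _ in range(200):
    g = 2.0 * (w - 3.0)
    w, s = rmsprop_step(w, s, g)
# w ends up oscillating near the minimiser w = 3
```

In the full Bayesian treatment, the same update arises from a natural-gradient step on the variational objective, which is what lets the one principle recover both classical and deep-learning methods.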

September 16, 2019 14:00 — 15:00
SML Seminar
Huxley LT 144, Imperial College London


Emtiyaz Khan is a team leader at the RIKEN Center for Advanced Intelligence Project (AIP) in Tokyo, where he leads the Approximate Bayesian Inference (ABI) Team. Since April 2018, he has also been a visiting professor at the EE department of the Tokyo University of Agriculture and Technology (TUAT). He is an Action Editor for the Journal of Machine Learning Research (JMLR). From 2014 to 2016, he was a scientist at EPFL in Matthias Grossglauser's lab. During his time at EPFL, he taught two large machine learning courses, for which he received a teaching award. He first joined EPFL as a post-doc with Matthias Seeger in 2013, and before that he completed his PhD at UBC in 2012 under the supervision of Kevin Murphy.