Recent & Upcoming Talks

Dan Roy: Admissibility is Bayes Optimality with Infinitesimals

We give an exact characterization of admissibility in statistical decision problems in terms of Bayes optimality in a so-called nonstandard extension of the original decision problem, as introduced by Duanmu and Roy. Unlike the consideration of …
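
For readers outside decision theory, the two classical notions the result connects can be stated as follows (standard textbook definitions in our own notation, not taken from the talk):

```latex
% A decision rule \delta with risk function R(\theta, \delta) is admissible
% if no other rule dominates it:
\delta \text{ admissible} \iff
  \nexists\, \delta' :\; R(\theta, \delta') \le R(\theta, \delta)\ \forall \theta
  \text{ and } R(\theta_0, \delta') < R(\theta_0, \delta) \text{ for some } \theta_0.

% A rule is Bayes optimal for a prior \pi if it minimises the Bayes risk:
\delta_\pi \in \arg\min_{\delta} \int_\Theta R(\theta, \delta)\, d\pi(\theta).
```

Classically, Bayes rules are admissible under mild conditions but not conversely; the characterisation in the talk closes this gap by letting the prior live in a nonstandard extension, where it may assign infinitesimal mass.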

Benjamin Chamberlain: A Continuous Perspective on Graph Neural Networks

In this talk I will discuss several recent papers that develop new graph neural networks by considering their relation to continuous processes. In particular, I will show how graph neural networks can be derived as numerical schemes for solving differential …
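
As a concrete illustration of this continuous perspective, a residual message-passing layer can be read as one explicit-Euler step of a graph diffusion (heat) equation. The sketch below is our toy example, not code from the talk:

```python
import numpy as np

# dX/dt = (A_hat - I) X, with A_hat a symmetrically normalised adjacency.
# One explicit-Euler step with step size tau gives a residual GNN layer:
# X_{t+1} = X_t + tau * (A_hat - I) X_t.

def normalised_adjacency(A):
    """Symmetric normalisation D^{-1/2} A D^{-1/2}."""
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(A.sum(axis=1), 1e-12))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def diffusion_step(X, A_hat, tau=0.1):
    """One Euler step of graph diffusion; stacking steps ~ a deep GNN."""
    return X + tau * (A_hat @ X - X)

# Toy 4-node cycle graph with 2-dimensional node features.
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
X = np.random.randn(4, 2)
A_hat = normalised_adjacency(A)
for _ in range(10):              # ten Euler steps ~ ten linear GNN layers
    X = diffusion_step(X, A_hat)
print(X)                         # features smoothed towards consensus
```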

Michalis Titsias: Functional Regularisation for Continual Learning with Gaussian Processes

We introduce a framework for Continual Learning (CL) based on Bayesian inference over the function space rather than the parameters of a deep neural network. This method, referred to as functional regularisation for Continual Learning, avoids …
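
A minimal sketch of the function-space idea (our simplification for a scalar output; the talk's method works with a GP posterior over a deep network's last layer): after each task, store a few inducing inputs together with the posterior over the function values there, and regularise the next task's function values towards that posterior.

```python
import numpy as np

# Function-space regulariser: KL( N(mu_new, S_new) || N(mu_old, S_old) )
# between the current and the stored posterior over f(Z) at inducing
# points Z, added to the new task's loss.

def gaussian_kl(mu_q, S_q, mu_p, S_p):
    """KL divergence between multivariate Gaussians q and p."""
    k = mu_q.shape[0]
    S_p_inv = np.linalg.inv(S_p)
    diff = mu_p - mu_q
    return 0.5 * (np.trace(S_p_inv @ S_q) + diff @ S_p_inv @ diff - k
                  + np.log(np.linalg.det(S_p) / np.linalg.det(S_q)))

# Toy numbers: posterior stored after the old task vs. the current one.
mu_old, S_old = np.zeros(3), np.eye(3)
mu_new, S_new = 0.1 * np.ones(3), 0.9 * np.eye(3)
print(gaussian_kl(mu_new, S_new, mu_old, S_old))  # added to the task loss
```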

Ollie Hamelijnck: Spatio-Temporal Variational Gaussian Processes

We introduce a scalable approach to Gaussian process inference that combines spatio-temporal filtering with natural gradient variational inference, resulting in a non-conjugate GP method for multivariate data that scales linearly with respect to …
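
The linear-in-time scaling rests on the classical equivalence between Markovian GP kernels and state-space models: a Matern-1/2 (exponential) kernel in time corresponds exactly to an Ornstein-Uhlenbeck process, so inference runs as a Kalman filter in O(T). A minimal sketch in our own notation, not the authors' code:

```python
import numpy as np

def kalman_filter_ou(t, y, variance=1.0, lengthscale=1.0, noise=0.1):
    """O(T) filtering for f ~ GP(0, variance * exp(-|t-t'|/lengthscale))."""
    m, P = 0.0, variance                 # stationary prior at the start
    means, variances = [], []
    for k in range(len(t)):
        if k > 0:                        # predict step (OU transition)
            a = np.exp(-(t[k] - t[k - 1]) / lengthscale)
            m = a * m
            P = a * a * P + variance * (1.0 - a * a)
        K = P / (P + noise)              # update step: y_k = f_k + eps
        m = m + K * (y[k] - m)
        P = (1.0 - K) * P
        means.append(m)
        variances.append(P)
    return np.array(means), np.array(variances)

t = np.linspace(0.0, 5.0, 50)
y = np.sin(t) + 0.3 * np.random.randn(50)
mu, var = kalman_filter_ou(t, y)
print(mu[:5], var[:5])
```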

Shahine Bouabid and Siu Chau: Deconditional Downscaling with Gaussian Processes

Reproducing kernel Hilbert spaces (RKHS) provide a powerful framework, termed kernel mean embeddings, for representing probability distributions, enabling nonparametric statistical inference in a variety of applications. Combining RKHS formalism with …
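
For context, the embedding construction itself is short (standard material, not the talk's downscaling method): a sample is mapped to the mean of its kernel features, and distributions can then be compared by the RKHS distance between embeddings (the MMD).

```python
import numpy as np

def rbf(X, Y, lengthscale=1.0):
    """Gram matrix of k(x, y) = exp(-||x - y||^2 / (2 l^2))."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * lengthscale ** 2))

def mmd2(X, Y):
    """Squared RKHS distance between empirical mean embeddings of X, Y."""
    return rbf(X, X).mean() - 2.0 * rbf(X, Y).mean() + rbf(Y, Y).mean()

X = np.random.randn(100, 2)           # sample from N(0, I)
Y = np.random.randn(100, 2) + 1.0     # sample from a shifted Gaussian
print(mmd2(X, Y), mmd2(X, np.random.randn(100, 2)))  # large vs. near zero
```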

William Gregory: Improving Arctic Sea Ice Predictability with Gaussian Processes

Arctic sea ice is a major component of the Earth’s climate system, as well as an integral platform for travel, subsistence, and habitat. Since the late 1970s, significant advancements have been made in our ability to closely monitor the state of the …

Geoff Pleiss: Understanding Neural Networks through Gaussian Processes, and Vice Versa

Neural networks and Gaussian processes represent different learning paradigms: the former are parametric and trained by empirical risk minimization (ERM), while the latter are non-parametric and employ Bayesian inference. Despite these differences, I will discuss how …
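
One concrete bridge between the two paradigms is the infinite-width limit: a one-hidden-layer ReLU network with i.i.d. Gaussian weights converges to a GP whose covariance is the arc-cosine kernel. A quick numerical check of this standard result (our sketch, not the speaker's code):

```python
import numpy as np

def arccos_kernel(x, y):
    """E[ReLU(w.x) ReLU(w.y)] for w ~ N(0, I): the arc-cosine kernel."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    cos_t = np.clip(x @ y / (nx * ny), -1.0, 1.0)
    theta = np.arccos(cos_t)
    return nx * ny * (np.sin(theta) + (np.pi - theta) * cos_t) / (2 * np.pi)

rng = np.random.default_rng(0)
d, width = 3, 200_000
x, y = rng.standard_normal(d), rng.standard_normal(d)

W = rng.standard_normal((width, d))        # hidden weights w_j ~ N(0, I)
relu = lambda z: np.maximum(z, 0.0)
k_mc = relu(W @ x) @ relu(W @ y) / width   # wide-network Monte Carlo

print(k_mc, arccos_kernel(x, y))           # the two should closely agree
```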

Rachel Prudden: Stochastic Downscaling for Convective Regimes with Gaussian Random Fields

Downscaling aims to link the behaviour of the atmosphere at fine scales to properties measurable at coarser scales, and has the potential to provide high-resolution information at a lower computational and storage cost than numerical simulation …
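
The generic statistical setup can be sketched in one dimension (our simplified illustration, not the talk's convective-regime model): treat the fine-scale field as a Gaussian random field and the coarse data as block averages; linear-Gaussian conditioning then yields whole distributions of fine fields consistent with the coarse ones.

```python
import numpy as np

n, block = 64, 8                          # fine grid size, coarse block
x = np.linspace(0.0, 1.0, n)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.05 ** 2)  # RBF cov

H = np.zeros((n // block, n))             # block-averaging observation map
for i in range(n // block):
    H[i, i * block:(i + 1) * block] = 1.0 / block

y = np.sin(2 * np.pi * np.arange(n // block) / (n // block))   # coarse data

S = H @ K @ H.T + 1e-6 * np.eye(n // block)   # coarse covariance + jitter
gain = K @ H.T @ np.linalg.inv(S)
mean = gain @ y                               # posterior mean fine field
cov = K - gain @ H @ K                        # posterior covariance

rng = np.random.default_rng(1)
L = np.linalg.cholesky(cov + 1e-6 * np.eye(n))
sample = mean + L @ rng.standard_normal(n)    # one stochastic downscaling
print(sample[:8])
```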

Willie Neiswanger: Going Beyond Global Optima with Bayesian Algorithm Execution

In many real-world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations. One example is budget-constrained global optimization of f, for which Bayesian optimization is a popular …
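
To convey the shape of such a loop, here is a deliberately crude sketch for one property beyond the optimum, a superlevel set, with queries chosen by a simple ambiguity score (the talk's method instead uses a principled information-gain acquisition over the algorithm's output):

```python
import numpy as np

def rbf(X, Y, l=0.2):
    return np.exp(-0.5 * (X[:, None] - Y[None, :]) ** 2 / l ** 2)

def gp_posterior(X_obs, y_obs, X_test, noise=1e-4):
    """Posterior mean and variance of a unit-variance RBF GP."""
    K = rbf(X_obs, X_obs) + noise * np.eye(len(X_obs))
    Ks = rbf(X_obs, X_test)
    mu = Ks.T @ np.linalg.solve(K, y_obs)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.maximum(var, 1e-12)

f = lambda x: np.sin(6.0 * x)             # the expensive black box (toy)
c = 0.5                                   # we want {x : f(x) > c}
grid = np.linspace(0.0, 1.0, 200)
X, y = np.array([0.5]), np.array([f(0.5)])

for _ in range(10):                       # budget of T = 10 more queries
    mu, var = gp_posterior(X, y, grid)
    score = -np.abs(mu - c) / np.sqrt(var)   # most ambiguous membership
    x_next = grid[np.argmax(score)]
    X, y = np.append(X, x_next), np.append(y, f(x_next))

mu, _ = gp_posterior(X, y, grid)
print(grid[mu > c])                       # estimated superlevel set
```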

Florin Gogianu and Tudor Berariu: Spectral Normalisation in Deep Reinforcement Learning

We are happy to present this joint work with Mihaela Roșca, Răzvan Pascanu, Lucian Bușunoiu, and Claudia Clopath on the effect of spectral normalisation in deep reinforcement learning. Most of the recent deep reinforcement learning advances take an …
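
For reference, spectral normalisation itself is simple to state: divide a layer's weight matrix by an estimate of its largest singular value, obtained cheaply by power iteration, so the layer is at most 1-Lipschitz. A minimal reimplementation of the standard technique (ours, not the authors' code):

```python
import numpy as np

def spectral_normalise(W, n_iter=20):
    """Return W / sigma_max(W), with sigma_max from power iteration."""
    u = np.random.randn(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v                 # Rayleigh estimate of sigma_max
    return W / sigma

W = np.random.randn(64, 32)
W_sn = spectral_normalise(W)
print(np.linalg.svd(W, compute_uv=False)[0],     # original sigma_max
      np.linalg.svd(W_sn, compute_uv=False)[0])  # ~1.0 after normalising
```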