Explicit stabilised Runge-Kutta methods and their application to Bayesian inverse problems

With Kostas Zygalakis, University of Edinburgh


The concept of Bayesian inverse problems provides a coherent mathematical and algorithmic framework that enables researchers to combine mathematical models with the (often vast) datasets routinely available today in many fields of engineering, science and technology. The ability to solve such inverse problems depends crucially on the efficient calculation of quantities relating to the posterior distribution, giving rise to computationally challenging high-dimensional optimization and sampling problems. In this talk, we will connect the corresponding optimization and sampling problems to the long-time behaviour of solutions of (stochastic) differential equations. Establishing such a connection allows us to draw on existing knowledge from the numerical analysis of differential equations. In particular, numerical stability is key to a well-performing optimization or sampling algorithm: the larger the time-step that can be taken while preserving the limiting behaviour of the underlying differential equation, the more computationally efficient the algorithm is.

With this in mind, we will explore the applicability of explicit stabilised Runge-Kutta methods to optimization and sampling problems. These methods are optimal in terms of their stability properties within the class of explicit integrators, and we will show that, when used as optimization methods, they match the optimal convergence rate of the conjugate gradient method for quadratic optimization problems. Numerical investigations indicate that in the general case they are able to outperform state-of-the-art optimization methods such as Nesterov's accelerated gradient method. In the case of sampling, we will investigate their applicability to Bayesian inverse problems arising in computational imaging. An additional difficulty arises there because many such problems contain non-differentiable terms which, when regularised, introduce extra stiffness, making explicit stabilised methods all the more suitable for these problems. This is illustrated by a range of numerical experiments showing that, for the same computational cost as current state-of-the-art methods, explicit stabilised methods deliver much better MCMC samples.
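As a minimal sketch of the connection alluded to above (the notation here is ours and is only indicative of the talk's setting): for a smooth potential f, the optimization problem of minimising f can be associated with the gradient flow

\[ \dot{X}(t) = -\nabla f\bigl(X(t)\bigr), \qquad X(t) \longrightarrow \arg\min_x f(x) \quad \text{as } t \to \infty, \]

while sampling from a posterior density \( \pi(x) \propto \exp\bigl(-f(x)\bigr) \) can be associated with the overdamped Langevin diffusion

\[ \mathrm{d}X_t = -\nabla f(X_t)\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}W_t, \]

whose invariant distribution is \( \pi \). Discretising either equation in time yields an optimization or sampling algorithm, and the largest step size for which the discretisation remains stable governs the computational cost. Explicit stabilised (Chebyshev-type) Runge-Kutta methods with s internal stages extend the stability interval along the negative real axis from O(s) to O(s^2), which is what makes them attractive in this context.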

This is joint work with Armin Eftekhari (EPFL), Bart Vandereycken (Geneva), Gilles Vilmart (Geneva), Marcelo Pereyra (Heriot-Watt) and Luis Vargas (Edinburgh).

