Image: Lenzerheide, by Adrian Michael / CC-BY-3.0

This session will be hosted at MCMSki, the joint meeting of the Institute of Mathematical Statistics and the International Society for Bayesian Analysis. The workshop is organised by Michael Osborne, Chris Oates and François-Xavier Briol.

The workshop will be held on Thursday, 7th January 2016, in the Activityraum.

- 09:40-10:10: Simo Särkkä
- 10:10-10:40: François-Xavier Briol
- 10:40-11:10: Roman Garnett

Numerical algorithms, such as methods for the numerical solution of integrals or differential equations, as well as optimization algorithms, can be interpreted as estimation rules. They estimate the value of a latent, intractable quantity – the value of an integral, the solution of a differential equation, the location of an extremum – given the results of tractable computations (“observations”, such as function values of the integrand, evaluations of the right-hand side of a differential equation, or gradients of an objective function). These methods thus perform inference, and are accessible to the formal frameworks of probability theory. They are learning machines.

Taking this observation seriously, a probabilistic numerical method is a numerical algorithm that takes in a probability distribution over its inputs, and returns a probability distribution over its output. Recent research shows that it is in fact possible to directly identify existing numerical methods, including some real classics, with specific probabilistic models.
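As an illustration, here is a minimal sketch of one such method, Bayesian quadrature, under assumed choices not taken from any particular paper (an RBF-kernel Gaussian-process prior, evenly spaced evaluation nodes, a hand-picked length-scale): the method observes function values of the integrand and returns a Gaussian distribution (a mean and a standard deviation) over the value of the integral.

```python
import math
import numpy as np

def bayesian_quadrature(f, a, b, n, ell=0.5):
    """Posterior mean and std of the integral of f over [a, b],
    under a GP prior with RBF kernel of length-scale ell."""
    x = np.linspace(a, b, n)     # where we can afford to evaluate f
    y = f(x)                     # the tractable "observations"
    # Kernel matrix of the GP prior, with jitter for numerical stability.
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * ell ** 2)) + 1e-9 * np.eye(n)
    s = ell * math.sqrt(2.0)
    # Kernel mean z_i = integral of k(x, x_i) over [a, b] (closed form for RBF).
    z = ell * math.sqrt(math.pi / 2) * np.array(
        [math.erf((b - xi) / s) - math.erf((a - xi) / s) for xi in x])
    # Double integral of k(x, x') over [a, b]^2.
    h = b - a
    zz = (math.sqrt(2 * math.pi) * ell * h * math.erf(h / s)
          - 2 * ell ** 2 * (1 - math.exp(-h ** 2 / (2 * ell ** 2))))
    mean = z @ np.linalg.solve(K, y)
    var = max(zz - z @ np.linalg.solve(K, z), 0.0)  # clamp tiny round-off
    return mean, math.sqrt(var)

mean, sd = bayesian_quadrature(np.sin, 0.0, math.pi, n=8)
print(f"integral of sin on [0, pi]: {mean:.4f} +/- {sd:.4f}  (true value: 2)")
```

The standard deviation plays the role of the "probability distribution over the output": a point estimate together with a quantification of how uncertain the method is about it, given only eight function evaluations.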

Interpreting numerical methods as learning algorithms offers several benefits. It can reveal the implicit prior assumptions encoded in existing methods. As a joint framework for methods developed in separate communities, it allows knowledge to be transferred between these areas. The probabilistic formulation also explicitly provides a richer output than a simple convergence bound: if the probability measure returned by a probabilistic method is well calibrated, it can be used to monitor, propagate and control the quality of computations.
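One concrete way such control could look, sketched under assumed choices (an RBF-kernel GP prior for Bayesian quadrature, a hypothetical tolerance, doubling of the node count): keep adding function evaluations until the posterior standard deviation over the integral falls below a tolerance, i.e. let the method's own uncertainty decide how much computation is enough.

```python
import math
import numpy as np

def bq(f, a, b, n, ell=0.5):
    """Posterior mean and std of the integral of f over [a, b],
    under a GP prior with RBF kernel of length-scale ell."""
    x = np.linspace(a, b, n)
    y = f(x)
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * ell ** 2)) + 1e-9 * np.eye(n)
    s = ell * math.sqrt(2.0)
    z = ell * math.sqrt(math.pi / 2) * np.array(
        [math.erf((b - xi) / s) - math.erf((a - xi) / s) for xi in x])
    h = b - a
    zz = (math.sqrt(2 * math.pi) * ell * h * math.erf(h / s)
          - 2 * ell ** 2 * (1 - math.exp(-h ** 2 / (2 * ell ** 2))))
    mean = z @ np.linalg.solve(K, y)
    var = max(zz - z @ np.linalg.solve(K, z), 0.0)  # clamp tiny round-off
    return mean, math.sqrt(var)

# Control loop: evaluate more points until the posterior is confident enough.
tol, n = 1e-2, 4
mean, sd = bq(np.sin, 0.0, math.pi, n)
while sd > tol and n < 128:
    n *= 2
    mean, sd = bq(np.sin, 0.0, math.pi, n)
print(f"stopped at n={n}: {mean:.4f} +/- {sd:.4f}")
```

This presumes the posterior is reasonably calibrated; with badly chosen hyperparameters the method could stop too early with confident but wrong answers, which is exactly why calibration matters.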