Probabilistic-Numerics.org

Literature

This page collects literature on all areas of probabilistic numerics, sorted by problem type. If you would like your publication to be featured in this list, please do not hesitate to contact us. The fastest way to get your documents onto the site is to clone our GitHub repository, add your documents to the relevant BibTeX file in /_bibliography, then either send us a pull request or an email with the updated file (see the box on the front page for our contact details).

General and Foundational

The following papers are often cited as early works on the idea of uncertainty over the result of deterministic computations. Some entries have a “notes” field providing further information about the relevance of the cited work, or pointers to specific results therein.

1. Kanagawa, M., Hennig, P., Sejdinovic, D., & Sriperumbudur, B. K. (2018). Gaussian Processes and Kernel Methods: A Review on Connections and Equivalences. ArXiv e-Prints, 1807.02582.

This paper is an attempt to bridge the conceptual gaps between researchers working on the two widely used approaches based on positive definite kernels: Bayesian learning or inference using Gaussian processes on the one side, and frequentist kernel methods based on reproducing kernel Hilbert spaces on the other. It is widely known in machine learning that these two formalisms are closely related; for instance, the estimator of kernel ridge regression is identical to the posterior mean of Gaussian process regression. However, they have been studied and developed almost independently by two essentially separate communities, and this makes it difficult to seamlessly transfer results between them. Our aim is to overcome this potential difficulty. To this end, we review several old and new results and concepts from either side, and juxtapose algorithmic quantities from each framework to highlight close similarities. We also provide discussions on subtle philosophical and theoretical differences between the two approaches.
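The equivalence mentioned in this abstract is easy to verify numerically. The sketch below is our own illustration (not code from the paper): it fits kernel ridge regression and Gaussian process regression with a matched squared-exponential kernel and regulariser, and checks that the two point estimates coincide.

```python
# Check that kernel ridge regression (regulariser lam) and GP regression
# (observation-noise variance sigma^2 = lam) give the same point estimate.
import numpy as np

def rbf(a, b, ell=0.5):
    # Squared-exponential kernel matrix between 1-d point sets a and b.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

rng = np.random.default_rng(0)
X = np.linspace(-1, 1, 8)                          # training inputs
y = np.sin(3 * X) + 0.1 * rng.standard_normal(8)   # noisy observations
Xs = np.linspace(-1, 1, 50)                        # test inputs
lam = 0.01

# KRR: by the representer theorem, minimising
#   sum_i (f(x_i) - y_i)^2 + lam * ||f||_H^2
# over the RKHS reduces to the stationarity condition K (K + lam I) a = K y.
K = rbf(X, X)
alpha = np.linalg.solve(K @ K + lam * K, K @ y)
krr = rbf(Xs, X) @ alpha

# GP regression: posterior mean under the prior GP(0, k) with noise variance lam.
gp_mean = rbf(Xs, X) @ np.linalg.solve(K + lam * np.eye(8), y)

assert np.allclose(krr, gp_mean, atol=1e-5)
```

Both routes end in the same linear algebra, which is exactly the identity the paper takes as its starting point.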

@article{2018arXiv180702582K,
author = {{Kanagawa}, M. and {Hennig}, P. and {Sejdinovic}, D. and {Sriperumbudur}, B.~K},
title = {{Gaussian Processes and Kernel Methods: A Review on Connections and Equivalences}},
journal = {ArXiv e-prints},
volume = {1807.02582},
year = {2018},
month = jul,
file = {https://arxiv.org/pdf/1807.02582}
}

2. Cockayne, J., Oates, C., Sullivan, T., & Girolami, M. (2017). Bayesian Probabilistic Numerical Methods. ArXiv e-Prints, stat.ME 1702.03673.

The emergent field of probabilistic numerics has thus far lacked rigorous statistical principles. This paper establishes Bayesian probabilistic numerical methods as those which can be cast as solutions to certain Bayesian inverse problems, albeit problems that are non-standard. This allows us to establish general conditions under which Bayesian probabilistic numerical methods are well-defined, encompassing both non-linear and non-Gaussian models. For general computation, a numerical approximation scheme is developed and its asymptotic convergence is established. The theoretical development is then extended to pipelines of computation, wherein probabilistic numerical methods are composed to solve more challenging numerical tasks. The contribution highlights an important research frontier at the interface of numerical analysis and uncertainty quantification, with some illustrative applications presented.

@article{2017arXiv170203673C,
author = {{Cockayne}, J. and {Oates}, C. and {Sullivan}, T. and {Girolami}, M.},
title = {{{B}ayesian Probabilistic Numerical Methods}},
journal = {ArXiv e-prints},
volume = {stat.ME 1702.03673},
year = {2017},
month = feb,
file = {https://arxiv.org/pdf/1702.03673.pdf}
}

3. Owhadi, H., & Scovel, C. (2017). Universal Scalable Robust Solvers from Computational Information Games and fast eigenspace adapted Multiresolution Analysis. ArXiv:1703.10761 [Math, Stat]. Retrieved from http://arxiv.org/abs/1703.10761

One result is that a quadratic norm defined on a Banach space defines a unique Gaussian measure that is an optimal mixed strategy for the recovery problem associated with numerical approximation/probabilistic numerics using relative error in that norm as a loss (in a repeated game). In some sense, this shows that probabilistic numerics estimates are also optimal in the (minimax) worst case setting of classical numerical analysis if the prior is the canonical Gaussian field defined by the norm used to quantify approximation errors. Using a Sobolev space endowed with an operator norm leads to applications in PDEs (N polylog N complexity solvers for operators on Sobolev spaces), using R^N leads to applications to linear algebra (e.g. multi-resolution decompositions of matrix equations). By conditioning this canonical Gaussian field on a hierarchy of measurements one obtains wavelets adapted to the norm defining this Gaussian field, i.e. operator adapted wavelets (that are scale orthogonal in operator scalar product and localized in space and eigenspace).

We show how the discovery of robust scalable numerical solvers for arbitrary bounded linear operators can be automated as a Game Theory problem by reformulating the process of computing with partial information and limited resources as that of playing underlying hierarchies of adversarial information games. When the solution space is a Banach space $B$ endowed with a quadratic norm $\|\cdot\|$, the optimal measure (mixed strategy) for such games (e.g. the adversarial recovery of $u \in B$ given partial measurements $[\phi_i, u]$ with $\phi_i \in B^*$, using relative error in $\|\cdot\|$-norm as a loss) is a centered Gaussian field $\xi$ solely determined by the norm $\|\cdot\|$, whose conditioning (on measurements) produces optimal bets. When measurements are hierarchical, the process of conditioning this Gaussian field produces a hierarchy of elementary bets (gamblets). These gamblets generalize the notion of Wavelets and Wannier functions in the sense that they are adapted to the norm $\|\cdot\|$ and induce a multi-resolution decomposition of $B$ that is adapted to the eigensubspaces of the operator defining the norm $\|\cdot\|$. When the operator is localized, we show that the resulting gamblets are localized both in space and frequency and introduce the Fast Gamblet Transform (FGT) with rigorous accuracy and (near-linear) complexity estimates. As the FFT can be used to solve and diagonalize arbitrary PDEs with constant coefficients, the FGT can be used to decompose a wide range of continuous linear operators (including arbitrary continuous linear bijections from $H^s_0$ to $H^{-s}$ or to $L^2$) into a sequence of independent linear systems with uniformly bounded condition numbers and leads to $\mathcal{O}(N \operatorname{polylog} N)$ solvers and eigenspace adapted Multiresolution Analysis (resulting in near-linear complexity approximation of all eigensubspaces).

@article{owhadi_universal_2017,
title = {Universal {Scalable} {Robust} {Solvers} from {Computational} {Information} {Games} and fast eigenspace adapted {Multiresolution} {Analysis}},
url = {http://arxiv.org/abs/1703.10761},
notes = {
One result is that a quadratic norm defined on a Banach space defines a unique Gaussian measure that is an optimal mixed strategy  for the recovery problem associated with numerical approximation/probabilistic numerics using relative error in that norm as a loss (in a repeated game). In some sense, this shows that probabilistic numerics estimates are also optimal in the (minimax) worst case setting of classical numerical analysis if the prior is the canonical Gaussian field defined by the norm used to quantify approximation errors.
Using a Sobolev space endowed with an operator norm leads to applications in PDEs (N polylog N  complexity solvers for operators on Sobolev spaces), using R^N leads to applications to linear algebra (e.g. multi-resolution decompositions of matrix equations).
By conditioning this canonical Gaussian field on a hierarchy of measurements one obtains wavelets adapted to the norm defining this Gaussian field, i.e. operator adapted wavelets (that are scale orthogonal in operator scalar product and localized in space and eigenspace).
},
urldate = {2017-09-10},
journal = {arXiv:1703.10761 [math, stat]},
author = {Owhadi, Houman and Scovel, Clint},
month = mar,
year = {2017},
note = {One result is that a quadratic norm defined on a Banach space defines a unique Gaussian measure that is an optimal mixed strategy for the recovery problem associated with numerical approximation/probabilistic numerics using relative error in that norm as a loss. In some sense, this shows that probabilistic numerics estimates are also optimal in the worst case setting of classical numerical analysis if the prior is the canonical Gaussian field defined by the norm used to quantify approximation errors. Using a Sobolev space endowed with an operator norm leads to applications in PDEs (fast solvers for operators on Sobolev spaces), using $\mathbb{R}^N$ leads to applications to linear algebra (e.g. multi-resolution decompositions of matrix equations)},
keywords = {68T99, 65T60, 65M55, 65N55, 65F99, 65N75, 62C99, 62C20, 62C10, 42C40, 60G42, 68Q25, 15A18, 35Q91, Mathematics - Analysis of PDEs, Mathematics - Numerical Analysis, Statistics - Machine Learning},
annote = {Comment: 142 pages. 14 Figures. Presented at AFOSR (Aug 2016), DARPA (Sep 2016), IPAM (Apr 3, 2017), Hausdorff (April 13, 2017) and ICERM (June 5, 2017)},
file = {https://arxiv.org/pdf/1703.10761.pdf}
}

4. Owhadi, H., & Scovel, C. (2016). Toward Machine Wald. In Springer Handbook of Uncertainty Quantification (pp. 1–35). Springer.

The past century has seen a steady increase in the need of estimating and predicting complex systems and making (possibly critical) decisions with limited information. Although computers have made possible the numerical evaluation of sophisticated statistical models, these models are still designed by humans because there is currently no known recipe or algorithm for dividing the design of a statistical model into a sequence of arithmetic operations. Indeed enabling computers to think as humans, especially when faced with uncertainty, is challenging in several major ways: (1) Finding optimal statistical models remains to be formulated as a well-posed problem when information on the system of interest is incomplete and comes in the form of a complex combination of sample data, partial knowledge of constitutive relations and a limited description of the distribution of input random variables. (2) The space of admissible scenarios along with the space of relevant information, assumptions, and/or beliefs, tends to be infinite dimensional, whereas calculus on a computer is necessarily discrete and finite. With this purpose, this paper explores the foundations of a rigorous framework for the scientific computation of optimal statistical estimators/models and reviews their connections with decision theory, machine learning, Bayesian inference, stochastic optimization, robust optimization, optimal uncertainty quantification, and information-based complexity.

author = {Owhadi, Houman and Scovel, Clint},
title = {Toward Machine {W}ald},
booktitle = {Springer Handbook of Uncertainty Quantification},
year = {2016},
pages = {1--35},
publisher = {Springer},
file = {http://arxiv.org/pdf/1508.02449v2.pdf}
}

5. Hennig, P., Osborne, M. A., & Girolami, M. (2015). Probabilistic numerics and uncertainty in computations. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 471(2179).

We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimisers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.

@article{HenOsbGirRSPA2015,
author = {Hennig, Philipp and Osborne, Michael A. and Girolami, Mark},
title = {Probabilistic numerics and uncertainty in computations},
volume = {471},
number = {2179},
year = {2015},
publisher = {The Royal Society},
journal = {Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences},
file = {http://rspa.royalsocietypublishing.org/content/royprsa/471/2179/20150142.full.pdf}
}

6. Owhadi, H., & Scovel, C. (2015). Conditioning Gaussian measure on Hilbert space. ArXiv:1506.04208 [Math]. Retrieved from http://arxiv.org/abs/1506.04208

For a Gaussian measure on a separable Hilbert space with covariance operator $C$, we show that the family of conditional measures associated with conditioning on a closed subspace $S^{\perp}$ are Gaussian with covariance operator the short $\mathcal{S}(C)$ of the operator $C$ to $S$. We provide two proofs. The first uses the theory of Gaussian Hilbert spaces and a characterization of the shorted operator by Andersen and Trapp. The second uses recent developments by Corach, Maestripieri and Stojanoff on the relationship between the shorted operator and $C$-symmetric oblique projections onto $S^{\perp}$. To obtain the assertion when such projections do not exist, we develop an approximation result for the shorted operator by showing, for any positive operator $A$, how to construct a sequence of approximating operators $A^{n}$ which possess $A^{n}$-symmetric oblique projections onto $S^{\perp}$ such that the sequence of shorted operators $\mathcal{S}(A^{n})$ converges to $\mathcal{S}(A)$ in the weak operator topology. This result, combined with the martingale convergence of random variables associated with the corresponding approximations $C^{n}$, establishes the main assertion in general. Moreover, it in turn strengthens the approximation theorem for the shorted operator when the operator is trace class; then the sequence of shorted operators $\mathcal{S}(A^{n})$ converges to $\mathcal{S}(A)$ in trace norm.
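In finite dimensions the shorted operator reduces to a Schur complement, so the main assertion can be seen concretely with a few lines of linear algebra. The sketch below is our own finite-dimensional illustration, not code from the paper.

```python
# Finite-dimensional analogue: for a positive-definite covariance C on R^5,
# the short of C to the subspace S spanned by the first k coordinates is
# the Schur complement A - B D^{-1} B^T, which is exactly the covariance of
# the Gaussian conditioned on the remaining (S-perp) coordinates.
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
C = M @ M.T + 5 * np.eye(5)                 # positive-definite covariance
k = 2                                       # dim(S)
A, B, D = C[:k, :k], C[:k, k:], C[k:, k:]

short_C = A - B @ np.linalg.solve(D, B.T)   # Schur complement of D in C

# Independent check: the conditional covariance of x[:k] given x[k:] is the
# inverse of the top-left block of the precision matrix C^{-1}.
cond_cov = np.linalg.inv(np.linalg.inv(C)[:k, :k])
assert np.allclose(short_C, cond_cov)
```

The two computations take genuinely different routes (direct Schur complement versus the block-inverse identity), which is what makes the agreement a meaningful check.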

title = {Conditioning {Gaussian} measure on {Hilbert} space},
url = {http://arxiv.org/abs/1506.04208},
urldate = {2017-09-10},
journal = {arXiv:1506.04208 [math]},
author = {Owhadi, Houman and Scovel, Clint},
month = jun,
year = {2015},
note = {arXiv: 1506.04208},
keywords = {28C20, Mathematics - Probability},
file = {https://arxiv.org/pdf/1506.04208.pdf}
}

7. Ritter, K. (2000). Average-case analysis of numerical problems. Springer.
@book{ritter2000average,
title = {Average-case analysis of numerical problems},
author = {Ritter, Klaus},
number = {1733},
year = {2000},
publisher = {Springer},
series = {Lecture Notes in Mathematics}
}

8. O’Hagan, A. (1992). Some Bayesian numerical analysis. Bayesian Statistics, 4, 345–363.
@article{o1992some,
title = {Some Bayesian numerical analysis},
author = {O’Hagan, Anthony},
journal = {Bayesian Statistics},
volume = {4},
pages = {345--363},
file = {../assets/pdf/OHagan1991.pdf},
year = {1992}
}

9. Diaconis, P. (1988). Bayesian numerical analysis. Statistical Decision Theory and Related Topics IV, 1, 163–175.

This text is arguably the first to very explicitly discuss the notion of "Bayesian Numerics".

@article{diaconis1988bayesian,
title = {Bayesian numerical analysis},
author = {Diaconis, Persi},
journal = {Statistical decision theory and related topics IV},
volume = {1},
pages = {163--175},
year = {1988},
publisher = {Springer-Verlag, New York},
file = {../assets/pdf/Diaconis_1988.pdf},
notes = {This text is arguably the first to very explicitly discuss the notion of "Bayesian Numerics".}
}

10. Kadane, J. B. (1985). Parallel and Sequential Computation: A Statistician’s View. Journal of Complexity, 1, 256–263.

I borrow themes from statistics - especially the Bayesian ideas underlying average case analysis and ideas of sequential design of experiments - to discuss when parallel computation is likely to be an attractive technique.

@article{kadane1985parallel,
author = {Kadane, J. B.},
journal = {Journal of Complexity},
pages = {256--263},
title = {{Parallel and Sequential Computation: A Statistician's View}},
volume = {1},
year = {1985},
}

11. Kadane, J. B., & Wasilkowski, G. W. (1985). Average case epsilon-complexity in computer science: A Bayesian view. In Bayesian Statistics 2, Proceedings of the Second Valencia International Meeting (pp. 361–374).

Relations between average case epsilon-complexity and Bayesian statistics are discussed. An algorithm corresponds to a decision function, and the choice of information to the choice of an experiment. Adaptive information in epsilon-complexity theory corresponds to the concept of sequential experiment. Some results are reported, giving epsilon-complexity and minimax-Bayesian interpretations for factor analysis. Results from epsilon-complexity are used to establish that the optimal sequential design is no better than the optimal nonsequential design for that problem.

@inproceedings{kadane1985average,
author = {Kadane, J. B. and Wasilkowski, G. W.},
booktitle = {Bayesian Statistics 2, Proceedings of the Second Valencia International Meeting},
number = {July},
pages = {361--374},
title = {{Average case epsilon-complexity in computer science: A Bayesian view}},
year = {1985},
}

12. Larkin, F. M. (1972). Gaussian measure in Hilbert space and applications in numerical analysis. Rocky Mountain Journal of Mathematics, 2(3), 379–422.

The numerical analyst is often called upon to estimate a function from a very limited knowledge of its properties (e.g. a finite number of ordinate values). This problem may be made well posed in a variety of ways, but an attractive approach is to regard the required function as a member of a linear space on which a probability measure is constructed, and then use established techniques of probability theory and statistics in order to infer properties of the function from the given information. This formulation agrees with established theory, for the problem of optimal linear approximation (using a Gaussian probability distribution), and also permits the estimation of nonlinear functionals, as well as extension to the case of "noisy" data.
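Larkin's programme is, in modern terms, Gaussian process interpolation. The minimal sketch below is our own illustration (with an assumed squared-exponential covariance, not one from the paper): treat the unknown function as a draw from a Gaussian measure, condition on finitely many ordinate values, and read off both an estimate and its remaining uncertainty.

```python
# Infer a function from a finite number of ordinate values by conditioning
# a Gaussian process; the posterior supplies an estimate and an error bar.
import numpy as np

def k(a, b, ell=0.4):
    # Squared-exponential covariance between 1-d point sets a and b.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

X = np.array([0.0, 0.3, 0.7, 1.0])         # the known ordinate locations...
y = np.sin(2 * np.pi * X)                  # ...and the observed values
Xs = np.linspace(0, 1, 101)

K = k(X, X) + 1e-12 * np.eye(len(X))       # jitter only: noise-free data
mean = k(Xs, X) @ np.linalg.solve(K, y)    # posterior mean estimate
var = 1.0 - np.einsum('ij,ji->i', k(Xs, X), np.linalg.solve(K, k(X, Xs)))

# The posterior interpolates the data, with (numerically) zero uncertainty
# at the ordinates and positive uncertainty between them.
assert np.allclose(mean[[0, 30, 70, 100]], y, atol=1e-6)
assert var[50] > 1e-4
```

The "noisy data" extension Larkin mentions amounts to replacing the jitter term with the observation-noise variance.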

@article{larkin1972gaussian,
title = {Gaussian measure in {H}ilbert space and applications in numerical analysis},
author = {Larkin, F. M.},
journal = {Rocky Mountain Journal of Mathematics},
volume = {2},
number = {3},
year = {1972},
pages = {379--422}
}

13. Kimeldorf, G. S., & Wahba, G. (1970). A correspondence between Bayesian estimation on stochastic processes and smoothing by splines. The Annals of Mathematical Statistics, 41(2), 495–502.

This seminal paper points out that splines can be interpreted as the posterior mean functions of certain Gaussian processes. The paper does not explicitly make the connection to the use of splines in numerics – which means that many classic numerical methods in various areas can be interpreted as average-case Gaussian inference rules – but later authors, most notably Diaconis, make it clear that this is where they got the idea.
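A quick numerical witness to this correspondence (our sketch, not code from the paper): for the Brownian-motion kernel $k(s,t) = \min(s,t)$, the GP posterior mean given exact ordinates is the piecewise-linear interpolant, i.e. the linear spline through the data.

```python
# GP regression with the Brownian-motion kernel min(s, t): the posterior
# mean interpolates the data and is a straight line between the knots,
# i.e. a linear spline.
import numpy as np

X = np.array([0.2, 0.5, 0.9])              # knots
y = np.array([1.0, -0.5, 2.0])             # ordinates
K = np.minimum(X[:, None], X[None, :])     # Gram matrix of the min kernel
alpha = np.linalg.solve(K, y)

def post_mean(s):
    # Posterior mean k(s, X) K^{-1} y at a scalar input s.
    return np.minimum(s, X) @ alpha

# Between two knots the posterior mean is the chord through them:
mid = 0.7
chord = y[1] + (y[2] - y[1]) * (mid - X[1]) / (X[2] - X[1])
assert np.isclose(post_mean(mid), chord)        # both equal 0.75
assert np.allclose([post_mean(s) for s in X], y)
```

This is the Markov (Brownian bridge) property of the Wiener measure in action: the conditional mean between knots depends only on the two neighbouring ordinates.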

@article{kimeldorf1970correspondence,
title = {A correspondence between Bayesian estimation on stochastic processes and smoothing by splines},
author = {Kimeldorf, George S and Wahba, Grace},
journal = {The Annals of Mathematical Statistics},
volume = {41},
number = {2},
pages = {495--502},
year = {1970},
notes = {This seminal paper points out that splines can be interpreted as the posterior mean functions of certain Gaussian processes. The paper does not explicitly make the connection to the use of splines in numerics -- which means that many classic numerical methods in various areas can be interpreted as average-case Gaussian inference rules -- but later authors, most notably Diaconis, make it clear that they got the idea.}
}

14. Sard, A. (1963). Linear approximation. Providence, R.I.: American Mathematical Society.

Towards the end of this book, Sard introduces the probability concepts of his time (Wiener and Kolmogorov) into the classical theory of linear approximation. [Many thanks to Houman Owhadi for this reference.]

@book{sard1963linear,
title = {Linear approximation},
author = {Sard, Arthur},
year = {1963},
publisher = {American Mathematical Society},
notes = {Towards the end of this book, Sard introduces the probability concepts of his time (Wiener and Kolmogorov) into the classical theory of linear approximation. [Many thanks to Houman Owhadi for this reference.]}
}

15. Ajne, B., & Dalenius, T. (1960). Några tillämpningar av statistiska idéer på numerisk integration [Some applications of statistical ideas to numerical integration]. Nordisk Matematisk Tidskrift, 145–152.

[translation by JSTOR]: The usual approximation of a definite integral by a finite sum is applied in the more general case $\int_{-\infty}^{\infty} g(t)\,dF(t) \approx \sum_{k=1}^{n} w_k\, g(t_k)$, where $F(t)$ is a known distribution function. Interpreting $W(t) = \sum_{t_k \le t} w_k$ as a discrete probability distribution, this can be adjusted to the given distribution $F(t)$ by well-known statistical methods. It turns out that the criterion of Kolmogorov-Smirnov and Smirnov’s $\omega^2$-test both lead to the so-called tangent-method approximation of ordinary integrals, whereas the moment-method leads to Gaussian numerical integration.

@article{ajne1960naagra,
title = {N{\aa}gra till{\"a}mpningar av statistiska id{\'e}er p{\aa} numerisk integration},
author = {Ajne, Bj{\"o}rn and Dalenius, Tore},
journal = {Nordisk Matematisk Tidskrift},
pages = {145--152},
year = {1960},
}

16. Sul’din, A. V. (1960). Wiener measure and its applications to approximation methods. II. Izvestiya Vysshikh Uchebnykh Zavedenii. Matematika, (5), 165–179.
@article{sul1960wiener,
title = {Wiener measure and its applications to approximation methods. II},
author = {Sul'din, Al'bert Valentinovich},
journal = {Izvestiya Vysshikh Uchebnykh Zavedenii. Matematika},
number = {5},
pages = {165--179},
year = {1960},
publisher = {Kazan (Volga region) Federal University}
}

17. Sul’din, A. V. (1959). Wiener measure and its applications to approximation methods. I. Izvestiya Vysshikh Uchebnykh Zavedenii. Matematika, (6), 145–158.
@article{sul1959wiener,
title = {Wiener measure and its applications to approximation methods. I},
author = {Sul'din, Al'bert Valentinovich},
journal = {Izvestiya Vysshikh Uchebnykh Zavedenii. Matematika},
number = {6},
pages = {145--158},
year = {1959},
publisher = {Kazan (Volga region) Federal University}
}

18. Kac, M. (1949). On distributions of certain Wiener functionals. Transactions of the AMS, 65(1), 1–13.
@article{kac1949distributions,
title = {On distributions of certain Wiener functionals},
author = {Kac, M.},
journal = {Transactions of the AMS},
volume = {65},
number = {1},
pages = {1--13},
year = {1949},
}

19. Erdős, P., & Kac, M. (1940). The Gaussian law of errors in the theory of additive number theoretic functions. American Journal of Mathematics, 738–742.
@article{kac1940gaussian,
title = {The Gaussian law of errors in the theory of additive number theoretic functions},
author = {Erd{\H{o}}s, P. and Kac, M.},
journal = {American Journal of Mathematics},
pages = {738--742},
year = {1940},
}

20. Poincaré, H. (1896). Calcul des probabilités. Paris: Gauthier-Villars.

Likely the first text explicitly discussing the notion of computation as inference. The introduction (§1.7) contains the following insightful, albeit informal, sentence: Une question de probabilités ne se pose que par suite de notre ignorance: il n’y aurait place que pour la certitude si nous connaissions toutes les données du problème. D’autre part, notre ignorance ne doit pas être complète, sans quoi nous ne pourrions rien évaluer. Une classification s’opérerait donc suivant le plus ou moins de profondeur de notre ignorance. Ainsi la probabilité pour que la sixième décimale d’un nombre dans une table de logarithmes soit égale à 6 est a priori de 1/10; en réalité, toutes les données du problème sont bien déterminées, et, si nous voulions nous en donner la peine, nous connaîtrions exactement cette probabilité. De même, dans les interpolations, dans le calcul des intégrales définies par la méthode de Cotes ou celle de Gauss, etc. [Translation: A question of probability arises only as a consequence of our ignorance: there would be room only for certainty if we knew all the data of the problem. On the other hand, our ignorance must not be complete, otherwise we could evaluate nothing. A classification would thus be made according to the greater or lesser depth of our ignorance. Thus the probability that the sixth decimal of a number in a table of logarithms equals 6 is a priori 1/10; in reality, all the data of the problem are perfectly determined and, if we were willing to take the trouble, we would know this probability exactly. The same goes for interpolation, for the computation of definite integrals by the method of Cotes or that of Gauss, etc.] Further along, in chapter XV, page 292, he constructs a (basic) version of the Wiener integral from scratch and applies it to interpolation problems (in the beginning of that chapter he reviews the problem of interpolating an unknown function over a known basis of polynomials using classical deterministic methods, in the second part he assumes that the coefficients in the basis are Gaussian and treats interpolation points as data in an inference problem). [Many thanks to Houman Owhadi for pointing out this second reference.]

@book{poincare1896,
author = {Poincar{\'e}, H.},
publisher = {Gauthier-Villars},
title = {{Calcul des probabilit{\'e}s}},
year = {1896},
notes = {Likely the first text explicitly discussing the notion of computation as inference. The introduction (§1.7) contains the following insightful, albeit informal, sentence:

Une question de probabilités ne se pose que par suite de notre ignorance: il n’y aurait place que pour la certitude si nous connaissions toutes les données du problème. D’autre part, notre ignorance ne doit pas être complète, sans quoi nous ne pourrions rien évaluer. Une classification s’opérerait donc suivant le plus ou moins de profondeur de notre ignorance.

Ainsi la probabilité pour que la sixième décimale d’un nombre dans une table de logarithmes soit égale à 6 est a priori de 1/10; en réalité, toutes les données du problème sont bien déterminées, et, si nous voulions nous en donner la peine, nous connaîtrions exactement cette probabilité. De même, dans les interpolations, dans le calcul des intégrales définies par la méthode de Cotes ou celle de Gauss, etc.

Further along, in chapter XV, page 292, he constructs a (basic) version of the Wiener integral from scratch and applies it to interpolation problems (in the beginning of that chapter he reviews the problem of interpolating an unknown function over a known basis of polynomials using classical deterministic methods, in the second part he assumes that the coefficients in the basis are Gaussian and treats interpolation points as data in an inference problem). [Many thanks to Houman Owhadi for pointing out this second reference.]
}
}

Quadrature

1. Xi, X., Briol, F.-X., & Girolami, M. (2018). Bayesian Quadrature for Multiple Related Integrals. ArXiv e-Prints.

Bayesian probabilistic numerical methods are a set of tools providing posterior distributions on the output of numerical methods. The use of these methods is usually motivated by the fact that they can represent our uncertainty due to incomplete/finite information about the continuous mathematical problem being approximated. In this paper, we demonstrate that this paradigm can provide additional advantages, such as the possibility of transferring information between several numerical methods. This allows users to represent uncertainty in a more faithful manner and, as a by-product, provide increased numerical efficiency. We propose the first such numerical method by extending the well-known Bayesian quadrature algorithm to the case where we are interested in computing the integral of several related functions. We then demonstrate its efficiency in the context of multi-fidelity models for complex engineering systems, as well as a problem of global illumination in computer graphics.

@article{2018arXiv180104153X,
author = {{Xi}, X. and {Briol}, F.-X. and {Girolami}, M.},
title = {{Bayesian Quadrature for Multiple Related Integrals}},
journal = {ArXiv e-prints},
archiveprefix = {arXiv},
eprint = {1801.04153},
primaryclass = {stat.CO},
keywords = {Statistics - Computation, Computer Science - Numerical Analysis, Mathematics - Numerical Analysis, Statistics - Machine Learning},
year = {2018},
month = jan,
file = {https://arxiv.org/pdf/1801.04153.pdf}
}

2. Karvonen, T., & Särkkä, S. (2017). Fully symmetric kernel quadrature. ArXiv:1703.06359 [Cs, Math, Stat]. Retrieved from http://arxiv.org/abs/1703.06359

Kernel quadratures and other kernel-based approximation methods typically suffer from prohibitive cubic time and quadratic space complexity in the number of function evaluations. The problem arises because a system of linear equations needs to be solved. In this article we show that the weights of a kernel quadrature rule can be computed efficiently and exactly for up to tens of millions of nodes if the kernel, integration domain, and measure are fully symmetric and the node set is a union of fully symmetric sets. This is based on the observations that in such a setting there are only as many distinct weights as there are fully symmetric sets and that these weights can be solved from a linear system of equations constructed out of row sums of certain submatrices of the full kernel matrix. We present several numerical examples that show feasibility, both for a large number of nodes and in high dimensions, of the developed fully symmetric kernel quadrature rules. Most prominent of the fully symmetric kernel quadrature rules we propose are those that use sparse grids.
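The linear system at the heart of kernel quadrature, which the symmetry argument above accelerates, is easy to write down directly. The sketch below is our own illustration (without the symmetry trick): it computes the weights $w = K^{-1}z$ for the Brownian kernel $k(s,t)=\min(s,t)$ and Lebesgue measure on $[0,1]$, where the kernel mean $z_i = \int_0^1 \min(x_i, t)\,dt = x_i - x_i^2/2$ is available in closed form.

```python
# Kernel quadrature: solve K w = z for the weights, where K is the kernel
# Gram matrix at the nodes and z is the kernel mean embedding of the
# integration measure.
import numpy as np

x = np.linspace(0.1, 0.9, 9)                # quadrature nodes
K = np.minimum(x[:, None], x[None, :])      # Gram matrix, k(s,t) = min(s,t)
z = x - x ** 2 / 2                          # z_i = int_0^1 min(x_i, t) dt
w = np.linalg.solve(K, z)                   # quadrature weights

# The rule is exact on the span of the kernel sections k(., x_j):
c = np.array([0.5, -1.0, 2.0, 0.0, 0.0, 1.0, 0.0, -0.5, 0.3])
f_at_nodes = K @ c                          # f(x_i) for f = sum_j c_j k(., x_j)
assert np.isclose(w @ f_at_nodes, z @ c)

# ...and gives a sensible estimate for a generic integrand:
est = w @ np.sin(np.pi * x)                 # approximates 2/pi = 0.6366...
assert abs(est - 2 / np.pi) < 0.05
```

The paper's observation is that when kernel, domain, measure, and nodes are all fully symmetric, this solve collapses to one unknown weight per fully symmetric set, which is what makes tens of millions of nodes feasible.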

@article{karvonen_fully_2017,
title = {Fully symmetric kernel quadrature},
url = {http://arxiv.org/abs/1703.06359},
urldate = {2017-09-11},
journal = {arXiv:1703.06359 [cs, math, stat]},
author = {Karvonen, Toni and S{\"a}rkk{\"a}, Simo},
month = mar,
year = {2017},
note = {arXiv: 1703.06359},
keywords = {Computer Science - Numerical Analysis, Mathematics - Numerical Analysis, Statistics - Computation},
file = {https://arxiv.org/pdf/1703.06359.pdf}
}

3. Karvonen, T., & Särkkä, S. (2017). Classical quadrature rules via Gaussian processes.
@article{karvonen_classical_2017,
title = {Classical quadrature rules via {Gaussian} processes},
file = {https://users.aalto.fi/~karvont2/pdf/KarvonenSarkka2017-MLSP.pdf},
author = {Karvonen, Toni and S{\"a}rkk{\"a}, Simo},
year = {2017}
}

4. Kanagawa, M., Sriperumbudur, B. K., & Fukumizu, K. (2017). Convergence Analysis of Deterministic Kernel-Based Quadrature Rules in Misspecified Settings. ArXiv:1709.00147 [Cs, Math, Stat]. Retrieved from http://arxiv.org/abs/1709.00147

This paper presents convergence analysis of kernel-based quadrature rules in misspecified settings, focusing on deterministic quadrature in Sobolev spaces. In particular, we deal with misspecified settings where a test integrand is less smooth than a Sobolev RKHS based on which a quadrature rule is constructed. We provide convergence guarantees based on two different assumptions on a quadrature rule: one on quadrature weights, and the other on design points. More precisely, we show that convergence rates can be derived (i) if the sum of absolute weights remains constant (or does not increase quickly), or (ii) if the minimum distance between design points does not decrease very quickly. As a consequence of the latter result, we derive a rate of convergence for Bayesian quadrature in misspecified settings. We reveal a condition on design points to make Bayesian quadrature robust to misspecification, and show that, under this condition, it may adaptively achieve the optimal rate of convergence in the Sobolev space of a lesser order (i.e., of the unknown smoothness of a test integrand), under a slightly stronger regularity condition on the integrand.

@article{kanagawa_convergence_2017,
title = {Convergence {Analysis} of {Deterministic} {Kernel}-{Based} {Quadrature} {Rules} in {Misspecified} {Settings}},
url = {http://arxiv.org/abs/1709.00147},
file = {https://arxiv.org/pdf/1709.00147.pdf},
urldate = {2017-09-12},
journal = {arXiv:1709.00147 [cs, math, stat]},
author = {Kanagawa, Motonobu and Sriperumbudur, Bharath K. and Fukumizu, Kenji},
month = sep,
year = {2017},
note = {arXiv: 1709.00147},
keywords = {Computer Science - Numerical Analysis, Mathematics - Numerical Analysis, Statistics - Machine Learning, 65D30 (Primary), 65D32, 65D05, 46E35, 46E22 (Secondary)},
annote = {Comment: 34 pages}
}

5. Cockayne, J., Oates, C., Sullivan, T., & Girolami, M. (2017). Bayesian Probabilistic Numerical Methods. ArXiv:1702.03673 [Cs, Math, Stat]. Retrieved from http://arxiv.org/abs/1702.03673

The emergent field of probabilistic numerics has thus far lacked rigorous statistical principles. This paper establishes Bayesian probabilistic numerical methods as those which can be cast as solutions to certain Bayesian inverse problems, albeit problems that are non-standard. This allows us to establish general conditions under which Bayesian probabilistic numerical methods are well-defined, encompassing both non-linear and non-Gaussian models. For general computation, a numerical approximation scheme is developed and its asymptotic convergence is established. The theoretical development is then extended to pipelines of computation, wherein probabilistic numerical methods are composed to solve more challenging numerical tasks. The contribution highlights an important research frontier at the interface of numerical analysis and uncertainty quantification, with some illustrative applications presented.

@article{cockayne_bayesian_2017,
title = {Bayesian {Probabilistic} {Numerical} {Methods}},
url = {http://arxiv.org/abs/1702.03673},
urldate = {2017-03-06},
journal = {arXiv:1702.03673 [cs, math, stat]},
author = {Cockayne, Jon and Oates, Chris and Sullivan, Tim and Girolami, Mark},
month = feb,
year = {2017},
keywords = {Computer Science - Numerical Analysis, Mathematics - Numerical Analysis, Mathematics - Statistics Theory, Statistics - Computation, Statistics - Methodology},
file = {https://arxiv.org/pdf/1702.03673.pdf}
}

6. Ehler, M., Graef, M., & Oates, C. J. (2017). Optimal Monte Carlo integration on closed manifolds. ArXiv e-Prints.

The worst case integration error in reproducing kernel Hilbert spaces of standard Monte Carlo methods with n random points decays as 1/sqrt(n). However, re-weighting of random points can sometimes be used to improve the convergence order. This paper contributes general theoretical results for Sobolev spaces on closed Riemannian manifolds, where we verify that such re-weighting yields optimal approximation rates up to a logarithmic factor. We also provide numerical experiments matching the theoretical results for some Sobolev spaces on the unit sphere and on the Grassmannian manifold. Our theoretical findings also cover function spaces on more general sets such as the unit ball, the cube, and the simplex.

@article{2017arXiv170704723E,
author = {{Ehler}, M. and {Graef}, M. and {Oates}, C.~J.},
title = {{Optimal Monte Carlo integration on closed manifolds}},
journal = {ArXiv e-prints},
archiveprefix = {arXiv},
eprint = {1707.04723},
primaryclass = {math.NA},
year = {2017},
month = jul,
file = {https://arxiv.org/pdf/1707.04723.pdf}
}

7. Briol, F.-X., Oates, C. J., Cockayne, J., Chen, W. Y., & Girolami, M. (2017). On the Sampling Problem for Kernel Quadrature. In Thirty-fourth International Conference on Machine Learning (ICML 2017). Retrieved from http://arxiv.org/abs/1706.03369

The standard Kernel Quadrature method for numerical integration with random point sets (also called Bayesian Monte Carlo) is known to converge in root mean square error at a rate determined by the ratio $s/d$, where $s$ and $d$ encode the smoothness and dimension of the integrand. However, an empirical investigation reveals that the rate constant $C$ is highly sensitive to the distribution of the random points. In contrast to standard Monte Carlo integration, for which optimal importance sampling is well understood, the sampling distribution that minimises $C$ for Kernel Quadrature does not admit a closed form. This paper argues that the practical choice of sampling distribution is an important open problem. One solution is considered: a novel automatic approach based on adaptive tempering and sequential Monte Carlo. Empirical results demonstrate that a dramatic reduction in integration error, of up to four orders of magnitude, can be achieved with the proposed method.

@inproceedings{briol_sampling_2017,
title = {On the {Sampling} {Problem} for {Kernel} {Quadrature}},
url = {http://arxiv.org/abs/1706.03369},
urldate = {2017-09-11},
booktitle = {Thirty-fourth {International} {Conference} on {Machine} {Learning} ({ICML} 2017)},
author = {Briol, François-Xavier and Oates, Chris J. and Cockayne, Jon and Chen, Wilson Ye and Girolami, Mark},
month = jun,
year = {2017},
note = {arXiv: 1706.03369},
keywords = {Computer Science - Learning, Mathematics - Numerical Analysis, Statistics - Computation, Statistics - Machine Learning},
file = {https://arxiv.org/pdf/1706.03369.pdf}
}

8. Oates, C. J., Briol, F.-X., & Girolami, M. (2016). Probabilistic Integration and Intractable Distributions. ArXiv e-Prints, stat.ME 1606.06841.

This paper considers numerical approximation for integrals of the form $\int f(x) p(\mathrm{d}x)$ in the case where $f(x)$ is an expensive black-box function and $p(\mathrm{d}x)$ is an intractable distribution (meaning that it is accessible only through a finite collection of samples). Our proposal extends previous work that treats numerical integration as a problem of statistical inference, in that we model both $f$ as an a priori unknown random function and $p$ as an a priori unknown random distribution. The result is a posterior distribution over the value of the integral that accounts for these dual sources of approximation error. This construction is designed to enable the principled quantification and propagation of epistemic uncertainty due to numerical error through a computational pipeline. The work is motivated by such problems that occur in the Bayesian calibration of computer models.

@article{2016arXiv160606841O,
author = {{Oates}, C.~J. and {Briol}, F.-X. and {Girolami}, M.},
title = {{Probabilistic Integration and Intractable Distributions}},
journal = {ArXiv e-prints},
volume = {stat.ME 1606.06841},
year = {2016},
month = jun,
file = {http://arxiv.org/pdf/1606.06841v2},
}

9. Oates, C. J., Niederer, S., Lee, A., Briol, F.-X., & Girolami, M. (2016). Probabilistic Models for Integration Error in the Assessment of Functional Cardiac Models. ArXiv:1606.06841 [Stat]. Retrieved from https://arxiv.org/pdf/1606.06841.pdf

This paper studies the numerical computation of integrals, representing estimates or predictions, over the output $f(x)$ of a computational model with respect to a distribution $p(\mathrm{d}x)$ over uncertain inputs $x$ to the model. For the functional cardiac models that motivate this work, neither $f$ nor $p$ possesses a closed-form expression and evaluation of either requires $\approx 100$ CPU hours, precluding standard numerical integration methods. Our proposal is to treat integration as an estimation problem, with a joint model for both the a priori unknown function $f$ and the a priori unknown distribution $p$. The result is a posterior distribution over the integral that explicitly accounts for dual sources of numerical approximation error due to a severely limited computational budget. This construction is applied to account, in a statistically principled manner, for the impact of numerical errors that (at present) are confounding factors in functional cardiac model assessment.

@article{oates_probabilistic_2016,
title = {Probabilistic {Models} for {Integration} {Error} in the {Assessment} of {Functional} {Cardiac} {Models}},
url = {https://arxiv.org/pdf/1606.06841.pdf},
urldate = {2017-09-11},
journal = {arXiv:1606.06841 [stat]},
author = {Oates, Chris J. and Niederer, Steven and Lee, Angela and Briol, François-Xavier and Girolami, Mark},
month = jun,
year = {2016},
note = {arXiv: 1606.06841},
keywords = {Statistics - Methodology}
}

10. Kanagawa, M., Sriperumbudur, B. K., & Fukumizu, K. (2016). Convergence guarantees for kernel-based quadrature rules in misspecified settings. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, & R. Garnett (Eds.), Advances in Neural Information Processing Systems 29 (pp. 3288–3296). Curran Associates, Inc.

Kernel-based quadrature rules are becoming important in machine learning and statistics, as they achieve super-$\sqrt{n}$ convergence rates in numerical integration, and thus provide alternatives to Monte Carlo integration in challenging settings where integrands are expensive to evaluate or where integrands are high dimensional. These rules are based on the assumption that the integrand has a certain degree of smoothness, expressed by requiring that the integrand belong to a certain reproducing kernel Hilbert space (RKHS). However, this assumption can be violated in practice (e.g., when the integrand is a black box function), and no general theory has been established for the convergence of kernel quadratures in such misspecified settings. Our contribution is in proving that kernel quadratures can be consistent even when the integrand does not belong to the assumed RKHS, i.e., when the integrand is less smooth than assumed. Specifically, we derive convergence rates that depend on the (unknown) lesser smoothness of the integrand, where the degree of smoothness is expressed via powers of RKHSs or via Sobolev spaces.

@incollection{NIPS2016-6174,
title = {Convergence guarantees for kernel-based quadrature rules in misspecified settings},
author = {Kanagawa, Motonobu and Sriperumbudur, Bharath K. and Fukumizu, Kenji},
booktitle = {Advances in Neural Information Processing Systems 29},
editor = {Lee, D. D. and Sugiyama, M. and Luxburg, U. V. and Guyon, I. and Garnett, R.},
pages = {3288--3296},
year = {2016},
publisher = {Curran Associates, Inc.},
}

11. Prüher, J., & Šimandl, M. (2016). Bayesian Quadrature Variance in Sigma-Point Filtering. In J. Filipe, K. Madani, O. Gusikhin, & J. Sasiadek (Eds.), International Conference on Informatics in Control, Automation and Robotics (ICINCO) Revised Selected Papers (Vol. 12, pp. 355–370). Colmar, France: Springer International Publishing.

Sigma-point filters are algorithms for recursive state estimation of stochastic dynamic systems from noisy measurements, which rely on moment integral approximations by means of various numerical quadrature rules. In practice, however, it is hardly guaranteed that the system dynamics or measurement functions will meet the restrictive requirements of the classical quadratures, which inevitably results in approximation errors that are not accounted for in the current state-of-the-art sigma-point filters. We propose a method for incorporating information about the integral approximation error into the filtering algorithm by exploiting features of Bayesian quadrature—an alternative to classical numerical integration. This is enabled by the fact that Bayesian quadrature treats numerical integration as a statistical estimation problem, where the posterior distribution over the values of the integral serves as a model of numerical error. We demonstrate superior performance of the proposed filters on a simple univariate benchmarking example.

@incollection{Pruher2016,
author = {Pr{\"u}her, Jakub and {\v{S}}imandl, Miroslav},
title = {{Bayesian Quadrature Variance in Sigma-Point Filtering}},
editor = {Filipe, Joaquim and Madani, Kurosh and Gusikhin, Oleg and Sasiadek, Jurek},
booktitle = {International Conference on Informatics in Control, Automation and Robotics (ICINCO) Revised Selected Papers},
volume = {12},
pages = {355--370},
publisher = {Springer International Publishing},
year = {2016},
code = {https://github.com/jacobnzw/icinco-code}
}

12. Jitkrittum, W., Gretton, A., Heess, N., Eslami, S. M. A., Lakshminarayanan, B., Sejdinovic, D., & Szabó, Z. (2015). Kernel-Based Just-In-Time Learning for Passing Expectation Propagation Messages. In Uncertainty in Artificial Intelligence (UAI) 31.

We propose an efficient nonparametric strategy for learning a message operator in expectation propagation (EP), which takes as input the set of incoming messages to a factor node, and produces an outgoing message as output. This learned operator replaces the multivariate integral required in classical EP, which may not have an analytic expression. We use kernel-based regression, which is trained on a set of probability distributions representing the incoming messages, and the associated outgoing messages. The kernel approach has two main advantages: first, it is fast, as it is implemented using a novel two-layer random feature representation of the input message distributions; second, it has principled uncertainty estimates, and can be cheaply updated online, meaning it can request and incorporate new training data when it encounters inputs on which it is uncertain. In experiments, our approach is able to solve learning problems where a single message operator is required for multiple, substantially different data sets (logistic regression for a variety of classification problems), where it is essential to accurately assess uncertainty and to efficiently and robustly update the message operator.

@inproceedings{Jitkrittum1503,
author = {{Jitkrittum}, W. and {Gretton}, A. and {Heess}, N. and {Eslami}, S.~M.~A. and {Lakshminarayanan}, B. and {Sejdinovic}, D. and {Szab{\'o}}, Z.},
title = {{Kernel-Based Just-In-Time Learning for Passing Expectation Propagation Messages}},
booktitle = {Uncertainty in Artificial Intelligence (UAI) 31},
year = {2015},
file = {http://auai.org/uai2015/proceedings/papers/235.pdf}
}

13. Särkkä, S., Hartikainen, J., Svensson, L., & Sandblom, F. (2015). On the relation between Gaussian process quadratures and sigma-point methods. ArXiv Preprint Stat.ME 1504.05994.

This article is concerned with Gaussian process quadratures, which are numerical integration methods based on Gaussian process regression methods, and sigma-point methods, which are used in advanced non-linear Kalman filtering and smoothing algorithms. We show that many sigma-point methods can be interpreted as Gaussian quadrature based methods with suitably selected covariance functions. We show that this interpretation also extends to more general multivariate Gauss–Hermite integration methods and related spherical cubature rules. Additionally, we discuss different criteria for selecting the sigma-point locations: exactness for multivariate polynomials up to a given order, minimum average error, and quasi-random point sets. The performance of the different methods is tested in numerical experiments.

@article{2015arXiv150405994S,
author = {{S{\"a}rkk{\"a}}, S. and {Hartikainen}, J. and {Svensson}, L. and {Sandblom}, F.},
title = {{On the relation between Gaussian process quadratures and sigma-point methods}},
journal = {arXiv preprint stat.ME 1504.05994},
year = {2015},
month = apr,
file = {http://arxiv.org/pdf/1504.05994v1.pdf}
}

14. Briol, F.-X., Oates, C. J., Girolami, M., & Osborne, M. A. (2015). Frank-Wolfe Bayesian Quadrature: Probabilistic Integration with Theoretical Guarantees. In Advances in Neural Information Processing Systems (NIPS). Retrieved from http://arxiv.org/abs/1506.02681

There is renewed interest in formulating integration as an inference problem, motivated by obtaining a full distribution over numerical error that can be propagated through subsequent computation. Current methods, such as Bayesian Quadrature, demonstrate impressive empirical performance but lack theoretical analysis. An important challenge is to reconcile these probabilistic integrators with rigorous convergence guarantees. In this paper, we present the first probabilistic integrator that admits such theoretical treatment, called Frank-Wolfe Bayesian Quadrature (FWBQ). Under FWBQ, convergence to the true value of the integral is shown to be exponential and posterior contraction rates are proven to be superexponential. In simulations, FWBQ is competitive with state-of-the-art methods and out-performs alternatives based on Frank-Wolfe optimisation. Our approach is applied to successfully quantify numerical error in the solution to a challenging model choice problem in cellular biology.

@inproceedings{briol_frank-wolfe_2015,
title = {Frank-{Wolfe} {Bayesian} {Quadrature}: {Probabilistic} {Integration} with {Theoretical} {Guarantees}},
url = {http://arxiv.org/abs/1506.02681},
booktitle = {Advances in Neural Information Processing Systems (NIPS)},
author = {Briol, François-Xavier and Oates, Chris J. and Girolami, Mark and Osborne, Michael A.},
year = {2015},
keywords = {Statistics - Machine Learning},
file = {http://arxiv.org/pdf/1506.02681v1.pdf}
}
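Many of the entries above share the same underlying construction: place a Gaussian process prior on the integrand, condition on function evaluations, and read off a posterior mean and variance for the integral. The following is a minimal sketch of that construction for $\int_0^1 f(x)\,\mathrm{d}x$ with an RBF kernel; the lengthscale, jitter, and grid-based approximation of the kernel means are illustrative choices, not taken from any paper cited here.

```python
import numpy as np

def bayesian_quadrature(f, xs, lengthscale=0.2, n_grid=2000):
    """Posterior mean and variance of the integral of f over [0, 1] under a
    GP prior with an RBF kernel; kernel means approximated on a midpoint grid."""
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / lengthscale) ** 2)
    g = (np.arange(n_grid) + 0.5) / n_grid        # midpoint grid on [0, 1]
    K = k(xs, xs) + 1e-10 * np.eye(len(xs))       # jitter for numerical stability
    z = k(xs, g).mean(axis=1)                     # z_i ~= integral of k(x_i, .)
    kk = k(g, g).mean()                           # ~= double integral of k
    w = np.linalg.solve(K, z)                     # BQ weights w = K^{-1} z
    return w @ f(xs), kk - z @ w                  # posterior mean, variance
```

The posterior variance shrinks as the design points cover the domain, which is the quantity the active-learning schemes in the entries above seek to reduce.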

15. Bach, F. (2015). On the Equivalence between Quadrature Rules and Random Features. ArXiv Preprint ArXiv:1502.06800.

We show that kernel-based quadrature rules for computing integrals can be seen as a special case of random feature expansions for positive definite kernels, for a particular decomposition that always exists for such kernels. We provide a theoretical analysis of the number of required samples for a given approximation error, leading to both upper and lower bounds that are based solely on the eigenvalues of the associated integral operator and match up to logarithmic terms. In particular, we show that the upper bound may be obtained from independent and identically distributed samples from a specific non-uniform distribution, while the lower bound is valid for any set of points. While our results are fairly general, applying them to kernel-based quadrature recovers known upper and lower bounds for the special cases of Sobolev spaces. Moreover, our results extend to the more general problem of full function approximation (beyond simply computing an integral), with results in $L^2$- and $L^\infty$-norm that match known results for special cases. Applying our results to random features, we show an improvement of the number of random features needed to preserve the generalization guarantees for learning with Lipschitz-continuous losses.

@article{bach2015equivalence,
title = {On the Equivalence between Quadrature Rules and Random Features},
author = {Bach, Francis},
journal = {arXiv preprint arXiv:1502.06800},
file = {http://arxiv.org/pdf/1502.06800v2.pdf},
year = {2015}
}
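The quadrature/random-feature correspondence can be made concrete with the classical random Fourier feature construction, which the paper generalises: an inner product of random features is itself a Monte Carlo quadrature of the kernel's spectral representation. A small sketch (the lengthscale, feature count, and test points are arbitrary illustrative choices):

```python
import numpy as np

# Random Fourier features for the RBF kernel k(x, y) = exp(-(x - y)^2 / 2):
# sample frequencies from the kernel's spectral density (a standard normal),
# so that phi(x) @ phi(y) is a Monte Carlo estimate of k(x, y).
rng = np.random.default_rng(1)
D = 20_000                                    # number of random features
w = rng.standard_normal(D)                    # spectral samples
b = rng.uniform(0.0, 2.0 * np.pi, D)          # random phases
phi = lambda x: np.sqrt(2.0 / D) * np.cos(w * x + b)

x, y = 0.3, 1.1
approx = phi(x) @ phi(y)                      # feature-space approximation
exact = np.exp(-0.5 * (x - y) ** 2)           # closed-form kernel value
```

The paper's contribution is, roughly, to characterise how large $D$ must be (and from which non-uniform distribution the features should be drawn) for a prescribed approximation error.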

16. Briol, F.-X., Oates, C. J., Girolami, M., Osborne, M. A., & Sejdinovic, D. (2015). Probabilistic Integration: A Role for Statisticians in Numerical Analysis? ArXiv:1512.00933 [Cs, Math, Stat]. Retrieved from http://arxiv.org/abs/1512.00933

A research frontier has emerged in scientific computation, founded on the principle that numerical error entails epistemic uncertainty that ought to be subjected to statistical analysis. This viewpoint raises several interesting challenges, including the design of statistical methods that enable the coherent propagation of probabilities through a (possibly deterministic) computational pipeline. This paper thoroughly examines the case for probabilistic numerical methods in statistical computation, and a specific case study is presented for Markov chain and quasi-Monte Carlo methods. A probabilistic integrator is equipped with a full distribution over its output, providing a measure of epistemic uncertainty that is shown to be statistically valid at finite computational levels, as well as in asymptotic regimes. The approach is motivated by expensive integration problems, where, as in kriging, one is willing to expend, at worst, cubic computational effort in order to gain uncertainty quantification. There, probabilistic integrators enjoy the "best of both worlds", leveraging the sampling efficiency of Monte Carlo methods whilst providing a principled route to assessment of the impact of numerical error on scientific conclusions. Several substantial applications are provided for illustration and critical evaluation, including examples from statistical modelling, computer graphics and uncertainty quantification in oil reservoir modelling.

@article{briol_probabilistic_2015,
title = {Probabilistic {Integration}: A Role for Statisticians in Numerical Analysis?},
url = {http://arxiv.org/abs/1512.00933},
urldate = {2015-07-22},
journal = {arXiv:1512.00933 [cs, math, stat]},
author = {Briol, François-Xavier and Oates, Chris J. and Girolami, Mark and Osborne, Michael A. and Sejdinovic, Dino},
year = {2015},
note = {arXiv: 1512.00933},
file = {http://arxiv.org/pdf/1512.00933.pdf}
}

18. Gunter, T., Osborne, M. A., Garnett, R., Hennig, P., & Roberts, S. (2014). Sampling for Inference in Probabilistic Models with Fast Bayesian Quadrature. In C. Cortes & N. Lawrence (Eds.), Advances in Neural Information Processing Systems (NIPS).

We propose a novel sampling framework for inference in probabilistic models: an active learning approach that converges more quickly (in wall-clock time) than Markov chain Monte Carlo (MCMC) benchmarks. The central challenge in probabilistic inference is numerical integration, to average over ensembles of models or unknown (hyper-)parameters (for example to compute marginal likelihood or a partition function). MCMC has provided approaches to numerical integration that deliver state-of-the-art inference, but can suffer from sample inefficiency and poor convergence diagnostics. Bayesian quadrature techniques offer a model-based solution to such problems, but their uptake has been hindered by prohibitive computation costs. We introduce a warped model for probabilistic integrands (likelihoods) that are known to be non-negative, permitting a cheap active learning scheme to optimally select sample locations. Our algorithm is demonstrated to offer faster convergence (in seconds) relative to simple Monte Carlo and annealed importance sampling on both synthetic and real-world examples.

@inproceedings{gunter2014sampling,
author = {Gunter, Tom and Osborne, Michael A. and Garnett, Roman and Hennig, Philipp and Roberts, Stephen},
title = {Sampling for Inference in Probabilistic Models with Fast Bayesian Quadrature},
booktitle = {Advances in Neural Information Processing Systems (NIPS)},
year = {2014},
editor = {Cortes, C. and Lawrence, N.},
code = {https://github.com/OxfordML/wsabi}
}

19. Eslami, S. M. A., Tarlow, D., Kohli, P., & Winn, J. (2014). Just-In-Time Learning for Fast and Flexible Inference. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, & K. Q. Weinberger (Eds.), Advances in Neural Information Processing Systems (NIPS) 27 (pp. 154–162).

Much of research in machine learning has centered around the search for inference algorithms that are both general-purpose and efficient. The problem is extremely challenging and general inference remains computationally expensive. We seek to address this problem by observing that in most specific applications of a model, we typically only need to perform a small subset of all possible inference computations. Motivated by this, we introduce just-in-time learning, a framework for fast and flexible inference that learns to speed up inference at run-time. Through a series of experiments, we show how this framework can allow us to combine the flexibility of sampling with the efficiency of deterministic message-passing.

@inproceedings{NIPS2014_5595,
title = {Just-In-Time Learning for Fast and Flexible Inference},
author = {Eslami, S. M. Ali and Tarlow, Daniel and Kohli, Pushmeet and Winn, John},
booktitle = {Advances in Neural Information Processing Systems (NIPS) 27},
editor = {Ghahramani, Z. and Welling, M. and Cortes, C. and Lawrence, N.D. and Weinberger, K.Q.},
pages = {154--162},
year = {2014},
file = {http://papers.nips.cc/paper/5595-just-in-time-learning-for-fast-and-flexible-inference.pdf}
}

20. Oates, C. J., Girolami, M., & Chopin, N. (2014). Control functionals for Monte Carlo integration. ArXiv Preprint 1410.2392.

This paper introduces a novel class of estimators for Monte Carlo integration, that leverage gradient information in order to provide the improved estimator performance demanded by contemporary statistical applications. The proposed estimators, called "control functionals", achieve sub-root-n convergence and often require orders of magnitude fewer simulations, compared with existing approaches, in order to achieve a fixed level of precision. We focus on a particular sub-class of estimators that permit an elegant analytic form and study their properties, both theoretically and empirically. Results are presented on Bayes-Hermite quadrature, hierarchical Gaussian process models and non-linear ordinary differential equation models, where in each case our estimators are shown to offer state of the art performance.

@article{oates14-contr-monte-carlo,
author = {Oates, Chris J. and Girolami, Mark and Chopin, Nicolas},
title = {Control functionals for Monte Carlo integration},
journal = {arXiv preprint 1410.2392},
year = {2014},
file = {http://arxiv.org/pdf/1410.2392v1}
}
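The gradient-based variance reduction behind control functionals can be illustrated with a simple Stein-identity control variate under a standard normal target. This is a toy sketch in the spirit of the paper, not the control-functional estimator itself; the test function $u(x) = x$ and the regression coefficient are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
f = x**2 + np.cos(x)                     # target: E[f(X)] = 1 + exp(-1/2)
# Stein-type control variate built from the score of N(0, 1):
# for u(x) = x, the identity E[u'(X) - X u(X)] = E[1 - X^2] = 0 holds exactly.
cv = 1.0 - x**2
beta = np.cov(f, cv)[0, 1] / np.var(cv)  # fitted regression coefficient
plain = f.mean()                         # plain Monte Carlo estimate
est = (f - beta * cv).mean()             # control-variate estimate, same samples
```

Control functionals construct such zero-mean functions nonparametrically from the score $\nabla \log p$, which is how they escape the fixed parametric family used here.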

21. Särkkä, S., Hartikainen, J., Svensson, L., & Sandblom, F. (2014). Gaussian Process Quadratures in Nonlinear Sigma-Point Filtering and Smoothing. In FUSION.

This paper is concerned with the use of Gaussian process regression based quadrature rules in the context of sigma-point-based nonlinear Kalman filtering and smoothing. We show how Gaussian process (i.e., Bayesian or Bayes–Hermite) quadratures can be used for numerically solving the Gaussian integrals arising in the filters and smoothers. An interesting additional result is that with suitable selections of Hermite polynomial covariance functions the Gaussian process quadratures can be reduced to unscented transforms, spherical cubature rules, and to Gauss–Hermite rules previously proposed for approximate nonlinear Kalman filtering and smoothing. Finally, the performance of the Gaussian process quadratures in this context is evaluated with numerical simulations.

@inproceedings{sarkkagaussian,
title = {Gaussian Process Quadratures in Nonlinear Sigma-Point Filtering and Smoothing},
author = {S{\"a}rkk{\"a}, Simo and Hartikainen, Jouni and Svensson, Lennart and Sandblom, Fredrik},
booktitle = {FUSION},
year = {2014},
}
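For the Gaussian integrals arising in sigma-point filters, GP (Bayes–Hermite) quadrature weights take a closed form when an RBF kernel is paired with a standard normal weight, since the kernel mean is then available analytically. A minimal one-dimensional sketch (the lengthscale, design points, and test function are illustrative choices):

```python
import numpy as np

def gp_gauss_quadrature(f, xs, ell=1.0):
    """GP (Bayes-Hermite) quadrature estimate of E[f(X)] for X ~ N(0, 1),
    under a GP prior on f with an RBF kernel of lengthscale ell."""
    K = np.exp(-0.5 * ((xs[:, None] - xs[None, :]) / ell) ** 2)
    K += 1e-12 * np.eye(len(xs))             # jitter for numerical stability
    # closed-form kernel mean of the RBF kernel against the standard normal
    z = ell / np.sqrt(ell**2 + 1.0) * np.exp(-xs**2 / (2.0 * (ell**2 + 1.0)))
    w = np.linalg.solve(K, z)                # quadrature weights K^{-1} z
    return w @ f(xs)
```

The correspondence developed in the paper is that, for suitable covariance functions and point sets, weights of this form reproduce classical rules such as the unscented transform.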

22. Osborne, M. A., Duvenaud, D. K., Garnett, R., Rasmussen, C. E., Roberts, S. J., & Ghahramani, Z. (2012). Active Learning of Model Evidence Using Bayesian Quadrature. In Advances in Neural Information Processing Systems (NIPS) (pp. 46–54).

Numerical integration is a key component of many problems in scientific computing, statistical modelling, and machine learning. Bayesian Quadrature is a model-based method for numerical integration which, relative to standard Monte Carlo methods, offers increased sample efficiency and a more robust estimate of the uncertainty in the estimated integral. We propose a novel Bayesian Quadrature approach for numerical integration when the integrand is non-negative, such as the case of computing the marginal likelihood, predictive distribution, or normalising constant of a probabilistic model. Our approach approximately marginalises the quadrature model’s hyperparameters in closed form, and introduces an active learning scheme to optimally select function evaluations, as opposed to using Monte Carlo samples. We demonstrate our method on both a number of synthetic benchmarks and a real scientific problem from astronomy.

@inproceedings{osborne2012active,
author = {Osborne, M.A. and Duvenaud, D.K. and Garnett, R. and Rasmussen, C.E. and Roberts, S.J. and Ghahramani, Z.},
booktitle = {{Advances in Neural Information Processing Systems (NIPS)}},
pages = {46--54},
title = {{Active Learning of Model Evidence Using Bayesian Quadrature}},
year = {2012},
file = {../assets/pdf/Osborne2012active.pdf}
}

23. Osborne, M. A., Garnett, R., Roberts, S. J., Hart, C., Aigrain, S., & Gibson, N. (2012). Bayesian quadrature for ratios. In International Conference on Artificial Intelligence and Statistics (pp. 832–840).

We describe a novel approach to quadrature for ratios of probabilistic integrals, such as are used to compute posterior probabilities. This approach offers performance superior to Monte Carlo methods by exploiting a Bayesian quadrature framework. We improve upon previous Bayesian quadrature techniques by explicitly modelling the non-negativity of our integrands, and the correlations that exist between them. It offers the greatest benefit where the integrand is multi-modal and expensive to evaluate. We demonstrate the efficacy of our method on data from the Kepler space telescope.

@inproceedings{osborne2012bayesian,
author = {Osborne, M.A. and Garnett, R. and Roberts, S.J. and Hart, C. and Aigrain, S. and Gibson, N.},
booktitle = {{International Conference on Artificial Intelligence and Statistics}},
pages = {832--840},
title = {{Bayesian quadrature for ratios}},
year = {2012},
file = {../assets/pdf/Osborne2012Bayesian.pdf}
}

24. Kong, A., McCullagh, P., Meng, X.-L., & Nicolae, D. L. (2007). Further Explorations of Likelihood Theory for Monte Carlo Integration. Advances in Statistical Modelling and Inference, (March), 563–592.

Monte-Carlo estimation of an integral is usually based on the method of moments or an estimating equation. Recently, Kong, McCullagh, Meng, Nicolae and Tan (2003) proposed a likelihood based theory, which puts Monte-Carlo estimation of integrals on a firmer, less ad hoc, basis by formulating the problem as a likelihood inference problem for the baseline measure with simulated observations as data. In this paper, we provide further exploration and development of this theory. After an overview of the likelihood formulation, we first demonstrate the power of the likelihood-based method by presenting a universally improved importance sampling estimator. We then prove that the formal, infinite-dimensional Fisher-information based variance calculation given in Kong et al. (2003) is asymptotically the same as the sampling based "sandwich" variance estimator. Next, we explore the gain in Monte Carlo efficiency when the baseline measure can be parameterized. Furthermore, we show how the Monte Carlo integration problem can also be dealt with by the method of empirical likelihood, and how the baseline measure parameter can be properly profiled out to form a profile likelihood for the integrals only. As a byproduct, we obtain four equivalent conditions for the existence of a unique maximum likelihood estimate for mixture models with known components. We also discuss an apparent paradox for Bayesian inference with Monte Carlo integration.

@article{Kong2007,
author = {Kong, Augustine and McCullagh, Peter and Meng, Xiao-Li and Nicolae, Dan L.},
journal = {Advances in Statistical Modelling and Inference},
number = {March},
pages = {563--592},
title = {{Further Explorations of Likelihood Theory for Monte Carlo Integration}},
year = {2007},
}

25. Tan, Z. (2004). On a Likelihood Approach for Monte Carlo Integration. Journal of the American Statistical Association, 99(468), 1027–1036.

The use of estimating equations has been a common approach for constructing Monte Carlo estimators. Recently, Kong et al. proposed a formulation of Monte Carlo integration as a statistical model, making explicit what information is ignored and what is retained about the baseline measure. From simulated data, the baseline measure is estimated by maximum likelihood, and then integrals of interest are estimated by substituting the estimated measure. For two different situations in which independent observations are simulated from multiple distributions, we show that this likelihood approach achieves the lowest asymptotic variance possible by using estimating equations. In the first situation, the normalizing constants of the design distributions are estimated, and Meng and Wong’s bridge sampling estimating equation is considered. In the second situation, the values of the normalizing constants are known, thereby imposing linear constraints on the baseline measure. Estimating equations including Hesterberg’s stratified importance sampling estimator, Veach and Guibas’s multiple importance sampling estimator, and Owen and Zhou’s method of control variates are considered.

@article{Tan2004,
author = {Tan, Zhiqiang},
journal = {Journal of the American Statistical Association},
number = {468},
pages = {1027--1036},
title = {{On a Likelihood Approach for Monte Carlo Integration}},
volume = {99},
year = {2004},
}

26. Kong, A., McCullagh, P., Meng, X.-L., Nicolae, D. L., & Tan, Z. (2003). A theory of statistical models for Monte Carlo integration. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 65(3), 585–618.

The task of estimating an integral by Monte Carlo methods is formulated as a statistical model using simulated observations as data. The difficulty in this exercise is that we ordinarily have at our disposal all of the information required to compute integrals exactly by calculus or numerical integration, but we choose to ignore some of the information for simplicity or computational feasibility. Our proposal is to use a semiparametric statistical model that makes explicit what information is ignored and what information is retained. The parameter space in this model is a set of measures on the sample space, which is ordinarily an infinite dimensional object. None-the-less, from simulated data the base-line measure can be estimated by maximum likelihood, and the required integrals computed by a simple formula previously derived by Vardi and by Lindsay in a closely related model for biased sampling. The same formula was also suggested by Geyer and by Meng and Wong using entirely different arguments. By contrast with Geyer’s retrospective likelihood, a correct estimate of simulation error is available directly from the Fisher information. The principal advantage of the semiparametric model is that variance reduction techniques are associated with submodels in which the maximum likelihood estimator in the submodel may have substantially smaller variance than the traditional estimator. The method is applicable to Markov chain and more general Monte Carlo sampling schemes with multiple samplers.

@article{Kong2003,
author = {Kong, Augustine and McCullagh, Peter and Meng, Xiao-Li and Nicolae, Dan L. and Tan, Zhiqiang},
journal = {Journal of the Royal Statistical Society, Series B (Statistical Methodology)},
number = {3},
pages = {585--618},
title = {{A theory of statistical models for Monte Carlo integration}},
volume = {65},
year = {2003},
}
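As background for the estimating-equation view these papers build on, the baseline importance sampling estimator ∫ f(x) p(x) dx ≈ (1/n) Σ_i f(x_i) p(x_i)/q(x_i), with x_i drawn from a proposal q, can be sketched as follows. This is a generic illustration, not the likelihood-based estimator of the papers above; all names are illustrative.

```python
import numpy as np

def importance_sampling(f, target_pdf, proposal_sample, proposal_pdf,
                        n=100_000, rng=None):
    """Estimate \\int f(x) p(x) dx by sampling from a proposal q."""
    rng = rng or np.random.default_rng(2)
    x = proposal_sample(rng, n)
    w = target_pdf(x) / proposal_pdf(x)   # importance weights p/q
    return np.mean(w * f(x))

# Example: E[X^2] = 1 under N(0,1), sampled from the wider proposal N(0, 2^2)
est = importance_sampling(
    f=lambda x: x**2,
    target_pdf=lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi),
    proposal_sample=lambda rng, n: rng.normal(0.0, 2.0, size=n),
    proposal_pdf=lambda x: np.exp(-x**2 / 8) / (2 * np.sqrt(2 * np.pi)),
)
```

Swapping the proposal changes the variance but not the expectation, which is exactly the kind of design freedom the likelihood formulation makes explicit.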

27. Ghahramani, Z., & Rasmussen, C. E. (2002). Bayesian Monte Carlo. In Advances in neural information processing systems (pp. 489–496).

We investigate Bayesian alternatives to classical Monte Carlo methods for evaluating integrals. Bayesian Monte Carlo (BMC) allows the incorporation of prior knowledge, such as smoothness of the integrand, into the estimation. In a simple problem we show that this outperforms any classical importance sampling method. We also attempt more challenging multidimensional integrals involved in computing marginal likelihoods of statistical models (a.k.a. partition functions and model evidences). We find that Bayesian Monte Carlo outperformed Annealed Importance Sampling, although for very high dimensional problems or problems with massive multimodality BMC may be less adequate. One advantage of the Bayesian approach to Monte Carlo is that samples can be drawn from any distribution. This allows for the possibility of active design of sample points so as to maximise information gain.

@inproceedings{ghahramani2002bayesian,
title = {Bayesian {Monte Carlo}},
author = {Ghahramani, Zoubin and Rasmussen, Carl E},
booktitle = {Advances in neural information processing systems},
pages = {489--496},
file = {http://machinelearning.wustl.edu/mlpapers/paper_files/AA01.pdf},
year = {2002}
}

28. Minka, T. P. (2000). Deriving quadrature rules from Gaussian processes. Statistics Department, Carnegie Mellon University.

Quadrature rules are often designed to achieve zero error on a small set of functions, e.g. polynomials of specified degree. A more robust method is to minimize average error over a large class or distribution of functions. If functions are distributed according to a Gaussian process, then designing an average-case quadrature rule reduces to solving a system of 2n equations, where n is the number of nodes in the rule (O’Hagan, 1991). It is shown how this very general technique can be used to design customized quadrature rules, in the style of Yarvin & Rokhlin (1998), without the need for singular value decomposition and in any number of dimensions. It is also shown how classical Gaussian quadrature rules, trigonometric lattice rules, and spline rules can be extended to the average-case and to multiple dimensions by deriving them from Gaussian processes. In addition to being more robust, multidimensional quadrature rules designed for the average-case are found to be much less ambiguous than those designed for a given polynomial degree.

@techreport{minka2000deriving,
author = {Minka, T.P.},
institution = {Statistics Department, Carnegie Mellon University},
title = {{Deriving quadrature rules from {G}aussian processes}},
year = {2000},
}

29. Kennedy, M. (1998). Bayesian quadrature with non-normal approximating functions. Statistics and Computing, 8(4), 365–375.
@article{kennedy1998bayesian,
title = {Bayesian quadrature with non-normal approximating functions},
author = {Kennedy, Marc},
journal = {Statistics and Computing},
volume = {8},
number = {4},
pages = {365--375},
year = {1998},
publisher = {Springer}
}

30. Kennedy, M. C., & O’Hagan, A. (1996). Iterative rescaling for Bayesian quadrature. Bayesian Statistics, 5, 639–645.
@article{kennedy1996iterative,
title = {Iterative rescaling for Bayesian quadrature},
author = {Kennedy, M. C. and O'Hagan, A.},
journal = {Bayesian Statistics},
volume = {5},
pages = {639--645},
year = {1996},
publisher = {Oxford University Press Oxford}
}

31. O’Hagan, A. (1991). Bayes–Hermite quadrature. Journal of Statistical Planning and Inference, 29(3), 245–260.

Bayesian quadrature treats the problem of numerical integration as one of statistical inference. A prior Gaussian process distribution is assumed for the integrand, observations arise from evaluating the integrand at selected points, and a posterior distribution is derived for the integrand and the integral. Methods are developed for quadrature in ℝ^p. A particular application is integrating the posterior density arising from some other Bayesian analysis. Simulation results are presented, to show that the resulting Bayes–Hermite quadrature rules may perform better than the conventional Gauss–Hermite rules for this application. A key result is derived for product designs, which makes Bayesian quadrature practically useful for integrating in several dimensions. Although the method does not at present provide a solution to the more difficult problem of quadrature in high dimensions, it does seem to offer real improvements over existing methods in relatively low dimensions.

@article{ohagan1991bayes,
author = {O'Hagan, A.},
journal = {Journal of statistical planning and inference},
volume = {29},
number = {3},
pages = {245--260},
year = {1991}
}
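As a minimal numerical sketch of the idea (with a uniform measure on [0,1] and a squared-exponential kernel, rather than O’Hagan’s Gauss–Hermite setting; all names are illustrative), the posterior mean of the integral is a weighted sum z^T K^{-1} f of integrand evaluations, where the kernel mean z_i = ∫ k(x, x_i) dx is available in closed form:

```python
import math
import numpy as np

def bayesian_quadrature(f, nodes, lengthscale=0.3, jitter=1e-10):
    """Posterior mean of \\int_0^1 f(x) dx under a GP prior with a
    squared-exponential kernel (unit signal variance)."""
    x = np.asarray(nodes, dtype=float)
    l = lengthscale
    # Gram matrix K_ij = exp(-(x_i - x_j)^2 / (2 l^2)), plus jitter for stability
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / l) ** 2)
    K += jitter * np.eye(len(x))
    # Kernel mean z_i = \int_0^1 k(x, x_i) dx, in closed form via the error function
    def Phi(t):
        return l * math.sqrt(math.pi / 2) * math.erf(t / (math.sqrt(2) * l))
    z = np.array([Phi(1.0 - xi) - Phi(-xi) for xi in x])
    fx = np.array([f(xi) for xi in x])
    w = np.linalg.solve(K, z)   # quadrature weights z^T K^{-1}
    # (The posterior variance z-double-integral term is omitted for brevity;
    # the mean alone defines the quadrature rule.)
    return w @ fx

estimate = bayesian_quadrature(math.sin, np.linspace(0.0, 1.0, 10))
```

For the smooth integrand sin(x) the estimate is very close to the true value 1 − cos(1).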

32. O’Hagan, A. (1987). Monte Carlo is Fundamentally Unsound. Journal of the Royal Statistical Society. Series D (The Statistician), 36(2/3), 247–249.

We present some fundamental objections to the Monte Carlo method of numerical integration.

@article{OHagan1987,
jstor_articletype = {research-article},
title = {Monte Carlo is Fundamentally Unsound},
author = {O'Hagan, A.},
journal = {Journal of the Royal Statistical Society. Series D (The Statistician)},
jstor_issuetitle = {Special Issue: Practical Bayesian Statistics},
volume = {36},
number = {2/3},
jstor_formatteddate = {1987},
pages = {247--249},
issn = {00390526},
language = {English},
year = {1987},
publisher = {Wiley for the Royal Statistical Society},
}

Linear Algebra

1. Cockayne, J., Oates, C., & Girolami, M. (2018). A Bayesian Conjugate Gradient Method. ArXiv e-Prints.

A fundamental task in numerical computation is the solution of large linear systems. The conjugate gradient method is an iterative method which offers rapid convergence to the solution, particularly when an effective preconditioner is employed. However, for more challenging systems a substantial error can be present even after many iterations have been performed. The estimates obtained in this case are of little value unless further information can be provided about the numerical error. In this paper we propose a novel statistical model for this numerical error set in a Bayesian framework. Our approach is a strict generalisation of the conjugate gradient method, which is recovered as the posterior mean for a particular choice of prior. The estimates obtained are analysed with Krylov subspace methods and a contraction result for the posterior is presented. The method is then analysed in a simulation study as well as being applied to a challenging problem in medical imaging.

@article{2018arXiv180105242X,
author = {Cockayne, Jon and Oates, Chris and Girolami, Mark},
title = {{A {B}ayesian Conjugate Gradient Method}},
journal = {ArXiv e-prints},
archiveprefix = {arXiv},
eprint = {1801.05242},
primaryclass = {stat.ME},
year = {2018},
month = jan,
file = {https://arxiv.org/pdf/1801.05242}
}
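For reference, the classical conjugate gradient iteration that the method generalizes (and recovers as a posterior mean under a particular prior) looks as follows; this is a textbook sketch, not the paper’s probabilistic solver:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Plain conjugate gradient for a symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x                      # initial residual
    p = r.copy()                       # first search direction
    rs = r @ r
    for _ in range(len(b)):            # converges in at most n steps (exact arithmetic)
        Ap = A @ p
        alpha = rs / (p @ Ap)          # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # next A-conjugate direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```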

2. Fitzsimons, J., Cutajar, K., Osborne, M., Roberts, S., & Filippone, M. (2017). Bayesian Inference of Log Determinants. In Uncertainty in Artificial Intelligence. Retrieved from https://arxiv.org/abs/1704.01445

The log-determinant of a kernel matrix appears in a variety of machine learning problems, ranging from determinantal point processes and generalized Markov random fields, through to the training of Gaussian processes. Exact calculation of this term is often intractable when the size of the kernel matrix exceeds a few thousand. In the spirit of probabilistic numerics, we reinterpret the problem of computing the log-determinant as a Bayesian inference problem. In particular, we combine prior knowledge in the form of bounds from matrix theory and evidence derived from stochastic trace estimation to obtain probabilistic estimates for the log-determinant and its associated uncertainty within a given computational budget. Beyond its novelty and theoretic appeal, the performance of our proposal is competitive with state-of-the-art approaches to approximating the log-determinant, while also quantifying the uncertainty due to budget-constrained evidence.

@inproceedings{fitzsimons_bayesian_2017,
title = {Bayesian {Inference} of {Log} {Determinants}},
url = {https://arxiv.org/abs/1704.01445},
urldate = {2017-06-21},
booktitle = {Uncertainty in {Artificial} {Intelligence}},
author = {Fitzsimons, Jack and Cutajar, Kurt and Osborne, Michael and Roberts, Stephen and Filippone, Maurizio},
year = {2017},
file = {http://probabilistic-numerics.org/assets/pdf/Fitzsimons et al. - 2017 - Bayesian Inference of Log Determinants.pdf}
}
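The stochastic trace estimation used as evidence in the paper can be illustrated with Hutchinson’s estimator, tr(A) ≈ (1/m) Σ_i z_i^T A z_i with Rademacher probes z_i. The sketch below is generic and omits the paper’s matrix-theoretic bounds and log-determinant machinery:

```python
import numpy as np

def hutchinson_trace(matvec, n, num_probes=200, rng=None):
    """Estimate tr(A) using only matrix-vector products with A.

    For Rademacher probes z (entries +-1), E[z^T A z] = tr(A),
    so the sample mean is an unbiased estimate."""
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe
        total += z @ matvec(z)
    return total / num_probes

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])                    # tr(A) = 5
est = hutchinson_trace(lambda z: A @ z, 2)
```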

3. Schäfer, F., Sullivan, T. J., & Owhadi, H. (2017). Compression, inversion, and approximate PCA of dense kernel matrices at near-linear computational complexity. ArXiv:1706.02205 [Cs, Math]. Retrieved from http://arxiv.org/abs/1706.02205

Dense kernel matrices Θ ∈ ℝ^{N×N} obtained from point evaluations of a covariance function G at locations {x_i}_{1 ≤ i ≤ N} arise in statistics, machine learning, and numerical analysis. For covariance functions that are Green’s functions of elliptic boundary value problems and approximately equally spaced sampling points, we show how to identify a subset S ⊂ {1, …, N} × {1, …, N} with #S = O(N log(N) log^d(N/ε)) such that the zero fill-in block-incomplete Cholesky decomposition of Θ_{i,j} 1_{(i,j) ∈ S} is an ε-approximation of Θ. This block-factorisation can provably be obtained in O(N log²(N) (log(1/ε) + log²(N))^{4d+1}) complexity in time. Numerical evidence further suggests that element-wise Cholesky decomposition with the same ordering constitutes an O(N log²(N) log^{2d}(N/ε)) solver. The algorithm only needs to know the spatial configuration of the x_i and does not require an analytic representation of G. Furthermore, an approximate PCA with optimal rate of convergence in the operator norm can be easily read off from this decomposition. Hence, by using only subsampling and the incomplete Cholesky decomposition, we obtain at nearly linear complexity the compression, inversion and approximate PCA of a large class of covariance matrices. By inverting the order of the Cholesky decomposition we also obtain a near-linear-time solver for elliptic PDEs.

@article{schafer_compression_2017,
title = {Compression, inversion, and approximate {PCA} of dense kernel matrices at near-linear computational complexity},
url = {http://arxiv.org/abs/1706.02205},
urldate = {2017-09-10},
journal = {arXiv:1706.02205 [cs, math]},
author = {Schäfer, Florian and Sullivan, T. J. and Owhadi, Houman},
month = jun,
year = {2017},
note = {arXiv: 1706.02205},
keywords = {65F30, 42C40, 65F50, 65N55, 65N75, 60G42, 68Q25, 68W40, Computer Science - Computational Complexity, Computer Science - Data Structures and Algorithms, Mathematics - Numerical Analysis, Mathematics - Probability},
file = {https://arxiv.org/pdf/1706.02205.pdf}
}

4. Bartels, S., & Hennig, P. (2016). Probabilistic Approximate Least-Squares. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS 2016) (Vol. 51, pp. 676–684).

Least-squares and kernel-ridge / Gaussian process regression are among the foundational algorithms of statistics and machine learning. Famously, the worst-case cost of exact nonparametric regression grows cubically with the data-set size; but a growing number of approximations have been developed that estimate good solutions at lower cost. These algorithms typically return point estimators, without measures of uncertainty. Leveraging recent results casting elementary linear algebra operations as probabilistic inference, we propose a new approximate method for nonparametric least-squares that affords a probabilistic uncertainty estimate over the error between the approximate and exact least-squares solution (this is not the same as the posterior variance of the associated Gaussian process regressor). This allows estimating the error of the least-squares solution on a subset of the data relative to the full-data solution. The uncertainty can be used to control the computational effort invested in the approximation. Our algorithm has linear cost in the data-set size, and a simple formal form, so that it can be implemented with a few lines of code in programming languages with linear algebra functionality.

@conference{BarHen16,
title = {Probabilistic Approximate Least-Squares},
author = {Bartels, S. and Hennig, P.},
booktitle = {Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS 2016)},
volume = {51},
pages = {676--684},
series = {JMLR Workshop and Conference Proceedings},
editor = {Gretton, A. and Robert, C. C.},
year = {2016},
file = {http://jmlr.org/proceedings/papers/v51/bartels16.pdf}
}

5. Hennig, P. (2015). Probabilistic Interpretation of Linear Solvers. SIAM J on Optimization, 25(1).

This paper proposes a probabilistic framework for algorithms that iteratively solve unconstrained linear problems Bx = b with positive definite B for x. The goal is to replace the point estimates returned by existing methods with a Gaussian posterior belief over the elements of the inverse of B, which can be used to estimate errors. Recent probabilistic interpretations of the secant family of quasi-Newton optimization algorithms are extended. Combined with properties of the conjugate gradient algorithm, this leads to uncertainty-calibrated methods with very limited cost overhead over conjugate gradients, a self-contained novel interpretation of the quasi-Newton and conjugate gradient algorithms, and a foundation for new nonlinear optimization methods.

@article{2014arXiv14022058H,
author = {{Hennig}, P.},
journal = {SIAM J on Optimization},
month = jan,
title = {{Probabilistic Interpretation of Linear Solvers}},
year = {2015},
volume = {25},
issue = {1},
file = {http://probabilistic-numerics.org/assets/pdf/HennigLinear2015.pdf}
}

6. Dorn, S., & Enßlin, T. A. (2015). Stochastic determination of matrix determinants. Phys. Rev. E, 92(1), 013302.

Matrix determinants play an important role in data analysis, in particular when Gaussian processes are involved. Due to currently exploding data volumes, linear operations—matrices—acting on the data are often not accessible directly but are only represented indirectly in form of a computer routine. Such a routine implements the transformation a data vector undergoes under matrix multiplication. While efficient probing routines to estimate a matrix’s diagonal or trace, based solely on such computationally affordable matrix-vector multiplications, are well known and frequently used in signal inference, there is no stochastic estimate for its determinant. We introduce a probing method for the logarithm of a determinant of a linear operator. Our method rests upon a reformulation of the log-determinant by an integral representation and the transformation of the involved terms into stochastic expressions. This stochastic determinant determination enables large-size applications in Bayesian inference, in particular evidence calculations, model comparison, and posterior determination.

@article{PhysRevE92013302,
title = {Stochastic determination of matrix determinants},
author = {Dorn, Sebastian and En\ss{}lin, Torsten A.},
journal = {Phys. Rev. E},
volume = {92},
issue = {1},
pages = {013302},
numpages = {8},
year = {2015},
month = jul,
publisher = {American Physical Society},
}

7. Selig, M., Oppermann, N., & Enßlin, T. A. (2012). Improving stochastic estimates with inference methods: Calculating matrix diagonals. Phys. Rev. E, 85(2), 021134.

Estimating the diagonal entries of a matrix, that is not directly accessible but only available as a linear operator in the form of a computer routine, is a common necessity in many computational applications, especially in image reconstruction and statistical inference. Here, methods of statistical inference are used to improve the accuracy or the computational costs of matrix probing methods to estimate matrix diagonals. In particular, the generalized Wiener filter methodology, as developed within information field theory, is shown to significantly improve estimates based on only a few sampling probes, in cases in which some form of continuity of the solution can be assumed. The strength, length scale, and precise functional form of the exploited autocorrelation function of the matrix diagonal is determined from the probes themselves. The developed algorithm is successfully applied to mock and real world problems. These performance tests show that, in situations where a matrix diagonal has to be calculated from only a small number of computationally expensive probes, a speedup by a factor of 2 to 10 is possible with the proposed method.

@article{PhysRevE85021134,
title = {Improving stochastic estimates with inference methods: Calculating matrix diagonals},
author = {Selig, Marco and Oppermann, Niels and En\ss{}lin, Torsten A.},
journal = {Phys. Rev. E},
volume = {85},
issue = {2},
pages = {021134},
numpages = {7},
year = {2012},
month = feb,
publisher = {American Physical Society},
}
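The baseline probing scheme that the paper improves with inference methods estimates diag(A) as the sample mean of z ⊙ (A z) over random probes z, using matrix-vector products only. The following is a generic sketch; the Wiener-filter refinement developed in the paper is not reproduced:

```python
import numpy as np

def probe_diagonal(matvec, n, num_probes=500, rng=None):
    """Estimate diag(A) from matrix-vector products only.

    With Rademacher probes z, E[z * (A z)] = diag(A) elementwise:
    the cross terms A_ij z_i z_j (i != j) average to zero."""
    rng = rng or np.random.default_rng(1)
    acc = np.zeros(n)
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        acc += z * matvec(z)          # elementwise product keeps A_ii on average
    return acc / num_probes

A = np.array([[2.0, 0.5, 0.0],
              [0.5, 3.0, 0.5],
              [0.0, 0.5, 4.0]])
diag_est = probe_diagonal(lambda z: A @ z, 3)
```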

Optimization

1. Deniz Akyildiz, Ö., Elvira, V., & Miguez, J. (2018). The Incremental Proximal Method: A Probabilistic Perspective. ArXiv e-Prints, 1807.04594.

In this work, we highlight a connection between the incremental proximal method and stochastic filters. We begin by showing that the proximal operators coincide, and hence can be realized with, Bayes updates. We give the explicit form of the updates for the linear regression problem and show that there is a one-to-one correspondence between the proximal operator of the least-squares regression and the Bayes update when the prior and the likelihood are Gaussian. We then carry out this observation to a general sequential setting: We consider the incremental proximal method, which is an algorithm for large-scale optimization, and show that, for a linear-quadratic cost function, it can naturally be realized by the Kalman filter. We then discuss the implications of this idea for nonlinear optimization problems where proximal operators are in general not realizable. In such settings, we argue that the extended Kalman filter can provide a systematic way for the derivation of practical procedures.

@article{2018arXiv180704594D,
author = {{Deniz Akyildiz}, {\"O}. and {Elvira}, V. and {Miguez}, J.},
title = {{The Incremental Proximal Method: A Probabilistic Perspective}},
journal = {ArXiv e-prints},
volume = {1807.04594},
year = {2018},
month = jul,
file = {https://arxiv.org/pdf/1807.04594.pdf}
}
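The Gaussian correspondence highlighted in the abstract can be checked numerically: for a linear least-squares cost, the proximal step from x with parameter λ coincides with the Kalman/Bayes update of a prior N(x, λI) under a unit-variance Gaussian observation. A small sketch with made-up numbers:

```python
import numpy as np

lam = 0.7
a = np.array([1.0, 2.0])    # regression features
x = np.array([0.5, -0.5])   # current iterate / prior mean
y = 1.3                     # observation

# Proximal step for g(z) = 0.5 * (y - a @ z)**2:
#   z* = argmin_z g(z) + ||z - x||^2 / (2 * lam)
# Setting the gradient to zero gives (a a^T + I/lam) z = a y + x/lam.
z_prox = np.linalg.solve(np.outer(a, a) + np.eye(2) / lam,
                         a * y + x / lam)

# Kalman/Bayes update: prior N(x, lam * I), observation y = a @ z + N(0, 1)
K = lam * a / (1.0 + lam * a @ a)      # Kalman gain
z_bayes = x + K * (y - a @ x)
```

The two vectors agree (up to floating point), which is the one-to-one correspondence the paper then carries to the sequential setting.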

2. Mahsereci, M., & Hennig, P. (2015). Probabilistic Line Searches for Stochastic Optimization. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, & R. Garnett (Eds.), Advances in Neural Information Processing Systems 28 (pp. 181–189). Curran Associates, Inc.

In deterministic optimization, line searches are a standard tool ensuring stability and efficiency. Where only stochastic gradients are available, no direct equivalent has so far been formulated, because uncertain gradients do not allow for a strict sequence of decisions collapsing the search space. We construct a probabilistic line search by combining the structure of existing deterministic methods with notions from Bayesian optimization. Our method retains a Gaussian process surrogate of the univariate optimization objective, and uses a probabilistic belief over the Wolfe conditions to monitor the descent. The algorithm has very low computational cost, and no user-controlled parameters. Experiments show that it effectively removes the need to define a learning rate for stochastic gradient descent.

@incollection{NIPS2015_5753,
title = {Probabilistic Line Searches for Stochastic Optimization},
author = {Mahsereci, Maren and Hennig, Philipp},
booktitle = {Advances in Neural Information Processing Systems 28},
editor = {Cortes, C. and Lawrence, N. D. and Lee, D. D. and Sugiyama, M. and Garnett, R.},
pages = {181--189},
year = {2015},
publisher = {Curran Associates, Inc.},
file = {http://papers.nips.cc/paper/5753-probabilistic-line-searches-for-stochastic-optimization.pdf},
}
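For context, the deterministic Wolfe conditions that the paper replaces with a probabilistic belief are a binary accept/reject test on a candidate step size (a textbook sketch; names are illustrative):

```python
import numpy as np

def wolfe_conditions(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Check the deterministic Wolfe conditions for a step alpha along
    direction d; the paper replaces this binary check with a probabilistic
    belief when f and grad are only observed with noise."""
    f0, g0 = f(x), grad(x) @ d
    fa, ga = f(x + alpha * d), grad(x + alpha * d) @ d
    sufficient_decrease = fa <= f0 + c1 * alpha * g0   # Armijo condition
    curvature = ga >= c2 * g0                           # curvature condition
    return sufficient_decrease and curvature

# Quadratic f(x) = ||x||^2 with descent direction -grad: alpha = 0.5 is exact
f = lambda x: x @ x
grad = lambda x: 2 * x
x0 = np.array([1.0, 1.0])
ok = wolfe_conditions(f, grad, x0, -grad(x0), 0.5)
```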

3. Hennig, P., & Kiefel, M. (2013). Quasi-Newton Methods – a new direction. Journal of Machine Learning Research, 14, 834–865.

Four decades after their invention, quasi-Newton methods are still state of the art in unconstrained numerical optimization. Although not usually interpreted thus, these are learning algorithms that fit a local quadratic approximation to the objective function. We show that many, including the most popular, quasi-Newton methods can be interpreted as approximations of Bayesian linear regression under varying prior assumptions. This new notion elucidates some shortcomings of classical algorithms, and lights the way to a novel nonparametric quasi-Newton method, which is able to make more efficient use of available information at computational cost similar to its predecessors.

@article{hennig13_quasi_newton_method,
author = {Hennig, P. and Kiefel, M.},
journal = {Journal of Machine Learning Research},
month = mar,
pages = {834--865},
title = {Quasi-{N}ewton Methods -- a new direction},
volume = {14},
year = {2013},
file = {http://jmlr.org/papers/volume14/hennig13a/hennig13a.pdf}
}
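All members of the quasi-Newton family interpreted in the paper satisfy the secant condition B_{k+1} s = y, where s is the step and y the gradient difference. As a concrete instance, the classical BFGS update can be sketched as follows (the textbook formula, not the paper’s nonparametric method; the numbers are illustrative):

```python
import numpy as np

def bfgs_update(B, s, y):
    """BFGS update of a Hessian approximation B from step s and gradient
    difference y; the result satisfies the secant equation B_new @ s = y."""
    Bs = B @ s
    return (B
            - np.outer(Bs, Bs) / (s @ Bs)   # remove old curvature along s
            + np.outer(y, y) / (y @ s))     # insert observed curvature

B = np.eye(2)                  # initial Hessian approximation
s = np.array([1.0, 0.5])       # step taken
y = np.array([2.0, 1.5])       # would come from grad f(x + s) - grad f(x)
B_new = bfgs_update(B, s, y)
```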

4. Hennig, P. (2013). Fast Probabilistic Optimization from Noisy Gradients. In International Conference on Machine Learning (ICML).

Stochastic gradient descent remains popular in large-scale machine learning, on account of its very low computational cost and robustness to noise. However, gradient descent is only linearly efficient and not transformation invariant. Scaling by a local measure can substantially improve its performance. One natural choice of such a scale is the Hessian of the objective function: Were it available, it would turn linearly efficient gradient descent into the quadratically efficient Newton-Raphson optimization. Existing covariant methods, though, are either super-linearly expensive or do not address noise. Generalising recent results, this paper constructs a nonparametric Bayesian quasi-Newton algorithm that learns gradient and Hessian from noisy evaluations of the gradient. Importantly, the resulting algorithm, like stochastic gradient descent, has cost linear in the number of input dimensions.

@inproceedings{StochasticNewton,
author = {Hennig, P.},
booktitle = {{International Conference on Machine Learning (ICML)}},
title = {{Fast Probabilistic Optimization from Noisy Gradients}},
year = {2013},
file = {../assets/pdf/hennig13noisy.pdf}
}

5. Hennig, P., & Kiefel, M. (2012). Quasi-Newton methods – a new direction. In International Conference on Machine Learning (ICML).

Four decades after their invention, quasi-Newton methods are still state of the art in unconstrained numerical optimization. Although not usually interpreted thus, these are learning algorithms that fit a local quadratic approximation to the objective function. We show that many, including the most popular, quasi-Newton methods can be interpreted as approximations of Bayesian linear regression under varying prior assumptions. This new notion elucidates some shortcomings of classical algorithms, and lights the way to a novel nonparametric quasi-Newton method, which is able to make more efficient use of available information at computational cost similar to its predecessors.

@inproceedings{HennigKiefel,
author = {Hennig, P. and Kiefel, M.},
booktitle = {{International Conference on Machine Learning (ICML)}},
title = {{Quasi-{N}ewton methods -- a new direction}},
year = {2012},
video = {http://techtalks.tv/talks/quasi-newton-methods-a-new-direction/57289/},
file = {../assets/pdf/hennig13quasiNewton.pdf}
}

Ordinary Differential Equations

To avoid a frequent initial confusion for new readers, it may be helpful to point out that there are two common ways in which probabilistic methods are used in combination with ordinary differential equations: The “classic” problem of numerics is to infer the solution to an initial value problem given access to the differential equation. Below, we term this problem “solving ODEs”. The reverse problem, in some sense, has also found interest in machine learning: inferring a differential equation from (noisy) observations of trajectories that are assumed to be governed by this ODE. Below, this is listed under “inferring ODEs”.

Work regarding "solving ODEs"

1. Kersting, H., Sullivan, T. J., & Hennig, P. (2018). Convergence Rates of Gaussian ODE Filters. ArXiv e-Prints, 1807.09737.

A recently-introduced class of probabilistic (uncertainty-aware) solvers for ordinary differential equations (ODEs) applies Gaussian (Kalman) filtering to initial value problems. These methods model the true solution x and its first q derivatives a priori as a Gauss–Markov process X, which is then iteratively conditioned on information about x’. We prove worst-case local convergence rates of order h^{q+1} for a wide range of versions of this Gaussian ODE filter, as well as global convergence rates of order h^q in the case of q=1 and an integrated Brownian motion prior, and analyze how inaccurate information on x’ coming from approximate evaluations of f affects these rates. Moreover, we present explicit formulas for the steady states and show that the posterior confidence intervals are well calibrated in all considered cases that exhibit global convergence—in the sense that they globally contract at the same rate as the truncation error.

@article{2018arXiv180709737K,
author = {{Kersting}, H. and {Sullivan}, T.~J. and {Hennig}, P.},
title = {{Convergence Rates of Gaussian ODE Filters}},
journal = {ArXiv e-prints},
volume = {1807.09737},
year = {2018},
month = jul,
file = {https://arxiv.org/pdf/1807.09737}
}
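A minimal sketch of such a filter for q = 1 (integrated Brownian motion prior, with evaluations of f at the predicted mean treated as noise-free observations of x’, in the spirit of the "EKF0" variant). Step size, diffusion scale and the test problem are illustrative choices, not taken from the paper:

```python
import numpy as np

def ode_filter(f, x0, t_span, h, q_scale=1.0):
    """Gaussian ODE filter with a once-integrated Brownian motion prior
    (q = 1). State = (x, x'); each evaluation of f is conditioned on as a
    noise-free observation of the derivative x'."""
    t0, t1 = t_span
    m = np.array([x0, f(x0)])                  # initialise x and x' = f(x)
    P = np.zeros((2, 2))
    A = np.array([[1.0, h], [0.0, 1.0]])       # IBM transition matrix
    Q = q_scale * np.array([[h**3 / 3, h**2 / 2],
                            [h**2 / 2, h]])    # IBM process noise
    H = np.array([0.0, 1.0])                   # we observe the derivative
    ts, xs = [t0], [m[0]]
    t = t0
    while t < t1 - 1e-12:
        # Predict one step ahead under the prior.
        m = A @ m
        P = A @ P @ A.T + Q
        # Update: condition on x'(t + h) = f(predicted x).
        v = f(m[0]) - H @ m        # innovation
        S = H @ P @ H              # innovation variance (scalar)
        K = P @ H / S              # Kalman gain
        m = m + K * v
        P = P - np.outer(K, H @ P)
        t += h
        ts.append(t)
        xs.append(m[0])
    return np.array(ts), np.array(xs)

# Test problem x' = -x, x(0) = 1, whose solution is exp(-t).
ts, xs = ode_filter(lambda x: -x, 1.0, (0.0, 1.0), 0.01)
```

On this linear problem the posterior mean tracks exp(-t) closely; the paper’s analysis makes the corresponding convergence rates precise.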

2. Abdulle, A., & Garegnani, G. (2018). Random time step probabilistic methods for uncertainty quantification in chaotic and geometric numerical integration. ArXiv e-Prints, 1801.01340.

A novel probabilistic numerical method for quantifying the uncertainty induced by the time integration of ordinary differential equations (ODEs) is introduced. Departing from the classical strategy to randomize ODE solvers by adding a random forcing term, we show that a probability measure over the numerical solution of ODEs can be obtained by introducing suitable random time-steps in a classical time integrator. This intrinsic randomization allows for the conservation of geometric properties of the underlying deterministic integrator such as mass conservation, symplecticity or conservation of first integrals. Weak and mean-square convergence analysis are derived. We also analyze the convergence of the Monte Carlo estimator for the proposed random time step method and show that the measure obtained with repeated sampling converges in mean-square sense independently of the number of samples. Numerical examples including chaotic Hamiltonian systems, chemical reactions and Bayesian inferential problems illustrate the accuracy, robustness and versatility of our probabilistic numerical method.

@article{2018arXiv180101340A,
author = {{Abdulle}, A. and {Garegnani}, G.},
title = {{Random time step probabilistic methods for uncertainty quantification in chaotic and geometric numerical integration}},
journal = {ArXiv e-prints},
volume = {1801.01340},
year = {2018},
month = jan,
file = {https://arxiv.org/pdf/1801.01340},
}

3. Raissi, M., & Karniadakis, G. E. (2017). Machine Learning of Linear Differential Equations using Gaussian Processes. ArXiv:1701.02440 [Cs, Math, Stat].

This work leverages recent advances in probabilistic machine learning to discover conservation laws expressed by parametric linear equations. Such equations involve, but are not limited to, ordinary and partial differential, integro-differential, and fractional order operators. Here, Gaussian process priors are modified according to the particular form of such operators and are employed to infer parameters of the linear equations from scarce and possibly noisy observations. Such observations may come from experiments or "black-box" computer simulations.

@article{raissi_machine_2017,
title = {Machine {Learning} of {Linear} {Differential} {Equations} using {Gaussian} {Processes}},
file = {https://arxiv.org/pdf/1701.02440.pdf},
journal = {arXiv:1701.02440 [cs, math, stat]},
author = {Raissi, Maziar and Karniadakis, George Em},
month = jan,
year = {2017},
note = {arXiv: 1701.02440},
keywords = {Computer Science - Learning, Mathematics - Numerical Analysis, Statistics - Machine Learning}
}

4. Owhadi, H., & Zhang, L. (2017). Gamblets for opening the complexity-bottleneck of implicit schemes for hyperbolic and parabolic ODEs/PDEs with rough coefficients. Journal of Computational Physics, 347, 99–128. https://doi.org/10.1016/j.jcp.2017.06.037

Implicit schemes are popular methods for the integration of time-dependent PDEs such as hyperbolic and parabolic PDEs. However, the necessity to solve corresponding linear systems at each time step constitutes a complexity bottleneck in their application to PDEs with rough coefficients. We present a generalization of gamblets, introduced in Owhadi (2015), enabling the resolution of these implicit systems in near-linear complexity, and provide rigorous a priori error bounds on the resulting numerical approximations of hyperbolic and parabolic PDEs. These generalized gamblets induce a multiresolution decomposition of the solution space that is adapted to both the underlying (hyperbolic and parabolic) PDE (and the system of ODEs resulting from space discretization) and to the time-steps of the numerical scheme.

@article{owhadi_gamblets_2017,
title = {Gamblets for opening the complexity-bottleneck of implicit schemes for hyperbolic and parabolic {ODEs}/{PDEs} with rough coefficients},
volume = {347},
issn = {00219991},
url = {http://arxiv.org/abs/1606.07686},
doi = {10.1016/j.jcp.2017.06.037},
urldate = {2017-09-10},
journal = {Journal of Computational Physics},
author = {Owhadi, Houman and Zhang, Lei},
month = oct,
year = {2017},
note = {arXiv: 1606.07686},
file = {https://arxiv.org/pdf/1606.07686.pdf},
keywords = {65T60, 65N55, 65N75, 62C99, 42C40, 62M86, Mathematics - Numerical Analysis, Statistics - Machine Learning},
pages = {99--128}
}

5. Kersting, H. P., & Hennig, P. (2016). Active Uncertainty Calibration in Bayesian ODE Solvers. In Janzing & Ihler (Eds.), Uncertainty in Artificial Intelligence (UAI) (Vol. 32).

There is resurging interest, in statistics and machine learning, in solvers for ordinary differential equations (ODEs) that return probability measures instead of point estimates. Recently, Conrad et al. introduced a sampling-based class of methods that are ’well-calibrated’ in a specific sense. But the computational cost of these methods is significantly above that of classic methods. On the other hand, Schober et al. pointed out a precise connection between classic Runge-Kutta ODE solvers and Gaussian filters, which gives only a rough probabilistic calibration, but at negligible cost overhead. By formulating the solution of ODEs as approximate inference in linear Gaussian SDEs, we investigate a range of probabilistic ODE solvers that bridge the trade-off between computational cost and probabilistic calibration, and identify the inaccurate gradient measurement as the crucial source of uncertainty. We propose the novel filtering-based method Bayesian Quadrature filtering (BQF) which uses Bayesian quadrature to actively learn the imprecision in the gradient measurement by collecting multiple gradient evaluations.

@inproceedings{KerstingHennigUAI2016,
author = {Kersting, Hans P. and Hennig, Philipp},
title = {Active Uncertainty Calibration in {B}ayesian {ODE} Solvers},
editor = {Janzing and Ihler},
booktitle = {Uncertainty in Artificial Intelligence (UAI)},
volume = {32},
year = {2016},
file = {http://arxiv.org/pdf/1605.03364v2.pdf}
}
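
The filtering formulation that this line of work builds on can be sketched as a small Kalman filter. The following is a minimal EK0-style illustration under an assumed once-integrated Wiener process prior, not the calibrated solver of any one paper above.

```python
import numpy as np

def filter_ode(f, y0, t_end, h, q=1.0):
    """Minimal Gaussian-filtering ODE solver for scalar y' = f(t, y).

    Prior: once-integrated Wiener process on the state (y, y').  Each
    step performs a Kalman prediction, then conditions on the
    pseudo-observation y'(t) = f(t, y(t)), with f evaluated at the
    predicted mean (an EK0-style linearisation).  Illustrative sketch
    only; diffusion scale q is an uncalibrated assumption.
    """
    A = np.array([[1.0, h], [0.0, 1.0]])              # IWP transition
    Q = q * np.array([[h**3 / 3, h**2 / 2],
                      [h**2 / 2, h]])                 # process noise
    H = np.array([[0.0, 1.0]])                        # observe y' only
    m = np.array([y0, f(0.0, y0)])
    P = np.zeros((2, 2))
    ts = np.linspace(0.0, t_end, int(round(t_end / h)) + 1)
    means, stds = [m[0]], [0.0]
    for t in ts[1:]:
        m_pred, P_pred = A @ m, A @ P @ A.T + Q       # predict
        z = f(t, m_pred[0]) - m_pred[1]               # ODE residual
        S = (H @ P_pred @ H.T)[0, 0]
        K = (P_pred @ H.T / S).ravel()
        m = m_pred + K * z                            # update
        P = P_pred - np.outer(K, H @ P_pred)
        means.append(m[0])
        stds.append(float(np.sqrt(max(P[0, 0], 0.0))))
    return ts, np.array(means), np.array(stds)

ts, means, stds = filter_ode(lambda t, y: -y, 1.0, 1.0, 0.01)
```

The posterior mean tracks the true solution at negligible overhead over a classic solver, while the posterior standard deviation grows along the integration interval, which is the "rough probabilistic calibration" referred to in the abstract.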

6. Schober, M., Särkkä, S., & Hennig, P. (2016). A probabilistic model for the numerical solution of initial value problems. ArXiv e-Prints.

Like many numerical methods, solvers for initial value problems (IVPs) on ordinary differential equations estimate an analytically intractable quantity, using the results of tractable computations as inputs. This structure is closely connected to the notion of inference on latent variables in statistics. We describe a class of algorithms that formulate the solution to an IVP as inference on a latent path that is a draw from a Gaussian process probability measure (or equivalently, the solution of a linear stochastic differential equation). We then show that certain members of this class are connected precisely to generalized linear methods for ODEs, a number of Runge–Kutta methods, and Nordsieck methods. This probabilistic formulation of classic methods is valuable in two ways: analytically, it highlights implicit prior assumptions favoring certain approximate solutions to the IVP over others, and gives a precise meaning to the old observation that these methods act like filters. Practically, it endows the classic solvers with ‘docking points’ for notions of uncertainty and prior information about the initial value, the value of the ODE itself, and the solution of the problem.

@article{2016arXiv161005261S,
author = {{Schober}, M. and {S{\"a}rkk{\"a}}, S. and {Hennig}, P.},
title = {A probabilistic model for the numerical solution of initial value problems},
journal = {ArXiv e-prints},
eprint = {1610.05261},
year = {2016},
month = oct,
file = {https://arxiv.org/pdf/1610.05261.pdf}
}

7. Conrad, P. R., Girolami, M., Särkkä, S., Stuart, A., & Zygalakis, K. (2015). Probability Measures for Numerical Solutions of Differential Equations. ArXiv:1506.04592 [Stat].

In this paper, we present a formal quantification of epistemic uncertainty induced by numerical solutions of ordinary and partial differential equation models. Numerical solutions of differential equations contain inherent uncertainties due to the finite dimensional approximation of an unknown and implicitly defined function. When statistically analysing models based on differential equations describing physical, or other naturally occurring, phenomena, it is therefore important to explicitly account for the uncertainty introduced by the numerical method. This enables objective determination of its importance relative to other uncertainties, such as those caused by data contaminated with noise or model error induced by missing physical or inadequate descriptors. To this end we show that a wide variety of existing solvers can be randomised, inducing a probability measure over the solutions of such differential equations. These measures exhibit contraction to a Dirac measure around the true unknown solution, where the rates of convergence are consistent with the underlying deterministic numerical method. Ordinary differential equations and elliptic partial differential equations are used to illustrate the approach to quantifying uncertainty in both the statistical analysis of the forward and inverse problems.

@article{conrad2015probability,
title = {Probability {Measures} for {Numerical} {Solutions} of {Differential} {Equations}},
file = {http://arxiv.org/pdf/1506.04592v1.pdf},
urldate = {2015-06-16},
journal = {arXiv:1506.04592 [stat]},
author = {Conrad, Patrick R. and Girolami, Mark and Särkkä, Simo and Stuart, Andrew and Zygalakis, Konstantinos},
month = jun,
year = {2015},
note = {arXiv: 1506.04592},
keywords = {Statistics - Methodology}
}
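
The randomisation strategy described in the abstract can be illustrated with Euler's method. This is a sketch of the general idea only: the perturbation scale h^1.5 matches Euler's first-order convergence, but the constant `scale` is an uncalibrated assumption, not the paper's construction.

```python
import numpy as np

def randomised_euler(f, y0, t_end, h, scale=1.0, rng=None):
    """Euler's method randomised by additive Gaussian perturbations.

    After each deterministic step a draw with standard deviation
    scale * h**1.5 is added, so the induced probability measure
    contracts to the true solution at the same O(h) rate as the
    underlying deterministic method.  (Illustrative sketch; `scale`
    is an assumption.)
    """
    rng = np.random.default_rng() if rng is None else rng
    y = float(y0)
    n = int(round(t_end / h))
    for k in range(n):
        y += h * f(k * h, y) + scale * h**1.5 * rng.standard_normal()
    return y

# Repeated runs give an empirical measure over the solution of
# y' = -y, y(0) = 1, at t = 1, centred near the deterministic answer.
rng = np.random.default_rng(1)
samples = np.array([randomised_euler(lambda t, y: -y, 1.0, 1.0, 0.01, rng=rng)
                    for _ in range(500)])
```

Shrinking h tightens the sample spread at the rate of the deterministic method, which is the contraction-to-a-Dirac-measure property the paper proves.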

8. Hauberg, S., Schober, M., Liptrot, M., Hennig, P., & Feragen, A. (2015). A Random Riemannian Metric for Probabilistic Shortest-Path Tractography. In Medical Image Computing and Computer-Assisted Intervention (MICCAI) (Vol. 18). Munich, Germany.

Shortest-path tractography (SPT) algorithms solve global optimization problems defined from local distance functions. As diffusion MRI data is inherently noisy, so are the voxelwise tensors from which local distances are derived. We extend Riemannian SPT by modeling the stochasticity of the diffusion tensor as a “random Riemannian metric”, where a geodesic is a distribution over tracts. We approximate this distribution with a Gaussian process and present a probabilistic numerics algorithm for computing the geodesic distribution. We demonstrate SPT improvements on data from the Human Connectome Project.

@inproceedings{Hauberg_MICCAI_2015,
title = {A Random Riemannian Metric for Probabilistic Shortest-Path Tractography},
author = {Hauberg, S{\o}ren and Schober, Michael and Liptrot, Matthew and Hennig, Philipp and Feragen, Aasa},
booktitle = {Medical Image Computing and Computer-Assisted Intervention (MICCAI)},
volume = {18},
month = sep,
year = {2015},
file = {http://www2.compute.dtu.dk/~sohau/papers/miccai2015/MICCAI2015.pdf},
}

9. Schober, M., Duvenaud, D. K., & Hennig, P. (2014). Probabilistic ODE Solvers with Runge-Kutta Means. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, & K. Q. Weinberger (Eds.), Advances in Neural Information Processing Systems 27 (pp. 739–747). Curran Associates, Inc.

Runge-Kutta methods are the classic family of solvers for ordinary differential equations (ODEs), and the basis for the state of the art. Like most numerical methods, they return point estimates. We construct a family of probabilistic numerical methods that instead return a Gauss-Markov process defining a probability distribution over the ODE solution. In contrast to prior work, we construct this family such that posterior means match the outputs of the Runge-Kutta family exactly, thus inheriting their proven good properties. Remaining degrees of freedom not identified by the match to Runge-Kutta are chosen such that the posterior probability measure fits the observed structure of the ODE. Our results shed light on the structure of Runge-Kutta solvers from a new direction, provide a richer, probabilistic output, have low computational cost, and raise new research questions.

@incollection{schober2014nips,
title = {Probabilistic {ODE} Solvers with {R}unge-{K}utta Means},
author = {Schober, Michael and Duvenaud, David K and Hennig, Philipp},
booktitle = {Advances in Neural Information Processing Systems 27},
editor = {Ghahramani, Z. and Welling, M. and Cortes, C. and Lawrence, N.D. and Weinberger, K.Q.},
pages = {739--747},
year = {2014},
publisher = {Curran Associates, Inc.},
file = {http://papers.nips.cc/paper/5451-probabilistic-ode-solvers-with-runge-kutta-means.pdf},
supplements = {http://papers.nips.cc/paper/5451-probabilistic-ode-solvers-with-runge-kutta-means-supplemental.zip}
}

10. Barber, D. (2014). On solving Ordinary Differential Equations using Gaussian Processes. ArXiv Pre-Print 1408.3807.

We describe a set of Gaussian Process based approaches that can be used to solve non-linear Ordinary Differential Equations. We suggest an explicit probabilistic solver and two implicit methods, one analogous to Picard iteration and the other to gradient matching. All methods have greater accuracy than previously suggested Gaussian Process approaches. We also suggest a general approach that can yield error estimates from any standard ODE solver.

@article{2014arXiv14083807B,
author = {{Barber}, D.},
title = {{On solving Ordinary Differential Equations using Gaussian
Processes}},
journal = {ArXiv pre-print 1408.3807},
year = {2014},
month = aug,
}

11. Hennig, P., & Hauberg, S. (2014). Probabilistic Solutions to Differential Equations and their Application to Riemannian Statistics. In Proc. of the 17th int. Conf. on Artificial Intelligence and Statistics (AISTATS) (Vol. 33). JMLR, W&CP.

We study a probabilistic numerical method for the solution of both boundary and initial value problems that returns a joint Gaussian process posterior over the solution. Such methods have concrete value in the statistics on Riemannian manifolds, where non-analytic ordinary differential equations are involved in virtually all computations. The probabilistic formulation permits marginalising the uncertainty of the numerical solution such that statistics are less sensitive to inaccuracies. This leads to new Riemannian algorithms for mean value computations and principal geodesic analysis. Marginalisation also means results can be less precise than point estimates, enabling a noticeable speed-up over the state of the art. Our approach is an argument for a wider point that uncertainty caused by numerical calculations should be tracked throughout the pipeline of machine learning algorithms.

@inproceedings{HennigAISTATS2014,
author = {Hennig, Philipp and Hauberg, S{\o}ren},
booktitle = {{Proc. of the 17th int. Conf. on Artificial Intelligence and
Statistics ({AISTATS})}},
publisher = {JMLR, W\&CP},
title = {{Probabilistic Solutions to Differential Equations and their
Application to Riemannian Statistics}},
volume = {33},
year = {2014},
file = {http://jmlr.org/proceedings/papers/v33/hennig14.pdf},
code = {http://www.probabilistic-numerics.org/GP_ODE_Solver.zip},
supplements = {http://www.probabilistic-numerics.org/22-supp.zip}
}

12. Schober, M., Kasenburg, N., Feragen, A., Hennig, P., & Hauberg, S. (2014). Probabilistic shortest path tractography in DTI using Gaussian Process ODE solvers. In P. Golland, N. Hata, C. Barillot, J. Hornegger, & R. Howe (Eds.), Medical Image Computing and Computer-Assisted Intervention – MICCAI 2014 (Vol. 8675, pp. 265–272). Springer.

Tractography in diffusion tensor imaging estimates connectivity in the brain through observations of local diffusivity. These observations are noisy and of low resolution and, as a consequence, connections cannot be found with high precision. We use probabilistic numerics to estimate connectivity between regions of interest and contribute a Gaussian Process tractography algorithm which allows for both quantification and visualization of its posterior uncertainty. We use the uncertainty both in visualization of individual tracts as well as in heat maps of tract locations. Finally, we provide a quantitative evaluation of different metrics and algorithms showing that the adjoint metric combined with our algorithm produces paths which agree most often with experts.

@inproceedings{LNCS86750265,
author = {Schober, Michael and Kasenburg, Niklas and Feragen, Aasa and Hennig, Philipp and Hauberg, S{\o}ren},
editor = {Golland, Polina and Hata, Nobuhiko and Barillot, Christian and Hornegger, Joachim and Howe, Robert},
booktitle = {Medical Image Computing and Computer-Assisted Intervention --
MICCAI 2014},
publisher = {Springer},
location = {Heidelberg},
series = {Lecture Notes in Computer Science},
volume = {8675},
year = {2014},
pages = {265--272},
title = {{Probabilistic shortest path tractography in {DTI} using
{G}aussian {P}rocess {ODE} solvers}},
file = {../assets/pdf/Schober2014MICCAI.pdf},
supplements = {http://www.probabilistic-numerics.org/MICCAI2014-supp.zip}
}

13. Chkrebtii, O., Campbell, D. A., Girolami, M. A., & Calderhead, B. (2013). Bayesian Uncertainty Quantification for Differential Equations. Bayesian Analysis (Discussion Paper), in press.

This paper advocates expansion of the role of Bayesian statistical inference when formally quantifying uncertainty in computer models defined by systems of ordinary or partial differential equations. We adopt the perspective that implicitly defined infinite dimensional functions representing model states are objects to be inferred probabilistically. We develop a general methodology for the probabilistic integration of differential equations via model based updating of a joint prior measure on the space of functions and their temporal and spatial derivatives. This results in a posterior measure over functions reflecting how well they satisfy the system of differential equations and corresponding initial and boundary values. We show how this posterior measure can be naturally incorporated within the Kennedy and O’Hagan framework for uncertainty quantification and provides a fully Bayesian approach to model calibration. By taking this probabilistic viewpoint, the full force of Bayesian inference can be exploited when seeking to coherently quantify and propagate epistemic uncertainty in computer models of complex natural and physical systems. A broad variety of examples are provided to illustrate the potential of this framework for characterising discretization uncertainty, including initial value, delay, and boundary value differential equations, as well as partial differential equations. We also demonstrate our methodology on a large scale system, by modeling discretization uncertainty in the solution of the Navier-Stokes equations of fluid flow, reduced to over 16,000 coupled and stiff ordinary differential equations. Finally, we discuss the wide range of open research themes that follow from the work presented.

@article{13_bayes_uncer_quant_differ_equat,
author = {Chkrebtii, O. and Campbell, D.A. and Girolami, M.A. and Calderhead, B.},
journal = {Bayesian Analysis (discussion paper)},
title = {{B}ayesian Uncertainty Quantification for Differential
Equations},
year = {2013},
pages = {in press},
}

14. Ramalho, T., Selig, M., Gerland, U., & Enßlin, T. A. (2013). Simulation of stochastic network dynamics via entropic matching. Phys. Rev. E, 87(2), 022719. https://doi.org/10.1103/PhysRevE.87.022719

The simulation of complex stochastic network dynamics arising, for instance, from models of coupled biomolecular processes remains computationally challenging. Often, the necessity to scan a model’s dynamics over a large parameter space renders full-fledged stochastic simulations impractical, motivating approximation schemes. Here we propose an approximation scheme which improves upon the standard linear noise approximation while retaining similar computational complexity. The underlying idea is to minimize, at each time step, the Kullback-Leibler divergence between the true time evolved probability distribution and a Gaussian approximation (entropic matching). This condition leads to ordinary differential equations for the mean and the covariance matrix of the Gaussian. For cases of weak nonlinearity, the method is more accurate than the linear method when both are compared to stochastic simulations.

@article{PhysRevE87022719,
title = {Simulation of stochastic network dynamics via entropic matching},
author = {Ramalho, Tiago and Selig, Marco and Gerland, Ulrich and En\ss{}lin, Torsten A.},
journal = {Phys. Rev. E},
volume = {87},
issue = {2},
pages = {022719},
numpages = {9},
year = {2013},
month = feb,
publisher = {American Physical Society},
doi = {10.1103/PhysRevE.87.022719},
}
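
The closure described in the abstract reduces the evolution of the approximating Gaussian to ODEs for its mean and covariance. The sketch below uses the simpler linearised (linear-noise-style) moment equations in one dimension as a stand-in for the entropic-matching closure itself.

```python
def gaussian_moment_closure(f, df, D, m0, v0, t_end, h):
    """Evolve mean m and variance v of a Gaussian approximation to the
    1-d diffusion dx = f(x) dt + sqrt(D) dW via the linearised moment
    ODEs  dm/dt = f(m),  dv/dt = 2 f'(m) v + D,  integrated here with
    explicit Euler.  (A linear-noise-style stand-in for the
    entropic-matching closure, which refines this update.)
    """
    m, v = float(m0), float(v0)
    for _ in range(int(round(t_end / h))):
        m, v = m + h * f(m), v + h * (2.0 * df(m) * v + D)
    return m, v

# Ornstein-Uhlenbeck test case dx = -x dt + sqrt(2) dW: the closure
# relaxes to the exact stationary mean 0 and variance 1.
m, v = gaussian_moment_closure(lambda x: -x, lambda x: -1.0, 2.0,
                               1.0, 0.5, 8.0, 0.001)
```

For linear drift this closure is exact; for weakly nonlinear systems the entropic-matching condition (minimising the Kullback-Leibler divergence at each step) improves on it at similar cost.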

15. Mosbach, S., & Turner, A. G. (2009). A quantitative probabilistic investigation into the accumulation of rounding errors in numerical ODE solution. Computers & Mathematics with Applications, 57(7), 1157–1167.

We examine numerical rounding errors of some deterministic solvers for systems of ordinary differential equations (ODEs). We show that the accumulation of rounding errors results in a solution that is inherently random and we obtain the theoretical distribution of the trajectory as a function of time, the step size and the numerical precision of the computer. We consider, in particular, systems which amplify the effect of the rounding errors so that over long time periods the solutions exhibit divergent behaviour. By performing multiple repetitions with different values of the time step size, we observe numerically the random distributions predicted theoretically. We mainly focus on the explicit Euler and RK4 methods but also briefly consider more complex algorithms such as the implicit solvers VODE and RADAU5.

@article{Mosbach2009,
author = {Mosbach, Sebastian and Turner, Amanda G.},
journal = {Computers {\&} Mathematics with Applications},
number = {7},
pages = {1157--1167},
title = {{A quantitative probabilistic investigation into the accumulation of rounding errors in numerical ODE solution}},
volume = {57},
year = {2009},
}
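
The effect the paper studies is easy to reproduce: running the same explicit Euler iteration at two floating-point precisions yields trajectories that drift apart purely through accumulated rounding. A minimal sketch (test problem y' = -y is an assumption, chosen for brevity):

```python
import numpy as np

def euler_in_precision(dtype, h, n):
    """Explicit Euler for y' = -y, y(0) = 1, with every arithmetic
    operation rounded to the given floating-point precision, so that
    rounding errors accumulate along the trajectory."""
    y, hh = dtype(1.0), dtype(h)
    for _ in range(n):
        y = y - hh * y          # each operation rounds to `dtype`
    return float(y)

# Single-precision result differs from double precision only through
# the accumulated, effectively random, rounding errors.
lo = euler_in_precision(np.float32, 1e-3, 1000)
hi = euler_in_precision(np.float64, 1e-3, 1000)
```

Repeating such runs over many step sizes, as the paper does, exposes the statistical distribution of the rounding-induced perturbations.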

16. Graepel, T. (2003). Solving noisy linear operator equations by Gaussian processes: Application to ordinary and partial differential equations. In ICML (pp. 234–241).
@inproceedings{graepel2003solving,
title = {Solving noisy linear operator equations by Gaussian
processes: Application to ordinary and partial differential
equations},
author = {Graepel, Thore},
booktitle = {ICML},
pages = {234--241},
year = {2003},
file = {http://www.aaai.org/Papers/ICML/2003/ICML03-033.pdf}
}

17. Skilling, J. (1991). Bayesian solution of ordinary differential equations. Maximum Entropy and Bayesian Methods, Seattle.

In the numerical solution of ordinary differential equations, a function y(x) is to be reconstructed from knowledge of the functional form of its derivative: dy/dx=f(x,y), together with an appropriate boundary condition. The derivative f is evaluated at a sequence of suitably chosen points (x_k,y_k), from which the form of y(.) is estimated. This is an inference problem, which can and perhaps should be treated by Bayesian techniques. As always, the inference appears as a probability distribution prob(y(.)), from which random samples show the probabilistic reliability of the results. Examples are given.

@article{skilling1991bayesian,
author = {Skilling, J.},
journal = {Maximum Entropy and Bayesian Methods, Seattle},
title = {{Bayesian solution of ordinary differential equations}},
year = {1991}
}

18. Kuki, H., & Cody, W. J. (1973). A Statistical Study Of The Accuracy Of Floating Point Number Systems. Communications of the ACM, 16(1), 223–230.

This paper presents the statistical results of tests of the accuracy of certain arithmetic systems in evaluating sums, products and inner products, and analytic error estimates for some of the computations. The arithmetic systems studied are 6-digit hexadecimal and 22-digit binary floating point number representations combined with the usual chop and round modes of arithmetic with various numbers of guard digits, and with a modified round mode with guard digits. In a certain sense, arithmetic systems differing only in their use of binary or hexadecimal number representations are shown to be approximately statistically equivalent in accuracy. Further, the usual round mode with guard digits is shown to be statistically superior in accuracy to the usual chop mode in all cases save one. The modified round mode is found to be superior to the chop mode in all cases.

@article{Kuki1973,
author = {Kuki, H. and Cody, W. J.},
journal = {Communications of the ACM},
number = {1},
pages = {223--230},
title = {{A Statistical Study Of The Accuracy Of Floating Point Number Systems}},
volume = {16},
year = {1973},
}

19. Hull, T. E., & Swenson, J. R. (1966). Test of Probabilistic Models for the Propagation of Roundoff Errors. Communications of the ACM, 9(2), 108–113.

In any prolonged computation it is generally assumed that the accumulated effect of roundoff errors is in some sense statistical. The purpose of this paper is to give precise descriptions of certain probabilistic models for roundoff error, and then to describe a series of experiments for testing the validity of these models. It is concluded that the models are in general very good. Discrepancies are both rare and mild. The test techniques can also be used to experiment with various types of special arithmetic.

@article{Hull1966,
author = {Hull, T. E. and Swenson, J. R.},
journal = {Communications of the ACM},
number = {2},
pages = {108--113},
title = {{Test of Probabilistic Models for the Propagation of Roundoff Errors}},
volume = {9},
year = {1966},
}

20. Teymur, O., Zygalakis, K., & Calderhead, B. (2016). Probabilistic Linear Multistep Methods. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, & R. Garnett (Eds.), Advances in Neural Information Processing Systems 29 (pp. 4314–4321). Curran Associates, Inc.

We present a derivation and theoretical investigation of the Adams-Bashforth and Adams-Moulton family of linear multistep methods for solving ordinary differential equations, starting from a Gaussian process (GP) framework. In the limit, this formulation coincides with the classical deterministic methods, which have been used as higher-order initial value problem solvers for over a century. Furthermore, the natural probabilistic framework provided by the GP formulation allows us to derive probabilistic versions of these methods, in the spirit of a number of other probabilistic ODE solvers presented in the recent literature. In contrast to higher-order Runge-Kutta methods, which require multiple intermediate function evaluations per step, Adams family methods make use of previous function evaluations, so that increased accuracy arising from a higher-order multistep approach comes at very little additional computational cost. We show that through a careful choice of covariance function for the GP, the posterior mean and standard deviation over the numerical solution can be made to exactly coincide with the value given by the deterministic method and its local truncation error respectively. We provide a rigorous proof of the convergence of these new methods, as well as an empirical investigation (up to fifth order) demonstrating their convergence rates in practice.

@incollection{teymur16,
title = {Probabilistic {{Linear Multistep Methods}}},
booktitle = {Advances in {{Neural Information Processing Systems}} 29},
publisher = {{Curran Associates, Inc.}},
date = {2016},
pages = {4314--4321},
author = {Teymur, Onur and Zygalakis, Kostas and Calderhead, Ben},
editor = {Lee, D. D. and Sugiyama, M. and Luxburg, U. V. and Guyon, I. and Garnett, R.},
file = {https://papers.nips.cc/paper/6356-probabilistic-linear-multistep-methods.pdf}
}
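
The probabilistic multistep construction can be sketched on the two-step Adams-Bashforth method. The Gaussian noise model below is an illustrative assumption scaled like the method's O(h^3) local truncation error, not the paper's calibrated GP construction; scale=0 recovers the classical deterministic method exactly.

```python
import numpy as np

def prob_ab2(f, y0, t_end, h, scale=0.0, rng=None):
    """Two-step Adams-Bashforth with an optional Gaussian perturbation
    whose standard deviation scale * h**3 mimics the method's local
    truncation error.  (Illustrative noise model, an assumption.)"""
    rng = np.random.default_rng() if rng is None else rng
    n = int(round(t_end / h))
    y = float(y0)
    f_prev = f(0.0, y)
    y += h * f_prev                  # bootstrap the first step with Euler
    for k in range(1, n):
        f_curr = f(k * h, y)
        # AB2 reuses the previous evaluation: one f-call per step.
        y += h * (1.5 * f_curr - 0.5 * f_prev)
        y += scale * h**3 * rng.standard_normal()
        f_prev = f_curr
    return y

det = prob_ab2(lambda t, y: -y, 1.0, 1.0, 0.01)      # deterministic run
```

Note the single function evaluation per step: this is the cost advantage over Runge-Kutta methods that the abstract highlights.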

21. Teymur, O., Lie, H. C., Sullivan, T., & Calderhead, B. (2018). Implicit Probabilistic Integrators for ODEs. In Advances in Neural Information Processing Systems 31. Curran Associates, Inc.

We introduce a family of implicit probabilistic integrators for initial value problems (IVPs) taking as a starting point the multistep Adams–Moulton method. The implicit construction allows for dynamic feedback from the forthcoming time-step, by contrast with previous probabilistic integrators, all of which are based on explicit methods. We begin with a concise survey of the rapidly-expanding field of probabilistic ODE solvers. We then introduce our method, which builds on and adapts the work of Conrad et al. (2016) and Teymur et al. (2016), and provide a rigorous proof of its well-definedness and convergence. We discuss the problem of the calibration of such integrators and suggest one approach. We give an illustrative example highlighting the effect of the use of probabilistic integrators – including our new method – in the setting of parameter inference within an inverse problem.

@incollection{teymur18,
title = {{{Implicit Probabilistic Integrators for ODEs}}},
booktitle = {Advances in {{Neural Information Processing Systems}} 31},
publisher = {{Curran Associates, Inc.}},
date = {2018},
author = {Teymur, Onur and Lie, Han Cheng and Sullivan, Tim and Calderhead, Ben},
file = {https://arxiv.org/pdf/1805.07970.pdf}
}

Work on Inferring ODEs

1. Gorbach, N. S., Bauer, S., & Buhmann, J. M. (2017). Scalable Variational Inference for Dynamical Systems. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in Neural Information Processing Systems 30 (pp. 4809–4818). Curran Associates, Inc.

Gradient matching is a promising tool for learning parameters and state dynamics of ordinary differential equations. It is a grid free inference approach, which, for fully observable systems is at times competitive with numerical integration. However, for many real-world applications, only sparse observations are available or even unobserved variables are included in the model description. In these cases most gradient matching methods are difficult to apply or simply do not provide satisfactory results. That is why, despite the high computational cost, numerical integration is still the gold standard in many applications. Using an existing gradient matching approach, we propose a scalable variational inference framework which can infer states and parameters simultaneously, offers computational speedups, improved accuracy and works well even under model misspecifications in a partially observable system.

@incollection{NIPS2017_7066,
title = {Scalable Variational Inference for Dynamical Systems},
author = {Gorbach, Nico S and Bauer, Stefan and Buhmann, Joachim M},
booktitle = {Advances in Neural Information Processing Systems 30},
editor = {Guyon, I. and Luxburg, U. V. and Bengio, S. and Wallach, H. and Fergus, R. and Vishwanathan, S. and Garnett, R.},
pages = {4809--4818},
year = {2017},
publisher = {Curran Associates, Inc.},
file = {http://papers.nips.cc/paper/7066-scalable-variational-inference-for-dynamical-systems.pdf}
}

2. Wang, Y., & Barber, D. (2014). Gaussian Processes for Bayesian Estimation in Ordinary Differential Equations. In International Conference on Machine Learning – ICML.

Bayesian parameter estimation in coupled ordinary differential equations (ODEs) is challenging due to the high computational cost of numerical integration. In gradient matching a separate data model is introduced with the property that its gradient may be calculated easily. Parameter estimation is then achieved by requiring consistency between the gradients computed from the data model and those specified by the ODE. We propose a Gaussian process model that directly links state derivative information with system observations, simplifying previous approaches and improving estimation accuracy.

@inproceedings{wang-barber-ICML-14,
author = {Wang, Yali and Barber, David},
title = {{G}aussian Processes for {B}ayesian Estimation in Ordinary
Differential Equations},
booktitle = {International Conference on Machine Learning -- ICML},
year = {2014},
file = {http://web4.cs.ucl.ac.uk/staff/d.barber/publications/gpodeICML2014.pdf}
}
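
The gradient-matching principle shared by the two entries above can be shown in miniature. The sketch below substitutes a polynomial fit for the Gaussian process data model (an assumption made purely for brevity) and matches its derivative to the ODE right-hand side by least squares, with no numerical integration at all.

```python
import numpy as np

def match_decay_rate(ts, ys, degree=6):
    """Toy gradient matching for theta in y' = -theta * y.

    A polynomial least-squares fit stands in for the GP data model;
    its derivative is matched to the ODE right-hand side by least
    squares.  (Illustrative sketch; the papers above use Gaussian
    process models, not polynomials.)
    """
    p = np.poly1d(np.polyfit(ts, ys, degree))
    y_hat, dy_hat = p(ts), p.deriv()(ts)
    # least-squares solution of dy_hat = -theta * y_hat
    return -float(np.sum(dy_hat * y_hat) / np.sum(y_hat**2))

# Noisy observations of y(t) = exp(-0.7 t); the rate is recovered
# without ever integrating the ODE.
rng = np.random.default_rng(2)
ts = np.linspace(0.0, 2.0, 50)
ys = np.exp(-0.7 * ts) + 0.001 * rng.standard_normal(ts.size)
theta_hat = match_decay_rate(ts, ys)
```

Avoiding repeated numerical integration inside the parameter-inference loop is exactly the computational saving that motivates gradient matching.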

3. Korostil, I. A., Peters, G. W., Cornebise, J., & Regan, D. G. (2013). Adaptive Markov chain Monte Carlo forward projection for statistical analysis in epidemic modelling of human papillomavirus. Statistics in Medicine, 32(11), 1917–1953.

A Bayesian statistical model and estimation methodology based on forward projection adaptive Markov chain Monte Carlo is developed in order to perform the calibration of a high-dimensional nonlinear system of ordinary differential equations representing an epidemic model for human papillomavirus types 6 and 11 (HPV-6, HPV-11). The model is compartmental and involves stratification by age, gender and sexual-activity group. Developing this model and a means to calibrate it efficiently is relevant because HPV is a very multi-typed and common sexually transmitted infection with more than 100 types currently known. The two types studied in this paper, types 6 and 11, cause about 90% of anogenital warts. We extend the development of a sexual mixing matrix on the basis of a formulation first suggested by Garnett and Anderson, frequently used to model sexually transmitted infections. In particular, we consider a stochastic mixing matrix framework that allows us to jointly estimate unknown attributes and parameters of the mixing matrix along with the parameters involved in the calibration of the HPV epidemic model. This matrix describes the sexual interactions between members of the population under study and relies on several quantities that are a priori unknown. The Bayesian model developed allows one to estimate jointly the HPV-6 and HPV-11 epidemic model parameters as well as unknown sexual mixing matrix parameters related to assortativity. Finally, we explore the ability of an extension to the class of adaptive Markov chain Monte Carlo algorithms to incorporate a forward projection strategy for the ordinary differential equation state trajectories. Efficient exploration of the Bayesian posterior distribution developed for the ordinary differential equation parameters provides a challenge for any Markov chain sampling methodology, hence the interest in adaptive Markov chain methods. We conclude with simulation studies on synthetic and recent actual data.

@article{korostil2013adaptive,
title = {Adaptive Markov chain Monte Carlo forward projection for statistical analysis in epidemic modelling of human papillomavirus},
author = {Korostil, Igor A and Peters, Gareth W and Cornebise, Julien and Regan, David G},
journal = {Statistics in Medicine},
volume = {32},
number = {11},
pages = {1917--1953},
year = {2013},
file = {http://arxiv.org/pdf/1108.3137v1.pdf}
}

4. Calderhead, B., Girolami, M., & Lawrence, N. D. (2009). Accelerating Bayesian Inference over Nonlinear Differential Equations with Gaussian Processes. In D. Koller, D. Schuurmans, Y. Bengio, & L. Bottou (Eds.), Advances in Neural Information Processing Systems 21 (pp. 217–224). Curran Associates, Inc.
@incollection{NIPS2008_3497,
title = {Accelerating Bayesian Inference over Nonlinear Differential
Equations with Gaussian Processes},
author = {Calderhead, Ben and Girolami, Mark and Lawrence, Neil D.},
booktitle = {Advances in Neural Information Processing Systems 21},
editor = {Koller, D. and Schuurmans, D. and Bengio, Y. and Bottou, L.},
pages = {217--224},
year = {2009},
publisher = {Curran Associates, Inc.},
file = {http://papers.nips.cc/paper/3497-accelerating-bayesian-inference-over-nonlinear-differential-equations-with-gaussian-processes.pdf}
}

Partial Differential Equations

1. Dupont, M., & Enßlin, T. (2018). Consistency and convergence of simulation schemes in Information field dynamics. ArXiv e-Prints.

We explore a new simulation scheme for partial differential equations (PDEs) called Information Field Dynamics (IFD). IFD attempts to improve on existing simulation schemes by incorporating Bayesian field inference, which seeks to preserve the maximum amount of information about the field being simulated. The field inference is truly Bayesian and thus depends on a notion of prior belief. Here, we analytically prove that a restricted subset of simulation schemes in IFD are consistent, and thus deliver valid predictions in the limit of high resolutions. This has not previously been done for any IFD schemes. This restricted subset is roughly analogous to traditional fixed-grid numerical PDE solvers, given the additional restriction of translational symmetry. Furthermore, given an arbitrary IFD scheme modelling a PDE, it is a priori not obvious to what order the scheme is accurate in space and time. For this subset of models, we also derive an easy rule-of-thumb for determining the order of accuracy of the simulation. As with all analytic consistency analysis, an analysis for nontrivial systems is intractable, thus these results are intended as a general indicator of the validity of the approach, and it is hoped that the results will generalize.

@article{2018arXiv180200971D,
author = {{Dupont}, M. and {En{\ss}lin}, T.},
title = {Consistency and convergence of simulation schemes in Information field dynamics},
journal = {ArXiv e-prints},
archiveprefix = {arXiv},
eprint = {1802.00971},
primaryclass = {astro-ph.IM},
keywords = {Astrophysics - Instrumentation and Methods for Astrophysics},
year = {2018},
month = feb,
file = {https://arxiv.org/pdf/1802.00971}
}

2. Leike, R. H., & Enßlin, T. A. (2018). Towards information-optimal simulation of partial differential equations. Phys. Rev. E, 97(3), 033314. https://doi.org/10.1103/PhysRevE.97.033314

Most simulation schemes for partial differential equations (PDEs) focus on minimizing a simple error norm of a discretized version of a field. This paper takes a fundamentally different approach; the discretized field is interpreted as data providing information about a real physical field that is unknown. This information is sought to be conserved by the scheme as the field evolves in time. Such an information theoretic approach to simulation was pursued before by information field dynamics (IFD). In this paper we work out the theory of IFD for nonlinear PDEs in a noiseless Gaussian approximation. The result is an action that can be minimized to obtain an information-optimal simulation scheme. It can be brought into a closed form using field operators to calculate the appearing Gaussian integrals. The resulting simulation schemes are tested numerically in two instances for the Burgers equation. Their accuracy surpasses finite-difference schemes on the same resolution. The IFD scheme, however, has to be correctly informed on the subgrid correlation structure. In certain limiting cases we recover well-known simulation schemes like spectral Fourier-Galerkin methods. We discuss implications of the approximations made.

@article{PhysRevE97033314,
title = {Towards information-optimal simulation of partial differential equations},
author = {Leike, Reimar H. and En\ss{}lin, Torsten A.},
journal = {Phys. Rev. E},
volume = {97},
issue = {3},
pages = {033314},
numpages = {8},
year = {2018},
month = mar,
publisher = {American Physical Society},
doi = {10.1103/PhysRevE.97.033314},
}

3. Raissi, M., Perdikaris, P., & Karniadakis, G. E. (2017). Numerical Gaussian Processes for Time-dependent and Non-linear Partial Differential Equations. ArXiv e-Prints, (stat.ML 1703.10230).

We introduce the concept of numerical Gaussian processes, which we define as Gaussian processes with covariance functions resulting from temporal discretization of time-dependent partial differential equations. Numerical Gaussian processes, by construction, are designed to deal with cases where: (1) all we observe are noisy data on black-box initial conditions, and (2) we are interested in quantifying the uncertainty associated with such noisy data in our solutions to time-dependent partial differential equations. Our method circumvents the need for spatial discretization of the differential operators by proper placement of Gaussian process priors. This is an attempt to construct structured and data-efficient learning machines, which are explicitly informed by the underlying physics that possibly generated the observed data. The effectiveness of the proposed approach is demonstrated through several benchmark problems involving linear and nonlinear time-dependent operators. In all examples, we are able to recover accurate approximations of the latent solutions, and consistently propagate uncertainty, even in cases involving very long time integration.

@article{2017arXiv170310230,
author = {Raissi, Maziar and Perdikaris, Paris and Karniadakis, George Em},
title = {Numerical {G}aussian Processes for Time-dependent and Non-linear Partial Differential Equations},
journal = {ArXiv e-prints},
issue = {stat.ML 1703.10230},
year = {2017},
month = mar,
file = {https://arxiv.org/pdf/1703.10230.pdf}
}
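The core mechanic behind a numerical Gaussian process is that a Gaussian belief over the field, pushed through a linear time-stepping map, stays Gaussian, so its mean and covariance propagate in closed form. The paper itself avoids spatial discretisation of the differential operators; the sketch below instead uses a finite-difference heat-equation propagator purely for illustration — the grid size, kernel, lengthscale, and step size are all illustrative choices, not the paper's construction.

```python
import numpy as np

# Toy analogue: one explicit-Euler step of the 1D heat equation u_t = u_xx,
# applied to a Gaussian belief over the field on a grid.
n, dt = 50, 1e-4
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

# Squared-exponential prior covariance over the initial field (lengthscale 0.1).
K0 = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1 ** 2)
m0 = np.sin(np.pi * x)  # illustrative prior mean for the initial condition

# Finite-difference Laplacian (homogeneous Dirichlet boundaries, implicitly).
L = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
     + np.diag(np.full(n - 1, 1.0), 1)) / dx ** 2
A = np.eye(n) + dt * L  # explicit-Euler propagator (here dt/dx^2 < 0.5, stable)

# A Gaussian stays Gaussian under a linear map: push mean and covariance forward.
m1 = A @ m0
K1 = A @ K0 @ A.T
```

The diagonal of `K1` quantifies the pointwise uncertainty about the field after one step; iterating the same two lines propagates uncertainty through long time integration.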

4. Owhadi, H., & Zhang, L. (2017). Gamblets for opening the complexity-bottleneck of implicit schemes for hyperbolic and parabolic ODEs/PDEs with rough coefficients. Journal of Computational Physics, 347, 99–128. https://doi.org/10.1016/j.jcp.2017.06.037

Implicit schemes are popular methods for the integration of time dependent PDEs such as hyperbolic and parabolic PDEs. However the necessity to solve corresponding linear systems at each time step constitutes a complexity bottleneck in their application to PDEs with rough coefficients. We present a generalization of gamblets introduced in arXiv:1503.03467 enabling the resolution of these implicit systems in near-linear complexity and provide rigorous a-priori error bounds on the resulting numerical approximations of hyperbolic and parabolic PDEs. These generalized gamblets induce a multiresolution decomposition of the solution space that is adapted to both the underlying (hyperbolic and parabolic) PDE (and the system of ODEs resulting from space discretization) and to the time-steps of the numerical scheme.

title = {Gamblets for opening the complexity-bottleneck of implicit schemes for hyperbolic and parabolic {ODEs}/{PDEs} with rough coefficients},
volume = {347},
issn = {00219991},
url = {http://arxiv.org/abs/1606.07686},
doi = {10.1016/j.jcp.2017.06.037},
urldate = {2017-09-10},
journal = {Journal of Computational Physics},
author = {Owhadi, Houman and Zhang, Lei},
month = oct,
year = {2017},
note = {arXiv: 1606.07686},
file = {https://arxiv.org/pdf/1606.07686.pdf},
keywords = {65T60, 65N55, 65N75, 62C99, 42C40, 62M86, Mathematics - Numerical Analysis, Statistics - Machine Learning},
pages = {99--128}
}

5. Raissi, M., & Karniadakis, G. E. (2017). Machine Learning of Linear Differential Equations using Gaussian Processes. ArXiv:1701.02440 [Cs, Math, Stat].

This work leverages recent advances in probabilistic machine learning to discover conservation laws expressed by parametric linear equations. Such equations involve, but are not limited to, ordinary and partial differential, integro-differential, and fractional order operators. Here, Gaussian process priors are modified according to the particular form of such operators and are employed to infer parameters of the linear equations from scarce and possibly noisy observations. Such observations may come from experiments or "black-box" computer simulations.

@article{raissi_machine_2017,
title = {Machine {Learning} of {Linear} {Differential} {Equations} using {Gaussian} {Processes}},
file = {https://arxiv.org/pdf/1701.02440.pdf},
journal = {arXiv:1701.02440 [cs, math, stat]},
author = {Raissi, Maziar and Karniadakis, George Em},
month = jan,
year = {2017},
note = {arXiv: 1701.02440},
keywords = {Computer Science - Learning, Mathematics - Numerical Analysis, Statistics - Machine Learning}
}

6. Oates, C. J., Cockayne, J., & Aykroyd, R. G. (2017). Bayesian Probabilistic Numerical Methods for Industrial Process Monitoring. ArXiv:1707.06107 [Stat]. Retrieved from http://arxiv.org/abs/1707.06107

The use of high-power industrial equipment, such as large-scale mixing equipment or a hydrocyclone for separation of particles in liquid suspension, demands careful monitoring to ensure correct operation. The task of monitoring the liquid suspension can be posed as a time-evolving inverse problem and solved with Bayesian statistical methods. In this paper, we extend Bayesian methods to incorporate statistical models for the error that is incurred in the numerical solution of the physical governing equations. This enables full uncertainty quantification within a principled computation-precision trade-off, in contrast to the over-confident inferences that are obtained when numerical error is ignored. The method is cast with a sequential Monte Carlo framework and an optimised implementation is provided in Python.

@article{oates_bayesian_2017,
title = {Bayesian {Probabilistic} {Numerical} {Methods} for {Industrial} {Process} {Monitoring}},
url = {http://arxiv.org/abs/1707.06107},
urldate = {2017-07-20},
journal = {arXiv:1707.06107 [stat]},
author = {Oates, Chris J. and Cockayne, Jon and Aykroyd, Robert G.},
month = jul,
year = {2017},
note = {arXiv: 1707.06107},
keywords = {Statistics - Applications},
file = {https://arxiv.org/pdf/1707.06107.pdf}
}

7. Cockayne, J., Oates, C., Sullivan, T., & Girolami, M. (2016). Probabilistic Meshless Methods for Partial Differential Equations and Bayesian Inverse Problems. ArXiv, (1605.07811).

This paper develops a class of meshless methods that are well-suited to statistical inverse problems involving partial differential equations (PDEs). The methods discussed in this paper view the forcing term in the PDE as a random field that induces a probability distribution over the residual error of a symmetric collocation method. This construction enables the solution of challenging inverse problems while accounting, in a rigorous way, for the impact of the discretisation of the forward problem. In particular, this confers robustness to failure of meshless methods, with statistical inferences driven to be more conservative in the presence of significant solver error. In addition, (i) a principled learning-theoretic approach to minimise the impact of solver error is developed, and (ii) the challenging setting of inverse problems with a non-linear forward model is considered. The method is applied to parameter inference problems in which non-negligible solver error must be accounted for in order to draw valid statistical conclusions.

@article{2016arXiv160507811C,
author = {{Cockayne}, J. and {Oates}, C. and {Sullivan}, T. and {Girolami}, M.},
title = {Probabilistic Meshless Methods for Partial Differential Equations and {B}ayesian Inverse Problems},
journal = {ArXiv},
issue = {1605.07811},
year = {2016},
month = may,
file = {http://arxiv.org/pdf/1605.07811v1.pdf}
}

8. Owhadi, H., & Zhang, L. (2016). Gamblets for opening the complexity-bottleneck of implicit schemes for hyperbolic and parabolic ODEs/PDEs with rough coefficients. ArXiv e-Prints, (math.NA 1606.07686).

Implicit schemes are popular methods for the integration of time dependent PDEs such as hyperbolic and parabolic PDEs. However the necessity to solve corresponding linear systems at each time step constitutes a complexity bottleneck in their application to PDEs with rough coefficients. We present a generalization of gamblets introduced in arXiv:1503.03467 enabling the resolution of these implicit systems in near-linear complexity and provide rigorous a-priori error bounds on the resulting numerical approximations of hyperbolic and parabolic PDEs. These generalized gamblets induce a multiresolution decomposition of the solution space that is adapted to both the underlying (hyperbolic and parabolic) PDE (and the system of ODEs resulting from space discretization) and to the time-steps of the numerical scheme.

@article{2016arXiv160607686O,
author = {{Owhadi}, H. and {Zhang}, L.},
title = {{Gamblets for opening the complexity-bottleneck of implicit schemes for hyperbolic and parabolic ODEs/PDEs with rough coefficients}},
journal = {ArXiv e-prints},
issue = {math.NA 1606.07686},
year = {2016},
month = jun,
file = {http://arxiv.org/pdf/1606.07686v1.pdf}
}

9. Owhadi, H. (2015). Multigrid with rough coefficients and Multiresolution operator decomposition from Hierarchical Information Games. ArXiv, math.NA(1503.03467).

We introduce a near-linear complexity (geometric and meshless/algebraic) multigrid/multiresolution method for PDEs with rough (L^∞) coefficients with rigorous a-priori accuracy and performance estimates. The method is discovered through a decision/game theory formulation of the problems of (1) identifying restriction and interpolation operators (2) recovering a signal from incomplete measurements based on norm constraints on its image under a linear operator (3) gambling on the value of the solution of the PDE based on a hierarchy of nested measurements of its solution or source term. The resulting elementary gambles form a hierarchy of (deterministic) basis functions of H^1_0(Ω) (gamblets) that (1) are orthogonal across subscales/subbands with respect to the scalar product induced by the energy norm of the PDE (2) enable sparse compression of the solution space in H^1_0(Ω) (3) induce an orthogonal multiresolution operator decomposition. The operating diagram of the multigrid method is that of an inverted pyramid in which gamblets are computed locally (by virtue of their exponential decay), hierarchically (from fine to coarse scales) and the PDE is decomposed into a hierarchy of independent linear systems with uniformly bounded condition numbers. The resulting algorithm is parallelizable both in space (via localization) and in bandwith/subscale (subscales can be computed independently from each other). Although the method is deterministic it has a natural Bayesian interpretation under the measure of probability emerging (as a mixed strategy) from the information game formulation and multiresolution approximations form a martingale with respect to the filtration induced by the hierarchy of nested measurements.

@article{2015arXiv150303467O,
title = {Multigrid with rough coefficients and Multiresolution operator decomposition from Hierarchical Information Games},
author = {Owhadi, Houman},
journal = {ArXiv},
volume = {math.NA},
issue = {1503.03467},
year = {2015},
month = mar,
file = {http://arxiv.org/pdf/1503.03467v4.pdf}
}

10. Owhadi, H. (2015). Bayesian Numerical Homogenization. Multiscale Modeling & Simulation, 13(3), 812–828.

Numerical homogenization, i.e. the finite-dimensional approximation of solution spaces of PDEs with arbitrary rough coefficients, requires the identification of accurate basis elements. These basis elements are oftentimes found after a laborious process of scientific investigation and plain guesswork. Can this identification problem be facilitated? Is there a general recipe/decision framework for guiding the design of basis elements? We suggest that the answer to the above questions could be positive based on the reformulation of numerical homogenization as a Bayesian Inference problem in which a given PDE with rough coefficients (or multi-scale operator) is excited with noise (random right hand side/source term) and one tries to estimate the value of the solution at a given point based on a finite number of observations. We apply this reformulation to the identification of bases for the numerical homogenization of arbitrary integro-differential equations and show that these bases have optimal recovery properties. In particular we show how Rough Polyharmonic Splines can be re-discovered as the optimal solution of a Gaussian filtering problem.

@article{owhadi2015bayesian,
title = {Bayesian Numerical Homogenization},
author = {Owhadi, Houman},
journal = {Multiscale Modeling \& Simulation},
volume = {13},
number = {3},
pages = {812--828},
year = {2015},
publisher = {SIAM},
file = {http://arxiv.org/pdf/1406.6668v2.pdf}
}

11. Conrad, P. R., Girolami, M., Särkkä, S., Stuart, A., & Zygalakis, K. (2015). Probability Measures for Numerical Solutions of Differential Equations. ArXiv:1506.04592 [Stat].

In this paper, we present a formal quantification of epistemic uncertainty induced by numerical solutions of ordinary and partial differential equation models. Numerical solutions of differential equations contain inherent uncertainties due to the finite dimensional approximation of an unknown and implicitly defined function. When statistically analysing models based on differential equations describing physical, or other naturally occurring, phenomena, it is therefore important to explicitly account for the uncertainty introduced by the numerical method. This enables objective determination of its importance relative to other uncertainties, such as those caused by data contaminated with noise or model error induced by missing physical or inadequate descriptors. To this end we show that a wide variety of existing solvers can be randomised, inducing a probability measure over the solutions of such differential equations. These measures exhibit contraction to a Dirac measure around the true unknown solution, where the rates of convergence are consistent with the underlying deterministic numerical method. Ordinary differential equations and elliptic partial differential equations are used to illustrate the approach to quantifying uncertainty in both the statistical analysis of the forward and inverse problems.

@article{conrad2015probability,
title = {Probability {Measures} for {Numerical} {Solutions} of {Differential} {Equations}},
file = {http://arxiv.org/pdf/1506.04592v1.pdf},
urldate = {2015-06-16},
journal = {arXiv:1506.04592 [stat]},
author = {Conrad, Patrick R. and Girolami, Mark and Särkkä, Simo and Stuart, Andrew and Zygalakis, Konstantinos},
month = jun,
year = {2015},
note = {arXiv: 1506.04592},
keywords = {Statistics - Methodology}
}
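A minimal sketch of the randomisation idea, assuming a forward-Euler base solver: perturb each step with Gaussian noise whose magnitude shrinks with the step size, so the spread of an ensemble of solves mimics the discretisation uncertainty. The noise scale and exponent below are illustrative choices, not the paper's; the paper analyses which scalings preserve the convergence rate of the underlying deterministic method.

```python
import numpy as np

def randomised_euler(f, y0, h, n_steps, n_samples=200, scale=0.1, seed=0):
    """Ensemble of forward-Euler solves of y' = f(y), each step perturbed
    by Gaussian noise with standard deviation scale * h**1.5 (illustrative
    calibration: the noise shrinks faster than the O(h) Euler increment)."""
    rng = np.random.default_rng(seed)
    ys = np.full(n_samples, y0, dtype=float)
    for _ in range(n_steps):
        ys = ys + h * f(ys) + scale * h ** 1.5 * rng.standard_normal(n_samples)
    return ys  # endpoint of one sample path per ensemble member

# y' = -y, y(0) = 1: the exact solution at t = 1 is exp(-1).
samples = randomised_euler(lambda y: -y, 1.0, h=0.01, n_steps=100)
```

The ensemble mean stays close to the deterministic Euler solution, while the ensemble standard deviation gives a (crude) probabilistic account of solver error that can be carried into downstream statistical analyses.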

12. Bui-Thanh, T., & Girolami, M. (2014). Solving Large-Scale PDE-constrained Bayesian Inverse Problems with Riemann Manifold Hamiltonian Monte Carlo. ArXiv Pre-Print 1407.1517.

We consider the Riemann manifold Hamiltonian Monte Carlo (RMHMC) method for solving statistical inverse problems governed by partial differential equations (PDEs). The power of the RMHMC method is that it exploits the geometric structure induced by the PDE constraints of the underlying inverse problem. Consequently, each RMHMC posterior sample is almost independent from the others providing statistically efficient Markov chain simulation. We reduce the cost of forming the Fisher information matrix by using a low rank approximation via a randomized singular value decomposition technique. This is efficient since a small number of Hessian-vector products are required. The Hessian-vector product in turn requires only two extra PDE solves using the adjoint technique. The results suggest RMHMC as a highly efficient simulation scheme for sampling from PDE induced posterior measures.

@article{2014arXiv14071517B,
author = {{Bui-Thanh}, T. and {Girolami}, M.},
title = {{Solving Large-Scale PDE-constrained Bayesian Inverse
Problems with Riemann Manifold Hamiltonian Monte Carlo}},
journal = {arXiv pre-print 1407.1517},
keywords = {Mathematics - Statistics Theory},
year = {2014},
month = jul,
}

13. Enßlin, T. A. (2013). Information field dynamics for simulation scheme construction. Phys. Rev. E, 87(1), 013308. https://doi.org/10.1103/PhysRevE.87.013308

Information field dynamics (IFD) is introduced here as a framework to derive numerical schemes for the simulation of physical and other fields without assuming a particular subgrid structure as many schemes do. IFD constructs an ensemble of nonparametric subgrid field configurations from the combination of the data in computer memory, representing constraints on possible field configurations, and prior assumptions on the subgrid field statistics. Each of these field configurations can formally be evolved to a later moment since any differential operator of the dynamics can act on fields living in continuous space. However, these virtually evolved fields need again a representation by data in computer memory. The maximum entropy principle of information theory guides the construction of updated data sets via entropic matching, optimally representing these field configurations at the later time. The field dynamics thereby become represented by a finite set of evolution equations for the data that can be solved numerically. The subgrid dynamics is thereby treated within auxiliary analytic considerations. The resulting scheme acts solely on the data space. It should provide a more accurate description of the physical field dynamics than simulation schemes constructed ad hoc, due to the more rigorous accounting of subgrid physics and the space discretization process. Assimilation of measurement data into an IFD simulation is conceptually straightforward since measurement and simulation data can just be merged. The IFD approach is illustrated using the example of a coarsely discretized representation of a thermally excited classical Klein-Gordon field. This should pave the way towards the construction of schemes for more complex systems like turbulent hydrodynamics.

@article{PhysRevE87013308,
title = {Information field dynamics for simulation scheme construction},
author = {En\ss{}lin, Torsten A.},
journal = {Phys. Rev. E},
volume = {87},
issue = {1},
pages = {013308},
numpages = {17},
year = {2013},
month = jan,
publisher = {American Physical Society},
doi = {10.1103/PhysRevE.87.013308},
}

1. Garrabrant, S., Benson-Tilsen, T., Critch, A., Soares, N., & Taylor, J. (2016). Logical Induction. ArXiv Preprint 1609.03543v3.

This monograph gives a strong theoretical optimality notion for using bounded computational resources to assign accurate probabilities to the outcomes of computations.

We present a computable algorithm that assigns probabilities to every logical statement in a given formal language, and refines those probabilities over time. For instance, if the language is Peano arithmetic, it assigns probabilities to all arithmetical statements, including claims about the twin prime conjecture, the outputs of long-running computations, and its own probabilities. We show that our algorithm, an instance of what we call a logical inductor, satisfies a number of intuitive desiderata, including: (1) it learns to predict patterns of truth and falsehood in logical statements, often long before having the resources to evaluate the statements, so long as the patterns can be written down in polynomial time; (2) it learns to use appropriate statistical summaries to predict sequences of statements whose truth values appear pseudorandom; and (3) it learns to have accurate beliefs about its own current beliefs, in a manner that avoids the standard paradoxes of self-reference. For example, if a given computer program only ever produces outputs in a certain range, a logical inductor learns this fact in a timely manner; and if late digits in the decimal expansion of π are difficult to predict, then a logical inductor learns to assign ≈10% probability to "the nth digit of π is a 7" for large n. Logical inductors also learn to trust their future beliefs more than their current beliefs, and their beliefs are coherent in the limit (whenever ϕ⟹ψ, ℙ∞(ϕ)≤ℙ∞(ψ), and so on); and logical inductors strictly dominate the universal semimeasure in the limit. These properties and many others all follow from a single logical induction criterion, which is motivated by a series of stock trading analogies. 
Roughly speaking, each logical sentence ϕ is associated with a stock that is worth $1 per share if ϕ is true and nothing otherwise, and we interpret the belief-state of a logically uncertain reasoner as a set of market prices, where ℙn(ϕ) = 50% means that on day n, shares of ϕ may be bought or sold from the reasoner for 50¢. The logical induction criterion says (very roughly) that there should not be any polynomial-time computable trading strategy with finite risk tolerance that earns unbounded profits in that market over time. This criterion bears strong resemblance to the “no Dutch book” criteria that support both expected utility theory (von Neumann and Morgenstern 1944) and Bayesian probability theory (Ramsey 1931; de Finetti 1937).

@article{garrabrant,
author = {Garrabrant, Scott and Benson-Tilsen, Tsvi and Critch, Andrew and Soares, Nate and Taylor, Jessica},
title = {Logical Induction},
journal = {arXiv preprint 1609.03543v3},
year = {2016},
file = {https://arxiv.org/pdf/1609.03543v3.pdf},
notes = {This monograph gives a strong theoretical optimality notion for using bounded computational resources to assign accurate probabilities to the outcomes of computations.}
}

2. Garnett, R., Osborne, M., & Hennig, P. (2014). Active Learning of Linear Embeddings for Gaussian Processes. In N. L. Zhang & J. Tian (Eds.), Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence (pp. 230–239). AUAI Press. Retrieved from http://auai.org/uai2014/proceedings/individuals/152.pdf

We propose an active learning method for discovering low-dimensional structure in high-dimensional Gaussian process (GP) tasks. Such problems are increasingly frequent and important, but have hitherto presented severe practical difficulties. We further introduce a novel technique for approximately marginalizing GP hyperparameters, yielding marginal predictions robust to hyperparameter misspecification. Our method offers an efficient means of performing GP regression, quadrature, or Bayesian optimization in high-dimensional spaces.

@inproceedings{GarnettOH2013,
title = {Active Learning of Linear Embeddings for Gaussian Processes},
author = {Garnett, R. and Osborne, M. and Hennig, P.},
booktitle = {Proceedings of the 30th Conference on Uncertainty in
Artificial Intelligence},
editor = {Zhang, NL and Tian, J},
publisher = {AUAI Press},
pages = {230--239},
year = {2014},
url = {http://auai.org/uai2014/proceedings/individuals/152.pdf},
url2 = {https://github.com/rmgarnett/mgp},
department = {Department Sch{\"o}lkopf},
file = {http://auai.org/uai2014/proceedings/individuals/152.pdf},
code = {https://github.com/rmgarnett/mgp}
}
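The structural assumption the method exploits can be stated in a single kernel: the function varies only along a low-dimensional linear embedding, k(x, x') = k_SE(Rx, Rx') with R a d-by-D matrix and d ≪ D. A minimal numpy sketch of that kernel follows; the dimensions, lengthscale, and random embedding R are illustrative, and this shows only the model structure, not the paper's active-learning or hyperparameter-marginalisation machinery.

```python
import numpy as np

D, d = 20, 2  # illustrative: 20-dimensional inputs, 2-dimensional embedding
rng = np.random.default_rng(0)
R = rng.standard_normal((d, D)) / np.sqrt(D)

def k_embedded(X1, X2, R, ls=1.0):
    """Squared-exponential kernel applied to the linear projections R x."""
    Z1, Z2 = X1 @ R.T, X2 @ R.T  # project into the d-dimensional subspace
    sq = ((Z1[:, None, :] - Z2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / ls ** 2)

X = rng.standard_normal((30, D))
K = k_embedded(X, X, R)

# Shifting inputs along the null space of R leaves the kernel unchanged --
# this invariance is exactly the low-dimensional structure being exploited.
null_basis = np.linalg.svd(R)[2][d:]            # (D - d) null-space directions
K_shifted = k_embedded(X + 5.0 * null_basis[0], X, R)
```

Because the GP only "sees" the d-dimensional projection, inference effectively happens in d dimensions, which is what makes regression, quadrature, or Bayesian optimization tractable in the high-dimensional ambient space.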

3. O’Hagan, A. (2013). Polynomial Chaos: A Tutorial and Critique from a Statistician’s Perspective. University of Sheffield, UK.
@techreport{ohagan13-polyn-chaos,
author = {O'Hagan, Anthony},
title = {Polynomial Chaos: A Tutorial and Critique from a
Statistician's Perspective},
institution = {University of Sheffield, UK},
year = {2013},
month = may
}

4. Hennig, P., & Schuler, C. J. (2012). Entropy Search for Information-Efficient Global Optimization. Journal of Machine Learning Research, 13, 1809–1837.

Contemporary global optimization algorithms are based on local measures of utility, rather than a probability measure over location and value of the optimum. They thus attempt to collect low function values, not to learn about the optimum. The reason for the absence of probabilistic global optimizers is that the corresponding inference problem is intractable in several ways. This paper develops desiderata for probabilistic optimization algorithms, then presents a concrete algorithm which addresses each of the computational intractabilities with a sequence of approximations and explicitly addresses the decision problem of maximizing information gain from each evaluation.

@article{HennigS2012,
title = {Entropy Search for Information-Efficient Global Optimization},
author = {Hennig, P. and Schuler, CJ.},
month = jun,
volume = {13},
pages = {1809--1837},
journal = {Journal of Machine Learning Research},
year = {2012},
file = {http://jmlr.csail.mit.edu/papers/volume13/hennig12a/hennig12a.pdf},
code = {http://probabilistic-optimization.org/Global.html}
}
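The central object Entropy Search reasons about — a probability distribution p_min over the *location* of the minimum — can be estimated crudely by Monte Carlo from a GP posterior on a grid. The paper instead constructs an analytic (EP-based) approximation of p_min and then maximizes expected information gain about it; the kernel, observations, and grid below are illustrative assumptions for a sketch of the object itself.

```python
import numpy as np

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 100)  # discretised input domain

def k(a, b, ls=0.15):
    """Squared-exponential kernel (illustrative lengthscale)."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

# Three noisy observations of an unknown objective (made-up data).
X = np.array([0.1, 0.5, 0.9])
y = np.array([0.8, -0.2, 0.6])
noise = 1e-4

# Standard GP posterior mean and covariance on the grid.
Kxx = k(X, X) + noise * np.eye(3)
Ks = k(xs, X)
mean = Ks @ np.linalg.solve(Kxx, y)
cov = k(xs, xs) - Ks @ np.linalg.solve(Kxx, Ks.T)

# Draw posterior functions and histogram where each attains its minimum:
# the resulting p_min is a Monte Carlo estimate of the belief over the
# location of the optimum.
Lc = np.linalg.cholesky(cov + 1e-6 * np.eye(100))
draws = mean[:, None] + Lc @ rng.standard_normal((100, 2000))
p_min = np.bincount(draws.argmin(axis=0), minlength=100) / 2000.0
```

Entropy Search then chooses the next evaluation point to shrink the entropy of this distribution, rather than to greedily collect low function values.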

5. Halko, N., Martinsson, P.-G., & Tropp, J. A. (2011). Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2), 217–288.
@article{halko2011finding,
title = {Finding structure with randomness: Probabilistic algorithms
for constructing approximate matrix decompositions},
author = {Halko, Nathan and Martinsson, Per-Gunnar and Tropp, Joel A},
journal = {SIAM review},
volume = {53},
number = {2},
pages = {217--288},
year = {2011},
publisher = {SIAM}
}
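The workhorse of this paper is easy to state concretely: sketch the range of A with a random Gaussian test matrix, orthonormalise the result, then run a small deterministic SVD on the projected matrix. The helper below is a standard sketch of that recipe; the oversampling amount is a commonly used default rather than anything prescribed here.

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, seed=0):
    """Randomised range-finder + SVD: O(mn(rank+p)) instead of a full SVD."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(A @ Omega)  # orthonormal basis for range(A @ Omega)
    # Small deterministic SVD of the projected matrix, then lift back.
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

# Exactly rank-5 test matrix, so the rank-5 approximation is near-exact.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 150))
U, s, Vt = randomized_svd(A, rank=5)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A)
```

For matrices with decaying but nonzero spectra the error is instead controlled, with high probability, by the first neglected singular value — the probabilistic guarantee that is the subject of the paper.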

6. Hennig, P., Stern, D., & Graepel, T. (2010). Coherent Inference on Optimal Play in Game Trees. In Y. W. Teh & M. Titterington (Eds.), Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (Vol. 9, pp. 326–333). Chia Laguna Resort, Sardinia, Italy: PMLR.

Round-based games are an instance of discrete planning problems. Some of the best contemporary game tree search algorithms use random roll-outs as data. Relying on a good policy, they learn on-policy values by propagating information upwards in the tree, but not between sibling nodes. Here, we present a generative model and a corresponding approximate message passing scheme for inference on the optimal, off-policy value of nodes in smooth AND/OR trees, given random roll-outs. The crucial insight is that the distribution of values in game trees