https://doi.org/10.1140/epjc/s10052-025-13885-9
Regular Article - Theoretical Physics
Bayesian solution to the inverse problem and its relation to Backus–Gilbert methods
1 Higgs Centre for Theoretical Physics, School of Physics and Astronomy, The University of Edinburgh, Peter Guthrie Tait Road, EH9 3FD, Edinburgh, UK
2 Aix Marseille Univ, Université de Toulon, CNRS, CPT, Marseille, France
3 Department of Physics, University of Turin and INFN, Turin, Via Pietro Giuria 1, 10125, Turin, Italy
4 Department of Physics and Helsinki Institute of Physics, University of Helsinki, PL 64, 00014, Helsinki, Finland
5 University and INFN of Roma Tor Vergata, Via della Ricerca Scientifica 1, 00133, Rome, Italy
a e-mail: alessandro.lupo@cpt.univ-mrs.fr (corresponding author)
Received: 22 November 2024 / Accepted: 31 January 2025 / Published online: 14 February 2025
The problem of obtaining spectral densities from lattice data has received considerable attention due to its importance for our understanding of scattering processes in Quantum Field Theory, with applications both in the Standard Model and beyond. The problem is notoriously difficult, as it amounts to performing an inverse Laplace transform starting from a finite set of noisy data. Several strategies are now available to tackle this inverse problem. In this work, we discuss how Backus–Gilbert methods, in particular the variation introduced by some of the authors, relate to the solution based on Gaussian Processes. Both methods compute spectral densities smeared with a kernel whose features depend on the details of the algorithm. We discuss this smearing kernel and show how Backus–Gilbert methods can be understood in a Bayesian fashion. As a consequence of this correspondence, the algorithmic parameters of Backus–Gilbert methods can be interpreted as hyperparameters in the Bayesian language, which can be chosen by maximising a likelihood function. By performing a comparative study on lattice data, we show that, when both frameworks are set to compute the same quantity, the results are generally in agreement. Finally, we adopt a strategy to systematically validate both methodologies against pseudo-data, using covariance matrices measured from lattice simulations. In our setup, we find that the determination of the algorithmic parameters based on a stability analysis yields results that are, on average, more conservative than those based on the maximisation of a likelihood function.
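To make the inverse problem concrete: lattice correlators take the form $C(t) = \int_0^\infty \mathrm{d}E\, e^{-tE} \rho(E)$, and Backus–Gilbert-type methods estimate a smeared spectral density $\hat\rho(E_\star) = \sum_t g_t\, C(t)$, choosing the coefficients $g_t$ to balance fidelity of the reconstructed kernel against amplification of statistical noise. The following Python sketch illustrates this construction under stated assumptions: a Gaussian target kernel and a single trade-off parameter `lam`, which plays the role of the Bayesian hyperparameter mentioned above. The function name and interface are hypothetical and do not reproduce the authors' code.

```python
import numpy as np
from scipy.integrate import quad

def hlt_coefficients(tmax, Estar, sigma, cov, lam):
    """Minimal sketch of a Backus-Gilbert-type solver (hypothetical
    interface).  Finds coefficients g_t such that sum_t g_t exp(-t E)
    approximates a Gaussian smearing kernel centred at Estar, with a
    noise penalty weighted by the trade-off parameter lam."""
    ts = np.arange(1, tmax + 1)

    # A_{tr} = \int_0^inf dE e^{-(t+r)E} = 1/(t+r):
    # overlap matrix of the exponential basis functions.
    A = 1.0 / (ts[:, None] + ts[None, :])

    # Gaussian target kernel K(E; Estar, sigma).
    def target(E):
        return np.exp(-0.5 * ((E - Estar) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

    # f_t = \int_0^inf dE e^{-tE} K(E; Estar, sigma), computed numerically.
    f = np.array([quad(lambda E, t=t: np.exp(-t * E) * target(E), 0.0, np.inf)[0]
                  for t in ts])

    # Regularised linear system: (A + lam * Cov) g = f.
    # Larger lam suppresses noise at the cost of kernel fidelity.
    return np.linalg.solve(A + lam * cov, f)

# Usage: given correlator data C(ts) with covariance cov, the smeared
# spectral density estimate is rho_hat = g @ C(ts).  Scanning lam (a
# stability analysis) or maximising a likelihood in lam are the two
# selection strategies compared in this work.
```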
© The Author(s) 2025
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Funded by SCOAP3.