Functional Priors for Bayesian Neural Networks through Wasserstein Distance Minimization to Gaussian Processes

Published in Symposium on Advances in Approximate Bayesian Inference, 2021

Tran, Ba-Hien; Milios, Dimitrios; Rossi, Simone; Filippone, Maurizio. Functional Priors for Bayesian Neural Networks through Wasserstein Distance Minimization to Gaussian Processes. 3rd Symposium on Advances in Approximate Bayesian Inference, 2021.

The Bayesian treatment of neural networks requires placing a prior distribution over the weight and bias parameters of the network. Because the model is non-linear, any distribution over the parameters has a hard-to-predict effect on the distribution of the function output. Gaussian processes, by contrast, offer a rigorous framework for defining prior distributions directly over the space of functions. We propose to impose such functional priors on well-established neural network architectures by minimising the Wasserstein distance between samples of the two stochastic processes. Early experimental results demonstrate the potential of functional priors for Bayesian neural networks.
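The core idea, matching a network's prior over functions to a GP prior by minimising a Wasserstein distance between samples, can be sketched in a toy form. The sketch below is not the paper's method: it compares empirical marginals of the two processes at a fixed set of inputs, uses a simple one-hidden-layer tanh network, and replaces gradient-based minimisation with a grid search over a single weight-scale hyperparameter. All function names and architectural choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed input locations at which the two priors over functions are compared.
X = np.linspace(-2.0, 2.0, 20)
N_SAMPLES = 2000

def gp_samples(n, lengthscale=1.0, variance=1.0):
    """Draw n function samples at X from a zero-mean GP with an RBF kernel."""
    d = X[:, None] - X[None, :]
    K = variance * np.exp(-0.5 * (d / lengthscale) ** 2) + 1e-8 * np.eye(len(X))
    return rng.multivariate_normal(np.zeros(len(X)), K, size=n)

def bnn_prior_samples(n, sigma_w, hidden=50):
    """Draw n function samples at X from a one-hidden-layer tanh network with
    i.i.d. N(0, sigma_w^2) weights (output weights scaled by width)."""
    W1 = rng.normal(0.0, sigma_w, size=(n, hidden, 1))
    b1 = rng.normal(0.0, 1.0, size=(n, hidden, 1))
    W2 = rng.normal(0.0, sigma_w / np.sqrt(hidden), size=(n, 1, hidden))
    h = np.tanh(W1 @ X[None, :] + b1)   # shape (n, hidden, len(X))
    return (W2 @ h)[:, 0, :]            # shape (n, len(X))

def marginal_w1(A, B):
    """Average 1-Wasserstein distance between the empirical marginals of two
    equal-sized sample sets, computed one input location at a time by sorting."""
    return np.mean(np.abs(np.sort(A, axis=0) - np.sort(B, axis=0)))

gp = gp_samples(N_SAMPLES)

# Crude stand-in for the paper's gradient-based minimisation:
# grid search over the prior scale of the weights.
grid = [0.25, 0.5, 1.0, 2.0, 4.0, 8.0]
dists = {s: marginal_w1(bnn_prior_samples(N_SAMPLES, s), gp) for s in grid}
best_sigma = min(dists, key=dists.get)
print(f"best sigma_w = {best_sigma}, W1 distance = {dists[best_sigma]:.3f}")
```

Matching only the per-input marginals ignores correlations across inputs, and grid search over one scalar is far weaker than optimising all prior hyperparameters, but the example shows the shape of the objective: a sample-based distance between the network's prior over functions and the target GP.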

Download paper here