In this webinar, Tim Pearce presents the Alliance December 2018 paper on Bayesian Neural Network Ensembles.

Ensembles of neural networks (NNs) have long been used to estimate predictive uncertainty: a small number of NNs are trained from different initialisations, and sometimes on differing versions of the dataset, and the variance of the ensemble's predictions is interpreted as its epistemic uncertainty. The appeal of ensembling is that it is built from regular NNs, which makes it both scalable and easy to implement, and NN ensembles have continued to achieve strong empirical results in recent years. The departure from Bayesian methodology is a concern, however, since the Bayesian framework provides a principled, widely accepted approach to handling uncertainty.

The paper considers a method that produces Bayesian behaviour in NN ensembles by leveraging randomised MAP sampling. It departs only slightly from the usual handling of NNs: each member's parameters are regularised around values drawn from a prior distribution rather than around zero. The authors show that, for NNs of sufficient width, each ensemble member produces a sample from the posterior predictive distribution. Qualitative and benchmarking experiments were encouraging, and ongoing work considers extending the presented theory to classification tasks as well as to other architectures such as convolutional NNs.
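To make the recipe concrete, here is a minimal sketch (not the authors' code) of an anchored ensemble in PyTorch: each member is an ordinary NN whose parameters are regularised towards anchor values drawn from a Gaussian prior, and the spread of the members' predictions is read as epistemic uncertainty. The network width, prior and noise scales, number of members, and toy data below are illustrative assumptions.

```python
# Minimal sketch of randomised MAP sampling / anchored ensembling (illustrative only).
import torch
import torch.nn as nn

def make_net(width=100):
    # A single-hidden-layer NN; the theory concerns sufficiently wide networks.
    return nn.Sequential(nn.Linear(1, width), nn.ReLU(), nn.Linear(width, 1))

def train_anchored(net, x, y, prior_std=1.0, noise_std=0.1, epochs=2000, lr=1e-2):
    # Draw anchor values from the (assumed Gaussian) prior, one per parameter tensor.
    anchors = [prior_std * torch.randn_like(p) for p in net.parameters()]
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        mse = ((net(x) - y) ** 2).sum()
        # Regularise each parameter towards its anchor rather than towards zero.
        reg = sum(((p - a) ** 2).sum() for p, a in zip(net.parameters(), anchors))
        loss = mse / (2 * noise_std ** 2) + reg / (2 * prior_std ** 2)
        loss.backward()
        opt.step()
    return net

# Toy 1D regression data (purely illustrative).
torch.manual_seed(0)
x = torch.linspace(-3, 3, 40).unsqueeze(1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)

# Train a small ensemble; each member gets its own anchor draw.
ensemble = [train_anchored(make_net(), x, y) for _ in range(5)]

# The ensemble mean is the prediction; the variance across members is the
# epistemic uncertainty estimate.
x_test = torch.linspace(-5, 5, 200).unsqueeze(1)
with torch.no_grad():
    preds = torch.stack([net(x_test) for net in ensemble])
mean, var = preds.mean(dim=0), preds.var(dim=0)
```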