Submitted by Administrator on Wed, 19/12/2018 - 23:10
December 2018 Paper on 'Bayesian Neural Network Ensembles' by Tim Pearce, Mohamed Zaki and Andy Neely
Ensembles of neural networks (NNs) have long been used to estimate predictive uncertainty: a small number of NNs are trained from different initialisations, and sometimes on different versions of the dataset, and the variance of the ensemble's predictions is interpreted as its epistemic uncertainty. The appeal of ensembling stems from its being a collection of regular NNs, which makes it both scalable and easy to implement, and NN ensembles have continued to achieve strong empirical results in recent years. However, the departure from Bayesian methodology is a concern, since the Bayesian framework provides a principled, widely accepted approach to handling uncertainty.

This paper considers a method to produce Bayesian behaviour in NN ensembles by leveraging randomised MAP sampling. It departs only slightly from the usual handling of NNs: each member's parameters are regularised around values drawn from a prior distribution. We show that, for NNs of sufficient width, each member then produces a sample from the posterior predictive distribution. Qualitative and benchmarking experiments were encouraging. Ongoing work extends the presented theory to classification tasks, as well as to other architectures such as convolutional NNs.
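To give a flavour of the idea, here is a minimal numpy sketch of randomised MAP sampling in the linear-Gaussian setting, where the scheme can be made exact: each ensemble member minimises the usual fitting loss plus a regulariser anchored at a draw from the prior (here with the classic data-perturbation variant, so the MAP of each perturbed objective is an exact posterior sample). The toy data, variable names, and hyperparameters are our own illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression data (illustrative only, not from the paper)
sigma_n, sigma_p = 0.3, 1.0                                  # noise and prior std devs
X = np.column_stack([rng.uniform(-1, 1, 20), np.ones(20)])   # slope + bias features
y = X @ np.array([1.5, -0.5]) + sigma_n * rng.normal(size=20)

# Exact Bayesian posterior N(mu, S) for reference
A = X.T @ X / sigma_n**2 + np.eye(2) / sigma_p**2            # posterior precision
S = np.linalg.inv(A)
mu = S @ X.T @ y / sigma_n**2

def randomised_map_sample():
    """One ensemble member: the MAP of a randomly perturbed objective,
    with the regulariser anchored at a value drawn from the prior."""
    w_anc = sigma_p * rng.normal(size=2)      # anchor ~ prior
    eps = sigma_n * rng.normal(size=20)       # data perturbation (exactness in linear case)
    # argmin_w ||y + eps - X w||^2 / sigma_n^2 + ||w - w_anc||^2 / sigma_p^2
    return np.linalg.solve(A, X.T @ (y + eps) / sigma_n**2 + w_anc / sigma_p**2)

ensemble = np.array([randomised_map_sample() for _ in range(2000)])

# In this linear-Gaussian case, the ensemble reproduces the exact posterior
print(np.allclose(ensemble.mean(axis=0), mu, atol=0.05))
print(np.allclose(np.cov(ensemble.T), S, atol=0.02))
```

Each member is an ordinary regularised regression fit, which is what makes the approach attractive for NNs: the same recipe, with the L2 penalty centred on a prior draw instead of zero, needs only standard training code.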