Authors
María Pérez-Ortiz,
Omar Rivasplata,
John Shawe-Taylor
Publication date
2020
Description
The result of training a probabilistic neural network is a probability distribution over network weights. This learnt distribution is the basis of a prediction scheme, e.g., building a stochastic predictor or integrating the predictions over all possible parameter settings. In this paper we experiment with training probabilistic neural networks from a PAC-Bayesian approach. We use the name PAC-Bayes with Backprop (PBB) for the family of (probabilistic) neural network training methods derived from PAC-Bayes bounds and optimized through stochastic gradient descent. We show that the methods studied here are promising candidates for self-certified learning, achieving state-of-the-art test performance on several data sets while obtaining reasonably tight certificates on the risk on unseen data, without the need for data-splitting protocols (for both testing and model selection).
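To make the idea of a PAC-Bayes-derived training objective concrete, the following is a minimal sketch, not the authors' released code, assuming a diagonal Gaussian posterior over weights, a fixed zero-mean isotropic Gaussian prior, and a simplified McAllester-style bound (empirical loss plus a KL-based complexity term) as the objective minimized by SGD. Layer sizes, hyperparameters, and the random data are purely illustrative.

```python
# Hedged sketch: training a probabilistic neural network by minimizing a
# simplified PAC-Bayes-style surrogate objective with SGD (PyTorch).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProbLinear(nn.Module):
    """Linear layer with a diagonal Gaussian posterior N(mu, sigma^2) per weight."""
    def __init__(self, in_features, out_features, prior_std=0.1):
        super().__init__()
        self.w_mu = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.w_rho = nn.Parameter(torch.full((out_features, in_features), -5.0))
        self.b_mu = nn.Parameter(torch.zeros(out_features))
        self.b_rho = nn.Parameter(torch.full((out_features,), -5.0))
        self.prior_std = prior_std  # illustrative prior scale

    def forward(self, x):
        # Reparameterization: sample weights while keeping gradients w.r.t. mu, rho.
        w_sigma = F.softplus(self.w_rho)
        b_sigma = F.softplus(self.b_rho)
        w = self.w_mu + w_sigma * torch.randn_like(w_sigma)
        b = self.b_mu + b_sigma * torch.randn_like(b_sigma)
        return F.linear(x, w, b)

    def kl(self):
        # KL(posterior || prior) for diagonal Gaussian posterior and
        # zero-mean isotropic Gaussian prior with std `prior_std`.
        kl = 0.0
        for mu, rho in ((self.w_mu, self.w_rho), (self.b_mu, self.b_rho)):
            sigma = F.softplus(rho)
            kl = kl + (torch.log(self.prior_std / sigma)
                       + (sigma ** 2 + mu ** 2) / (2 * self.prior_std ** 2)
                       - 0.5).sum()
        return kl

def pac_bayes_objective(emp_loss, kl, n, delta=0.025):
    # Simplified surrogate of a McAllester-style PAC-Bayes bound:
    # empirical loss + sqrt((KL + log(2*sqrt(n)/delta)) / (2n)).
    return emp_loss + torch.sqrt((kl + math.log(2 * math.sqrt(n) / delta)) / (2 * n))

# Usage sketch on random data (stand-in for a real dataset).
net = nn.Sequential(ProbLinear(784, 100), nn.ReLU(), ProbLinear(100, 10))
opt = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
x, y = torch.randn(256, 784), torch.randint(0, 10, (256,))
n = 60000  # size of the training set the certificate would refer to
for step in range(100):
    opt.zero_grad()
    emp_loss = F.cross_entropy(net(x), y)       # empirical risk (surrogate loss)
    kl = sum(m.kl() for m in net.modules() if isinstance(m, ProbLinear))
    bound = pac_bayes_objective(emp_loss, kl, n)  # bound-shaped training objective
    bound.backward()
    opt.step()
```

The design choice this sketch illustrates is that the same quantity serves as both the training objective and (after evaluating it on the bounded 0-1 loss) the basis of a risk certificate, which is what removes the need for a held-out test split in the self-certified setting described above.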