Authors
John Shawe-Taylor
Publication date
1995
Publisher
IEEE
Description
This paper reviews some of the recent results in applying the theory of Probably Approximately Correct (PAC) learning to feedforward neural networks with continuous activation functions. Despite the best-known upper bound on the VC dimension of sigmoid networks being O((WN)²), for W parameters and N computational nodes, it is shown that the asymptotic bound on the sample size required for learning sigmoid networks is better than would be expected from a naive use of the VC dimension result. We propose a way of using Boolean circuits to perform real-valued computation in a way that naturally extends their Boolean functionality. The functionality of multiple fan-in threshold gates in this model is shown to mimic that of a hardware implementation of continuous neural networks. The sample sizes obtained for these networks are significantly lower than those obtained for sigmoidal networks.
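For context, the "naive use of the VC dimension result" refers to the standard PAC sample-size bound in terms of the VC dimension d of the hypothesis class. This formula is standard PAC theory, not a result of this paper; in LaTeX notation it reads:

m(\epsilon, \delta) = O\!\left( \frac{1}{\epsilon} \left( d \log\frac{1}{\epsilon} + \log\frac{1}{\delta} \right) \right)

Substituting d = O((WN)²) into this bound gives a sample-size estimate roughly quadratic in WN; the abstract's claim is that the actual asymptotic sample size needed for learning sigmoid networks grows more slowly than this substitution would suggest.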