Authors
Amiran Ambroladze, John Shawe-Taylor
Publication date
2004
Description
The fundamental theorem of learning asserts that if the VC dimension of a hypothesis class is finite, then the probability that the training set will not mislead us is high, independently of the distribution generating the set. This means that for a typical training set, any hypothesis consistent with the set generalizes well (its generalization error is small). Here one counts even those training sets for which no hypothesis is consistent with the given training set at all. Such training sets do not mislead us, but at the same time they do not lead anywhere either. In this work we investigate how large the portion of training sets is for which consistent hypotheses exist and every such hypothesis generalizes well. Our main result is that this portion is small, and it does not help if we drop the requirement that the bounds be distribution free.
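For context, a standard form of the distribution-free bound behind the fundamental theorem (a common textbook statement, not necessarily the exact constants used in this paper) is sketched below: for a hypothesis class H with finite VC dimension d and a sample of size m, with probability at least 1 - delta over the draw of the training set, every hypothesis consistent with the sample has small generalization error.

% A standard consistent-hypothesis VC bound (assumed textbook form, not
% quoted from the paper): with probability at least 1 - \delta over the
% choice of the m training examples, every h \in H consistent with the
% sample satisfies
\[
  \operatorname{err}(h) \;\le\; \frac{2}{m}\left( d \log_2 \frac{2em}{d} + \log_2 \frac{2}{\delta} \right),
\]
% where d is the VC dimension of H. The bound holds for every
% distribution generating the sample, which is the distribution-free
% property referred to in the description above.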