Authors
John Langford, John Shawe-Taylor
Publication date
2002
Description
2.2 Discussion. Theorem 2.1 shows that when a margin exists, it is always possible to find a "posterior" distribution (in the style of [5]) which introduces only a small amount of additional training error rate. The true error bound for this stochastization of the large-margin classifier does not depend on the dimensionality except via the margin. Since the Gaussian tail decreases exponentially, the value of $\bar{F}^{-1}(\epsilon)$ is not very large for any reasonable value of $\epsilon$. In particular, at $\bar{F}(3)$ we have $\epsilon \le 0.01$. Thus, for the purpose of understanding, we can replace $\bar{F}^{-1}(\epsilon)$ with 3 and consider $\epsilon \approx 0$. One useful approximation for $\bar{F}(x)$ with large $x$ is:
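As a quick numerical check (illustrative only, not taken from the paper), the exact Gaussian tail $\bar{F}(x) = P(Z > x)$ for $Z \sim N(0,1)$ can be compared against the standard large-$x$ approximation $e^{-x^2/2}/(x\sqrt{2\pi})$; at $x = 3$ the exact tail is well below the 0.01 figure quoted above. The function names here are hypothetical, chosen for illustration.

```python
import math

def gaussian_tail(x: float) -> float:
    """Exact Gaussian tail F_bar(x) = P(Z > x) for Z ~ N(0, 1), via erfc."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def tail_approx(x: float) -> float:
    """Standard large-x tail approximation: exp(-x^2/2) / (x * sqrt(2*pi)).

    For x > 0 this is also a strict upper bound on the exact tail.
    """
    return math.exp(-x * x / 2.0) / (x * math.sqrt(2.0 * math.pi))

if __name__ == "__main__":
    # Exact tail at x = 3 is about 0.00135, comfortably below 0.01.
    print(f"exact  F_bar(3) = {gaussian_tail(3):.6f}")
    print(f"approx F_bar(3) = {tail_approx(3):.6f}")
```

The approximation overshoots slightly at $x = 3$ (about 0.00148 vs. 0.00135) and tightens as $x$ grows, which is why it is only used in the large-$x$ regime.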