Authors
Petroula Tsampouka, John Shawe-Taylor
Publication date
2007
Description
We present a family of incremental Perceptron-like algorithms (PLAs) with margin in which both the "effective" learning rate, defined as the ratio of the learning rate to the length of the weight vector, and the misclassification condition are entirely controlled by rules involving (powers of) the number of mistakes. We examine the convergence of such algorithms in a finite number of steps and show that under some rather mild conditions there exists a limit of the parameters involved in which convergence leads to classification with maximum margin. An experimental comparison of algorithms belonging to this family with other large margin PLAs and decomposition SVMs is also presented.
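To make the idea concrete, below is a minimal illustrative sketch (not the authors' exact algorithm) of a margin Perceptron-like algorithm in which both the effective learning rate (the learning rate divided by the length of the weight vector) and the misclassification condition are driven by the current mistake count t. The specific power-law rules used here, an effective learning rate proportional to 1/t**eps and a margin threshold proportional to 1/t**delta, as well as the function name and parameters, are assumptions chosen for illustration only.

import numpy as np

def mistake_controlled_pla(X, y, epochs=100, eta0=1.0, beta0=0.5,
                           eps=0.5, delta=0.5):
    """Illustrative margin perceptron whose effective learning rate
    and misclassification condition are controlled by powers of the
    mistake count t.  The power laws 1/t**eps and 1/t**delta are
    assumptions for illustration, not necessarily the rules analysed
    in the paper."""
    n, d = X.shape
    R = np.max(np.linalg.norm(X, axis=1))   # radius of the data
    w = np.zeros(d)
    t = 0                                    # number of mistakes so far
    for _ in range(epochs):
        updated = False
        for i in range(n):
            norm_w = np.linalg.norm(w)
            # mistake-count-controlled margin threshold
            beta_t = beta0 / (t + 1) ** delta
            # misclassification condition: functional margin at most
            # beta_t * ||w|| * R (reduces to a plain mistake when w = 0)
            if y[i] * (w @ X[i]) <= beta_t * norm_w * R:
                t += 1
                # effective learning rate eta_t / ||w|| scales as 1/t**eps
                eta_t = eta0 * max(norm_w, 1.0) / t ** eps
                w += eta_t * y[i] * X[i]
                updated = True
        if not updated:      # no update in a full pass: converged
            break
    return w, t

A quick usage example on linearly separable toy data:

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X @ np.array([1.0, -2.0]) > 0, 1, -1)
w, mistakes = mistake_controlled_pla(X, y)
print(mistakes, w / np.linalg.norm(w))

Shrinking the margin threshold and the effective learning rate with the mistake count is what allows such algorithms, in a suitable limit of the parameters, to approach classification with maximum margin, as described in the abstract above.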