Authors
Jaz Kandola, John Shawe-Taylor, Nello Cristianini
Publication date
2002
Description
Alignment has recently been proposed as a method for measuring the degree of agreement between a kernel and a learning task (Cristianini et al., 2001). Previous approaches to optimizing kernel alignment have required an eigendecomposition of the kernel matrix, which can be computationally prohibitive, especially for large kernel matrices. In this paper we propose a general method for optimizing alignment over a linear combination of kernels. We apply the approach to give both transductive and inductive algorithms based on the incomplete Cholesky factorization of the kernel matrix. The incomplete Cholesky factorization is equivalent to performing a Gram-Schmidt orthogonalization of the training points in the feature space. The alignment optimization method adapts the feature space to increase its training-set alignment. Regularization is required to ensure that this alignment is also retained on the test set. Both theoretical and experimental evidence is given to show that improving the alignment leads to a reduction in the generalization error of standard classifiers.
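The two quantities the abstract relies on can be sketched in a few lines of NumPy. This is not the authors' code: it is a minimal illustration of (a) the empirical alignment A(K1, K2) = ⟨K1, K2⟩_F / (‖K1‖_F ‖K2‖_F) from Cristianini et al. (2001), evaluated between a kernel matrix K and the ideal label kernel yy^T, and (b) a pivoted incomplete Cholesky factorization K ≈ GG^T, which performs the Gram-Schmidt orthogonalization in feature space mentioned above. The data, labels, and rank parameter are illustrative assumptions.

```python
import numpy as np

def alignment(K1, K2):
    """Empirical alignment: Frobenius inner product of K1 and K2,
    normalized by their Frobenius norms (Cristianini et al., 2001)."""
    return np.sum(K1 * K2) / (np.linalg.norm(K1) * np.linalg.norm(K2))

def incomplete_cholesky(K, rank, tol=1e-8):
    """Pivoted incomplete Cholesky of a PSD matrix K, returning G
    with K ~= G @ G.T. Each column is one Gram-Schmidt step in
    feature space; pivoting picks the point with largest residual."""
    n = K.shape[0]
    G = np.zeros((n, rank))
    d = np.diag(K).astype(float).copy()  # residual diagonal
    for j in range(rank):
        i = int(np.argmax(d))            # pivot on largest residual
        if d[i] < tol:                   # residual exhausted early
            return G[:, :j]
        G[:, j] = (K[:, i] - G @ G[i, :]) / np.sqrt(d[i])
        d -= G[:, j] ** 2
    return G

# Illustrative data: 40 points in 5 dimensions, labels from the first
# coordinate (hypothetical task, not from the paper's experiments).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = np.sign(X[:, 0])
Y = np.outer(y, y)          # ideal "label kernel" yy^T
K = X @ X.T                 # linear kernel matrix

a = alignment(K, Y)         # agreement between kernel and task
G = incomplete_cholesky(K, rank=5)
```

Since K here has rank 5, the rank-5 factorization recovers it exactly; with a larger kernel matrix one would stop at a much smaller rank, which is what makes the factorization cheaper than a full eigendecomposition.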