Optimizing Kernel Alignment over Combinations of Kernels
Abstract

Alignment has recently been proposed as a method for measuring the degree of agreement between a kernel and a learning task (Cristianini et al., 2001). Previous approaches to optimizing kernel alignment have required the eigendecomposition of the kernel matrix, which can be computationally prohibitive, especially for large kernel matrices. In this paper we propose a general method for optimizing alignment over a linear combination of kernels. We apply the approach to give both transductive and inductive algorithms based on the Incomplete Cholesky factorization of the kernel matrix. The Incomplete Cholesky factorization is equivalent to performing a Gram-Schmidt orthogonalization of the training points in the feature space. The alignment optimization method adapts the feature space to increase its training set alignment. Regularization is required to ensure this alignment is also retained for the test set. Both theoretical and experimental evidence is given to show that improving the alignment leads to a reduction in the generalization error of standard classifiers.
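As background, the empirical alignment of Cristianini et al. (2001) between two Gram matrices K1 and K2 is the normalized Frobenius inner product A(K1, K2) = <K1, K2>_F / sqrt(<K1, K1>_F <K2, K2>_F); for a binary task with labels y in {-1, +1}, the "ideal" target kernel is yy^T. The following is a minimal NumPy sketch of that measure, not code from the paper; the toy data, labels, and RBF bandwidth are hypothetical.

```python
import numpy as np

def alignment(K1, K2):
    """Empirical alignment A(K1, K2) = <K1, K2>_F / sqrt(<K1, K1>_F <K2, K2>_F)."""
    num = np.sum(K1 * K2)  # Frobenius inner product <K1, K2>_F
    return num / np.sqrt(np.sum(K1 * K1) * np.sum(K2 * K2))

# Hypothetical 1-D toy data: two well-separated clusters.
X = np.array([[0.0], [0.1], [2.0], [2.1]])
y = np.array([-1.0, -1.0, 1.0, 1.0])

# RBF kernel with bandwidth 1 (an arbitrary illustrative choice).
K = np.exp(-np.square(X - X.T))

# Alignment of K with the ideal kernel yy^T for this labelling.
print(alignment(K, np.outer(y, y)))
```

By Cauchy–Schwarz the value lies in [-1, 1], and a kernel that groups same-label points together scores close to 1 on its own task.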
|Title:||Optimizing Kernel Alignment over Combinations of Kernels|
|UCL classification:||UCL > School of BEAMS > Faculty of Engineering Science
UCL > School of BEAMS > Faculty of Engineering Science > Computer Science|