Robust Bounds on Generalization from the Margin Distribution

Shawe-Taylor, J; Cristianini, N; (1998) Robust Bounds on Generalization from the Margin Distribution.

Full text not available from this repository.

Abstract

A number of results have bounded the generalization of a classifier in terms of its margin on the training points. There has been some debate about whether the minimum margin is the best measure of the distribution of training set margin values with which to estimate the generalization. Freund and Schapire have shown how a different function of the margin distribution can be used to bound the number of mistakes of an on-line perceptron learning algorithm, as well as to give an expected error bound. We show that a slight generalization of their construction can be used to give a PAC-style bound on the tail of the distribution of generalization errors that arise from a given sample size. Algorithms arising from the approach are related to those of Cortes and Vapnik. We generalize the basic result to function classes with bounded fat-shattering dimension and to the 1-norm of the slack variables, which gives rise to Vapnik's box constraint algorithm. We also extend the results to the regression case and obtain bounds on the probability that a randomly chosen test point will have error greater than a given value. The bounds apply to the $\epsilon$-insensitive loss function proposed by Vapnik for Support Vector Machine regression. A special case of this bound gives a bound on the probabilities in terms of the least squares error on the training set, showing a quadratic decline in probability with the margin.
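For orientation, here is a minimal sketch of the standard quantities the abstract refers to. These are the textbook forms of the definitions, not the paper's precise theorem statements, whose constants and conditions are given in the paper itself. For a real-valued hypothesis $f$, a labelled example $(x_i, y_i)$ with $y_i \in \{-1, +1\}$, and a target margin $\gamma > 0$, the margin slack variable measures how far the example falls short of margin $\gamma$:

    \xi_i = \max(0,\ \gamma - y_i f(x_i))

Freund and Schapire's perceptron result is stated through the 2-norm of these slacks: writing $D = \sqrt{\sum_i \xi_i^2}$ and letting $R$ bound the norms $\|x_i\|$, they obtain a mistake bound of the form $((R + D)/\gamma)^2$, so the whole margin distribution enters through $D$ rather than only through the minimum margin. The 1-norm variant $\sum_i \xi_i$ is the quantity behind the box constraint algorithm mentioned in the abstract. For the regression case, the $\epsilon$-insensitive loss proposed by Vapnik is

    L_\epsilon(y, f(x)) = \max(0,\ |y - f(x)| - \epsilon)

which is zero whenever the prediction lies within $\epsilon$ of the target and grows linearly outside that tube.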

Type: Report
Title: Robust Bounds on Generalization from the Margin Distribution
Keywords: learning algorithm, online learning, perceptron, support vector, Support Vector Machine
UCL classification: UCL > School of BEAMS > Faculty of Engineering Science > Computer Science
