UCL Discovery

A short review of statistical learning theory

Pontil, M. (2002) A short review of statistical learning theory. In: Marinaro, M. and Tagliaferri, R. (eds.) Neural Nets. (pp. 233-242). Springer-Verlag Berlin.

Full text not available from this repository.


Statistical learning theory has emerged in the last few years as a solid and elegant framework for studying the problem of learning from examples. Unlike previous "classical" learning techniques, this theory completely characterizes the necessary and sufficient conditions for a learning algorithm to be consistent. The key quantity is the capacity of the set of hypotheses employed in the learning algorithm, and the goal is to control this capacity depending on the given examples. Structural risk minimization (SRM) is the main theoretical algorithm which implements this idea. SRM is inspired by, and closely related to, regularization theory. For practical purposes, however, SRM is a very hard problem and impossible to implement when dealing with a large number of examples. Techniques such as support vector machines and the older regularization networks are a viable way to implement the idea of capacity control. The paper also discusses how these techniques can be formulated as a variational problem in a Hilbert space, and shows how SRM can be extended in order to encompass both classical regularization networks and support vector machines.
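The capacity-control idea described in the abstract can be illustrated with a minimal sketch (not from the paper itself): a linear regularization network, i.e. Tikhonov-regularized least squares, where the regularization parameter `lam` is a hypothetical stand-in for the capacity of the hypothesis set. Larger `lam` forces a smaller-norm solution, restricting effective capacity.

```python
import numpy as np

def regularization_network(X, y, lam):
    """Tikhonov-regularized least squares (a linear regularization network).

    Minimizes (1/n) * sum_i (y_i - w.x_i)^2 + lam * ||w||^2,
    whose closed-form solution is w = (X^T X + n*lam*I)^{-1} X^T y.
    """
    n, d = X.shape
    return np.linalg.solve(X.T @ X + n * lam * np.eye(d), X.T @ y)

# Synthetic data (illustrative only): noisy linear targets.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.standard_normal(50)

w_small = regularization_network(X, y, lam=1e-6)
w_large = regularization_network(X, y, lam=10.0)

# Heavier regularization yields a smaller-norm, lower-capacity solution.
assert np.linalg.norm(w_large) < np.linalg.norm(w_small)
```

Replacing the squared loss above with the hinge loss would give a (linear) support vector machine, which is the connection between regularization networks and SVMs that the paper develops in the Hilbert-space setting.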

Type: Proceedings paper
Title: A short review of statistical learning theory
Event: 13th Italian Workshop on Neural Nets (WIRN VIETRI 2002)
Dates: 2002-05-30 - 2002-06-01
ISBN: 3-540-44265-0
Keywords: statistical learning theory, structural risk minimization, regularization, support vector machines, networks
URI: http://discovery.ucl.ac.uk/id/eprint/164123
