Pontil, M; (2002) A short review of statistical learning theory. In: Marinaro, M and Tagliaferri, R, (eds.) NEURAL NETS. (pp. 233 - 242). SPRINGER-VERLAG BERLIN
Statistical learning theory has emerged in recent years as a solid and elegant framework for studying the problem of learning from examples. Unlike earlier "classical" learning techniques, this theory completely characterizes the necessary and sufficient conditions for a learning algorithm to be consistent. The key quantity is the capacity of the set of hypotheses employed by the learning algorithm, and the goal is to control this capacity according to the given examples. Structural risk minimization (SRM) is the main theoretical algorithm implementing this idea. SRM is inspired by, and closely related to, regularization theory. For practical purposes, however, SRM is a very hard problem and is impossible to implement when dealing with a large number of examples. Techniques such as support vector machines and the older regularization networks are a viable way to implement the idea of capacity control. The paper also discusses how these techniques can be formulated as a variational problem in a Hilbert space and shows how SRM can be extended in order to implement both classical regularization networks and support vector machines.
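The variational formulation mentioned in the abstract can be illustrated with a minimal sketch of a regularization network: by the representer theorem, the minimizer of the regularized empirical risk in a reproducing kernel Hilbert space is a kernel expansion over the training points, with coefficients obtained from a linear system. This is an illustrative reconstruction, not the paper's own code; the Gaussian kernel, the regularization parameter `lam`, and the toy data are all assumptions made for the example.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    # Gaussian (RBF) kernel matrix between two sets of 1-D or d-D points.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_regularization_network(X, y, lam=1e-3, gamma=1.0):
    # Minimize (1/n) * sum_i (y_i - f(x_i))^2 + lam * ||f||_K^2 over the RKHS.
    # The representer theorem gives f(x) = sum_i c_i K(x, x_i), with
    # coefficients c solving the linear system (K + n*lam*I) c = y.
    # The parameter lam controls the capacity of the hypothesis space.
    n = len(X)
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def predict(X_train, c, X_new, gamma=1.0):
    # Evaluate the kernel expansion at new points.
    return rbf_kernel(X_new, X_train, gamma) @ c

# Toy regression problem (assumed data, for illustration only).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 1))
y = np.sin(3 * X[:, 0])
c = fit_regularization_network(X, y, lam=1e-3)
y_hat = predict(X, c, X)
```

Increasing `lam` shrinks the solution toward smoother functions (lower capacity), while a very small `lam` approaches interpolation of the examples; this trade-off is the capacity control that SRM formalizes.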
Title: A short review of statistical learning theory
Event: 13th Italian Workshop on Neural Nets (WIRN VIETRI 2002)
Location: Vietri sul Mare, Italy
Dates: 2002-05-30 to 2002-06-01
Keywords: statistical learning theory, structural risk minimization, regularization, support vector machines, networks
UCL classification: UCL > School of BEAMS > Faculty of Engineering Science > Computer Science