UCL Discovery

Comparing Convolution Kernels and Recursive Neural Networks for Learning Preferences on Structured Data

Menchetti, S; Costa, F; Frasconi, P; Pontil, M; (2003) Comparing Convolution Kernels and Recursive Neural Networks for Learning Preferences on Structured Data. In: Artificial Neural Networks in Pattern Recognition: IAPR-TC3 International Workshop on Artificial Neural Networks in Pattern Recognition: University of Florence, Italy, September 12-13, 2003: Workshop Proceedings. (pp. ? - ?). Dipartimento di Sistemi e Informatica, Università degli Studi di Firenze: Florence, Italy.

Full text not available from this repository.

Abstract

Convolution kernels and recursive neural networks (RNN) are both suitable approaches for supervised learning when the input portion of an instance is a discrete structure like a tree or a graph. We report on an empirical comparison between the two architectures in a large-scale preference learning problem related to natural language processing, where instances are candidate incremental parse trees. We found that kernels never outperform RNNs, even when a limited number of examples is employed for learning. We argue that convolution kernels may lead to feature space representations that are too sparse and too general, because they are not focused on the specific learning task. In this setting, the adaptive encoding mechanism in RNNs allows us to obtain better prediction accuracy at a smaller computational cost.
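
To make the contrast concrete, the following is a minimal, illustrative Python sketch of the recursive-encoding idea the abstract refers to: a tree is encoded bottom-up by a learned composition function and then scored, so candidate parse trees can be ranked by preference. This is not the architecture or the dataset from the paper; all class and parameter names (Tree, RecursiveEncoder, W_label, W_child, dim) are hypothetical, and a convolution (tree) kernel would instead map each tree to a fixed, typically very sparse vector of subtree counts.

import numpy as np

class Tree:
    """A minimal parse-tree node: a label plus zero or more children (hypothetical structure)."""
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

class RecursiveEncoder:
    """Toy bottom-up recursive encoder: each node's vector is a nonlinear
    function of its label embedding and the averaged child vectors, so the
    whole-tree representation is adaptive rather than a fixed feature map."""
    def __init__(self, labels, dim=16, seed=0):
        rng = np.random.default_rng(seed)
        self.embed = {lab: rng.normal(scale=0.1, size=dim) for lab in labels}
        self.W_label = rng.normal(scale=0.1, size=(dim, dim))
        self.W_child = rng.normal(scale=0.1, size=(dim, dim))
        self.w_out = rng.normal(scale=0.1, size=dim)

    def encode(self, node):
        h_label = self.embed[node.label]
        if node.children:
            h_child = np.mean([self.encode(c) for c in node.children], axis=0)
        else:
            h_child = np.zeros_like(h_label)
        return np.tanh(self.W_label @ h_label + self.W_child @ h_child)

    def score(self, tree):
        """Scalar preference score; candidate parses are ranked by it."""
        return float(self.w_out @ self.encode(tree))

# Toy usage: rank two candidate parse trees by their scores.
labels = {"S", "NP", "VP", "N", "V"}
t1 = Tree("S", [Tree("NP", [Tree("N")]),
                Tree("VP", [Tree("V"), Tree("NP", [Tree("N")])])])
t2 = Tree("S", [Tree("NP", [Tree("N"), Tree("N")]),
                Tree("VP", [Tree("V")])])
enc = RecursiveEncoder(labels)
preferred = max([t1, t2], key=enc.score)

In a real preference-learning setup the weights would be trained so that correct incremental parses score higher than the alternatives; here they are random, since the sketch only shows how the tree structure drives the encoding.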

Type: Proceedings paper
Title: Comparing Convolution Kernels and Recursive Neural Networks for Learning Preferences on Structured Data
Publisher version: http://www.dsi.unifi.it/ANNPR/Papers.html
UCL classification: UCL > School of BEAMS > Faculty of Engineering Science > Computer Science