Menchetti, S; Costa, F; Frasconi, P; Pontil, M; (2003) Comparing Convolution Kernels and Recursive Neural Networks for Learning Preferences on Structured Data. In: Artificial Neural Networks in Pattern Recognition: IAPR - TC3 International Workshop on Artificial Neural Networks in Pattern Recognition: University of Florence, Italy, September 12-13, 2003: Workshop Proceedings. (pp. ? - ?). Dipartimento di Sistemi e Informatica, Università degli Studi di Firenze: Florence, Italy.
Full text not available from this repository.
Convolution kernels and recursive neural networks (RNNs) are both suitable approaches for supervised learning when the input portion of an instance is a discrete structure such as a tree or a graph. We report on an empirical comparison between the two architectures on a large-scale preference learning problem in natural language processing, where instances are candidate incremental parse trees. We found that kernels never outperform RNNs, even when only a limited number of examples is available for learning. We argue that convolution kernels may lead to feature-space representations that are too sparse and too general, because they are not focused on the specific learning task. In this case, the adaptive encoding mechanism of RNNs yields better prediction accuracy at smaller computational cost.
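To make the comparison concrete, the following is a minimal sketch of a convolution kernel over parse trees, in the spirit of the classic subset-tree kernel of Collins and Duffy; it is an illustrative assumption, not the exact kernel or data structures used in the paper. The `Node` class, the decay parameter `lam`, and the function names are hypothetical.

```python
# Sketch of a subset-tree convolution kernel: K(T1, T2) counts the
# (decay-weighted) common subtrees of two parse trees. Illustrative
# only; not the specific kernel evaluated in the paper.

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def production(n):
    # A node's production: its label plus the ordered labels of its children.
    return (n.label, tuple(c.label for c in n.children))

def collect(t):
    # All nodes of a tree, gathered iteratively.
    out, stack = [], [t]
    while stack:
        n = stack.pop()
        out.append(n)
        stack.extend(n.children)
    return out

def c_delta(n1, n2, lam=0.5, memo=None):
    # Decay-weighted count of common subtrees rooted at n1 and n2.
    if memo is None:
        memo = {}
    key = (id(n1), id(n2))
    if key in memo:
        return memo[key]
    if production(n1) != production(n2):
        memo[key] = 0.0
        return 0.0
    if not n1.children:          # matching leaves / preterminals
        memo[key] = lam
        return lam
    result = lam
    for ch1, ch2 in zip(n1.children, n2.children):
        result *= 1.0 + c_delta(ch1, ch2, lam, memo)
    memo[key] = result
    return result

def tree_kernel(t1, t2, lam=0.5):
    # Convolution: sum c_delta over all pairs of nodes from the two trees.
    memo = {}
    return sum(c_delta(a, b, lam, memo)
               for a in collect(t1) for b in collect(t2))
```

The sparsity argument in the abstract can be seen here: the kernel implicitly maps each tree into a feature space indexed by all possible subtrees, so most coordinates of any given tree's image are zero and none of them are adapted to the target task, whereas an RNN learns a dense, task-driven encoding.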
UCL classification: UCL > School of BEAMS > Faculty of Engineering Science > Computer Science