Comparing Convolution Kernels and Recursive Neural Networks for Learning Preferences on Structured Data.
Convolution kernels and recursive neural networks (RNNs) are both suitable approaches for supervised learning when the input portion of an instance is a discrete structure such as a tree or a graph. We report on an empirical comparison between the two architectures in a large-scale preference learning problem related to natural language processing, where instances are candidate incremental parse trees. We found that kernels never outperform RNNs, even when a limited number of examples is employed for learning. We argue that convolution kernels may lead to feature space representations that are too sparse and too general because they are not focused on the specific learning task. The adaptive encoding mechanism of RNNs in this case allows us to obtain better prediction accuracy at smaller computational cost.
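As an illustration of the kind of convolution kernel discussed here, the following is a minimal sketch of a subtree-matching kernel over parse trees in the style of Collins and Duffy. It implicitly counts shared subtree fragments between two trees, which makes the induced feature space very high-dimensional and task-agnostic, the property the abstract points to as a source of sparse, overly general representations. The tree representation and the decay parameter `lam` are assumptions of this example, not details taken from the paper.

```python
# Illustrative sketch of a tree convolution (subtree) kernel.
# Simplified from the Collins-Duffy formulation; not the paper's code.

class Tree:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

    def nodes(self):
        # Yield this node and all descendants (pre-order).
        yield self
        for c in self.children:
            yield from c.nodes()

def production(n):
    # A node's "production": its label plus the labels of its children.
    return (n.label, tuple(c.label for c in n.children))

def subtree_kernel(t1, t2, lam=0.5):
    # K(T1, T2) = sum over node pairs of C(n1, n2), where C counts
    # matching subtree fragments rooted at the pair, decayed by lam
    # to downweight large fragments.
    def C(n1, n2):
        if production(n1) != production(n2):
            return 0.0
        if not n1.children:          # matching leaves
            return lam
        score = lam                  # matching internal productions:
        for c1, c2 in zip(n1.children, n2.children):
            score *= 1.0 + C(c1, c2)
        return score
    return sum(C(a, b) for a in t1.nodes() for b in t2.nodes())
```

For example, `subtree_kernel(t, t)` on a small parse tree sums a contribution for every subtree fragment the tree shares with itself, so structurally richer trees score higher; the kernel value can then be plugged into any kernel machine (e.g. an SVM) for preference learning over candidate parses.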
Type: Conference item (UNSPECIFIED)