Noceti, N, Caputo, B, Castellini, C, Baldassarre, L, Barla, A, Rosasco, L, Odone, F and Sandini, G (2009) Towards a Theoretical Framework for Learning Multi-modal Patterns for Embodied Agents. In: Foggia, P, Sansone, C and Vento, M (eds.) Image Analysis and Processing - ICIAP 2009, Proceedings. (pp. 239-248). Springer-Verlag Berlin.
Full text not available from this repository.
Multi-modality is a fundamental feature of biological systems that enables them to achieve robust understanding while coping with uncertainty. Relatively recent studies have shown that multi-modal learning is a potentially effective add-on for artificial systems, allowing the transfer of information from one modality to another. In this paper we propose a general architecture for jointly learning visual and motion patterns: by means of regression theory we model a mapping between the two sensory modalities, improving the performance of artificial perceptive systems. We present promising results on a case study of grasp classification in a controlled setting and discuss future developments.
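The regression-based mapping between modalities described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the feature dimensions, the synthetic data, and the choice of closed-form ridge regression as the regression method are all assumptions made here for concreteness.

```python
import numpy as np

def fit_ridge(X, Y, lam=1e-2):
    """Closed-form ridge regression: W = (X'X + lam*I)^-1 X'Y.
    Learns a linear map from one modality's features to another's."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def predict(X, W):
    """Map features of the first modality into the second."""
    return X @ W

# Toy demo with synthetic data (dimensions are illustrative):
# 16-d "visual" features mapped to 5-d "motor" features.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 16))       # visual descriptors
W_true = rng.standard_normal((16, 5))    # unknown cross-modal mapping
Y = X @ W_true + 0.01 * rng.standard_normal((200, 5))  # noisy motor data

W = fit_ridge(X, Y)
err = np.mean((predict(X, W) - Y) ** 2)  # small reconstruction error
```

Once such a map is learned, motor-like features can be predicted from vision alone, which is one way information could be transferred from one modality to the other at test time.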
Title: Towards a Theoretical Framework for Learning Multi-modal Patterns for Embodied Agents
Event: 15th International Conference on Image Analysis and Processing (ICIAP 2009)
Location: Vietri sul Mare, Italy
Dates: 2009-09-08 to 2009-09-11
Keywords: multi-modality, visual and sensor-motor patterns, regression theory, behavioural model, objects and actions recognition, SCALE
UCL classification: UCL > School of BEAMS > Faculty of Engineering Science > Computer Science