Bi, Yin; (2020) Graph-based Feature Learning for Neuromorphic Vision Sensing. Doctoral thesis (Ph.D), UCL (University College London).
Text: Yin Bi - Thesis.pdf (6MB)
Abstract
Neuromorphic vision sensing (NVS) devices represent visual information as sequences of asynchronous discrete events (a.k.a. 'spikes') in response to changes in scene reflectance. Unlike conventional active pixel sensing (APS), NVS allows for significantly higher event sampling rates at substantially increased energy efficiency and robustness to illumination changes. However, neuromorphic vision sensing comes with two key challenges: (i) the lack of large-scale annotated datasets with which to train advanced machine learning frameworks; (ii) feature representation for NVS lags far behind that of its APS-based counterparts, resulting in lower accuracy on high-level computer vision tasks.

In this thesis, we attempt to bridge these gaps by first proposing an NVS emulation framework, termed PIX2NVS, that converts frames from APS videos to emulated neuromorphic spike events, so that we can generate large annotated NVS datasets from existing video frame collections (e.g., UCF101, YouTube-8M, YFCC100M) used in machine learning research. We evaluate PIX2NVS with three proposed distance metrics and test the emulated data on two recognition applications.

Furthermore, given the sparse and asynchronous nature of NVS, we propose a compact graph representation for NVS that allows for end-to-end learning with graph convolutional neural networks. We couple this with a novel end-to-end feature learning framework that accommodates both appearance-based and motion-based tasks. The core of our framework comprises a spatial feature learning module, which utilizes our proposed residual-graph CNN (RG-CNN) for end-to-end learning of appearance-based features directly from graphs. We extend this with our proposed Graph2Grid block and a temporal feature learning module in order to efficiently model temporal dependencies over multiple graphs and allow for a long temporal extent. We show that the performance of this framework generalizes to object classification, action recognition, action similarity labeling and scene recognition, with state-of-the-art results. Importantly, our framework preserves the spatial and temporal coherence of spike events while requiring less computation and memory.

Finally, to address the absence of large real-world NVS datasets for complex recognition tasks, we introduce, evaluate and make available a dataset of 100k NVS recordings of American Sign Language letters (ASL-DVS), acquired with an iniLabs DAVIS240c device under real-world conditions, as well as three neuromorphic human action datasets (UCF101-DVS, HMDB51-DVS and ASLAN-DVS) and one scene recognition dataset (YUPENN-DVS), recorded with the DAVIS240c capturing screen playback.
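To make the frame-to-event emulation concrete, the sketch below illustrates the core idea behind PIX2NVS-style conversion: per-pixel log-intensity changes between consecutive APS frames are thresholded into ON/OFF spike events, mimicking the logarithmic photoreceptor response of DVS-style sensors. This is a minimal sketch, not the thesis's implementation; the function name, threshold value and event layout are illustrative assumptions.

```python
import numpy as np

def emulate_events(prev_frame, curr_frame, threshold=0.1, eps=1e-6):
    """Emulate NVS spike events from a pair of APS video frames.

    Pixels whose log-intensity change exceeds `threshold` fire an
    ON (+1) or OFF (-1) event. Returns an (N, 3) array of (x, y, polarity).
    The threshold is an illustrative assumption, not a calibrated setting.
    """
    # Work in log intensity, mirroring the logarithmic response of
    # DVS-style photoreceptors; eps avoids log(0).
    log_prev = np.log(prev_frame.astype(np.float64) + eps)
    log_curr = np.log(curr_frame.astype(np.float64) + eps)
    delta = log_curr - log_prev

    # A pixel emits an event when its log-intensity change crosses the threshold.
    ys, xs = np.nonzero(np.abs(delta) >= threshold)
    polarity = np.sign(delta[ys, xs]).astype(np.int8)
    return np.column_stack([xs, ys, polarity])

if __name__ == "__main__":
    # Toy usage on two synthetic 240x180 (DAVIS240c-like) frames.
    rng = np.random.default_rng(0)
    f0 = rng.integers(0, 256, (180, 240))
    f1 = np.clip(f0 + rng.integers(-30, 31, f0.shape), 0, 255)
    events = emulate_events(f0, f1)
    print(f"{len(events)} emulated events (x, y, polarity)")
```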
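The compact graph representation can be illustrated in the same spirit: each event becomes a node, and edges connect events that are close in normalised (x, y, t) space, yielding input suitable for a graph convolutional network such as the RG-CNN described above. The uniform subsampling, connection radius and time scaling below are assumptions for illustration, not the thesis's exact construction.

```python
import numpy as np

def events_to_graph(events, radius=3.0, time_scale=1e-3, max_nodes=512):
    """Build a spatio-temporal graph from NVS events.

    events: (N, 4) array with columns (x, y, t, polarity).
    Returns (node_features, edge_index) where edge_index has shape (2, E).
    radius/time_scale/max_nodes are illustrative, not the thesis's settings.
    """
    # Subsample events to bound graph size (uniform sampling as a stand-in).
    if len(events) > max_nodes:
        idx = np.random.choice(len(events), max_nodes, replace=False)
        events = events[np.sort(idx)]

    # Rescale time so spatial and temporal distances are comparable
    # (here: microseconds -> milliseconds).
    coords = events[:, :3].astype(np.float64)
    coords[:, 2] = (coords[:, 2] - coords[:, 2].min()) * time_scale

    # Connect pairs of events within `radius` in (x, y, t) space.
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    src, dst = np.nonzero((dist <= radius) & (dist > 0))

    node_features = events[:, 3:4].astype(np.float32)  # polarity as node feature
    return node_features, np.stack([src, dst])

if __name__ == "__main__":
    # Toy usage: 1000 random events on a 240x180 sensor.
    rng = np.random.default_rng(0)
    ev = np.column_stack([
        rng.integers(0, 240, 1000),          # x
        rng.integers(0, 180, 1000),          # y
        np.sort(rng.uniform(0, 1e5, 1000)),  # t (microseconds)
        rng.choice([-1, 1], 1000),           # polarity
    ])
    feats, edges = events_to_graph(ev)
    print(f"{feats.shape[0]} nodes, {edges.shape[1]} edges")
```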
| Type: | Thesis (Doctoral) |
| --- | --- |
| Qualification: | Ph.D |
| Title: | Graph-based Feature Learning for Neuromorphic Vision Sensing |
| Event: | University College London |
| Open access status: | An open access version is available from UCL Discovery |
| Language: | English |
| Additional information: | Copyright © The Author 2020. Original content in this thesis is licensed under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0) Licence (https://creativecommons.org/licenses/by/4.0/). Any third-party copyright material present remains the property of its respective owner(s) and is licensed under its existing terms. Access may initially be restricted at the author's request. |
| UCL classification: | UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Electronic and Electrical Eng |
| URI: | https://discovery.ucl.ac.uk/id/eprint/10109453 |