
ASPnet: Action Segmentation with Shared-Private Representation of Multiple Data Sources

Van Amsterdam, B; Kadkhodamohammadi, A; Luengo, I; Stoyanov, D; (2023) ASPnet: Action Segmentation with Shared-Private Representation of Multiple Data Sources. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. (pp. 2384-2393). IEEE.

Text: van_Amsterdam_ASPnet_Action_Segmentation_With_Shared-Private_Representation_of_Multiple_Data_Sources_CVPR_2023_paper (1).pdf - Accepted Version (Download, 1MB)

Abstract

Most state-of-the-art methods for action segmentation are based on single input modalities or naïve fusion of multiple data sources. However, effective fusion of complementary information can potentially strengthen segmentation models and make them more robust to sensor noise and more accurate with smaller training datasets. In order to improve multimodal representation learning for action segmentation, we propose to disentangle hidden features of a multi-stream segmentation model into modality-shared components, containing common information across data sources, and private components; we then use an attention bottleneck to capture long-range temporal dependencies in the data while preserving disentanglement in consecutive processing layers. Evaluation on 50salads, Breakfast and RARP45 datasets shows that our multimodal approach outperforms different data fusion baselines on both multiview and multimodal data sources, obtaining competitive or better results compared with the state-of-the-art. Our model is also more robust to additive sensor noise and can achieve performance on par with strong video baselines even with less training data.
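
To make the approach described in the abstract more concrete, the following is a minimal Python (PyTorch) sketch of a shared-private feature split combined with a small attention bottleneck over the shared streams. The module and loss names, layer sizes, and loss formulations are illustrative assumptions for exposition only and are not taken from the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedPrivateEncoder(nn.Module):
    """Sketch: project each modality's features into a modality-shared and a
    modality-private component, then let a few learned bottleneck tokens
    attend over the shared streams to capture long-range temporal context.
    Dimensions and structure are assumptions, not the paper's values."""

    def __init__(self, dim=64, n_bottleneck=4, n_heads=4):
        super().__init__()
        self.to_shared = nn.Linear(dim, dim)    # shared projection
        self.to_private = nn.Linear(dim, dim)   # private projection
        self.bottleneck = nn.Parameter(torch.randn(n_bottleneck, dim))
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, feats):
        # feats: dict {modality_name: tensor of shape (batch, time, dim)}
        shared = {m: self.to_shared(x) for m, x in feats.items()}
        private = {m: self.to_private(x) for m, x in feats.items()}

        # Attention bottleneck: learned tokens attend over the concatenated
        # shared streams, exchanging cross-modal, long-range information
        # without touching the private components.
        batch = next(iter(feats.values())).shape[0]
        tokens = self.bottleneck.unsqueeze(0).expand(batch, -1, -1)
        context = torch.cat(list(shared.values()), dim=1)
        tokens, _ = self.attn(tokens, context, context)
        return shared, private, tokens


def disentanglement_losses(shared, private):
    """Example losses (assumed, not the paper's): pull shared features of
    different modalities together; push each private feature towards being
    orthogonal to its shared counterpart."""
    mods = list(shared.keys())
    similarity = sum(F.mse_loss(shared[a], shared[b])
                     for i, a in enumerate(mods) for b in mods[i + 1:])
    orthogonality = sum((F.normalize(shared[m], dim=-1) *
                         F.normalize(private[m], dim=-1)).sum(-1).pow(2).mean()
                        for m in mods)
    return similarity, orthogonality


if __name__ == "__main__":
    enc = SharedPrivateEncoder()
    inputs = {"video": torch.randn(2, 100, 64),
              "kinematics": torch.randn(2, 100, 64)}
    shared, private, tokens = enc(inputs)
    print(disentanglement_losses(shared, private))

In this sketch the similarity term encourages the shared components of different data sources to agree, while the orthogonality term keeps the private components distinct, which is one common way to preserve disentanglement across processing layers.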

Type: Proceedings paper
Title: ASPnet: Action Segmentation with Shared-Private Representation of Multiple Data Sources
Event: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2023), 17-24 June 2023, Vancouver, BC, Canada
Dates: 17 Jun 2023 - 24 Jun 2023
ISBN-13: 979-8-3503-0129-8
Open access status: An open access version is available from UCL Discovery
DOI: 10.1109/CVPR52729.2023.00236
Publisher version: https://doi.org/10.1109/cvpr52729.2023.00236
Language: English
Additional information: This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions. - This research was funded in part by the Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) [203145/Z/16/Z]; the Engineering and Physical Sciences Research Council (EPSRC) [EP/P012841/1]; and the Royal Academy of Engineering Chair in Emerging Technologies Scheme. For the purpose of open access, the author has applied a CC BY public copyright licence to any author accepted manuscript version arising from this submission.
Keywords: Training, Accelerometers, Soft sensors, Data integration, Robot sensing systems, Data models, Trajectory, Video: Action and event understanding
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science
URI: https://discovery.ucl.ac.uk/id/eprint/10205458
