eprintid: 10143471
rev_number: 7
eprint_status: archive
userid: 699
dir: disk0/10/14/34/71
datestamp: 2022-02-15 14:17:44
lastmod: 2022-02-15 14:17:44
status_changed: 2022-02-15 14:17:44
type: proceedings_section
metadata_visibility: show
sword_depositor: 699
creators_name: Sayed, M
creators_name: Brostow, G
title: Improved Handling of Motion Blur in Online Object Detection
ispublished: pub
divisions: C05
divisions: F48
divisions: B04
divisions: UCL
keywords: Computer vision, Computational modeling, Machine vision, Object detection, Cameras, Pattern recognition, Automobiles
note: This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions.
abstract: We wish to detect specific categories of objects, for on-line vision systems that will run in the real world. Object detection is already very challenging. It is even harder when the images are blurred, from the camera being in a car or a hand-held phone. Most existing efforts either focused on sharp images, with easy-to-label ground truth, or they have treated motion blur as one of many generic corruptions. Instead, we focus especially on the details of egomotion-induced blur. We explore five classes of remedies, where each targets different potential causes for the performance gap between sharp and blurred images. For example, first deblurring an image changes its human interpretability, but at present, only partly improves object detection. The other four classes of remedies address multi-scale texture, out-of-distribution testing, label generation, and conditioning by blur-type. Surprisingly, we discover that custom label generation aimed at resolving spatial ambiguity, ahead of all others, markedly improves object detection. Also, in contrast to findings from classification, we see a noteworthy boost by conditioning our model on bespoke categories of motion blur. We validate and cross-breed the different remedies experimentally on blurred COCO images and real-world blur datasets, producing an easy and practical favorite model with superior detection rates.
date: 2021-11-13
date_type: published
publisher: IEEE
official_url: https://doi.org/10.1109/CVPR46437.2021.00175
oa_status: green
full_text_type: other
language: eng
primo: open
primo_central: open_green
verified: verified_manual
elements_id: 1938776
doi: 10.1109/CVPR46437.2021.00175
isbn_13: 9781665445092
lyricists_name: Brostow, Gabriel
lyricists_id: GBROS38
actors_name: Flynn, Bernadette
actors_id: BFFLY94
actors_role: owner
full_text_status: public
pres_type: paper
series: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
publication: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
volume: 2021
place_of_pub: Nashville, TN, USA
pagerange: 1706-1716
event_title: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
event_dates: 20 Jun 2021 - 25 Jun 2021
issn: 2575-7075
book_title: Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
citation: Sayed, M; Brostow, G; (2021) Improved Handling of Motion Blur in Online Object Detection. In: Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). (pp. 1706-1716). IEEE: Nashville, TN, USA. Green open access
document_url: https://discovery.ucl.ac.uk/id/eprint/10143471/1/2011.14448.pdf