eprintid: 10087294
rev_number: 26
eprint_status: archive
userid: 608
dir: disk0/10/08/72/94
datestamp: 2020-01-20 12:44:44
lastmod: 2021-10-15 22:56:37
status_changed: 2020-01-20 12:44:44
type: article
metadata_visibility: show
creators_name: Ibrahim, MR
creators_name: Haworth, J
creators_name: Cheng, T
title: WeatherNet: Recognising weather and visual conditions from street-level images using deep residual learning
ispublished: pub
divisions: UCL
divisions: B04
divisions: C05
divisions: F44
keywords: Computer vision; deep learning; convolutional neural networks (CNN); weather condition; visual conditions
note: This work is licensed under a Creative Commons Attribution 4.0 International License. The images
or other third party material in this article are included in the article's Creative Commons license,
unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license,
users will need to obtain permission from the license holder to reproduce the material. To view a copy of this
license, visit http://creativecommons.org/licenses/by/4.0/
abstract: Extracting information related to weather and visual conditions at a given time and place is indispensable for scene awareness, which strongly shapes our behaviour, from simply walking in a city to riding a bike, driving a car, or relying on autonomous driving and driver-assistance systems. Despite the significance of this subject, it has not yet been fully addressed by machine intelligence: there is no unified, easily applicable method based on deep learning and computer vision for detecting the multiple labels of weather and visual conditions. What has been achieved to date are sectoral models that address a limited number of labels and do not cover the wide spectrum of weather and visual conditions. Moreover, weather and visual conditions are often addressed individually. In this paper, we introduce a novel framework that automatically extracts this information from street-level images using deep learning and computer vision, in a unified way and without pre-defined constraints on the processed images. A pipeline of four deep convolutional neural network (CNN) models, called WeatherNet, is trained with residual learning on the ResNet50 architecture to extract weather and visual conditions: dawn/dusk, day, and night for time of day; glare for lighting conditions; and clear, rainy, snowy, and foggy for weather conditions. WeatherNet shows strong performance in extracting this information from user-defined images or video streams, and its outputs can be used for, but are not limited to, autonomous vehicles and driver-assistance systems, behaviour tracking, safety-related research, and helping policy-makers better understand cities through images.
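The abstract describes WeatherNet as a pipeline of four ResNet50-based CNN classifiers, one per group of conditions (time of day, glare, fog, and precipitation). The following is a minimal, hypothetical PyTorch sketch of such a pipeline; the exact label groupings, head names, and use of ImageNet-pretrained weights are assumptions made for illustration, not the authors' released implementation.

    # Illustrative sketch only: one possible way to arrange four ResNet50
    # classifiers in the spirit of the WeatherNet pipeline described above.
    import torch
    import torch.nn as nn
    from torchvision import models

    def make_head(num_classes: int) -> nn.Module:
        """ResNet50 backbone with its final layer replaced for this task."""
        net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        net.fc = nn.Linear(net.fc.in_features, num_classes)
        return net

    class WeatherNetSketch(nn.Module):
        """Four independent ResNet50 classifiers, one per condition group."""
        def __init__(self):
            super().__init__()
            self.time_of_day = make_head(3)    # dawn/dusk, day, night
            self.glare = make_head(2)          # glare, no glare
            self.fog = make_head(2)            # foggy, not foggy (assumed split)
            self.precipitation = make_head(3)  # clear, rainy, snowy (assumed split)

        def forward(self, x: torch.Tensor) -> dict:
            # Each head scores the same street-level image independently.
            return {
                "time_of_day": self.time_of_day(x),
                "glare": self.glare(x),
                "fog": self.fog(x),
                "precipitation": self.precipitation(x),
            }

    # Usage: logits = WeatherNetSketch()(torch.randn(1, 3, 224, 224))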
date: 2019-11-30
official_url: https://doi.org/10.3390/ijgi8120549
oa_status: green
full_text_type: pub
language: eng
primo: open
primo_central: open_green
verified: verified_manual
elements_id: 1711802
doi: 10.3390/ijgi8120549
lyricists_name: Cheng, Tao
lyricists_name: Haworth, James
lyricists_name: Ibrahim, Mohamed
lyricists_id: TCHEN23
lyricists_id: JHAWO13
lyricists_id: MIBRA11
actors_name: Flynn, Bernadette
actors_id: BFFLY94
actors_role: owner
full_text_status: public
publication: ISPRS International Journal of Geo-Information
volume: 8
number: 12
article_number: 549
citation: Ibrahim, MR; Haworth, J; Cheng, T; (2019) WeatherNet: Recognising weather and visual conditions from street-level images using deep residual learning. ISPRS International Journal of Geo-Information, 8 (12), Article 549. 10.3390/ijgi8120549 <https://doi.org/10.3390/ijgi8120549>. Green open access
document_url: https://discovery.ucl.ac.uk/id/eprint/10087294/1/ijgi-08-00549-v2.pdf