eprintid: 1529221
rev_number: 28
eprint_status: archive
userid: 608
dir: disk0/01/52/92/21
datestamp: 2016-11-22 15:06:25
lastmod: 2021-10-05 22:41:09
status_changed: 2016-11-22 15:06:25
type: proceedings_section
metadata_visibility: show
creators_name: De Castro Mota, JF
creators_name: Song, P
creators_name: Deligiannis, N
creators_name: Rodrigues, MRD
title: Coupled dictionary learning for multimodal image super-resolution
ispublished: pub
divisions: UCL
divisions: B04
divisions: C05
divisions: F46
keywords: coupled dictionary learning, multimodal data, sparse representation, sequential recursive optimization, multispectral image super-resolution, Dictionaries, Image resolution, Signal resolution, Data models, Training, Optimization, Sparse matrices
note: This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions.
abstract: Real-world data processing problems often involve multiple data modalities, e.g., panchromatic and multispectral images, or positron emission tomography (PET) and magnetic resonance imaging (MRI) images. As these modalities capture information associated with the same phenomenon, they are necessarily correlated, although the precise relation is rarely known. In this paper, we propose a coupled dictionary learning (CDL) framework to automatically learn these relations. In particular, we propose a new data model that characterizes both similarities and discrepancies between multimodal signals in terms of common and unique sparse representations with respect to a group of coupled dictionaries. Learning these coupled dictionaries, however, involves solving a highly non-convex structural dictionary learning problem. To address this problem, we design a coupled dictionary learning algorithm, referred to as the sequential recursive optimization (SRO) algorithm, which learns these dictionaries sequentially in a recursive manner. Building on this model and algorithm, we devise a CDL-based multimodal image super-resolution (SR) approach. Practical multispectral image SR experiments demonstrate that our SR approach outperforms bicubic interpolation and a state-of-the-art dictionary-learning-based image SR approach, with peak signal-to-noise ratio (PSNR) gains of up to 8.2 dB and 5.1 dB, respectively.
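note_on_model: The data model described in the abstract, in which each modality is represented by a common sparse code shared across modalities plus a unique sparse code of its own with respect to a group of coupled dictionaries, can be illustrated with a minimal synthetic sketch. The dictionary names (Psi_c, Psi, Phi_c, Phi), dimensions, and sparsity levels below are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of the coupled sparse data model (assumed notation).
    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 64, 128          # assumed signal dimension and number of dictionary atoms

    # Coupled dictionaries: (Psi_c, Psi) for modality x, (Phi_c, Phi) for modality y.
    Psi_c, Psi = rng.standard_normal((n, k)), rng.standard_normal((n, k))
    Phi_c, Phi = rng.standard_normal((n, k)), rng.standard_normal((n, k))

    def sparse_code(k, s, rng):
        """Return a k-dimensional code with s non-zero Gaussian entries."""
        c = np.zeros(k)
        idx = rng.choice(k, size=s, replace=False)
        c[idx] = rng.standard_normal(s)
        return c

    z = sparse_code(k, 5, rng)   # common code shared by both modalities
    u = sparse_code(k, 3, rng)   # code unique to modality x
    v = sparse_code(k, 3, rng)   # code unique to modality y

    x = Psi_c @ z + Psi @ u      # e.g., the target modality (similarities + its own detail)
    y = Phi_c @ z + Phi @ v      # e.g., the guidance modality

The shared code z captures the similarities between the modalities and the codes u, v capture their discrepancies; the SRO algorithm named in the abstract is what learns the coupled dictionaries themselves, which is not shown here.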
date: 2017-04-24
date_type: published
publisher: Institute of Electrical and Electronics Engineers (IEEE)
official_url: http://dx.doi.org/10.1109/GlobalSIP.2016.7905824
oa_status: green
full_text_type: other
language: eng
primo: open
primo_central: open_green
verified: verified_manual
elements_id: 1192406
doi: 10.1109/GlobalSIP.2016.7905824
isbn_13: 9781509045464
lyricists_name: De Castro Mota, Joao
lyricists_name: Rodrigues, Miguel
lyricists_name: Song, Pingfan
lyricists_id: JFDEC19
lyricists_id: MRDIA06
lyricists_id: PSONG06
actors_name: De Castro Mota, Joao
actors_id: JFDEC19
actors_role: owner
full_text_status: public
series: Global Conference on Signal and Information Processing
volume: 2016
place_of_pub: New York, USA
pagerange: 162-166
event_title: IEEE Global Conference on Signal and Information Processing (GlobalSIP), 7-9 December 2016, Washington, DC, USA
event_location: Greater Washington D.C.
event_dates: 07 December 2016 - 09 December 2016
institution: IEEE Global Conference on Signal and Information Processing (GlobalSIP)
book_title: 2016 IEEE Global Conference on Signal and Information Processing (GlobalSIP)
citation: De Castro Mota, JF; Song, P; Deligiannis, N; Rodrigues, MRD; (2017) Coupled dictionary learning for multimodal image super-resolution. In: 2016 IEEE Global Conference on Signal and Information Processing (GlobalSIP). (pp. 162-166). Institute of Electrical and Electronics Engineers (IEEE): New York, USA. Green open access
 
document_url: https://discovery.ucl.ac.uk/id/eprint/1529221/1/CoupledDLforImgSR_FinalVer.pdf