eprintid: 10080256
rev_number: 25
eprint_status: archive
userid: 608
dir: disk0/10/08/02/56
datestamp: 2019-11-14 15:32:01
lastmod: 2021-12-06 00:17:52
status_changed: 2019-11-14 15:32:01
type: article
metadata_visibility: show
creators_name: Williamson, RS
creators_name: Sahani, M
creators_name: Pillow, JW
title: The Equivalence of Information-Theoretic and Likelihood-Based Methods for Neural Dimensionality Reduction
ispublished: pub
divisions: UCL
divisions: B02
divisions: C08
divisions: D76
keywords: Neurons, Statistical models, Covariance, Entropy, Linear filters, Probability distribution, Macaque, Optimization
note: Copyright © 2015 Williamson et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
abstract: Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
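abstract_note: The equivalence summarized above can be sketched in one hedged equation, added here for illustration only; the symbols n_sp (spike count), L_LNP(theta) (log-likelihood of the linear-nonlinear-Poisson model) and L_0 (log-likelihood of a homogeneous Poisson model with the same mean rate) are assumed notation, not taken from this record or the article:
\[
% Hedged sketch, not the paper's own derivation: the empirical single-spike
% information is written as a per-spike log-likelihood difference; all
% notation here is assumed for illustration.
\hat{I}_{\mathrm{ss}} \;\approx\; \frac{1}{n_{\mathrm{sp}}}\bigl(\mathcal{L}_{\mathrm{LNP}}(\theta) - \mathcal{L}_{0}\bigr)
\]
Under this reading, maximizing the empirical single-spike information over stimulus subspaces (as MID does) coincides with maximizing the Poisson log-likelihood of the LNP model, which is the equivalence the abstract describes.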
date: 2015-04-01
date_type: published
publisher: PUBLIC LIBRARY SCIENCE
official_url: https://doi.org/10.1371/journal.pcbi.1004141
oa_status: green
full_text_type: pub
language: eng
primo: open
primo_central: open_green
verified: verified_manual
elements_id: 895404
doi: 10.1371/journal.pcbi.1004141
lyricists_name: Sahani, Maneesh
lyricists_id: MSAHA91
actors_name: Flynn, Bernadette
actors_id: BFFLY94
actors_role: owner
full_text_status: public
publication: PLOS Computational Biology
volume: 11
number: 4
article_number: e1004141
pages: 31
citation: Williamson, RS; Sahani, M; Pillow, JW; (2015) The Equivalence of Information-Theoretic and Likelihood-Based Methods for Neural Dimensionality Reduction. PLOS Computational Biology, 11 (4), Article e1004141. 10.1371/journal.pcbi.1004141 <https://doi.org/10.1371/journal.pcbi.1004141>. Green open access
document_url: https://discovery.ucl.ac.uk/id/eprint/10080256/1/journal.pcbi.1007139.s001.PDF