Maximal causes for non-linear component extraction.
Journal of Machine Learning Research, pp. 1227–1267.
We study a generative model in which hidden causes combine competitively to produce observations. Multiple active causes combine to determine the value of an observed variable through a max function, in the place where algorithms such as sparse coding, independent component analysis, or non-negative matrix factorization would use a sum. This max rule can represent a more realistic model of non-linear interaction between basic components in many settings, including acoustic and image data. While exact maximum-likelihood learning of the parameters of this model proves to be intractable, we show that efficient approximations to expectation-maximization (EM) can be found in the case of sparsely active hidden causes. One of these approximations can be formulated as a neural network model with a generalized softmax activation function and Hebbian learning. Thus, we show that learning in recent softmax-like neural networks may be interpreted as approximate maximization of a data likelihood. We use the bars benchmark test to numerically verify our analytical results and to demonstrate the competitiveness of the resulting algorithms. Finally, we show results of learning model parameters to fit acoustic and visual data sets in which max-like component combinations arise naturally.
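To make the central idea concrete, the following is a minimal sketch (not the paper's implementation) of sampling from such a generative model: sparse binary hidden causes are drawn from a Bernoulli prior, and each observed dimension takes the maximum over the active causes' weights rather than their sum. All names, dimensions, and the prior value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

H, D = 8, 16            # number of hidden causes, observed dimensions (assumed)
pi = 2.0 / H            # sparse prior: roughly two causes active on average (assumed)
W = rng.random((H, D))  # non-negative generative weights (hypothetical values)

def sample(n):
    """Draw n observations from the max-combination model."""
    s = rng.random((n, H)) < pi  # binary hidden causes, s_h ~ Bernoulli(pi)
    # Each observed value is the MAX over the weights of active causes,
    # where sparse coding / ICA / NMF would instead take a SUM.
    y_max = np.where(s[:, :, None], W[None, :, :], 0.0).max(axis=1)
    y_sum = (s[:, :, None] * W[None, :, :]).sum(axis=1)  # linear counterpart
    return y_max, y_sum

y_max, y_sum = sample(100)
```

With non-negative weights the max combination is always bounded above by the linear sum, which is one way the max rule models occlusion-like, competitive interactions: an active cause can dominate an observed variable but contributions do not accumulate.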
Title: Maximal causes for non-linear component extraction
Open access status: An open access publication
Keywords: component extraction, maximum likelihood, approximate EM, competitive learning, neural networks, NONNEGATIVE MATRIX FACTORIZATION, UNSUPERVISED NEURAL-NETWORKS, OBJECT RECOGNITION, HIERARCHICAL-MODELS, SELF-ORGANIZATION, CORTICAL COLUMNS, NATURAL IMAGES, VISUAL-CORTEX, EM ALGORITHM, REPRESENTATIONS
UCL classification: UCL > School of Life and Medical Sciences > Faculty of Life Sciences > Gatsby Computational Neuroscience Unit