UCL Discovery

Efficient Bayesian methods for clustering.

Heller, K.A.; (2008) Efficient Bayesian methods for clustering. Doctoral thesis, University of London. Green open access

PDF: U591498.pdf (65MB)

Abstract

One of the most important goals of unsupervised learning is to discover meaningful clusters in data. Clustering algorithms strive to discover groups, or clusters, of data points which belong together because they are in some way similar. The research presented in this thesis focuses on using Bayesian statistical techniques to cluster data. We take a model-based Bayesian approach to defining a cluster, and evaluate cluster membership in this paradigm. Because large data sets are increasingly common in practice, we aim for the methods in this thesis to be efficient while still retaining the desirable properties which result from a Bayesian paradigm.

We develop a Bayesian Hierarchical Clustering (BHC) algorithm which efficiently addresses many of the drawbacks of traditional hierarchical clustering algorithms. The goal of BHC is to construct a hierarchical representation of the data, incorporating clusters at both finer and coarser levels of granularity, in such a way that we can also make predictions about new data points, compare different hierarchies in a principled manner, and automatically discover interesting levels of the hierarchy to examine. BHC can also be viewed as a fast way of performing approximate inference in a Dirichlet Process Mixture model (DPM), one of the cornerstones of nonparametric Bayesian statistics.

We create a new framework for retrieving desired information from large data collections, Bayesian Sets, using Bayesian clustering techniques. Unlike current retrieval methods, Bayesian Sets provides a principled framework which leverages the rich and subtle information provided by queries in the form of a set of examples. Whereas most clustering algorithms are completely unsupervised, here the query provides supervised hints or constraints as to the membership of a particular cluster. We call this "clustering on demand", since it involves forming a cluster once some elements of that cluster have been revealed. We use Bayesian Sets to develop a content-based image retrieval system. We also extend Bayesian Sets to a discriminative setting and use this to perform automated analogical reasoning.

Lastly, we develop extensions of clustering to model data with more complex structure than that for which traditional clustering is intended. Clustering models traditionally assume that each data point belongs to one and only one cluster; although they have proven to be a very powerful class of models, this basic assumption is somewhat limiting. For example, there may be overlapping regions where data points actually belong to multiple clusters, such as movies which can each belong to multiple genres. We extend traditional mixture models to create a statistical model for overlapping clustering, the Infinite Overlapping Mixture Model (IOMM), in a nonparametric Bayesian setting, using the Indian Buffet Process (IBP). We also develop a Bayesian Partial Membership model (BPM), which allows data points to have partial membership in multiple clusters via a continuous relaxation of a finite mixture model.
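
To illustrate the "clustering on demand" idea described in the abstract, the following sketch scores every item in a collection against a query set by comparing the probability of the item under a cluster model conditioned on the query with its probability under the prior alone. This is only a minimal sketch, not the thesis's implementation: the Beta-Bernoulli feature model, the hyperparameters alpha and beta, the function name bayesian_sets_scores, and the toy data are assumptions made for the example.

import numpy as np


def bayesian_sets_scores(X, query_idx, alpha=2.0, beta=2.0):
    # Score each item (row of the binary matrix X) against a query set by
    # comparing log p(item | query) with log p(item), using independent
    # Beta-Bernoulli models per feature. alpha and beta are illustrative
    # hyperparameter defaults, not values taken from the thesis.
    X = np.asarray(X, dtype=float)      # shape (n_items, n_features), entries in {0, 1}
    Q = X[query_idx]                    # the revealed members of the desired cluster
    n, s = Q.shape[0], Q.sum(axis=0)    # query size and per-feature counts in the query

    # Beta posterior over each feature's Bernoulli parameter given the query.
    a_post, b_post = alpha + s, beta + n - s

    # log p(item | query): predictive probability under the query-conditioned model.
    log_post = X @ np.log(a_post / (a_post + b_post)) + (1 - X) @ np.log(b_post / (a_post + b_post))

    # log p(item): predictive probability under the prior alone.
    p1 = alpha / (alpha + beta)
    log_prior = X.sum(axis=1) * np.log(p1) + (1 - X).sum(axis=1) * np.log(1 - p1)

    return log_post - log_prior         # higher score = stronger affinity with the query


# Toy usage: rank six random binary items by affinity to a two-example query.
rng = np.random.default_rng(0)
X = (rng.random((6, 8)) > 0.5).astype(int)
print(np.argsort(-bayesian_sets_scores(X, query_idx=[0, 1])))

Under this conjugate model the log score is linear in the binary features, so ranking an entire collection reduces to a single matrix-vector product; this is the kind of efficiency on large data sets that the abstract emphasises.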

Type: Thesis (Doctoral)
Title: Efficient Bayesian methods for clustering.
Identifier: PQ ETD:591498
Open access status: An open access version is available from UCL Discovery
Language: English
Additional information: Thesis digitised by ProQuest. Third-party copyright material has been removed from the e-thesis.
UCL classification: UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Life Sciences > Gatsby Computational Neuroscience Unit
URI: https://discovery.ucl.ac.uk/id/eprint/1444196
Downloads since deposit: 116
