Gibbs sampling for parameter learning in probabilistic expert systems.
Master's thesis, UCL (University College London).
We have a probabilistic statistical model that must adapt in the light of observed cases. The adapted model can be viewed as the knowledge base of an expert system designed to solve a complex forensic problem. The adaptation takes the form of Bayesian parameter learning, and since the data are incomplete we face an analytically intractable problem that requires some form of approximation. In this thesis the chosen form is Gibbs sampling. We categorise the various forms of Gibbs sampling as either numerical (in the sense that numerical processing is integral to the algorithm) or algebraic (where the numerical processing is viewed as a set-up phase). These categories are further subdivided into complex methods (which deal with complex mixtures) and simple methods (which perform standard conjugate analysis). We show, through computer experiments, that when taking a complex numerical approach, reducing the configuration space of the Gibbs sampler can compensate for the computational inefficiency of performing the Gibbs iterations; thus, for certain types of problem, this approach may outperform the commonly used simple numerical method. A simple algebraic approach is developed which aims to improve efficiency by reducing the time taken for a single iteration of the sampler. An example, the mutation rate learning problem, is introduced to demonstrate the application of this method. We conclude that in certain circumstances it may be reasonable to compromise on accuracy in order to take advantage of the efficiency of the above methods.
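To illustrate the "simple" conjugate approach the abstract refers to, the following is a minimal, hypothetical sketch (not taken from the thesis): a Gibbs sampler that learns a single Bernoulli parameter from incomplete binary data by alternately imputing the missing values and drawing the parameter from its Beta full conditional. The data set, the Beta(1, 1) prior, and all variable names are assumptions chosen for the illustration.

```python
import random

random.seed(0)

# Hypothetical incomplete data: binary observations, None marks a missing value.
data = [1, 1, 0, 1, None, 1, 0, None, 1, 1]

# Beta(1, 1) prior on the success probability theta (an assumption).
alpha, beta_ = 1.0, 1.0

n_iter, burn_in = 5000, 500
theta = 0.5
draws = []

for it in range(n_iter):
    # Step 1: impute each missing value from its full conditional given theta.
    completed = [x if x is not None else (1 if random.random() < theta else 0)
                 for x in data]
    # Step 2: standard conjugate update -- draw theta from its Beta
    # full conditional given the completed data.
    heads = sum(completed)
    tails = len(completed) - heads
    theta = random.betavariate(alpha + heads, beta_ + tails)
    if it >= burn_in:
        draws.append(theta)

posterior_mean = sum(draws) / len(draws)
```

Because both full conditionals are available in closed form, each iteration is cheap; the complex and algebraic variants discussed in the thesis trade this simplicity for a smaller configuration space or a faster per-iteration cost.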
Title: Gibbs sampling for parameter learning in probabilistic expert systems
Additional information: Permission for digitisation not received
UCL classification: UCL > School of BEAMS > Faculty of Maths and Physical Sciences > Statistical Science