UCL Discovery

Combining observed and predicted data for robot vision in poor visibility

Stolkin, Rustam Alexander George; (2004) Combining observed and predicted data for robot vision in poor visibility. Doctoral thesis (Ph.D), UCL (University College London). Green open access

Combining_observed_and_predict.pdf

Download (20MB)

Abstract

This thesis addresses the problems of recovering the 3D position and orientation of a vehicle-mounted camera relative to a known object and, additionally, tracking the 2D position of that object in camera images, under conditions of extremely poor visibility such as those encountered underwater. The human visual system can often make correct interpretations of images that are of such poor quality that they contain insufficient explicit information to do so. It is asserted that such systems must therefore make use of prior knowledge in several forms.

A novel algorithm (the EM/E-MRF algorithm) is presented for the interpretation of scene content and camera position from extremely poor visibility images. The algorithm is capable of tracking camera trajectories over extended image sequences. It combines observed data (the current image) with predicted data derived from prior knowledge of the object being viewed and an estimate of the camera's motion. During image segmentation, a predicted image is used to estimate class-conditional probability distributions, and an Extended-Markov Random Field technique is used to combine observed image data with expectations of that data within a probabilistic framework. Markov dependency is extended to include contributions from corresponding pixels in the predicted image. Interpretations of scene content and camera position are then mutually improved using Expectation-Maximisation. The resulting algorithm exhibits elements of continuous machine learning: non-rigid statistical models of the object being viewed and of the background are continuously modified and updated during the analysis of each frame of the video sequence.

Poor visibility image sequences of known objects, filmed along pre-measured trajectories with a calibrated camera, have been constructed in order to provide real test data with underlying ground-truth. An industrial robot arm was used to move a camera along a highly repeatable trajectory. Test sequences (featuring an object of interest in extremely poor visibility, generated using dry-ice fog) and calibration sequences (featuring calibration targets in good visibility) were filmed along identical trajectories. Camera intrinsics, lens distortion parameters, and camera position and orientation could be extracted from the calibration sequences for every frame. This information was used to provide ground-truth for corresponding frames in the poor visibility test sequences. Using this data, the EM/E-MRF algorithm has been tested on several hundred images, over a range of visibility conditions, camera trajectories, algorithm parameters and observed objects.
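The segmentation idea described above can be illustrated with a minimal sketch. This is not the thesis implementation: it assumes two classes (object and background), Gaussian class-conditional intensity models estimated from a rendered prediction, and simple ICM optimisation of a pixel-wise energy. The function name, the weights beta (spatial smoothness) and gamma (agreement with the predicted image, standing in for the extended Markov dependency), and all parameter values are illustrative assumptions.

```python
import numpy as np

def emrf_segment(observed, predicted_labels, n_iter=5, beta=1.0, gamma=1.0):
    """Hypothetical E-MRF-style labelling sketch (0 = background, 1 = object).

    observed         : 2-D float array of pixel intensities
    predicted_labels : 2-D int array of labels derived from the predicted image
    beta             : weight on agreement with spatial neighbours
    gamma            : weight on agreement with the corresponding predicted
                       pixel (the "extended" dependency; illustrative only)
    """
    # Class-conditional Gaussians estimated from the *predicted* segmentation,
    # mirroring the use of the predicted image to supply class statistics.
    params = []
    for c in (0, 1):
        vals = observed[predicted_labels == c]
        mu = vals.mean() if vals.size else 0.0
        sigma = (vals.std() if vals.size else 1.0) + 1e-6
        params.append((mu, sigma))

    labels = predicted_labels.copy()
    h, w = observed.shape
    for _ in range(n_iter):
        for y in range(h):
            for x in range(w):
                best_label, best_energy = labels[y, x], np.inf
                for c in (0, 1):
                    mu, sigma = params[c]
                    # Data term: negative log Gaussian likelihood.
                    e = 0.5 * ((observed[y, x] - mu) / sigma) ** 2 + np.log(sigma)
                    # Smoothness: penalise disagreement with 4-neighbours.
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] != c:
                            e += beta
                    # Extended dependency: penalise disagreement with the
                    # corresponding pixel of the predicted segmentation.
                    if predicted_labels[y, x] != c:
                        e += gamma
                    if e < best_energy:
                        best_energy, best_label = e, c
                labels[y, x] = best_label
    return labels
```

In the full EM/E-MRF algorithm this segmentation step would alternate with re-estimation of the camera pose (and hence of the predicted image), so that the scene interpretation and the pose estimate improve each other; the sketch shows only the labelling half of that loop.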

Type: Thesis (Doctoral)
Qualification: Ph.D
Title: Combining observed and predicted data for robot vision in poor visibility
Open access status: An open access version is available from UCL Discovery
Language: English
Additional information: Thesis digitised by ProQuest.
Keywords: Applied sciences; Vehicle mounted camera
URI: https://discovery.ucl.ac.uk/id/eprint/10099568
Downloads since deposit: 27
