Shawash, J. (2012) Generalised correlation higher order neural networks, neural network operation and Levenberg-Marquardt training on field programmable gate arrays. Doctoral thesis, UCL (University College London).
Higher Order Neural Networks (HONNs) were introduced in the late 1980s as a solution to the increasing complexity within Neural Networks (NNs). Like NNs, HONNs excel at pattern recognition, classification, and optimisation, particularly for non-linear systems, in varied applications such as communication channel equalisation, real-time intelligent control, and intrusion detection. This research introduces a new class of HONNs, the Generalised Correlation Higher Order Neural Networks, which extend ordinary first-order NNs and HONNs. Built on interlinked arrays of correlators with known relationships, they give the network a more extensive view by feeding interactions between the input data into the NN model. All studies included two data sets to generalise the applicability of the findings.

The research investigated the performance of HONNs in estimating short-term returns of two financial data sets, the FTSE 100 and NASDAQ. The new models were compared against several financial models and ordinary NNs. Two new HONNs, the Correlation HONN (C-HONN) and the Horizontal HONN (Horiz-HONN), outperformed all other models tested in terms of the Akaike Information Criterion (AIC).

The work also investigated HONNs for camera calibration and image mapping. HONNs were compared against NNs and standard analytical methods in terms of mapping performance for three cases: 3D-to-2D mapping, a hybrid model combining HONNs with an analytical model, and 2D-to-3D inverse mapping. The study considered two types of data: planar data and non-coplanar (cube) data. To our knowledge, this is the first study comparing HONNs against NNs and analytical models for camera calibration. HONNs were able to transform the reference grid onto the correct camera coordinates and vice versa, which the standard analytical model fails to do with the type of data used.
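The core higher-order idea described above — supplying the network with interactions (correlations) between inputs, not just the inputs themselves — can be illustrated with a minimal second-order neuron. This is a generic sketch, not the thesis's specific Generalised Correlation HONN formulation, whose correlator structures are defined in the thesis itself.

```python
import numpy as np

def second_order_neuron(x, w1, w2, b):
    """A generic second-order neuron: in addition to the usual weighted
    sum of inputs, it receives every pairwise product x_i * x_j, giving
    the model direct access to input correlations. Illustrative only."""
    i, j = np.triu_indices(len(x))           # all (i, j) pairs with i <= j
    pairs = x[i] * x[j]                      # second-order correlation terms
    return np.tanh(w1 @ x + w2 @ pairs + b)  # non-linear activation

x = np.array([0.5, -1.0, 2.0])
w1 = np.zeros(3)                             # first-order weights
w2 = np.zeros(6)                             # 3*(3+1)/2 = 6 pair terms
print(second_order_neuron(x, w1, w2, 0.0))   # prints 0.0 with zero weights
```

For n inputs the number of second-order terms grows as n(n+1)/2, which is the "increasing complexity" that structured correlator arrays such as the thesis's C-HONN and Horiz-HONN are designed to tame.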
HONN 3D-to-2D mapping had a calibration error lower than the parametric model by up to 24% for plane data and 43% for cube data. The hybrid model also had a lower calibration error than the parametric model, by 12% for plane data and 34% for cube data, but did not outperform the fully non-parametric models. Using HONNs for inverse mapping from 2D to 3D outperformed NNs by up to 47% in the case of cube data mapping.

This thesis is also concerned with the operation and training of NNs in limited precision, specifically on Field Programmable Gate Arrays (FPGAs). Our findings demonstrate the feasibility of on-line, real-time, low-latency training on limited-precision electronic hardware such as Digital Signal Processors (DSPs) and FPGAs. The thesis also investigated the effects of limited precision on the Back Propagation (BP) and Levenberg-Marquardt (LM) optimisation algorithms. Two new HONNs were compared against NNs in estimating the discrete XOR function and an optical waveguide sidewall roughness data set, in order to find the Minimum Precision for Lowest Error (MPLE) at which training and operation are still possible. The findings show that, compared to NNs, HONNs require more precision to reach a similar performance level, and that the second-order LM algorithm requires at least 24 bits of precision.

The final investigation implemented and demonstrated the LM algorithm on FPGAs, to our knowledge for the first time. It was used to train a neural network and to estimate camera calibration parameters. The LM algorithm trained a NN to model the XOR function in only 13 iterations from zero initial conditions, with a speed-up in excess of 3 × 10^6 compared to an implementation in software.
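The LM training loop referred to above solves the damped normal equations Δw = −(JᵀJ + λI)⁻¹Jᵀr at each step, where J is the Jacobian of the residuals r and λ is an adaptive damping term. The following toy sketch fits a 2-2-1 network to XOR in floating point with a finite-difference Jacobian; the thesis's FPGA implementation uses analytic Jacobians and fixed-point arithmetic, so neither the network layout nor the iteration count here should be read as the thesis's result.

```python
import numpy as np

# Toy 2-2-1 MLP fitted to the XOR truth table with Levenberg-Marquardt.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def residuals(w):
    W1 = w[:4].reshape(2, 2); b1 = w[4:6]; W2 = w[6:8]; b2 = w[8]
    h = np.tanh(X @ W1 + b1)                  # hidden layer
    return h @ W2 + b2 - y                    # network output minus target

def lm_fit(w, iters=50, lam=1e-2, eps=1e-6):
    for _ in range(iters):
        r = residuals(w)
        J = np.empty((r.size, w.size))
        for k in range(w.size):               # finite-difference Jacobian
            wp = w.copy(); wp[k] += eps
            J[:, k] = (residuals(wp) - r) / eps
        while lam < 1e12:                     # adapt the damping term
            dw = np.linalg.solve(J.T @ J + lam * np.eye(w.size), -J.T @ r)
            if np.sum(residuals(w + dw)**2) < np.sum(r**2):
                w, lam = w + dw, lam / 10     # accept step, trust model more
                break
            lam *= 10                         # reject step, damp harder
    return w

w = lm_fit(np.random.default_rng(0).normal(0.0, 0.5, 9))
print(np.sum(residuals(w)**2))                # final sum-of-squares error
```

Because a step is only accepted when it reduces the sum of squared residuals, the error is non-increasing; this accept/reject damping schedule is what distinguishes LM from a plain Gauss-Newton solver.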
Camera calibration was also demonstrated on FPGAs; compared to the software implementation, the FPGA implementation increased the mean squared error and standard deviation by only 17.94% and 8.04% respectively, while increasing the calibration speed by a factor of 1.41 × 10^6.
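The limited-precision experiments above hinge on how coarsely weights and signals can be quantised before training and operation break down. A simple way to probe this in software is to round values to a signed fixed-point grid with a chosen number of fractional bits; this is only a float-based simulation of the idea, not the thesis's per-signal word-length design on the FPGA.

```python
import numpy as np

def quantize(x, frac_bits):
    """Round to a fixed-point grid with `frac_bits` fractional bits,
    mimicking limited-precision hardware arithmetic in software.
    Illustrative only: real fixed-point also clips to a finite range."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

w = np.array([0.1, -0.734, 1.4142])
for b in (8, 16, 24):
    err = np.max(np.abs(quantize(w, b) - w))
    print(b, err)   # worst-case rounding error is at most 2**-(b+1)
```

Sweeping the bit width of such a quantiser over a training run is one way to locate a minimum usable precision empirically, in the spirit of the MPLE analysis described above.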
Title: Generalised correlation higher order neural networks, neural network operation and Levenberg-Marquardt training on field programmable gate arrays
Open access status: An open access version is available from UCL Discovery
UCL classification: UCL > School of BEAMS > Faculty of Engineering Science > Electronic and Electrical Engineering