
UCL Discovery


Feed forward neural networks and genetic algorithms for automated financial time series modelling

Kingdon, J.C.; (1995) Feed forward neural networks and genetic algorithms for automated financial time series modelling. Doctoral thesis, University of London. Green open access


Abstract

This thesis presents an automated system for financial time series modelling. Formal and applied methods are investigated for combining feed-forward neural networks and genetic algorithms (GAs) into a single adaptive/learning system for automated time series forecasting. Four important research contributions arise from this investigation: i) novel forms of GA are introduced which are designed to counter the representational bias associated with the conventional Holland GA, ii) an experimental methodology for validating neural network architecture design strategies is introduced, iii) a new method for network pruning is introduced, and iv) an automated method for inferring network complexity for a given learning task is devised. These methods provide a general-purpose applied methodology for developing neural network applications and are tested in the construction of an automated system for financial time series modelling. Traditional economic theory has held that financial price series are random. The lack of a priori models on which to base a computational solution for financial modelling provides one of the hardest tests of adaptive system technology. It is shown that the system developed in this thesis isolates a deterministic signal within a gilt futures price series, to a confidence level of over 99%, yielding a prediction accuracy of over 60% on a single run of 1000 out-of-sample experiments. An important research issue in the use of feed-forward neural networks is the problem of parameterising them so as to ensure good generalisation. This thesis conducts a detailed examination of this issue. A novel demonstration of a network's ability to act as a universal functional approximator for finite data sets is given. This supplies an explicit formula for setting a network's architecture and weights in order to map a finite data set to arbitrary precision.
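The finite-data-set result above can be illustrated with a minimal sketch (not the thesis's own construction): with as many hidden units as data points, the square matrix of hidden activations is almost surely invertible, so the output weights that map a finite data set exactly can be obtained by solving a linear system. The data, weight scales, and network size here are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite data set: N points of a 1-D function (illustrative choice).
N = 6
x = np.linspace(-1.0, 1.0, N).reshape(-1, 1)
y = np.sin(3.0 * x).ravel()

# One hidden layer with N tanh units and random input weights.
# With N hidden units, the N x N hidden-activation matrix F is almost
# surely invertible, so output weights solving F v = y reproduce the
# data set exactly (up to floating-point error).
W = rng.normal(scale=3.0, size=(N, 1))   # input-to-hidden weights
b = rng.normal(size=N)                   # hidden biases
F = np.tanh(x @ W.T + b)                 # N x N hidden activations
v = np.linalg.solve(F, y)                # output weights: exact interpolation

pred = F @ v
max_err = np.max(np.abs(pred - y))
```

The construction says nothing about generalisation between the data points, which is exactly the issue the rest of the abstract takes up.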
It is shown that a network's ability to generalise is extremely sensitive to many parameter choices and that, unless careful safeguards are included in the experimental procedure, over-fitting can occur. This thesis concentrates on developing automated techniques to tackle these problems. Techniques for using GAs to parameterise neural networks are examined. It is shown that the fitness function, the GA operators and the choice of encoding are all instrumental in determining the likely success of the GA search. To address this issue, a new style of GA is introduced which uses multiple encodings in the course of a run. These GAs are shown to outperform the Holland GA on a range of standard test functions. Despite this innovation, it is argued that the direct use of GAs for neural network parameterisation risks compounding the network sensitivity issue. Moreover, in the absence of a precise formulation of generalisation, a less direct use of GAs for network parameterisation is examined. Specifically, a technique, artificial network generation (ANG), is introduced in which a GA is used to artificially generate test learning problems for neural networks that have known network solutions. ANG provides a means for directly testing i) a neural network architecture, ii) a neural network training process, and iii) a neural network validation procedure, against generalisation. ANG is used to provide statistical evidence in favour of Occam's Razor as a neural network design principle. A new method for pruning and for inferring network complexity for a given learning problem is introduced. Network Regression Pruning (NRP) is a pruning method that attempts to derive an optimal network architecture by starting from an overly large network. NRP differs radically from conventional pruning methods in that it attempts to hold a trained network's mapping fixed as pruning proceeds.
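One way to read "multiple encodings in the course of a run" is a GA that periodically re-expresses its population in a different bit encoding (e.g. switching between standard binary and Gray code) while preserving each chromosome's decoded value, so the search escapes the representational bias of any single encoding. The sketch below shows that mechanism on a toy test function; the switching schedule, operators, and fitness function are all assumptions for illustration, not the thesis's actual algorithm.

```python
import random

N_BITS = 10  # chromosomes encode integers in [0, 1023]

def int_to_gray(n):
    return n ^ (n >> 1)

def gray_to_int(g):
    n = g
    while g:
        g >>= 1
        n ^= g
    return n

def decode(bits, encoding):
    n = int("".join(map(str, bits)), 2)
    return gray_to_int(n) if encoding == "gray" else n

def encode(value, encoding):
    n = int_to_gray(value) if encoding == "gray" else value
    return [int(c) for c in format(n, f"0{N_BITS}b")]

def fitness(x, target=700):
    # Toy unimodal minimisation problem (illustrative assumption).
    return (x - target) ** 2

def ga(generations=60, pop_size=30, switch_every=10, seed=0):
    rng = random.Random(seed)
    encoding = "binary"
    pop = [[rng.randint(0, 1) for _ in range(N_BITS)] for _ in range(pop_size)]
    for gen in range(generations):
        # Periodically switch encoding, re-encoding each chromosome so
        # its decoded value (and hence its fitness) is preserved.
        if gen and gen % switch_every == 0:
            new_enc = "gray" if encoding == "binary" else "binary"
            pop = [encode(decode(c, encoding), new_enc) for c in pop]
            encoding = new_enc
        scored = sorted(pop, key=lambda c: fitness(decode(c, encoding)))
        parents = scored[: pop_size // 2]
        children = [scored[0][:]]  # elitism: carry the best forward
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_BITS)      # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(N_BITS)] ^= 1   # point mutation
            children.append(child)
        pop = children
    best = min(pop, key=lambda c: fitness(decode(c, encoding)))
    return decode(best, encoding)
```

The key detail is that a single bit flip moves a chromosome differently under the two encodings (Gray code makes adjacent integers one bit apart), so alternating encodings changes the neighbourhood structure of the search without disturbing the population's decoded values.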
NRP is shown to be extremely successful at isolating optimal network architectures on a range of test problems generated using ANG. Finally, NRP and the techniques validated using ANG are combined to implement an Automated Neural network Time series Analysis System (ANTAS). ANTAS is applied to a gilt futures price series, the Long Gilt Futures Contract (LGFC).
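The idea of holding a trained network's mapping fixed while pruning can be sketched as follows. This is one plausible reading of the abstract, not the thesis's exact NRP algorithm: after deleting a hidden unit, the surviving output weights are re-fitted by least-squares regression against the original network's outputs, so the pruned network tracks the original mapping as closely as possible. All weights and data here are illustrative stand-ins for a trained network.

```python
import numpy as np

rng = np.random.default_rng(1)

# A "trained" one-hidden-layer net (weights assumed for illustration):
# H tanh units mapping N scalar inputs to y_net = tanh(x W^T + b) @ v.
N, H = 40, 12
x = rng.uniform(-1.0, 1.0, size=(N, 1))
W = rng.normal(size=(H, 1))
b = rng.normal(size=H)
v = rng.normal(size=H)
F = np.tanh(x @ W.T + b)
y_net = F @ v                 # the trained network's mapping on the data

# One regression-pruning step: delete a hidden unit, then re-fit the
# remaining output weights so the pruned net reproduces y_net.
drop = 0                      # candidate unit to prune
keep = [j for j in range(H) if j != drop]
F_k = F[:, keep]

# Naive removal: keep the old output weights on the surviving units.
err_naive = np.linalg.norm(F_k @ v[keep] - y_net)

# Regression refit: least squares against the original mapping. Since
# the old weights are a feasible solution, the refit error can never
# exceed the naive-removal error.
v_new, *_ = np.linalg.lstsq(F_k, y_net, rcond=None)
err_refit = np.linalg.norm(F_k @ v_new - y_net)
```

Repeating this step, and pruning whichever unit's refit error is smallest, yields a sequence of ever-smaller networks whose deviation from the original mapping can be tracked to choose a final architecture.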

Type: Thesis (Doctoral)
Title: Feed forward neural networks and genetic algorithms for automated financial time series modelling
Open access status: An open access version is available from UCL Discovery
Language: English
Additional information: Thesis digitised by British Library EThOS

