UCL Discovery

Interpretable and explainable machine learning via optimisation

Liapis, Georgios; (2025) Interpretable and explainable machine learning via optimisation. Doctoral thesis (Ph.D), UCL (University College London).

Thesis-Liapis.pdf - Submitted Version
Access restricted to UCL open access staff until 1 October 2026.

Download (43MB)

Abstract

As machine learning becomes increasingly embedded in decision-making across science, industry, and public services, the need for interpretable and explainable models has grown. While complex models often achieve high predictive accuracy, they are typically regarded as black boxes, limiting trust and transparency in their use. This thesis develops a series of optimisation-based machine learning methodologies that address these concerns, focusing on interpretability, explainability, and fairness.

The first part of the thesis introduces a novel classification algorithm based on hyper-box representation. By formulating the learning task as a Mixed Integer Linear Programming (MILP) model, the approach explicitly controls the number and complexity of IF-THEN rules, resulting in highly interpretable multi-class classifiers. Extensive testing shows strong performance against existing interpretable models.

The second contribution addresses fairness in classification trees. A game-theoretic MILP model is proposed that incorporates group fairness via a Nash bargaining scheme, balancing misclassification error across protected and non-protected groups. The resulting trees remain interpretable while significantly improving predictive equity.

The third contribution addresses neural networks: an MILP-based feature selection framework is developed for ReLU-activated regression models. By adjusting the weights and biases of the neural network, the method identifies the most influential features, enhancing explainability. The framework extends to deep neural networks and multi-output regression tasks, and a clustering-based strategy ensures scalability with respect to the number of samples, improving computational efficiency.

Finally, the thesis presents a symbolic regression approach using a Mixed Integer Quadratically Constrained Programming (MIQCP) model. The method supports basic binary operators at branch nodes and user-defined transformation functions at leaf nodes. It recovers the true underlying formulas for most of the physico-chemical phenomena tested and constructs accurate surrogate models when exact recovery is challenging.

Collectively, these contributions demonstrate that mathematical optimisation can produce machine learning models that are accurate, interpretable, explainable, and fair: qualities essential for trustworthy and transparent decision-making.
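
Illustrative sketches

To make the formulations above concrete, the sketches below illustrate two of the building blocks in miniature. Both are generic reconstructions written against standard MILP modelling practice with the open-source PuLP library; the toy data, variable names, bounds, and exact constraints are assumptions for illustration, not the thesis's own models.

First, a minimal hyper-box classifier: a single axis-aligned box that must contain every positive sample and pays a unit penalty for each negative sample it fails to exclude. The optimal box reads off directly as an IF-THEN rule.

# A minimal sketch of a hyper-box classifier as a MILP, using PuLP.
# Everything here (one box, one positive class, toy 2-D data, the
# big-M exclusion encoding) is illustrative, not the thesis's model.
import pulp

pos = [(0.8, 1.1), (1.2, 0.9), (1.0, 1.3)]   # positive samples
neg = [(3.0, 3.1), (2.8, 2.7), (1.5, 2.9)]   # negative samples
dims = range(2)
LB, UB = 0.0, 4.0        # known data range (assumed)
M = UB - LB + 1.0        # big-M: anything larger than the range works
eps = 1e-3               # strict-exclusion margin

prob = pulp.LpProblem("hyper_box", pulp.LpMinimize)
lo = [pulp.LpVariable(f"lo_{j}", LB, UB) for j in dims]
up = [pulp.LpVariable(f"up_{j}", LB, UB) for j in dims]

# Every positive sample must lie inside the box.
for x in pos:
    for j in dims:
        prob += lo[j] <= x[j]
        prob += up[j] >= x[j]

# A negative sample is excluded if it leaves the box on at least one
# side of one dimension; inside_i = 1 records a failure to exclude it.
inside = [pulp.LpVariable(f"inside_{i}", cat="Binary")
          for i in range(len(neg))]
for i, y in enumerate(neg):
    certificates = []
    for j in dims:
        below = pulp.LpVariable(f"below_{i}_{j}", cat="Binary")
        above = pulp.LpVariable(f"above_{i}_{j}", cat="Binary")
        prob += y[j] <= lo[j] - eps + M * (1 - below)   # below=1 -> y_j < lo_j
        prob += y[j] >= up[j] + eps - M * (1 - above)   # above=1 -> y_j > up_j
        certificates += [below, above]
    prob += pulp.lpSum(certificates) + inside[i] >= 1

# Minimise misclassified negatives, with a tiny box-size tie-breaker.
prob += pulp.lpSum(inside) + 0.01 * pulp.lpSum(up[j] - lo[j] for j in dims)
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("IF " + " AND ".join(f"{lo[j].value():.2f} <= x{j} <= {up[j].value():.2f}"
                           for j in dims) + " THEN positive")

The big-M constant only needs to exceed the data range, and the small box-size term in the objective breaks ties in favour of tighter, more specific rules.

For the fairness contribution, a Nash bargaining objective over the two groups can be written generically (again an assumption about the general scheme, not the thesis's exact model) as

    \max_{T} \; \bigl(d_p - e_p(T)\bigr)\,\bigl(d_n - e_n(T)\bigr)

where T ranges over feasible classification trees, e_p(T) and e_n(T) are the misclassification errors on the protected and non-protected groups, and (d_p, d_n) is the disagreement point from which both groups must gain. The product term is nonlinear; how it is linearised or otherwise handled within the thesis's MILP is not stated in the abstract.

Finally, embedding a trained ReLU network in a MILP, as the feature-selection framework requires, typically rests on a big-M linearisation of each unit. For pre-activation t = w.x + b, the output z = max(0, t) is encoded with one binary per unit:

# A minimal sketch of the standard big-M encoding of one ReLU unit
# inside a MILP (a common device, assumed here; the thesis's exact
# feature-selection model may differ). z = max(0, t) with binary a:
#   z >= t,   z >= 0,   z <= t + M*(1 - a),   z <= M*a
import pulp

prob = pulp.LpProblem("relu_unit", pulp.LpMaximize)
M = 100.0                               # valid bound on |t| (assumed)
w, b = [0.5, -2.0, 1.0], 0.25           # frozen, pre-trained weights
x = [pulp.LpVariable(f"x_{j}", -1, 1) for j in range(3)]
t = pulp.lpSum(w[j] * x[j] for j in range(3)) + b
z = pulp.LpVariable("z", 0, M)          # z >= 0 via the lower bound
a = pulp.LpVariable("a", cat="Binary")  # a = 1 when the unit is active
prob += z >= t
prob += z <= t + M * (1 - a)
prob += z <= M * a
prob += z                               # e.g. maximise the unit's output
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(z.value())                        # 3.75, attained at x = (1, -1, 1)

A feature-selection layer can then gate each input with an extra binary s_j (forcing x_j to a baseline value when s_j = 0) together with a cardinality constraint sum_j s_j <= k, so that the solver reveals which k features can move the output most. This is one plausible realisation; the thesis's exact mechanism of adjusting the weights and biases may differ.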

Type: Thesis (Doctoral)
Qualification: Ph.D
Title: Interpretable and explainable machine learning via optimisation
Language: English
Additional information: Copyright © The Author 2025. Original content in this thesis is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) Licence (https://creativecommons.org/licenses/by-nc/4.0/). Any third-party copyright material present remains the property of its respective owner(s) and is licensed under its existing terms. Access may initially be restricted at the author’s request.
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Chemical Engineering
URI: https://discovery.ucl.ac.uk/id/eprint/10214508
Downloads since deposit: 3
