Towards Explainable and Fair AI-Generated Decisions

AI-driven innovation can bring enormous benefits, but such complex solutions are often referred to as “black boxes” because it is typically difficult to trace the steps an algorithm took to arrive at its conclusions. DIGITAL will assess how well XAI tools meet the explainability requirements of the various stakeholders in the financial value chain, develop non-perturbation-based XAI methods that preserve the natural time ordering and dependence structures of the data, and create methodologies to ensure that algorithmic systems do not produce socially biased outcomes that exacerbate inequalities.
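One of the goals above is to detect socially biased outcomes. As a minimal illustration of what such a check can look like, the sketch below computes a demographic parity difference (the gap in positive-decision rates between two groups) on synthetic data; the data, group labels, and threshold values are purely hypothetical and not from the project itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical credit decisions for two demographic groups (0 and 1),
# deliberately constructed so group 0 is approved more often.
group = rng.integers(0, 2, size=1000)
approved = rng.random(1000) < np.where(group == 0, 0.7, 0.5)

# Demographic parity difference: the gap in approval rates between groups.
# A value near 0 suggests parity; a large gap flags a potentially biased system.
rates = [approved[group == g].mean() for g in (0, 1)]
dp_diff = abs(rates[0] - rates[1])
print(dp_diff)  # roughly 0.2 for this synthetic data
```

In practice such group-level metrics are only one of several fairness criteria, and the project's methodologies would operate on real model outputs rather than synthetic decisions.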

This research topic addresses the crucial question of how to build trust in human-centric AI models, as opposed to the currently widespread AI black boxes, which do not meet modern European requirements for explainability, trust and fairness. We will validate the applicability of state-of-the-art XAI algorithms to financial applications and extend XAI frameworks so that complex models applied to financial use cases satisfy the explainability requirements of different stakeholders within the finance value chain and do not reinforce social biases. The insights into explainability provided by these extended frameworks will be evaluated qualitatively against baseline models. Through industry-ready use cases, we will demonstrate the viability of the proposed framework for audience-dependent explanations, the novel time-series XAI methods, and the fair algorithmic designs.
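The motivation for non-perturbation-based time-series XAI can be sketched concretely: perturbation methods (such as shuffling a feature) destroy exactly the temporal dependence a financial model exploits. The toy example below, an assumption-laden illustration rather than any method from the project, fits a one-parameter model to an AR(1) series and contrasts a shuffle-based importance score with an intrinsic explanation that leaves the time ordering untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy AR(1) time series: x_t = 0.9 * x_{t-1} + noise.
n = 1000
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + rng.normal(scale=0.1)

lag, target = x[:-1], x[1:]

# "Model": least-squares fit of the next value on the lagged value.
coef = np.dot(lag, target) / np.dot(lag, lag)

def mse(feature):
    return np.mean((target - coef * feature) ** 2)

# Perturbation-based importance shuffles the lag feature, which destroys
# the autocorrelation structure and inflates the error by construction...
shuffled = rng.permutation(lag)
perm_importance = mse(shuffled) - mse(lag)

# ...whereas an intrinsic (non-perturbation) explanation reads the
# dependence directly from the fitted coefficient, preserving the
# natural time ordering of the data.
intrinsic_importance = coef

print(perm_importance, intrinsic_importance)
```

The shuffle-based score reports a large importance simply because permutation broke the series' dependence structure, which is the kind of artifact order-preserving XAI methods aim to avoid.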

Under this research stream, three doctoral candidates will tackle the following research projects:

Lead of the Research Topic
Berner Fachhochschule (BFH)