About the Project 

Artificial intelligence (AI) is driving one of the biggest revolutions across technology-driven application fields.

Despite all the enthusiasm, the real-world implementation of AI remains challenging. In particular, AI solutions are often referred to as “black boxes” because it is typically difficult to trace the steps an algorithm took to arrive at its decision. This opacity arises mostly because AI models are built on complex logic, often with thousands of parameters interlinked through nonlinear dependencies.

This property is considered one of the biggest challenges in implementing AI solutions in practice: it makes the decision-making process opaque and often incomprehensible even to the developers of the tools.

To emphasize the relevance of this challenge, we discuss a thought experiment posed before the winners of the 2018 NeurIPS Explainable Machine Learning Challenge were announced. The participants were asked to consider the following situation (Rudin and Radin 2019).

‘Suppose you have a tumor and need surgery. Would you rather trust an AI surgeon who cannot tell anything about its inner workings but has a 2% chance of making a fatal mistake or a human surgeon who can explain every step in detail but has a 15% chance of making a fatal mistake?’


The AI surgeon’s 2% chance of fatality is clearly the better of the two options in terms of risk. Nevertheless, even with the safer choice, we would still feel uncomfortable, because we need explanations to trust the decision. Hence there emerges a need to understand the machine’s inner workings that lead to its decision. This example illustrates one of the significant drawbacks of today’s complex AI architectures and showcases the trade-off between efficiency and explainability we currently have to make. Furthermore, it highlights the tension between the human demand for ever-higher-quality decision-making and the desire to understand and trust.


The challenge of explainability is particularly relevant for Swiss financial intermediaries, as they are subject to the General Data Protection Regulation (GDPR). This regulation took effect in 2018 and places restrictions on automated individual decision-making. It establishes a right to explanation, enabling users to request an account of the automated decision-making processes that affect them.

As a result of such rising concerns, the concept of explainable AI (XAI) emerged, introducing a suite of techniques that attempt to explain to users how a model arrived at a particular decision (e.g., Lundberg et al. 2019; Arya et al. 2019; Molnar et al. 2020; Sokol and Flach 2020). Even though many of the classical XAI approaches can yield valuable insights into a model’s inner workings, in most cases these techniques are neither tailored to financial applications nor do they provide explanations suited to the needs of different financial experts.
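As a minimal illustration of the idea behind many post-hoc XAI techniques, the following Python sketch attributes the output of a black-box model to individual input features by replacing each feature with a baseline value and measuring the change in the score. The `credit_score` model, its coefficients, and the applicant numbers are all invented for illustration; they do not correspond to any specific method or model cited above.

```python
def credit_score(applicant):
    """Hypothetical opaque credit-scoring model; returns a score in [0, 1].

    Stands in for a black-box model whose internals a user cannot inspect.
    """
    income, debt, years_employed = applicant
    raw = 0.5 + 0.004 * income - 0.01 * debt + 0.02 * years_employed
    return max(0.0, min(1.0, raw))


def feature_attributions(model, instance, baseline):
    """Perturbation-based attribution: for each feature, swap it to its
    baseline value and record how much the model's score drops."""
    full_score = model(instance)
    attributions = []
    for i in range(len(instance)):
        perturbed = list(instance)
        perturbed[i] = baseline[i]
        attributions.append(full_score - model(tuple(perturbed)))
    return attributions


# Hypothetical applicant: income (k), debt (k), years employed.
applicant = (80, 20, 5)
baseline = (0, 0, 0)  # reference "empty" applicant
print(feature_attributions(credit_score, applicant, baseline))
```

Positive attributions mark features that pushed the score up (here, income and employment history), negative ones mark features that pulled it down (debt); this per-decision breakdown is the kind of explanation the tools above aim to provide.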


In this project, we aim to close this gap by developing a visual analytics (VA) tool specifically tailored to financial applications (credit risk management and financial time series forecasting), usable both by model developers (e.g., financial intermediaries operating in the commercial or consumer credit space) and by model evaluators (regulatory bodies that have to validate the models). More importantly, such a visual tool will enable model evaluators, a non-technical audience, to gain insight into how AI models applied to credit scoring work and to identify the reasons behind the decisions taken.
