
Investigating the utility of classical XAI methods in financial time series

  • Host institution: Bern Business School, Switzerland (BFH)

  • Starting month: M13

  • Duration: 36 months

  • Pillar 1: The need for eXplainable AI: methods and applications in finance (Bern University of Applied Sciences, 4 ECTS), Work Package 3

  • Work Packages: WP3

  • Lead Researcher: Jens Reil

Overview

Artificial intelligence is increasingly used to forecast financial time series such as stock prices, volatility, and macroeconomic indicators. While machine learning and deep learning models often outperform traditional econometric models in predictive accuracy, they introduce a significant challenge: their internal decision processes are opaque to the people who rely on them. Financial decision-making, however, occurs in highly regulated environments where transparency, accountability, and trust are essential. As a result, understanding how AI models generate predictions is crucial for regulators, analysts, and risk managers.


This research project investigates how Explainable Artificial Intelligence (XAI) methods can be reliably applied to financial (time series) forecasting models, and how these methods can be improved to better reflect the statistical properties of financial data.

The work is conducted within the Marie Skłodowska-Curie Industrial Doctoral Network DIGITAL (Horizon Europe).

Phase I: Systematic Literature Review

The first phase of the research consists of a systematic literature review (SLR) investigating how explainable AI methods are currently used to interpret AI-based time series forecasting models. The SLR synthesizes the methodological landscape and identifies limitations of existing explainability approaches when applied to financial data. The SLR addresses three core research questions:


RQ1: How does the adoption of AI-based time series forecasting models in finance differ from other application domains?

RQ2: Which explainable AI methods are most commonly used to interpret financial forecasting models?

RQ3: What limitations arise when these explainability methods are applied to financial time series?


The study follows the PRISMA systematic review framework, ensuring transparency and reproducibility. An overview of the process is provided in the figures below.

[Figure: SLR research framework]
[Figure: PRISMA framework overview]

The SLR identifies several important patterns.


First, the adoption of forecasting models in finance largely mirrors trends observed in other fields. Deep learning architectures such as Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs) dominate the literature due to their ability to capture complex temporal patterns and nonlinear relationships in data.


Second, the explainability (XAI) landscape is highly concentrated around a small number of methods. In particular, model-agnostic techniques such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations) are widely used to interpret forecasting models. These methods are frequently applied as default explanation tools across different model architectures, often without a thorough examination of whether their underlying assumptions hold in financial time series contexts.
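For intuition, the Shapley values that SHAP approximates can be computed exactly for small models by enumerating all feature coalitions. The following pure-Python sketch does this for a hypothetical three-lag linear forecaster; the weights, inputs, and zero baseline are illustrative and not taken from the reviewed literature:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions (O(2^n))."""
    n = len(x)
    phi = [0.0] * n

    def eval_coalition(S):
        # Features in S keep their value from x; the rest fall back to the baseline.
        return f([x[i] if i in S else baseline[i] for i in range(n)])

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            # Standard Shapley weight for coalitions of this size.
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                S = set(S)
                phi[i] += weight * (eval_coalition(S | {i}) - eval_coalition(S))
    return phi

# Toy linear forecaster over three lagged returns (weights are illustrative).
weights = [0.5, 0.3, -0.2]
model = lambda z: sum(w * v for w, v in zip(weights, z))

x = [0.02, -0.01, 0.03]      # instance to explain
baseline = [0.0, 0.0, 0.0]   # "no information" reference point

phi = shapley_values(model, x, baseline)
print([round(p, 4) for p in phi])  # for a linear model: w_i * (x_i - baseline_i)
```

For a linear model the attributions reduce to w_i(x_i - baseline_i), which makes the output easy to verify; exact enumeration scales exponentially in the number of features, which is why SHAP relies on approximations in practice.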


One of the validation graphics supporting these findings, based on the identified literature corpus, is shown below.

[Figure: heatmap of machine learning models vs. XAI methods by title pairs]

Another critical finding of the review is the structural mismatch between the assumptions of many explainability methods and the statistical properties of financial time series. Financial data typically exhibit characteristics such as temporal dependence, non-stationarity, volatility clustering, and regime shifts. Many XAI techniques, however, were originally designed for domains where input features are assumed to be independent and identically distributed. As a result, explanations produced by these methods can be unstable, temporally misaligned, or potentially misleading when applied to financial forecasting models.
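This mismatch can be illustrated directly: rearranging observations independently, as many perturbation-based explainers implicitly do, destroys the volatility clustering that financial returns exhibit. A small sketch on synthetic ARCH-style returns (the recursion and its parameters are illustrative, not calibrated to real data):

```python
import random

def acf(series, lag=1):
    """Sample autocorrelation of a series at the given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series) / n
    cov = sum((series[t] - mean) * (series[t - lag] - mean)
              for t in range(lag, n)) / n
    return cov / var

random.seed(0)

# Synthetic returns with volatility clustering, via a simple ARCH(1)-style
# recursion: today's variance depends on yesterday's squared return.
returns = []
for _ in range(5000):
    prev_sq = returns[-1] ** 2 if returns else 0.0
    sigma = (0.2 + 0.5 * prev_sq) ** 0.5
    returns.append(random.gauss(0.0, sigma))

abs_r = [abs(r) for r in returns]
shuffled = abs_r[:]
random.shuffle(shuffled)  # the i.i.d.-style rearrangement many perturbation schemes imply

print(f"lag-1 ACF of |returns|, original: {acf(abs_r):.3f}")    # clearly positive
print(f"lag-1 ACF of |returns|, shuffled: {acf(shuffled):.3f}")  # near zero
```

The original series shows strong positive autocorrelation in absolute returns (the signature of volatility clustering), while the shuffled series does not: any explanation built on such shuffled inputs is probing the model on data whose temporal structure no longer exists.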

Next Research Phases (II, III, and IV)

Building on the insights gained from the systematic literature review, the next stages of the research project aim to develop a deeper understanding of explainability in financial time series forecasting and to design improved explainability methods tailored to the specific characteristics of financial data. The research is structured into several interconnected phases that address both technical and human-centered aspects of XAI.


The first subsequent phase (Phase II) focuses on the empirical evaluation of existing XAI methods when applied to financial forecasting models. In this phase, commonly used explainability techniques will be systematically tested using both synthetic and real-world financial time series data. Synthetic datasets are particularly valuable because they allow the underlying data-generating processes to be controlled and fully understood. This makes it possible to determine whether an explanation correctly identifies the true drivers of model predictions. The evaluation will rely on several explainability validation metrics, such as fidelity, stability, robustness, and consistency, which measure how accurately and reliably an explanation reflects the behavior of the underlying model. 
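As a sketch of what such a validation metric might look like, the following hypothetical stability measure compares the attribution for an input against attributions for slightly perturbed copies of it; explainers whose attributions flip near decision thresholds score poorly. All function names, models, and values here are illustrative, not the project's actual metrics:

```python
import random

def explanation_stability(explain, x, n_perturb=50, eps=1e-3, seed=1):
    """Average L1 distance between the attribution for x and attributions for
    slightly perturbed copies of x. Lower values mean more stable explanations."""
    rng = random.Random(seed)
    base = explain(x)
    total = 0.0
    for _ in range(n_perturb):
        x_pert = [v + rng.uniform(-eps, eps) for v in x]
        attr = explain(x_pert)
        total += sum(abs(a - b) for a, b in zip(attr, base))
    return total / n_perturb

# Two hypothetical explainers for a two-feature model: one attributes the
# second feature smoothly, the other has a hard threshold at zero.
smooth = lambda x: [x[0], 0.4 * x[1]]
thresholded = lambda x: [x[0], 1.0 if x[1] > 0 else 0.0]

x = [0.5, 0.0005]  # sits right next to the threshold
s_smooth = explanation_stability(smooth, x)
s_thresh = explanation_stability(thresholded, x)
print(f"stability score, smooth explainer:      {s_smooth:.4f}")
print(f"stability score, thresholded explainer: {s_thresh:.4f}")
```

On synthetic data, where the true drivers are known by construction, scores like this can be compared against ground truth; fidelity, robustness, and consistency can be operationalized in a similar perturbation-based fashion.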


In addition to the technical evaluation of explainability methods, the project also addresses the human-centered dimension of explainability (Phase III). Different stakeholders in financial institutions, including regulators, risk managers, data scientists, and traders, often have different expectations regarding what constitutes a meaningful or useful explanation. The third phase of the project therefore investigates how explanations generated by AI models are perceived and interpreted by financial stakeholders. This research will involve qualitative methods such as surveys and semi-structured interviews to identify which explanation characteristics are most important for practical decision-making in finance. 


The final phase of the project (Phase IV) focuses on the development of improved XAI approaches specifically designed for financial time series forecasting. Based on the insights obtained from both the technical evaluation and the stakeholder analysis, new methods or methodological extensions will be proposed to address the limitations identified in existing XAI techniques. In particular, the research aims to design approaches that better account for temporal dependencies, non-stationarity, and other statistical properties that characterize financial time series. The resulting methods will be evaluated through both quantitative experiments and qualitative assessments to ensure that they are not only technically sound but also practically meaningful for financial applications.

Expected Results

Within this research project, we will propose a set of novel explainability methods specifically tailored to financial time series. We envision a framework for XAI in finance that addresses the shortcomings of existing methods. In particular, when features are correlated, existing perturbation-based XAI methods create artificial coalitions that lie outside the multivariate joint distribution of the data. Furthermore, generating artificial data points through random replacement disregards the temporal ordering of the series, producing unrealistic values for the feature of interest. In addition to this novel, finance-tailored methodology for obtaining explanations, the project will also aim to produce industry-ready deployments of the XAI techniques developed.
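The off-manifold problem with correlated features can be made concrete: when two features move together, replacing one of them with an independently drawn value yields coalition points far from anything observed in the data. A minimal sketch with two hypothetical, strongly correlated lagged-price features:

```python
import random

random.seed(42)

# Two strongly correlated features, e.g. hypothetical 1-day and 2-day lagged prices.
x1 = [random.gauss(100.0, 5.0) for _ in range(2000)]
x2 = [v + random.gauss(0.0, 0.5) for v in x1]  # x2 tracks x1 closely

# Perturbation-based explainers build coalitions by replacing features
# independently; here, each x1 is paired with a randomly drawn x2.
x2_perm = x2[:]
random.shuffle(x2_perm)

gap_real = sum(abs(a - b) for a, b in zip(x1, x2)) / len(x1)
gap_coal = sum(abs(a - b) for a, b in zip(x1, x2_perm)) / len(x1)

print(f"mean |x1 - x2| in the observed data:    {gap_real:.2f}")
print(f"mean |x1 - x2| in synthetic coalitions: {gap_coal:.2f}")
```

The synthetic coalitions sit an order of magnitude further from the correlation structure than any real observation, so the model is queried in regions it never saw during training, and the resulting attributions need not reflect its behavior on realistic inputs.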

Planned Secondments

  • European Central Bank (ECB). Dr. Lukaz Kubicki, M21, 12 months, exposure to globally leading central bank research, training on EU principles.

  • Secondment #2. To be planned further.


Planned Timetable

[Figure: planned project timetable]

Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or Horizon Europe: Marie Skłodowska-Curie Actions. Neither the European Union nor the granting authority can be held responsible for them. This project has received funding from the Horizon Europe research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No. 101119635
