
Boundaries of Explainable AI in Financial Time Series

Studies on Explainable AI (XAI) have grown rapidly in number, but problems remain. In a research seminar at Bern University of Applied Sciences (Business School), PhD candidate Jens Reil presented the first results of his ongoing research into XAI for financial time series.


Jens's project focuses on how current XAI methods perform when applied to time-dependent data. This includes common techniques such as SHAP and LIME, which are widely used to make black-box models more transparent.


In his structured literature review, Jens highlights that while the number of XAI studies is growing fast, many rely on outdated assumptions. In the financial domain especially, researchers tend to apply XAI tools without questioning whether the explanations are actually accurate or meaningful.


He highlights a key issue: many XAI methods perturb or resample the data in ways that break its temporal ordering. As a result, their explanations can be contradictory or non-comparable across methods. To address this, Jens calls for tailored approaches that respect time dependencies and avoid synthetic distortions. His goal is to help build XAI tools that work reliably in complex, real-world financial systems.
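The temporal-ordering problem can be illustrated with a minimal sketch (a hypothetical AR(1) series, not data from the talk): permutation-based importance and LIME-style independent sampling treat observations as interchangeable, which destroys exactly the autocorrelation structure a time-series model depends on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical AR(1) series with strong time dependence (phi = 0.9).
n = 2000
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + rng.normal()

def lag1_autocorr(s):
    """Lag-1 autocorrelation of a 1-D series."""
    s = s - s.mean()
    return float(np.dot(s[:-1], s[1:]) / np.dot(s, s))

# Perturbation-style explanation methods implicitly shuffle or resample
# observations as if they were i.i.d.
shuffled = rng.permutation(x)

print(lag1_autocorr(x))         # near 0.9: real temporal structure
print(lag1_autocorr(shuffled))  # near 0.0: structure destroyed
```

A model evaluated on such shuffled (synthetic) inputs is queried far outside the data distribution it was trained on, which is one reason the resulting explanations can be misleading for time-dependent data.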


The project is funded by the European Union’s Horizon Europe programme under the Marie Skłodowska-Curie Actions.



Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or Horizon Europe: Marie Skłodowska-Curie Actions. Neither the European Union nor the granting authority can be held responsible for them. This project has received funding from the Horizon Europe research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No. 101119635

