Boundaries of Explainable AI in Financial Time Series
- stephanieblum7
- Jun 5
Studies on explainable AI (XAI) have multiplied in recent years, but problems remain. In a research seminar at Bern University of Applied Sciences (Business School), PhD candidate Jens Reil presented the first results of his ongoing research into XAI for financial time series.
Jens's project focuses on how current XAI methods perform when applied to time-dependent data. This includes common techniques such as SHAP and LIME, which are widely used to make black-box models more transparent.
In his structured literature review, Jens highlights that while the number of XAI studies is growing fast, many rely on outdated assumptions. In the financial domain especially, researchers tend to apply XAI tools without questioning whether the explanations are actually accurate or meaningful.
He highlights a key issue: many XAI methods generate explanations by perturbing input features, and in doing so break the temporal ordering of the data. As a result, their explanations can be contradictory or non-comparable across methods. To address this, Jens calls for new, tailored approaches that respect time dependencies and avoid synthetic distortions. His goal is to help build XAI tools that work reliably in complex, real-world financial systems.
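To make the issue concrete, here is a minimal NumPy sketch (an illustration, not taken from the talk) of how the independent, per-feature perturbation used by LIME and KernelSHAP destroys the temporal dependence in lagged time-series inputs. The AR(1) process, coefficients, and sample size are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) series: each value depends strongly on the previous one.
n = 2000
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + rng.normal(scale=0.1)

# Lagged design matrix: columns are (x[t-2], x[t-1]). Adjacent columns
# are highly correlated because of the temporal dependence.
X = np.column_stack([x[:-2], x[1:-1]])

corr_before = np.corrcoef(X[:, 0], X[:, 1])[0, 1]

# LIME/KernelSHAP-style perturbation: a feature column is resampled
# independently of the others, ignoring the time ordering between lags.
X_pert = X.copy()
X_pert[:, 1] = rng.permutation(X_pert[:, 1])

corr_after = np.corrcoef(X_pert[:, 0], X_pert[:, 1])[0, 1]

print(f"lag correlation before perturbation: {corr_before:.2f}")
print(f"lag correlation after  perturbation: {corr_after:.2f}")
```

The perturbed samples fed to the surrogate model no longer look like plausible time series, which is one reason explanations built from them can disagree or mislead.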
The project is funded by the European Union’s Horizon Europe programme under the Marie Skłodowska-Curie Actions.