
Audience-dependent explainable AI in finance


Fast facts

  • Host institution: University of Naples Federico II (UNINA), Italy
  • Starting month: M9
  • Duration: 36 months
  • Pillar 1: The need for eXplainable AI: methods and applications in finance (Bern University of Applied Sciences), Work Package 3
  • Work Packages: WP3, WP6, WP7, WP8

The problem

[Figures: SHAP feature importance; the explainability big picture]

Finance increasingly relies on advanced machine learning models that deliver strong performance but often function as “black boxes.” Explainable AI (XAI) aims to make these systems more transparent and understandable.


Most solutions today are post-hoc explanations—generated after a model produces a result, without changing how the model works internally. Popular post-hoc methods include LIME and SHAP, which estimate which input factors most influenced a specific prediction.
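The post-hoc idea can be illustrated without either library: treat the model as an opaque scoring function and measure how its prediction shifts when each input is replaced by a baseline value. The sketch below is a deliberately simplified occlusion-style attribution, a much cruder relative of LIME and SHAP; the toy scoring function, feature names, and baseline values are invented for illustration only.

```python
# Minimal post-hoc attribution sketch: perturb one feature at a time
# toward a baseline and record how much the "black box" score changes.
# The scoring function is a toy stand-in, not a real credit model.

def black_box_score(features):
    """Opaque model: returns a loan-approval score clipped to [0, 1]."""
    income, debt_ratio, late_payments = features
    raw = 0.5 + 0.004 * income - 0.3 * debt_ratio - 0.1 * late_payments
    return max(0.0, min(1.0, raw))

def occlusion_attribution(model, x, baseline):
    """Per-feature contribution: score drop when feature i is reset
    to its baseline ("average applicant") value."""
    full = model(x)
    contributions = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline[i]
        contributions.append(full - model(perturbed))
    return contributions

applicant = [60, 0.8, 2]   # income (k), debt ratio, late payments
baseline = [40, 0.4, 0]    # hypothetical reference applicant
print(occlusion_attribution(black_box_score, applicant, baseline))
```

A positive contribution means the feature pushed the score up relative to the baseline, a negative one that it pulled the score down. LIME and SHAP refine this idea with local surrogate models and game-theoretic averaging over feature coalitions, respectively, but the audience question remains: a list of numbers like this means very different things to a developer, a compliance officer, and a loan applicant.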


However, these explanations are not universally useful. Regulators, compliance teams, business users, developers, and customers need different levels of detail and different kinds of evidence. The key challenge is not only explaining AI, but making explanations fit the audience and accountability context.


Image credit: Molnar

"The business would not understand it (black-box models). They were so used to the regression coefficients."

AI project manager from Asia

Our mission

Part of the EU-funded MSCA DIGITAL network, this project moves beyond evaluating existing AI tools to empirically profile the preferences of the people who actually use them. We aim to map the requirements of explainable systems across three key stakeholders:

  • Finance professionals, who need explanations to make and defend decisions

  • Regulators, who require explanations contextualized by legal goals and compliance obligations

  • General public, who need accessible explanations to calibrate trust in automated services and to contest unfair decisions

Get involved and shape the future of AI transparency in finance

I am a...

  • Finance Professional
  • Financial app user

Milestones

  • Nov 2024 – Jul 2025 | Foundations
    Conceptual framing and coursework completed at UNINA, including research linking financial innovation, sustainable finance, and explainability.

  • Feb – Sep 2025 | Ethics & research infrastructure
    Data management, consent processes, and ethics clearance finalized to support responsible, GDPR-aligned data collection.

  • Apr – Nov 2025 | Exploratory interviews completed
    17 interviews conducted across regions and financial roles, highlighting explainability as a system-level requirement and a process embedded in accountability chains.

  • Oct – Dec 2025 | Surveys launched
    Two survey instruments developed and deployed:

    • General Public / Financial App Users (multi-language)

    • Finance Professionals (role-specific explainability needs)

  • 2026 | Regulator engagement & publications
    Focus group discussions (FGDs) with supervisors/regulators and a sequential publication plan across audiences, building toward a practical matching framework for audience-dependent XAI.

Read the latest work

Trade-offs in Financial AI: Explainability in a Trilemma with Accuracy and Compliance (arXiv preprint, v1) 

Our preliminary research identifies a "Financial AI Trilemma." Unlike the traditional view in which accuracy is traded off against explainability, finance professionals treat accuracy and compliance as non-negotiable "hygiene factors," while ease of understanding functions as the gateway to adoption, determining whether a system is actually usable and defensible in practice.

A global training

From February 2026, the lead researcher will be based in Vilnius, Lithuania for an 18-month secondment at Swedbank Baltics, focusing on AI scaling at the strategic level. During this period, the project will collaborate closely with banking and AI teams to connect research on audience-dependent explainability with real-world financial AI deployment, strengthening the bridge between academia and industry and supporting governance-aware AI adoption in finance.


Following this phase, the project will collaborate with the Fraunhofer Institute (FRA) under the supervision of Prof. Dr. Ralf Korn, with a focus on enhancing know-how transfer through the use and implementation of advanced financial models. This stage further integrates strategic AI scaling with rigorous quantitative financial modeling, reinforcing the project’s applied and methodological impact.


Original Planned Timetable

Our Team



Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or Horizon Europe: Marie Skłodowska-Curie Actions. Neither the European Union nor the granting authority can be held responsible for them. This project has received funding from the Horizon Europe research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No. 101119635

Follow us

  • Wikipedia
  • LinkedIn

© 2023-2025
