Explainable AI - Training Week
The course provides a comprehensive introduction to Explainable Artificial Intelligence (XAI), emphasizing the methodologies and practical applications of cutting-edge methods such as LIME, SHAP, XAI for deep learning, and time series-based XAI techniques.


Time & Location
06 Oct 2025, 08:30 – 10 Oct 2025, 18:00
BFH Business School, Brückenstrasse 73, 3005 Bern, Switzerland
About the event
The training week provides a comprehensive introduction to Explainable Artificial Intelligence (XAI), emphasizing both foundational and advanced methodologies. Participants will explore practical applications of model-agnostic techniques such as Partial Dependence Plots (PDP), Individual Conditional Expectation (ICE) plots, LIME, and SHAP, alongside specialized approaches for interpreting complex deep learning models. Particular attention will be given to explainability in neural networks, including gradient-based methods, as well as emerging XAI techniques tailored to time-series data. Participants will explore how these techniques enhance interpretability and transparency in AI systems, along with the challenges they face, such as scalability, interpretability trade-offs, and accuracy limitations.
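As a quick illustration of two of the techniques named above, the sketch below plots Partial Dependence and ICE curves with scikit-learn; the synthetic dataset and gradient-boosting model are stand-ins for illustration, not course material.

```python
# Illustrative sketch: PDP and ICE curves with scikit-learn on synthetic data.
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# kind="both" overlays the average effect (PDP) on per-instance ICE curves
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1], kind="both")
plt.show()
```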
The course also investigates the limitations and reliability of XAI models when applied to complex datasets, with advanced discussions on their performance and practical constraints. A distinctive focus is placed on financial applications, examining how XAI can address the unique challenges and regulatory requirements of the financial sector.
By the end of the course, participants will gain the expertise to implement XAI models, critically evaluate their effectiveness, and apply them responsibly within financial systems, fostering trust and compliance with regulatory standards.
Modules and Credits: The training week covers the following modules and credits of the MSCA doctoral program:
6-9 October 2025: Foundation Module "Need for Explainable AI in Finance" (4 ECTS)
10 October 2025: Module "AI for Data Analysis: Privacy and Coding in Digital Finance" (1 ECTS)
Prior Knowledge: Participants are expected to have a foundational understanding of machine learning concepts, including supervised and unsupervised learning, common algorithms, and evaluation metrics. Working familiarity with Python programming is required, including experience with libraries such as scikit-learn, pandas, and NumPy. Familiarity with basic statistics and linear algebra will also be helpful for understanding the mathematical foundations of explainability methods.
Detailed Schedule
Monday, 6th October
8:30 - 9:00: Networking Coffee
9:00 - 9:15: Welcome to BFH (Christian Hopp, Head of Research, Bern University of Applied Sciences)
9:15 - 9:45: Overview of the Week / Launch of Training Competition (Branka Hadji Misheva & Lucia Gomez)
Industry Session - AI in Action (real-world use cases)
09:45 - 10:15: Explainable AI for Philanthropy and Finance (Milos Maricic, CEO, Altruist League)
10:15 - 10:45: Microsoft’s Responsible AI adoption for FSI: Copilot, SAFE (Secure AI for Everyone), and the Journey from Pilot to Governed Scale (Darri Helgason, AI Business Solutions Engineer, Microsoft Switzerland)
Break
11:00 - 11:30: AI and Investment Analysis (Alexandru-Septimiu Rif, Portfolio Manager, Alean Capital AG and Robert Gutsche, Professor Applied Data Science, Bern University of Applied Sciences)
11:30 - 12:00: Independent Wealth Management in Transition (Lilian Nordet, Director of HUB+) and Interpretable AI for Stock Return Prediction Models (Louis-Alexandre Piquet, Quantitative Researcher and Trader at RAM Active Investments SA)
12:00 - 12:30: AI Governance for Trustworthy, Compliant AI (Kevin Schawinski, Co-founder & CEO of Modulos)
Lunch
13:30 - 14:00: Applied AI in Banking: Failures, Successes, and What’s Next? (Aidas Malakauskas, Head of Omnichannel Solutions, Swedbank)
14:00 - 14:30: Beyond Efficiency Promises: Bottom-Up Agents That Teams Actually Use (Fabio Duo, Founder & CEO, PeakPrivacy and Founder & Project Manager, Freihandlabor GmbH)
14:30 - 15:00: AI in Finance: Redefining Intelligence in Investing (Luba Schoenig, Co-Founder, Umushroom and former banker, Credit Suisse)
Break
15:30 - 16:00: AI in Practice: Real-World Applications at RBI (Stefan Theussl, Head of Research Innovation Hub, Raiffeisen Bank International AG)
16:00 - 16:30: From Models to Systems: Advancing Explainable AI in Financial Decision-Making (Gennaro Di Brino, Head of Data Science, Cardo AI)
Break
17:00 - 18:00: Student XAI Pitches
Tuesday, 7th October
9:00 - 10:30: Recap of Machine & Deep Learning (Flipped Classroom)
Break
10:45 - 12:30: White Box AI - Intrinsic Explainability & Basic Explainability - Feature Importance, PDP, ICE (Faizan Ahmed, XAI Professor and Program Director BIT at University of Twente)
Lunch
13:30 - 15:30: Practical Session, moderated by Faizan Ahmed
Break
16:00 - 17:00: Modulos Demo. Students will discover an AI governance platform that streamlines compliance with global regulations and standards and manages risks (online session with Kevin Schawinski, Co-Founder & CEO of Modulos)
Wednesday, 8th October
9:00 - 10:45: SHAP & LIME: Introduction and Deep Dive (Branka Hadji Misheva)
Break
11:15 - 12:00: Practical Session I: SHAP Limitations (Adam Andrzej Kurpisz, Professor in Operations Research at BFH)
Lunch
13:30 - 14:15: Practical Session II: Real Use-Case Replication (Marcos Machado & Julius Kooistra)
14:15 - 15:30: Deep Learning XAI (Faizan Ahmed)
Break
16:00 - 17:00: Practical Session, moderated by Faizan Ahmed
Thursday, 9th October
9:00 - 10:45: Time-Series Based XAI Methods (Faizan Ahmed)
Break
11:00 - 12:00: Practical Session, moderated by Faizan Ahmed
Lunch
13:30 - 14:30: Evaluation Frameworks & Applied Work (Golnoosh Babaei, researcher at University of Pavia)
14:30 - 15:00: A Systematic Literature Review of XAI: Process, Challenges, and Insights (Jurgita Černevičienė, PhD student, Kaunas University of Technology)
Break
15:30 - 16:00: Explainability for Reinforcement Learning (Wouter van Heeswijk, Assistant Professor at University of Twente)
16:30 - 17:00: Intelligence and Explainability (Frederik Sinan Bernard, Senior Researcher at University of Twente)
17:00 - 17:30: Discussion on Assessment for XAI Module & Closing remarks
Friday, 10th October
Module: AI for Data Analysis: Privacy and Coding in Digital Finance (Luca Di Grazia, Great Minds Fellow at University of St. Gallen)
9:00 - 10:00: Overview of AI privacy concerns, coding tools, and local LLMs.
10:00 - 11:30: Build a basic app interface connected to a private on-device AI assistant.
11:30 - 12:00: Data analysis with AI for company revenue data; discuss privacy, IP, and compliance.
Lunch
13:30 - 16:30: Applying the prototype to a case study: students work in groups to extend their prototype to address a specific problem in the digital finance domain.
16:30 - 17:45: Each group presents their findings (10 minutes), with constructive feedback from peers and the instructor.
17:45 - 18:00: Discussion: Lessons Learned and Pathways Forward
Speakers and Talks
Milos Maricic (CEO, Altruist League): Explainable AI for Philanthropy and Finance

Milos Maricic is a philanthropy expert, author, and former humanitarian executive pioneering the use of artificial intelligence in global giving. He is the founder of the Altruist League, a Geneva-based consultancy that has reshaped how foundations and investors approach systemic challenges through AI-driven funding strategies.
Talk: Explainable AI for Philanthropy and Finance
In this session, Milos Maricic, CEO of the Altruist League, explores how Explainable AI (XAI) is transforming high-stakes decision-making in philanthropy and finance. Drawing on real-world examples from his organization’s work with global foundations and institutional investors, he will show how tools like SHAP and LIME can bring much-needed transparency to AI-driven grant allocation and risk analysis. The talk will address the unique ethical, regulatory, and strategic challenges of applying XAI in contexts where social outcomes, not just profit, are at stake. With a focus on interpretability as a cornerstone of trust, Milos will share lessons on navigating the trade-offs between explainability, accuracy, and impact in mission-driven AI systems.
Stefan Theussl (Head of Research Innovation Hub, Raiffeisen Bank International AG): (Explainable) AI in Practice: Real-World Applications at RBI

Stefan Theussl heads the Research Innovation Hub of Raiffeisen Research at RBI Group. In this role, he and his team are responsible for the Digital Research Platform (https://www.raiffeisenresearch.com) as well as the cloud-based data platform supporting local financial analysis and facilitating the development of new AI and quant use cases. Previously, Stefan was Head of Team Portfolio Credit Risk Analytics within RBI, where he and his team were responsible for state-of-the-art applications for the analysis of RBI Group's non-retail credit portfolio. Stefan received his PhD from WU Wien and maintains a strong link to academia. He is also a member of the supervisory board of the MSCA Digital Finance Industrial Doctoral Network and co-lead of the work package AI in Financial Markets. Furthermore, he has contributed several open-source packages to the R Project for Statistical Computing.
Talk: (Explainable) AI in Practice: Real-World Applications at RBI
In this talk, we present our strategies for (explainable) AI at RBI, with a focus on several key use cases.
Our first use case concerns the use of AI to analyze sentiment toward specific companies of interest, derived from (financial) news. This use case exemplifies how we harness AI at scale to process extensive volumes of financial news data, yielding valuable insights into market sentiment and trends as well as the impact on ESG metrics.
Next, we explore our application of AI in Know Your Customer (KYC) processes, explaining how we employ AI for AML data extraction, KYC reviews for credit cards, and translations, thereby ensuring compliance and mitigating risks.
Finally, we present a use case involving our financial research chatbot MIRAI. This chatbot is designed to answer queries related to our financial research, and we demonstrate how we maintain the explainability of its responses, fostering transparency and trust among our users.
Luba Schoenig (Former Credit Suisse banker & co-founder of UMushroom): AI in Finance: Redefining Intelligence in Investing

Luba Schoenig holds a Master’s in Economics (University of Fribourg) and a PhD in Quantitative Finance (University of Zurich). She worked in structured products at Lehman Brothers and Credit Suisse, later becoming responsible for UHNWI investment solutions for part of the Emerging Markets. At Julius Baer, she built an investment specialist team for UHNWI clients. In 2020, she co-founded UMushroom together with Tonia Zimmermann.
Title: AI in Finance: Redefining Intelligence in Investing
Artificial Intelligence is breaking down the long-standing information and knowledge barriers of the traditional financial world—making investing more accessible, personalized, and transparent than ever before. By addressing challenges such as complex language, information overload, lack of personalization, and limited access to expert guidance, AI is transforming financial decision-making with remarkable speed and precision. This talk will explore cutting-edge innovations, ethical implications, and the evolving role of intelligent ecosystems in shaping the future of finance. Join us to discover how AI is redefining investing for everyday individuals.
Alexandru-Septimiu Rif (Portfolio Manager at Alean Capital AG)
Dr. Alex Rif is a portfolio manager and investment professional with a strong focus on equity research and data-driven investment strategies. He currently works at Alean Capital AG in Liechtenstein, where he serves as portfolio manager and a member of the fund’s investment committee, contributing to strategic asset allocation and investment decisions. Alongside his investment role, Alex is a lecturer at the Zurich University of Applied Sciences (ZHAW) and the University of St. Gallen, where he teaches courses on corporate finance, valuation, trading, and risk management. Alex holds a PhD in Finance from the University of St. Gallen, and his research focuses on investment strategies, equity analysis, and data analytics.
Robert Gutsche (Professor in Applied Data Science and Finance at BFH)
(details to be added)
Gennaro Di Brino (Head of Data Science at CARDO AI): From Models to Systems: Advancing Explainable AI in Financial Decision-Making

Gennaro Di Brino is currently serving as the Head of Data Science at Cardo AI, where he directs the design, experimentation, and deployment of language modeling, quantitative, and machine learning systems for structured finance and lending platforms. His team's recent deliveries include a retrieval-augmented document analyzer, text-to-SQL and data quality assistants, a credit risk simulation platform, and early-warning engines for credit default. Before Cardo AI, Gennaro held data science roles at Docebo, Altius Consulting (now part of Avanade), and Data Reply, creating recommender systems, demand forecasting simulators, natural language models, and computer vision pipelines for e-learning, retail, telecom, and insurance clients. A pure mathematician by training, he earned his PhD from Yale University and subsequently held a postdoctoral Marie Curie fellowship, with later stays at the IHES and the Max Planck Institute. His academic work has appeared in several international peer-reviewed journals.
Title: From Models to Systems: Advancing Explainable AI in Financial Decision-Making
We'll cover a couple of use cases, ranging from credit default prediction on tabular loan data to retrieval-augmented data extraction from transaction documents. In the former case, literature and explainability practices in the financial industry seem to be fairly well aligned—notably, the use of Shapley values keeps gaining ground—while explainability in the latter remains an active area of research. To compensate for the lack of consolidated methods, we'll combine careful design with the selective adoption of techniques from recent literature. We’ll see how putting these concepts into practice can make our users’ lives easier—especially when the user is interacting with a "system" of models rather than a single model.
Aidas Malakauskas (Head of Omnichannel Solutions, Swedbank): Applied AI in Banking: Failures, Successes, and What’s Next?

Aidas Malakauskas holds a doctorate in economics and leads the Omnichannel Solutions Division at Swedbank Baltics. His work focuses on digital service development, automation, and the use of artificial intelligence in banking. His academic research has explored access to credit for small and medium-sized enterprises and the application of AI in economic analysis.
Talk: Applied AI in Banking: Failures, Successes, and What’s Next?
A look at how AI is applied in banking — the successes delivering real impact, the failures falling short, and the lessons shaping the next phase of AI adoption in finance.
Branka Hadji Misheva (Professor in Applied Data Science at BFH): SHAP and LIME: Deep Dive

Branka Hadji Misheva is one of the main organizers of this event. She is a Professor of Applied Data Science and Finance at Bern University of Applied Sciences, where she leads the Applied AI Research and Solutions (AIRS) group. She holds a PhD in Economics and Management of Technology from the University of Pavia, specializing in fintech risk management using network theory. Branka has led multiple research projects funded by MSCA, Innosuisse, and SNF, focusing on AI applications in finance, credit risk modeling, and explainability in machine learning. She is an active contributor to international research networks, including COST FinAI. Branka has an extensive publication record in top-tier journals and regularly speaks at conferences on AI in finance. She has experience collaborating with industry partners, regulatory bodies, and central banks to develop responsible AI solutions. Her expertise spans deep learning, financial risk modeling, and counterfactual explanations. She is also an editor and reviewer for leading academic journals in AI and finance.
Title: SHAP and LIME: Deep Dive
This workshop offers an in-depth exploration of LIME and SHAP, two leading techniques for explaining machine learning models. Through a mix of lectures and hands-on practical sessions, participants will learn how these methods work, when to use them, and how to interpret their outputs. The workshop emphasizes understanding underlying algorithms and applying them to real-world datasets for robust model explainability.
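As a small taste of the workshop's subject matter, here is a hedged sketch of both techniques on a public dataset (assuming the shap and lime packages are installed); the dataset and model are illustrative stand-ins, not the workshop's own material.

```python
# Minimal sketch: SHAP attributions for a tree ensemble, plus a LIME
# explanation of one prediction. Dataset and model are stand-ins only.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# SHAP: additive feature attributions, computed exactly for tree models
shap_values = shap.TreeExplainer(model).shap_values(data.data)

# LIME: a local linear surrogate fitted around a single instance
explainer = LimeTabularExplainer(
    data.data, feature_names=list(data.feature_names), mode="classification"
)
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
print(explanation.as_list())
```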
Lucia Gomez Teijeiro (Professor in Applied Data Science at BFH): SHAP and LIME: Deep Dive

Prof. Dr. Lucia Gomez Teijeiro is one of the main organizers of this event. She is a Professor in Applied Data Science and Finance at the Bern University of Applied Sciences. She brings extensive expertise in the development and application of Artificial Intelligence (AI) technologies across a broad spectrum of domains, with a particular focus on financial services and systems. Her research and teaching activities center on leveraging cutting-edge AI methodologies to address complex, real-world challenges. Lucia has a profound understanding of AI models, especially large language models (LLMs), and possesses end-to-end technical skills ranging from model design and training to full-stack implementation.
Title: SHAP and LIME: Deep Dive
This workshop offers an in-depth exploration of LIME and SHAP, two leading techniques for explaining machine learning models. Through a mix of lectures and hands-on practical sessions, participants will learn how these methods work, when to use them, and how to interpret their outputs. The workshop emphasizes understanding underlying algorithms and applying them to real-world datasets for robust model explainability.
Julius Kooistra (Researcher in Applied Data Science at BFH): XAI in Public Employment Services

Julius Kooistra is a researcher at Bern University of Applied Sciences in the Applied AI Research and Solutions (AIRS) group. He holds an MSc in Business Information Technology from the University of Twente, specializing in Data Science & Business and Enterprise Architecture & IT Management. Julius has co-authored proposals funded by MSCA, Innosuisse, BeLEARN, and private companies. His current work involves building transparent and interpretable AI systems for use in the government, education, legal, and financial sectors.
Title: XAI in Public Employment Services
This workshop offers an in-depth exploration of XAI in a real-world Public Employment Services use case. Participants will get hands-on experience applying explainable clustering and explainable ensemble learning, along with insight into the practical application of these methods in a real-world setting.
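For illustration only, the sketch below shows one common route to explainable clustering: summarizing KMeans cluster assignments with a shallow decision-tree surrogate whose rules can be read directly. The data and feature names are invented, not the actual Public Employment Services data.

```python
# Hypothetical sketch: explain KMeans clusters with a shallow decision tree.
# Features ("months_unemployed", "prior_jobs") are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) + rng.choice([0, 4], size=(500, 1))

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# The tree's split rules act as a human-readable description of each cluster
surrogate = DecisionTreeClassifier(max_depth=2).fit(X, labels)
print(export_text(surrogate, feature_names=["months_unemployed", "prior_jobs"]))
```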
Faizan Ahmed (Lecturer and program director in Business Information Technology at the University of Twente): Explainable AI for Deep Learning & Time Series

Faizan Ahmed is a lecturer and program director in Business Information Technology at the University of Twente. With a PhD in Applied Mathematics, his research focuses on applied machine learning and explainable AI (XAI). He supervises interdisciplinary graduation projects on topics like point cloud processing, XAI for time series, energy forecasting, and data-driven approaches to disease progression modelling.
Title: Explainable AI for Deep Learning & Time Series
This session introduces methods for making deep learning models applied to time series more transparent and interpretable. Participants will explore key explainability techniques, their applications in finance and other high-stakes domains, and how these methods help build trust in model predictions.
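To give a flavor of the gradient-based methods mentioned, the sketch below computes a simple saliency map for a time series using PyTorch; the toy 1-D convolutional model and random data are placeholders, not the session's material.

```python
# Hedged sketch of one gradient-based method (saliency) for a time series.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 50, 1),
)

x = torch.randn(1, 1, 50, requires_grad=True)  # one series of 50 time steps
model(x).backward()                            # gradient of output w.r.t. input

# Large |gradient| flags the time steps most influential for the prediction
saliency = x.grad.abs().squeeze()
print(saliency.argsort(descending=True)[:5])   # five most influential steps
```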
Golnoosh Babaei (Senior researcher at University of Pavia): SAFE AI

Golnoosh Babaei is a researcher and data scientist specializing in SAFE Artificial Intelligence (AI), Python programming, data analytics, and machine learning. She holds a PhD in Computer Engineering from the University of Pavia and has several years of experience applying statistical and computational methods to real-world problems. Over her career, she has worked on academic and industry projects ranging from a cross-selling project at the AXA insurance company to applying the SAFE AI framework to text data at the Bank of Italy, and she has taught Python and machine learning in various courses. Her research interests include SAFE AI, machine learning, and financial decision-making.
Title: SAFE AI
As AI systems increasingly shape decisions in finance, healthcare, and education, ensuring their responsible design and safety has become a critical challenge. This talk introduces the SAFE AI framework, in which SAFE stands for Security, Accuracy, Fairness, and Explainability. We will explore how these AI principles can be measured using newly proposed metrics based on the Lorenz curve, dual Lorenz curve, and concordance curve. A real-world example will illustrate how these methods can measure the safety of a decision-making model. By the end of the session, students will have a clear framework for assessing the safety of their own projects.
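For context, the sketch below shows the standard Lorenz-curve construction that such metrics build on; the SAFE metrics themselves are the speaker's proposal and are not reproduced here.

```python
# Sketch of the standard Lorenz-curve construction (not the SAFE metrics).
import numpy as np

def lorenz_curve(values):
    """Cumulative share of the total held by the lowest fractions of points."""
    v = np.sort(np.asarray(values, dtype=float))
    cum = np.cumsum(v) / v.sum()
    return np.insert(cum, 0, 0.0)  # the curve starts at the origin

scores = np.random.default_rng(0).gamma(2.0, size=100)  # toy model outputs
curve = lorenz_curve(scores)

# Gini coefficient = 1 - 2 * (area under the Lorenz curve), trapezoid rule
dx = 1.0 / (len(curve) - 1)
area = dx * (curve[:-1] + curve[1:]).sum() / 2
print(round(1 - 2 * area, 3))
```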
Fabio Duó (Founder and CEO of freihandlabor GmbH): Beyond Efficiency Promises: Bottom-Up Agents That Teams Actually Use

Fabio Duó is a tech entrepreneur with 25 years of experience. He is the founder and CEO of freihandlabor GmbH (since 2011), a Zürich-based software company where he leads a team of 14 developers and designers creating comprehensive web applications and digital solutions for startups and SMEs. Fabio also founded PeakPrivacy, an AI startup focused on ethical artificial intelligence and privacy-preserving data solutions for businesses, and previously founded Headlessforms and Easy User Test (2007-2011). He specializes in end-to-end digital transformation, from strategy and UX/UI design to complex web application development, and is known for combining technical innovation with a strong emphasis on data privacy, user-centric design, and responsible AI implementation. He is actively involved in the Swiss tech and NPO communities, sharing expertise on AI adoption and digital marketing strategies.
Title: Beyond Efficiency Promises: Bottom-Up Agents That Teams Actually Use
This talk explores how PeakPrivacy shifted from top-down AI initiatives that stalled to empowering frontline employees to design and guide their own agents. Fabio Duó shares practical patterns, like clear human-agent handovers, lightweight review notes, and simple traceability, that turn AI into tools that reduce drudgery while reinforcing, rather than replacing, professional judgment.
Wouter van Heeswijk (Assistant Professor in Operations Research & Financial Engineering at the University of Twente): Tutorial on explainable reinforcement learning (XRL)

Wouter van Heeswijk is an assistant professor in operations research & financial engineering at the University of Twente. His research efforts focus primarily on reinforcement learning, with both methodological developments and applications across domains. He teaches a number of courses in financial engineering, with topics including reinforcement learning in finance, numerical valuation of derivatives, real option analysis, risk management and financial accounting. Within the DIGITAL network, he is primarily involved in the doctoral training programme.
Title: Tutorial on explainable reinforcement learning (XRL)
Reinforcement learning has shown promise for financial decision-making, but black-box policies raise challenges of trust, governance, and deployment. This tutorial introduces explainable reinforcement learning (XRL) with a focus on finance. We will discuss when and why explanations matter, review practical techniques (attribution, policy distillation, counterfactuals), and connect them to regulatory and risk-management contexts. A live demo will illustrate how to train a simple portfolio allocation agent, generate feature attributions, and distil its behaviour into an interpretable model. The goal is to provide participants with both conceptual frameworks and hands-on tools for making RL policies auditable without sacrificing performance.
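As a preview of one technique named above, the sketch below distills a stand-in policy into an interpretable decision tree; the policy function and feature names are placeholders, not the tutorial's trained portfolio agent.

```python
# Hedged sketch of policy distillation: a shallow decision tree mimics a
# placeholder policy's actions, yielding an interpretable surrogate.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
states = rng.normal(size=(1000, 3))  # invented features for illustration

def policy(s):
    # Placeholder for a trained RL policy's greedy action selection
    return (s[:, 0] - 0.5 * s[:, 1] > 0).astype(int)  # 1 = increase allocation

surrogate = DecisionTreeClassifier(max_depth=3).fit(states, policy(states))
print(export_text(surrogate,
                  feature_names=["momentum", "volatility", "drawdown"]))
```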
Daniel Traian Pele (Professor of Statistics and Data Science at the Bucharest University of Economic Studies): Explainable AI for Electricity Price Anomaly Detection: A SHAP-Driven Approach in Romania’s Energy Market

Daniel Traian Pele is Professor of Statistics and Data Science at Bucharest University of Economic Studies and serves as local project manager for the MSCA Industrial Doctoral Network on Digital Finance (DIGITAL). His research spans time series analysis, financial risk management, and applications of AI and blockchain in digital finance, while also contributing expertise to the World Bank, European Investment Bank, and European Commission. As an active educator and PhD coordinator, he teaches, mentors, and publishes internationally, regularly speaking at leading conferences on risk, digital finance, and AI.
Title: Explainable AI for Electricity Price Anomaly Detection: A SHAP-Driven Approach in Romania’s Energy Market
This research presents an approach to anomaly detection in electricity price data using explainable artificial intelligence (XAI) techniques. The study focuses on the Romanian electricity market, analyzing hourly price data alongside generation and load variables to identify and explain price anomalies. We use Isolation Forest for anomaly detection and Random Forest for predictive modeling, while SHAP (SHapley Additive exPlanations) values provide interpretability of the detected anomalies. Our methodology categorizes anomalies into price spikes, price drops, and other anomalies, revealing distinct patterns in each category. Results show that renewable energy generation, particularly wind and solar, significantly influences price drops, while load forecast deviations and conventional generation constraints contribute to price spikes. This framework offers insights for market participants and regulators, enabling better understanding of market dynamics and potentially improving forecasting accuracy and market stability. This research contributes to the growing field of explainable AI applications in energy markets by providing a transparent methodology that bridges the gap between black-box anomaly detection and actionable market intelligence.
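A minimal sketch of the core pipeline described above (Isolation Forest scores explained with SHAP) might look as follows; the synthetic columns stand in for the study's generation and load variables.

```python
# Minimal sketch: explain Isolation Forest anomaly scores with SHAP.
import numpy as np
import shap
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))  # stand-ins: [wind_gen, solar_gen, load]

iso = IsolationForest(random_state=0).fit(X)
shap_values = shap.TreeExplainer(iso).shap_values(X)  # per-feature attributions

anomalies = np.where(iso.predict(X) == -1)[0]  # -1 flags detected anomalies
print(shap_values[anomalies[:3]])  # which variables drove the first three
```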
Jurgita Černevičienė (Lecturer and PhD candidate in Computer Science at Kaunas University of Technology): Explainable artificial intelligence (XAI) in finance: a systematic literature review

Jurgita Černevičienė is a lecturer and PhD candidate in Computer Science (2022–present) at the Department of Mathematical Modelling at Kaunas University of Technology, Lithuania. She holds an MSc in Mathematics (Business Big Data Analytics), which motivated her deeper focus on Artificial Intelligence. Her research focuses on evaluating the effectiveness of XAI in finance—linking model transparency to measurable risk-management outcomes. Beyond XAI for risk assessment, she investigates data-driven properties of digital-asset markets, evaluating stylized facts in cryptocurrencies; she is also certified in the Google Data Analytics program. Current work extends to investment decision-making and portfolio management, aiming to connect interpretable ML with practical allocation and risk-control strategies.
Title: Explainable artificial intelligence (XAI) in finance: a systematic literature review
Explainable Artificial Intelligence (XAI) has emerged as a vital research domain in finance, where transparency, regulatory compliance, and stakeholder confidence are paramount. Through a review of recent literature, we define four enduring challenges to successful adoption: (i) customizing explanations for the appropriate audience (experts versus end-users); (ii) a scarcity of truly innovative XAI methodologies, with numerous studies merely augmenting the interpretability of ML models rather than progressing XAI itself; (iii) the lack of standardized, universally recognized evaluation metrics for assessing explanation quality; and (iv) persistent information-security risks that threaten model integrity and trust. Despite these challenges, findings show that XAI can strengthen risk assessment, credit scoring, fraud detection, and regulatory alignment by clarifying drivers of model outputs and improving decision confidence. Notable advances—such as the S.A.F.E. trustworthiness criteria, the KAIRI risk-monitoring framework, and interpretable toolkits like Aletheia and PiML—signal progress toward more reliable and auditable systems. Nevertheless, gaps remain around domain-aware evaluation standards, rigorous security hardening, and applications beyond credit—especially portfolio optimization and internet financing. Overall, the evidence indicates that XAI can bridge the performance–trust gap in financial AI when paired with audience-appropriate explanations, robust metrics, and close collaboration between data scientists and finance professionals.
Darri Helgason (AI Business Solutions Engineer at Microsoft Switzerland): Microsoft’s Responsible AI adoption for FSI: Copilot, SAFE (Secure AI for Everyone), and the Journey from Pilot to Governed Scale

Darri Helgason is a Solution Engineer at Microsoft Switzerland, where he designs and guides the adoption of AI platforms and Agentic AI for the Financial Services Industry. He supports a global S&P 500 financial institution headquartered in Zurich, helping to operationalize secure, compliant, and scalable data and AI capabilities aligned with risk and regulatory frameworks. Drawing on an extensive background in information technology that spans solution architecture, data platforms, and enterprise integration, he focuses on translating complex technical constructs into measurable business outcomes. His professional interests include responsible and explainable AI in regulated environments, model governance, and the operating models required to deploy and sustain AI systems at scale.
Title: Microsoft’s Responsible AI adoption for FSI: Copilot, SAFE (Secure AI for Everyone), and the Journey from Pilot to Governed Scale
This session explains Microsoft’s AI strategy for Financial Services, highlighting Copilot and SAFE (Secure AI for Everyone) as enablers of responsible adoption. It explores how leading institutions are embracing generative AI through an Access → ROI → Governance framework—balancing innovation with security, compliance, and risk management. Attendees will gain insights into practical steps for using and adopting AI, scaling Copilot, including governance controls, data protection, and change‑management practices that address shadow AI and workforce skilling. Learn how to accelerate AI transformation while preserving trust, privacy, and control with insights and learnings from Microsoft as Customer Zero.
Lilian Nordet (Director of HUB+): Independent Wealth Management in Transition: The Role of HUB+ and the Promise of Explainable AI in Finance

Lilian Nordet is Director of HUB+ (founded under GSCGI), a long-standing not-for-profit professional association dedicated to defending the interests of its members and supporting independent financial advisors and all actors in the independent financial space in Switzerland. She holds a master’s degree from the Ecole de Traduction et d’Interprétation (ETI), Geneva, and successfully completed the Sustainable Finance Module at the Haute Ecole de Gestion. Lilian brings extensive expertise in the finance industry; she worked in renowned law firms before entering the banking sector and asset management in Geneva and Zurich. She specialized in ESG (corporate engagement) and project management (event coordination), has a background in economic and legal translation, and is fluent in seven languages.
Title: Independent Wealth Management in Transition: The Role of HUB+ and the Promise of Explainable AI in Finance
This session introduces HUB+, a not-for-profit association supporting independent wealth managers, family offices, and financial advisors in addressing sectoral challenges. We examine the pressures of regulatory complexity, shifting client expectations, and technological transformation. Finally, we highlight the potential of explainable AI in finance as a lever for transparency, accountability, and competitiveness in independent wealth management, and present a real-world use case from one of our members.
Louis-Alexandre Piquet (Quantitative Researcher and Trader at RAM Active Investments SA): Interpretable AI for Stock Return Prediction Models

Louis-Alexandre Piquet joined RAM Active Investments in October 2019 and is currently Quantitative Researcher and Trader in the Systematic Equities team. He graduated from CentraleSupélec (ECP P2018) and EPFL (Master in Financial Engineering) in 2019.
Title: Interpretable AI for Stock Return Prediction Models
While non-linear models provide superior accuracy in predicting stock returns, they present a significant challenge in terms of interpretability. We will showcase tools used by RAM AI to address this "black box" problem. Furthermore, we will demonstrate how understanding the key drivers behind the model's predictions can directly inform and improve portfolio construction.
Luca Di Grazia (Great Minds Fellow, University of St. Gallen): AI for Data Analysis: Privacy and Coding in Digital Finance

Luca Di Grazia is a researcher in automated security testing for AI-generated code. He held a Great Minds Fellowship at the University of St. Gallen and was previously a Postdoctoral Researcher at USI, Switzerland, working on GenAI for software testing, supervising PhD students, and designing real-world testing scenarios. He earned his Ph.D. in Computer Science (summa cum laude) from the University of Stuttgart with a thesis on supporting software evolution through search and predictions. During a research internship at Uber in Amsterdam, he developed GenAI-based bug-fixing tools with LLMs, winning an internal competition among 103 teams and presenting the project to the CEO.
Title: AI for Data Analysis: Privacy and Coding in Digital Finance
The course explores three central themes: the use of AI in software development, the application of AI to financial data analysis, and the privacy, ethical, and regulatory challenges raised by the EU AI Act. Particular attention is given to how large language models (LLMs) and related tools lower barriers to coding and transform the practices of data analysts. At the same time, the course highlights the critical importance of privacy, transparency, and accountability, showing how AI can be used in a more privacy-preserving way.
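As one illustration of connecting to a private on-device assistant, the sketch below queries a local Ollama server over its documented REST API; the server, the pulled model name, and the prompt are assumptions, not the course's actual setup.

```python
# Hedged sketch: query a local Ollama server (assumed installed, with a
# model already pulled); "llama3" and the prompt are placeholders.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # placeholder model name
        "prompt": "Summarize this quarter's revenue figures.",
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["response"])  # the assistant's reply, generated locally
```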
Frederik Sinan Bernard (Researcher, University of Twente)
(details to be added)
Accommodation Options
To help with planning your stay, we have compiled a list of affordable accommodation options in Bern near the university campus.