The conversation around trustworthy AI in financial services has reached a new level of maturity. This became clear at the recent SFTI roundtable held at the Swiss AI Summit 2025 in Zurich, which brought together practitioners from banks, insurers, academia, and leading AI providers. The core objective was to create an open platform for exchange on strategic approaches to using Artificial Intelligence (AI) responsibly and effectively, and on developing a suitable governance framework for that purpose. The exchange focused on one central question: How can financial institutions build AI frameworks that are not only innovative, but also trustworthy, scalable, and compliant by design?
Financial institutions are increasingly embedding AI into core processes, from information extraction and automation to more advanced decision-support systems. In this context, Unique AI is the leading platform for agentic use cases in financial services, with built-in security and AI governance processes. Participants at the roundtable included senior legal, data, privacy, and academic leaders from law firms, banks, insurance companies, technology providers, and universities, among them Dr. Sina Wulfmeyer, Chief Data Officer at Unique AI. The roundtable agreed that effective AI governance must follow a holistic, risk-based approach and be integrated into existing organizational structures rather than treated as a standalone IT topic. As Dr. Sina Wulfmeyer put it: “AI governance done right becomes a strategic enabler.”
A recurring theme throughout the discussion was that AI amplifies existing risks, particularly around data, model behavior, and third-party dependencies. As a result, governance frameworks need to build on established risk management, data governance, and information security practices, while extending them to address AI-specific challenges such as model explainability, lifecycle management, and automation risk.
Importantly, interdisciplinary collaboration emerged as a decisive success factor. Early involvement of Business, Risk, Compliance, Legal, and IT functions significantly increases the likelihood that AI initiatives move beyond the proof of concept stage and deliver sustainable value in production environments.
While regulation is often perceived as a constraint, the discussion at the Swiss AI Summit highlighted a different perspective: governance that goes beyond mere compliance strengthens trust with regulators, customers, and internal stakeholders and creates the foundation for scalable innovation.
One of the central challenges discussed was translating abstract principles such as transparency, explainability, and fairness into concrete operational measures. While regulatory frameworks provide important guidance, institutions must tailor their implementation to their specific risk profile and operating model.
Participants emphasized practical measures such as maintaining a transparent, enterprise-wide inventory of AI use cases, adopting risk-based categorization of AI systems, and treating AI governance as an iterative process that evolves alongside technology and regulation. A purely illustrative sketch of such an inventory follows below.
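To make these measures tangible, the following minimal sketch shows what a single entry in a hypothetical enterprise-wide AI use-case inventory could look like, with a risk-based tier driving the level of review. All names, fields, and tiers are illustrative assumptions, not part of any specific framework discussed at the roundtable.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Illustrative tiers; a real categorization follows the institution's own framework."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"


@dataclass
class AIUseCase:
    """One entry in a hypothetical enterprise-wide AI use-case inventory."""
    name: str
    owner: str                      # accountable business function
    description: str
    risk_tier: RiskTier
    data_categories: list[str] = field(default_factory=list)
    third_party_models: list[str] = field(default_factory=list)
    review_due: str = ""            # next scheduled governance review (ISO date)


# Example entry with invented values, for illustration only.
inventory = [
    AIUseCase(
        name="Client onboarding document extraction",
        owner="Operations",
        description="Extracts structured fields from onboarding documents.",
        risk_tier=RiskTier.LIMITED,
        data_categories=["client identification data"],
        third_party_models=["hosted LLM (vendor-managed)"],
        review_due="2026-03-31",
    )
]

# Risk-based routing: higher tiers trigger stricter controls.
for uc in inventory:
    if uc.risk_tier is RiskTier.HIGH:
        print(f"{uc.name}: model risk review and human oversight sign-off required")
    else:
        print(f"{uc.name}: standard periodic review ({uc.review_due or 'unscheduled'})")
```

Even a simple structure like this makes the roundtable's point concrete: an inventory only enables governance if each entry carries enough context (ownership, data categories, third-party dependencies) to assign a risk tier and the controls that follow from it.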
Operationalizing trustworthy AI requires governance frameworks that translate abstract principles into concrete, enforceable controls across the organization.
Read the full report here.