Responsible governance of GenAI in finance is more urgent than ever. To ground our approach in global best practice, we turned to the Handbook on Generative AI Guardrails in Banking, published in May 2025 by the Association of Banks in Singapore (ABS) — one of the most rigorous and comprehensive frameworks to emerge from any banking jurisdiction worldwide.
Drawing on its structured guardrail methodology, we mapped our own AI risk practices against its standards, as well as against other frameworks such as ISO 42001 (Link), and validated that our approach holds up not only to Singapore's demanding regulatory bar, but equally to the European and Swiss frameworks that govern many of the banks we work with. The result is a practical, proportionate model for AI risk management that we're sharing here.
Why it matters
Generative AI is being adopted across financial services at an unprecedented pace — improving customer experience, automating workflows, and augmenting decision-making. One recent example is the Unique Source of Wealth (SoW) agent, which facilitates KYC processes through a standardized, AI-enabled approach tailored for regulated private banking environments. But alongside the opportunity comes a set of risks that go beyond traditional IT or model governance frameworks. The question is not whether to adopt AI, but how to do so with the observability, controllability, and oversight that regulators, employees, and customers require.
A Framework Built from Real Experience
In May 2025, the Association of Banks in Singapore (ABS), together with the Monetary Authority of Singapore (MAS) and a working group of major institutions — including DBS, HSBC, JPMorgan, OCBC, and UOB — published the Handbook on Generative AI Guardrails in Banking. The Handbook draws on real implementation experience across more than thirty enterprise GenAI use cases and provides a structured framework for managing the risks that come with them.
It covers seven GenAI application categories (Coding, Document Extraction & Summarization, Speech-to-Text, Translation, Content Creation, Knowledge Management, and Process Optimization) and maps them against ten key risks:
- Model Quality: Hallucination & Confabulation, Insufficient Model Accuracy, Overconfidence, Model Degradation
- Output Safety: Bias & Unrepresentative Outputs, Toxic Outputs
- Governance & Oversight: Inadequate Human Oversight, Insufficient Governance, Low AI Risk Awareness
- User & Process: Weak Feedback & Recourse Mechanisms
To address these risks, it proposes nine concrete guardrails at both the enterprise and system level, ranging from governance structures and red-teaming to human-in-the-loop moderation and user transparency.
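To make the taxonomy concrete, here is a minimal sketch of how the Handbook's categories and risks could be encoded for use in an internal risk-assessment tool. The category and risk names are taken from the Handbook as summarized above; the code structure itself is our own illustration, not something the Handbook prescribes.

```python
# Risk taxonomy from the ABS Handbook, grouped as in the list above.
# The data structure is illustrative; only the names come from the Handbook.
GENAI_RISKS: dict[str, list[str]] = {
    "Model Quality": [
        "Hallucination & Confabulation",
        "Insufficient Model Accuracy",
        "Overconfidence",
        "Model Degradation",
    ],
    "Output Safety": [
        "Bias & Unrepresentative Outputs",
        "Toxic Outputs",
    ],
    "Governance & Oversight": [
        "Inadequate Human Oversight",
        "Insufficient Governance",
        "Low AI Risk Awareness",
    ],
    "User & Process": [
        "Weak Feedback & Recourse Mechanisms",
    ],
}

# The seven GenAI application categories the Handbook covers.
APPLICATION_CATEGORIES: list[str] = [
    "Coding",
    "Document Extraction & Summarization",
    "Speech-to-Text",
    "Translation",
    "Content Creation",
    "Knowledge Management",
    "Process Optimization",
]

def total_risks() -> int:
    """Count risks across all groups (the Handbook lists ten)."""
    return sum(len(risks) for risks in GENAI_RISKS.values())
```

A structure like this makes it straightforward to record, per use case, which of the ten risks apply and which guardrails mitigate them.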
The core principle is proportionality: the extent and complexity of guardrails should match the risk materiality of the use case. A low-risk internal productivity tool requires a different control set than a client-facing application used in KYC or underwriting decisions.
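The proportionality principle can be sketched as a tiered control selection. The three-tier model below and the specific guardrail names are our own illustrative simplification, loosely paraphrasing guardrails mentioned above; they are not the Handbook's exact nine-guardrail list or its risk-materiality methodology.

```python
# Baseline guardrails applied to every use case (illustrative names).
BASELINE: list[str] = [
    "governance structure",
    "AI risk awareness training",
    "user transparency",
]

# Additional guardrails layered on as risk materiality increases.
# The low/medium/high tiering is a simplification for illustration.
TIERED: dict[str, list[str]] = {
    "low": [],
    "medium": ["output quality evaluation", "feedback & recourse mechanism"],
    "high": ["red-teaming", "human-in-the-loop moderation"],
}

def guardrails_for(tier: str) -> list[str]:
    """Return the cumulative guardrail set for a risk-materiality tier."""
    order = ["low", "medium", "high"]
    if tier not in order:
        raise ValueError(f"unknown tier: {tier}")
    selected = list(BASELINE)
    for t in order[: order.index(tier) + 1]:
        selected += TIERED[t]
    return selected
```

Under this sketch, a low-risk internal productivity tool receives only the baseline controls, while a client-facing KYC or underwriting application in the high tier additionally gets red-teaming and human-in-the-loop moderation, which mirrors the proportionality logic described above.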