AI in the financial industry

What FINMA’s Guidance 08/2024 means for your institution

  • Blog
  • 10 minute read
  • 03/02/25

On 18 December 2024, the Swiss Financial Market Supervisory Authority (FINMA) released its Guidance 08/2024 on Governance and Risk Management when using Artificial Intelligence. As AI continues to revolutionise the financial sector, this Guidance provides insights into how institutions can manage the risks associated with AI technologies. In this blog post, we’ll explore how the key elements of FINMA Guidance 08/2024 compare to the broader regulatory framework established by the EU AI Act.

Background

Swiss financial institutions should recognise how the EU AI Act can complement FINMA Guidance 08/2024. FINMA's Guidance itself supports this approach: its 'outlook' section suggests that institutions should not only comply with the Guidance but also take international standards, such as those set by the EU AI Act, into account.

The EU AI Act establishes a comprehensive, risk-based framework that applies across various sectors, including both regulated and non-regulated entities. It’s a detailed, prescriptive regulation focused on sector-wide governance, transparency and the protection of fundamental rights.

In contrast, FINMA’s Guidance 08/2024 offers a principle-based, flexible regulatory approach tailored to the unique risks and operational challenges of the Swiss financial sector. Unlike the EU AI Act, FINMA’s Guidance focuses specifically on financial institutions – such as banks, investment fund managers and insurers – and emphasises proportionality, ensuring regulatory obligations align with the complexity and risk of AI applications.

While FINMA’s Guidance provides a strong foundation for AI practices within the Swiss financial sector, integrating some of the safeguards from the EU AI Act could enhance the operational resilience and regulatory preparedness of Swiss institutions, positioning them for compliance with broader international AI regulations.

FINMA Guidance 08/2024: key takeaways

At the core of FINMA’s Guidance is the emphasis on robust governance structures and risk management practices tailored specifically for AI applications. The Guidance outlines several critical areas that financial institutions must focus on:

Governance

  • Focus on Data Protection Risks: Supervised institutions focus primarily on data protection risks, but less on model risks such as lack of robustness and correctness, bias, lack of stability and explainability.
  • Decentralised Development: The development of AI applications is often decentralised, making it challenging to implement consistent standards, assign responsibilities clearly to employees with the appropriate skills and experience, and address all relevant risks.
  • Challenges with Externally Purchased Applications: Supervised institutions sometimes had difficulties determining whether AI is included in externally purchased applications and services, which data and methods are used, and whether sufficient due diligence exists.
  • AI Governance: FINMA assessed whether supervised institutions have AI governance in place, including a centrally managed inventory with a risk classification and resulting measures, the definition of responsibilities and accountabilities, requirements for model testing and supporting system controls, documentation standards and broad training measures (see the sketch after this overview).
  • Outsourcing Issues: In the case of outsourcing, supervised institutions sometimes struggled with implementing additional tests, controls and contractual clauses governing responsibilities and liability issues, and with ensuring that third parties had the necessary skills and experience.

Inventory and Risk Classification

  • Narrow Definition of AI: Some supervised institutions defined AI narrowly to focus on supposedly larger or new risks, making it challenging to ensure the completeness of inventories.
  • Lack of Consistent Criteria for Risk Management: Not all supervised institutions had established consistent criteria for identifying applications that require special attention in risk management due to their materiality, specific risks and the probability of these materialising.

Data Quality

  • Data Quality Issues: Some supervised institutions haven't defined any requirements or controls to ensure data quality for AI applications, leading to potential issues with incorrect, inconsistent, incomplete, unrepresentative or outdated data.

Tests and Ongoing Monitoring

  • Weaknesses in Performance Indicators and Monitoring: FINMA observed weaknesses in the selection of performance indicators, tests and ongoing monitoring at some supervised institutions.

Documentation

  • Lack of Centralised Documentation Requirements: Some supervised institutions don't have centralised documentation requirements, and existing documentation isn't sufficiently detailed and recipient-oriented.

Explainability

  • Explainability Issues: Results from AI applications often can't be understood, explained or reproduced, making it difficult to critically assess them.

Independent Review

  • Lack of Independent Review: FINMA didn't observe a clear distinction between the development of AI applications and the independent review in all cases, and only a few supervised institutions carry out an independent review of the entire model development process by qualified personnel.

Key similarities

The EU AI Act and FINMA Guidance 08/2024 take a similar approach to AI, focusing on responsible deployment, risk assessment, transparency and accountability. Both emphasise compliance with standards that prioritise safety, fairness and human rights. While Switzerland follows principle-based regulation and the EU adopts rules-based regulation, both reflect a global drive for strong AI governance.

Key similarities between FINMA Guidance 08/2024 and the EU AI Act:

  • Risk-based approach: Like the EU AI Act, FINMA's Guidance adopts a risk-based approach to regulation, categorising AI systems according to their potential risk levels. Certain provisions of the EU AI Act aren't included in the FINMA guidelines, but they could still be considered when deploying AI tools. For example, the EU AI Act's risk categorisation identifies prohibited practices, which are no longer allowed for companies within the scope of the EU regulation as of 2 February 2025.
  • Transparency requirements: The EU AI Act mandates transparency requirements similar to those found in FINMA Guidance 08/2024. Organisations will need to provide clear information about their AI systems’ capabilities and limitations.
  • Upskilling: Both the EU AI Act and FINMA Guidance highlight the importance of training staff. This training should extend beyond those directly using AI tools to individuals responsible for overseeing activities outsourced to entities that employ AI. It's essential that not only the users of AI tools understand GenAI and large language models (LLMs), but also those overseeing outsourced activities, as their familiarity with the tools is vital for conducting effective sampling tests.
  • Compliance requirements: Both the EU AI Act and FINMA Guidance 08/2024 emphasise the importance of data quality and governance, ensuring AI systems use accurate, unbiased data (a minimal data-quality check is sketched after this list). Additionally, both frameworks call for continuous monitoring and validation of AI systems to maintain reliability and mitigate risks over time. Lastly, both stress the need for strong accountability structures within organisations to manage AI-related risks effectively, requiring governance structures that ensure the responsible use of AI technologies, as illustrated under FINMA's key takeaways above.
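
As a minimal sketch only, the data-quality point could translate into basic automated controls over an AI application's training data. The specific checks, the staleness threshold and the assumed 'record_date' column are illustrative assumptions, not requirements from either framework.

```python
import pandas as pd


def basic_data_quality_checks(df: pd.DataFrame, max_staleness_days: int = 365) -> list[str]:
    """Return a list of illustrative data-quality findings for a training data set."""
    findings: list[str] = []

    # Completeness: flag columns with missing values
    missing = df.isna().mean()
    for column, share in missing[missing > 0].items():
        findings.append(f"{column}: {share:.1%} missing values")

    # Consistency: flag exact duplicate records
    duplicates = int(df.duplicated().sum())
    if duplicates:
        findings.append(f"{duplicates} duplicate records")

    # Timeliness: flag outdated data, assuming a 'record_date' column exists
    if "record_date" in df.columns:
        age_days = (pd.Timestamp.now() - pd.to_datetime(df["record_date"])).dt.days
        stale = int((age_days > max_staleness_days).sum())
        if stale:
            findings.append(f"{stale} records older than {max_staleness_days} days")

    return findings
```

Representativeness and bias checks would typically sit alongside controls like these, but they depend heavily on the specific application and the attributes involved.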

How the provisions of the EU AI Act correspond to certain FINMA observations

As financial institutions move forward with AI-driven innovation, it's essential not only to comply with the Guidance but also to go beyond a tick-box exercise and keep the institution safe, secure and future-ready. Aligning with the EU AI Act also reinforces the global shift towards greater transparency and accountability in AI usage. Below, we've highlighted how specific EU AI Act provisions correspond to some of the FINMA observations.

Focus on Data Protection Risks

  • FINMA observation: Supervised institutions focus primarily on data protection risks, but less on model risks such as lack of robustness and correctness, bias, lack of stability and explainability.
  • EU AI Act: The EU AI Act mandates that high-risk AI systems comply with requirements for risk management, data quality, transparency and human oversight (Articles 9, 10, 11, 13, 14). It also emphasises the need for technical documentation and post-market monitoring to ensure ongoing compliance (Articles 11, 72).

Decentralised Development

  • FINMA observation: The development of AI applications is often decentralised, making it challenging to implement consistent standards, assign responsibilities clearly to employees with the appropriate skills and experience, and address all relevant risks.
  • EU AI Act: The EU AI Act requires providers to establish a quality management system (Article 17) and maintain technical documentation (Article 11) to ensure consistent standards and clear assignment of responsibilities. It also mandates risk management systems (Article 9) and regular audits (Annex VII).

Challenges with Externally Purchased Applications

  • FINMA observation: Supervised institutions sometimes had difficulties determining whether AI is included in externally purchased applications and services, which data and methods are used, and whether sufficient due diligence exists.
  • EU AI Act: The EU AI Act requires transparency and traceability for high-risk AI systems, including detailed technical documentation and information on data sources and methods used (Articles 11, 13). Providers must also ensure compliance with data governance and management practices (Article 10).

AI Governance

  • FINMA observation: FINMA assessed whether supervised institutions have AI governance in place, including a centrally managed inventory with a risk classification and resulting measures, the definition of responsibilities and accountabilities, requirements for model testing and supporting system controls, documentation standards and broad training measures.
  • EU AI Act: The EU AI Act mandates the establishment of a risk management system (Article 9), a quality management system (Article 17) and technical documentation (Article 11). It also requires human oversight (Article 14) and transparency measures (Article 13). Providers must ensure compliance through regular audits and updates (Annex VII).

Outsourcing Issues

  • FINMA observation: In the case of outsourcing, supervised institutions sometimes struggled with implementing additional tests, controls and contractual clauses governing responsibilities and liability issues, and ensuring that third parties had the necessary skills and experience.
  • EU AI Act: The EU AI Act requires that providers ensure compliance with all requirements, even when outsourcing (Article 28). This includes maintaining technical documentation (Article 11) and ensuring that third parties meet the necessary standards and skills (Article 21).

Narrow Definition of AI

  • FINMA observation: Some supervised institutions defined AI narrowly to focus on supposedly larger or new risks, making it challenging to ensure the completeness of inventories.
  • EU AI Act: The EU AI Act provides a broad definition of AI systems and includes specific criteria for classifying high-risk AI systems (Article 6). It also mandates comprehensive risk management and documentation to ensure all relevant risks are addressed (Articles 9, 11).

Lack of Consistent Criteria for Risk Management

  • FINMA observation: Not all supervised institutions had established consistent criteria for identifying applications that require special attention in risk management due to their materiality, specific risks and the probability of these materialising.
  • EU AI Act: The EU AI Act requires a risk management system that includes the identification, analysis and evaluation of risks and the adoption of risk management measures (Article 9). It also mandates ongoing monitoring and updates to ensure consistent risk management (Article 72).

Data Quality Issues

  • FINMA observation: Some supervised institutions have not defined any requirements or controls to ensure data quality for AI applications, leading to potential issues with incorrect, inconsistent, incomplete, unrepresentative or outdated data.
  • EU AI Act: The EU AI Act mandates that high-risk AI systems use high-quality data sets for training, validation and testing, and that providers ensure data governance and management practices to maintain data quality (Article 10).

Weaknesses in Performance Indicators and Monitoring

  • FINMA observation: FINMA observed weaknesses in the selection of performance indicators, tests and ongoing monitoring at some supervised institutions.
  • EU AI Act: The EU AI Act requires providers to establish performance metrics and conduct regular testing and validation of AI systems (Articles 11, 13). It also mandates post-market monitoring to ensure ongoing compliance and performance (Article 72). A simple monitoring sketch follows this overview.

Lack of Centralised Documentation Requirements

  • FINMA observation: Some supervised institutions don't have centralised documentation requirements, and existing documentation isn't sufficiently detailed and recipient-oriented.
  • EU AI Act: The EU AI Act mandates detailed technical documentation for high-risk AI systems (Article 11). This documentation must be kept up to date and include all necessary information to assess compliance (Annex IV).

Explainability Issues

  • FINMA observation: Results from AI applications often can't be understood, explained or reproduced, making it difficult to critically assess them.
  • EU AI Act: The EU AI Act requires transparency and explainability for high-risk AI systems (Article 13). Providers must ensure that AI systems are designed to be interpretable and that outputs can be understood and explained (Article 14).

Lack of Independent Review

  • FINMA observation: FINMA didn't observe a clear distinction between the development of AI applications and the independent review in all cases, and only a few supervised institutions carry out an independent review of the entire model development process by qualified personnel.
  • EU AI Act: The EU AI Act mandates that high-risk AI systems undergo conformity assessments, which may include third-party assessments (Articles 43, 44). It also requires regular audits and updates to ensure ongoing compliance (Annex VII).
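
To make the monitoring point more concrete, here is a minimal, purely illustrative sketch of ongoing performance monitoring against a predefined indicator. The metric, threshold, sample data and escalation step are assumptions for illustration and would need to be tailored to the specific application and documented in the institution's monitoring concept.

```python
from dataclasses import dataclass


@dataclass
class MonitoringResult:
    """Outcome of one monitoring run for a single performance indicator."""
    metric: str
    value: float
    threshold: float

    @property
    def breached(self) -> bool:
        return self.value < self.threshold


def evaluate_accuracy(y_true: list[int], y_pred: list[int], threshold: float = 0.90) -> MonitoringResult:
    """Compare model predictions against ground truth collected after deployment."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true) if y_true else 0.0
    return MonitoringResult(metric="accuracy", value=accuracy, threshold=threshold)


# Example run on a small labelled sample drawn from production (hypothetical values)
result = evaluate_accuracy(y_true=[1, 0, 1, 1, 0], y_pred=[1, 0, 0, 1, 0])
if result.breached:
    print(f"Escalate: {result.metric} {result.value:.2f} below threshold {result.threshold:.2f}")
else:
    print(f"OK: {result.metric} {result.value:.2f}")
```

In practice, the choice of performance indicators, sampling frequency and escalation paths would be defined up front and documented alongside the model, so that both the independent review and supervisory dialogue can rely on a consistent evidence trail.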

The safeguards that regulated entities put in place will depend on their ambitions and the risks associated with the tools being used – whether in AML, operations, accounting or HR. It’s essential to evaluate the risks in each case.

Conclusion

FINMA Guidance 08/2024 places a strong emphasis on governance, risk management, data quality, explainability, ongoing monitoring and independent reviews – setting a high benchmark for organisations striving to integrate AI technologies effectively and ethically.

How can PwC help?

We leverage the expertise gained from implementing the EU AI Act. Our standard support consists of five steps, each of which can be tailored to meet your specific needs.

Contact us

Philipp Rosenauer

Partner Legal, PwC Switzerland

+41 58 792 18 56

Email

Tomasz Wolowski

Senior Manager, Compliance & Regulatory Advisory Services, PwC Switzerland

+41 77 995 94 93

Email