
Canada: Artificial intelligence brings greater risks to financial institutions

In short

On September 24, 2024, following in-depth consultations with industry participants, the Office of the Superintendent of Financial Institutions (OSFI) and the Financial Consumer Agency of Canada (FCAC)1 published findings on the use and adoption of artificial intelligence (AI) by federally regulated financial institutions. The report notes that the vast majority of financial institutions are expected to adopt AI by 2026 and identifies the key risks arising from financial institutions’ use of AI. OSFI and FCAC emphasized the need for financial institutions to adopt dynamic, responsive risk management frameworks for AI and confirmed their commitment to developing more specific best practices for industry participants.


On September 24, 2024, OSFI and FCAC released a report on the use and risks of artificial intelligence at federally regulated financial institutions (the “AI Report”), which includes the results of a voluntary questionnaire sent to financial institutions in December 2023 seeking feedback on their AI and quantum computing readiness.

Survey results show that financial institutions’ use of AI is increasing rapidly, with 70% of financial institutions expected to be using AI by 2026. The AI Report found that financial institutions are now using AI for more critical use cases such as pricing, underwriting, claims management, trading, investment decisions and credit adjudication. Additionally, financial institutions face competitive pressure to adopt AI, giving rise to further potential business or strategic risks. According to the AI Report, financial institutions must therefore remain vigilant and maintain adaptable risk and control frameworks to address the internal and external risks posed by AI.

The AI Report outlines the key risks arising from financial institutions’ use of AI, which may stem from the adoption of AI internally or from its use by external actors:

  1. Data governance risks were identified as the top issue in the use of AI. The AI Report points out that addressing AI data governance is critical, whether through a general data governance framework, AI-specific data governance or a model risk management framework.
  2. Model risk and interpretability were identified as critical risks, as the complexity and opacity of AI models elevate the risks associated with them. The AI Report states that financial institutions must ensure that all stakeholders, including users, developers and control functions, are involved in the design and implementation of AI models. Additionally, financial institutions need to ensure an appropriate level of explainability, both for informing internal users and customers and for compliance and governance purposes.
  3. Legal, ethical and reputational risks pose a challenge for financial institutions implementing AI systems. Among other things, the AI Report recommends that financial institutions take a comprehensive approach to managing AI-related risks, as strict compliance with jurisdictional legal requirements alone may still leave financial institutions exposed to reputational risks. The report also states that consumer privacy and consent should be prioritized.
  4. Third-party risk arising from the reliance of AI models and systems on third-party providers was also noted as a significant challenge, including when seeking to ensure that third parties comply with financial institutions’ internal standards.
  5. Operational and cybersecurity risks can also be amplified by the adoption of AI. The AI Report states that as financial institutions integrate AI into their processes, procedures and controls, operational risks will increase. Additionally, cyber risks may arise from the internal use of AI tools and may be exacerbated by complex relationships with third parties. Without appropriate security measures, the use of AI may increase the risk of cyberattacks. The AI Report therefore warns that financial institutions must implement sufficiently strong safeguards for their AI systems to ensure resilience.
  6. Business and financial risks notably include risks related to the financial and competitive pressures facing financial institutions that do not adopt AI. OSFI and FCAC warn that if AI begins to disrupt the financial industry, companies that have lagged in its adoption may find it difficult to respond without in-house AI expertise and knowledge.
  7. Emerging credit, market and liquidity risks were also identified. The AI Report notes that the macroeconomic impact of AI on areas such as unemployment rates could lead to credit losses. Additionally, as adoption increases, AI models could have a significant impact on asset price volatility and deposit flows among financial institutions.

In response to the risks identified in the AI Report, OSFI and FCAC made a number of recommendations for financial institutions to manage or mitigate those risks within their organizations:

  1. Financial institutions need to rigorously identify and assess risks and build multidisciplinary, diverse teams to address the use of AI within their organizations.
  2. When it comes to AI and data, financial institutions must be open, honest and transparent in their dealings with customers.
  3. Financial institutions should plan and develop an AI strategy, even if they do not plan to adopt AI in the short term.
  4. As a horizontal risk, AI adoption must be addressed holistically, with risk management standards in place that integrate all relevant risks. Financial institutions’ boards of directors and oversight functions must be involved to ensure that their organizations are appropriately prepared for the outcomes of AI by balancing the benefits and risks of adoption.

In the AI Report, OSFI and FCAC emphasized their plans to respond dynamically and proactively to the changing risk environment surrounding AI, whose uncertain impacts also pose challenges for regulators. OSFI and FCAC will also work with other industry players to build on previous AI work to establish more specific best practices.

On October 2, 2024, following the release of the AI Report, OSFI issued a semi-annual update stating that while the risks previously identified in its Annual Risk Outlook (FY 2024-2025) remain, integrity and security risks have “intensified and multiplied”, particularly as the risks from AI “have grown in importance since the release of the Annual Risk Outlook”. OSFI noted that while its assessment of the impact of AI adoption on the risk landscape, and of the interrelationships among those risks, is still ongoing, it plans to strengthen existing guidance to support the mitigation of AI-related risks. As a first step, it will publish updated model risk management guidance in summer 2025, which will include clearer expectations for AI models.

For more information about artificial intelligence in financial services, visit our landing page, attend our events and chat with us.


1 OSFI and FCAC are the federal regulatory agencies for Canada’s banking and financial services industry.
