Artificial intelligence and machine learning (AI/ML) have been near the top of the strategic agenda for boards and bank leaders for several years and are likely to remain there. The emergence of generative AI tools capable of producing rich, prompt-based content and code has further fueled this focus. Even before generative AI burst onto the scene, boards were challenged to assess the full range of AI and machine learning risks. These challenges are highlighted in two 2022 surveys performed by Ernst & Young LLP (EY) and the Institute of International Finance (IIF), and are further supported by recent roundtable sessions we’ve held with chief risk officers (CROs) from financial services.
The key AI/ML implementation focus areas for bank risk management teams are credit risk management and fraud detection. With generative AI, use cases are also being explored in these areas, as well as in broader regulatory compliance and policy frameworks. Generative AI has the potential to bring significant advancements and transform business functions.
However, AI/ML early adopters face increased risks, such as lawsuits arising from the use of web-based copyrighted material in AI outputs, concerns about bias, lack of traceability due to the “black box” nature of AI applications, and threats to data privacy and cybersecurity. As a result, many financial institutions are opting for a cautious approach to AI/ML. They are initially implementing applications in non-customer-facing processes, or to aid customer-facing employees, where the primary goals are improving operational efficiency and augmenting employee intelligence through insights, recommendations and decision-making support.
Lack of clear regulatory direction complicates board oversight. Regulators have expressed concerns about business uses of AI, including the embedding of bias into algorithms used for credit decisions and the sharing of inaccurate information by chatbots. Data privacy and security, along with model transparency, are also on authorities’ radars. Generative AI has amplified these concerns.
With AI usage increasingly democratized, robust, agile governance has become an urgent board priority. Even in the absence of prescribed controls, boards must be diligent in ensuring that companies take a holistic and strategic approach to overseeing AI usage across risk management and overall business operations.
Four things for boards to consider
1. AI and machine learning are central to digital transformation, and CROs expect risks to increase as a result.
AI/ML are crucial for speeding up digital transformations in financial services over the next three years, alongside modernized platforms, automated processes and cloud technologies. Improvements in generative AI over the last year have only increased this urgency. Directors should be aware that technology risk and project risk are interconnected and can reinforce each other. There is a risk that AI risks will be overshadowed by project risks as banks strive to modernize core functions and migrate to the cloud.