Podcast transcript: Mitigating GenAI risks in financial services
20 min | 20 February 2024
In conversation with:
Subrahmanyam Oruganti
EY India Business Consulting Partner and Financial Services Risk Quant Leader
Kartik Shinde
EY India Cybersecurity Consulting Partner
Tarannum: Hello and welcome to the EY India Insights Podcast. I am Tarannum, your host for today. In our latest episode of the Generative AI (GenAI) Unplugged series, we will explore the timely topic of trust in GenAI, particularly in the financial services industry. While GenAI holds tremendous potential, that potential is tempered by the risks and limitations associated with the technology. Concerns have been raised about what might occur from the improper use of these technologies or the absence of adequate guardrails.
To facilitate our discussion further, we have with us Subrahmanyam Oruganti and Kartik Shinde. Subrahmanyam is the Financial Services Risk Quant leader at EY India. With 17 years of rich experience, he leads capital markets modeling, regulatory transformation, and automation. Kartik is EY India Cybersecurity Consulting Partner, who brings with him over 20 years of industry experience and is a leading voice for cyber in the financial services segment. He helps banks and financial institutions devise and implement successful information security (infosec) strategies and mitigate risks.
Thank you Subrahmanyam and Kartik for joining us in this episode.
Subrahmanyam and Kartik: Thanks a lot, Tarannum.
Tarannum: Subrahmanyam, if we were to specifically look at the financial services industry, AI adoption was initially concentrated in low-risk areas like marketing and HR but is now increasingly transitioning to more strategic levels. In this landscape, what use cases do you anticipate emerging that would define the industry's evolving approach to GenAI adoption?
Subrahmanyam: From an overall perspective, it is not just dipping toes; we are diving very deep into the strategic waters of GenAI. In the case of financial services, we are seeing usage of GenAI in new product development, competitive analysis, and customer segmentation. The financial world has become truly digital and is coming up with a lot of GenAI use cases.
If I look at it from the risk perspective, fraud detection and compliance are becoming smarter – real-time alerts, anomaly detection, early warning signals, etc. are keeping the financial world secure and compliant. From the automation perspective in the back office, GenAI streamlines processes, handles contracts, and ensures compliance. So, it is not just about reducing risks anymore; it is about optimizing efficiency and performance.
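The kind of real-time anomaly flagging mentioned here can be sketched very simply. The z-score rule, threshold, and transaction amounts below are hypothetical illustrations, not any institution's actual detection logic.

```python
# Minimal z-score anomaly flagging on transaction amounts.
# The threshold and sample data are hypothetical, for illustration only.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Return indices of amounts more than `threshold` standard
    deviations away from the mean of the batch."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [i for i, a in enumerate(amounts)
            if sigma > 0 and abs(a - mu) / sigma > threshold]

history = [120, 95, 130, 110, 105, 98, 125, 5000]
print(flag_anomalies(history))  # the 5000 transaction stands out
```

Real systems layer far richer features (merchant, geography, device, velocity) and learned models on top, but the core idea of scoring deviation from expected behavior is the same.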
Then there is this magic of automated document generation. From letters to legal documents, AI takes the manual effort out of the equation, reducing errors and streamlining processes. GenAI driven analytics dive into the customer behavior, preferences, and transactions. In this way, the dynamic set of strategic use cases is completely redefining the financial landscape.
Tarannum: When navigating the intricate terrain of deploying these advanced systems, challenges become paramount. How do these concerns impact the trustworthiness of GenAI applications, especially within sensitive sectors like banking?
Subrahmanyam: The banking sector is historically known for its tradition of stability. It is facing a seismic shift due to the arrival of GenAI. While AI and machine learning have already woven themselves into the fabric of banking, from fraud detection to personalized recommendations, GenAI is promising a whole new level of power and potential.
But as they say, with great power comes great responsibility. The question looms large: can we trust GenAI in the sensitive domain of finance? The traditional AI and ML models, which have existed for a while now, carry numerous risks, such as data privacy concerns and challenges related to bias and fairness. Now that we are adding the layer of GenAI, these risks go to the next level. GenAI has a black-box nature: we interact with it through prompts instead of deciphering the underlying logic, which makes it a very slippery slope. It is very difficult to test, evaluate, or validate a vendor-supplied GenAI model. This opacity breeds operational risks and business continuity nightmares, leaving us precariously dependent on external vendors for the very models that power our financial systems.
The shadows deepen further because GenAI can be manipulated to generate deepfakes and fake news in the financial markets. Its vulnerability to adversarial attacks makes it susceptible to hackers and bad actors who steal models, manipulate outputs, or inject harmful prompts.
So, mitigation of all of these things is extremely important. Also, compliance with upcoming regulations globally poses another major challenge to GenAI adoption.
Apart from all the risks that we talked about, another major risk comes from the sheer volume and complexity of the data that many organizations need to deal with. Training these foundational models in-house has its own set of challenges, forcing us to confront the ethical tightrope walk of data privacy and copyright in a completely new way. Let us not forget the potential for malicious actors to exploit GenAI's prompt-based nature, extracting sensitive training data and wreaking havoc on our financial systems.
Remember, trust is the bedrock of any financial institution. With GenAI, we have the opportunity to build secure, ethical, and trustworthy banking. But we must all build this future together, brick by brick, constructing it in such a way that GenAI actually serves humanity and not the other way around.
Tarannum: What you are saying is that financial services institutions need to lay the groundwork for responsible adoption of GenAI. Kartik, I would like to understand from you that since the term ‘responsible AI’ now embodies a commitment to ethical, transparent, and fair AI practices, how do you see the notions of trust in GenAI and Responsible AI intertwined?
Kartik: Trusted AI and Responsible AI are usually used interchangeably. The concept of Responsible AI emerged as a direct response to the rapid advancements and increasing societal impact of artificial intelligence technologies. While AI holds immense potential for positive change, concerns around its potential for bias, discrimination, and unintended consequences started rising alongside its widespread adoption.
As GenAI poses some novel risks, in addition to the traditional risks associated with AI/ML models, trust in GenAI becomes absolutely important. In short, trust in GenAI is about ensuring that AI systems are reliable, safe, and functioning as intended, and it involves managing new risks, including hallucinations, toxic content, cyber risks, and risks related to data privacy, legal compliance, performance, bias, and intellectual property.
Overall, trust in GenAI is based on the following principles:
- Ensuring justice and equity by addressing biases in GenAI algorithms and data through diverse data collection.
- Prioritizing privacy and security with robust measures like data anonymization, encryption and secured access control across the ecosystem.
- Focusing on trust and accuracy with transparency, explainability and regular testing to uphold accountability of the models.
- Commitment to legal and compliance adherence, including data protection laws, intellectual property regulations, customer protection acts, and AI legislation such as the recently released EU AI Act.
- Human control – This is absolutely essential with humans in the loop for oversight, feedback, and accountability.
- Embracing sustainability by minimizing environmental impact, optimizing energy consumption, and promoting efficient computing practices for a reduced carbon footprint.
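The anonymization measures in the list above can be illustrated with a small sketch. The field names, salt, and record below are hypothetical; a production system would rely on a proper key-management and tokenization service rather than a hard-coded salt.

```python
# Illustrative pseudonymization of personally identifiable fields
# before data reaches a GenAI system (field names are hypothetical).
import hashlib

def pseudonymize(value, salt="demo-salt"):
    """Replace an identifier with a salted, irreversible hash prefix."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def mask_record(record, pii_fields=("name", "account_number")):
    """Return a copy of the record with PII fields pseudonymized."""
    return {k: pseudonymize(v) if k in pii_fields else v
            for k, v in record.items()}

customer = {"name": "A. Sharma", "account_number": "XX1234", "segment": "retail"}
print(mask_record(customer))
```

Because the hash is deterministic for a given salt, analytics on the masked data (e.g., counting transactions per customer) still work, while the raw identifiers never leave the controlled environment.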
Additionally, specific considerations and practices related to AI governance contribute to cultivating confidence, ensuring ethical usage, and fostering accountability in the deployment of GenAI technologies, particularly within the banking and financial services sector.
Tarannum: Thank you for those insights, Kartik. Regulators, both at the country and sector levels, have laid out guiding principles to shape the future regulations. Subrahmanyam, what is your perspective on the evolving regulatory landscape in India and how do you see the current state of regulations? What trends might emerge in this evolution?
Subrahmanyam: In the realm of AI regulation, GenAI is not explicitly targeted, yet existing regulations are broadly applicable. These regulations have been in place since 2016 and now have to be extended to cover GenAI as well. Their primary goals are to foster innovation while still aligning with societal expectations. To encourage innovation, regulators such as those in the EU have come up with the AI Act to categorize use cases based on their potential risks: the higher the risk, the stricter the rules. Notably, regulations such as Canada's Artificial Intelligence and Data Act do not restrict open-source AI algorithm development.
To meet societal expectations, a human-centric approach prevails. European regulators very clearly emphasize citizens’ fundamental rights, while their Canadian counterparts focus on securing AI systems to build trust. Whether you look at the US Executive Order or its guiding principles, the common thread is fair and safe AI development. The trusted AI or responsible AI principles that we discussed are all designed specifically to meet these expectations.
India passed the Digital Personal Data Protection Act in 2023, which takes a clear stand on data protection, privacy, and consumer protection in the context of AI. India is also collaborating worldwide on policymaking as part of the Global Partnership on Artificial Intelligence (GPAI). At the global level, regulators across the board have recognized the need to govern use cases rather than just the technology itself, so initial sector-agnostic principles paved the way for later sector-specific regulations. Additionally, jurisdictions are integrating AI rulemaking into broader digital policy priorities such as cybersecurity, data privacy, and intellectual property protection. A comprehensive approach runs across all of these regulations.
Tarannum: Considering the GenAI systems are trained on large datasets, what specific types of data privacy, cybersecurity and IP related risks are likely to emerge? Given these concerns, how important do you think is robust data governance and control?
Kartik: The issue of data privacy, copyright, IP issues, unauthorized data access and exposure of sensitive information - all of these become material in the context of GenAI systems. We are dealing with systems that are trained and fine-tuned on massive datasets, making these potential risk areas truly significant. In a world where data breaches are becoming more common, the challenges in providing appropriate data and technology governance measures and ensuring robust control systems are indeed paramount.
If we look at privacy and legal risks, compliance with all relevant data protection laws, regulations, and guidelines is non-negotiable. This may range from complying with the earlier General Data Protection Regulation (GDPR) across the EU to the recent digital personal data protection law in India. These laws have specific provisions surrounding data consent, data subject rights, data transfer, disclosure practices, and so on.
When discussing the protection of privacy and the mitigation of legal risks in GenAI systems, data and function whitelisting, the process of pre-approving certain data elements and operations, ensures safer, more controlled access. Further, the deployment of GenAI systems introduces additional risks, including those relating to the vendors who provide services around them. GenAI systems can be more vulnerable to adversarial attacks, which emphasizes the importance of performing cybersecurity due diligence on vendors. Scrutinizing their policies, practices, and track record in data security is of utmost importance.
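The whitelisting idea described here can be pictured as a simple allow-list gate between a GenAI agent and the functions it may invoke. The function names and registry below are hypothetical, purely to illustrate the pre-approval pattern.

```python
# Simple allow-list gate for functions a GenAI agent may invoke.
# Function names and return values are hypothetical illustrations.
ALLOWED_FUNCTIONS = {"get_balance", "list_transactions"}

def dispatch(func_name, registry):
    """Invoke a requested function only if it has been pre-approved."""
    if func_name not in ALLOWED_FUNCTIONS:
        raise PermissionError(f"{func_name} is not whitelisted")
    return registry[func_name]()

registry = {
    "get_balance": lambda: 1250.0,
    "transfer_funds": lambda: "moved money",  # present, but not whitelisted
}
print(dispatch("get_balance", registry))       # allowed
# dispatch("transfer_funds", registry) would raise PermissionError
```

The same deny-by-default principle applies to data fields: only pre-approved columns are ever exposed to the model, so a manipulated prompt cannot reach anything outside the list.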
In terms of cybersecurity oversight, it is essential to engage in active and continuous monitoring of these systems to detect any unusual activity or potential threats. Similarly, educating all stakeholders on cybersecurity risks and embedding a culture of data ethics within the organization is another effective mitigation strategy. Due to their advanced functionalities and data dependencies, the GenAI systems and mechanisms that organizations implement will become attractive targets for adversarial cyberattacks. We have already seen instances of models undergoing poisoning attacks, such as LLM poisoning, and various evasion attacks; you have heard of the likes of ChatGPT being coaxed into giving out solutions on the malicious side, where people use such output to gain an unfair advantage in conducting cybersecurity attacks. Vendor risk can be managed through meticulous selection processes that prioritize vendors with solid security practices and a proven record in handling security issues. In short, while there are undeniable risks inherent in the use and deployment of GenAI systems, if managed correctly and proactively, these risks can be mitigated, leading to a reliable and trustworthy deployment of this cutting-edge AI technology.
Tarannum: Thanks for touching upon that in such detail, Kartik. Before we wrap this episode, we would like to understand from you and Subrahmanyam how you foresee the future of GenAI. What new developments do you see in the space in the next one to two years, especially in the financial services industry?
Subrahmanyam: If we look at GenAI and its advancements, the future holds exciting possibilities. We anticipate very significant advancements in modeling capabilities, yielding more realistic and contextually relevant outputs. The evolution will not be limited to traditional finance domains but will be integrated across diverse disciplines, fostering interdisciplinary collaboration and sparking more and more innovation. Behind all of it, ethics will have to continue to play a pivotal role in shaping the future of GenAI, especially in finance and banking, where customer data holds immense value. With a heightened focus on responsible AI practices, we anticipate efforts to address biases, ensure fairness, and enhance transparency. These considerations are crucial for building trust in the deployment of GenAI systems. Let us talk about efficiency. Picture this: breakthroughs in training techniques leading to faster and more accessible GenAI models, making them more practical and more widely adopted.
Personalization is another key thing. The future will see GenAI becoming more tailored to individual preferences, offering a more personalized user experience. Again, a notable trend in finance is the evolution of human-AI collaboration models, where GenAI systems will work seamlessly with financial experts and users to augment creativity, and also for content generation. For instance, imagine a financial analyst collaborating with a GenAI system to generate diverse financial scenarios, providing valuable insights for decision making.
Kartik: Artificial general intelligence (AGI) is becoming a reality, and if the attacks we talked about previously, such as poisoning attacks, become a possibility, then AGI will become a multi-fold risk factor. If someone is able to poison the models, the future is bleak. But that is where cybersecurity comes in. The points we discussed around controls, models, the vendor ecosystem, and the cloud ecosystem (if you are using a cloud-based model), along with all the traditional cybersecurity controls, will apply to this future technology. There is more to come, but if we just stick to applying the basic hygiene practices of cybersecurity, things will start to fall into place.
Tarannum: Thank you, Kartik and Subrahmanyam, for sharing these insights with our listeners. On that note, we come to the end of this episode. Thank you to all our listeners for joining us in this insightful discussion.
Stay tuned for our next podcast and until then, if you would like us to cover any specific topic, please feel free to share it with us on our website or markets.eyindia@in.ey.com. From all of us at EY India, thank you for tuning in.