
AI and fraud: opportunities and risks in the digital age

Authors
Igor Mikhalev

EY Netherlands Partner EY-Parthenon, Head of Emerging Technologies Strategy, AI Strategy Lead

Deep strategy. Deep tech. Creative and visionary. Hands-on yet big-picture thinker.

Bernadette Wesdorp

EY Netherlands Financial Services AI Leader; Director, Financial Services Privacy, EY Advisory Netherlands LLP

Privacy Leader for financial services. Trusted AI lead in the Netherlands. Mother of two (son and daughter). Former professional field hockey player. Likes to run, swim and be active.

5 minute read 12 Jun 2024
Related topics AI

As knowledge of AI develops, the 'Focus on Fraud' webinar highlights how organizations can prevent AI-related fraud and seize the opportunities AI offers.

In brief:

  • Find the balance between AI-driven opportunities and fraud prevention in the digital age.
  • A multidisciplinary approach and constant monitoring are crucial in AI usage.
  • Human input and critical thinking are essential in the AI era.

The evolution of AI offers enormous opportunities but also brings risks. Because developments are rapid and knowledge is still limited, there is a real risk of fraud. Two questions are central here: does AI make committing fraud easier, and can we use AI to detect fraud? Financial professionals, CEOs and board members are immersing themselves in these rapid developments, given the importance of AI for their daily responsibilities.

Diana Matroos, journalist and presenter of 'De Big Five' on BNR Nieuwsradio, hosted the webinar alongside Auke de Bos, an executive at EY Accountants. Together they spoke with experts to raise awareness of AI.

AI: opportunities and fraud risks

Igor Mikhalev: “In a world where technology advances at an unprecedented pace, AI sometimes worries me. While we aim for creativity and innovation, AI inevitably creates opportunities for people with illegal intentions. The results of our survey show high levels of concern about misinformation and criminal activity, but respondents also recognize efficiency and cognitive improvements as opportunities. I wonder whether we can ever fully solve these problems, or whether it is a perpetual balance we must strive to maintain. With advances in GenAI, we see an increase in sophisticated fraud, from hyper-personalized text scams to voice attacks and deepfakes. Hackers are now capable of carrying out attacks on a large scale.”

Companies must be alert and proactive in recognizing and combating threats such as deepfakes.
Igor Mikhalev
EY Netherlands Partner EY-Parthenon, Head of Emerging Technologies Strategy

Although 52% of consumers in the survey think they can detect a deepfake video, there is a strong chance that this confidence is misplaced. Igor: “The impact of such technology on society and business is profound, and as the proverbial flywheel of innovation spins faster, companies must become more agile in identifying and understanding these threats. They must also proactively deploy technology against such attacks, as human ingenuity alone is insufficient to prevent the advanced level of attacks we now face. We need technology that matches the scale of these attacks.”

Critical assessment

It seems to be a complex issue, especially as we see an increase in sophisticated fraud. Igor: “With the current climate of misinformation, this can also lead to trust issues. As an individual, I am very interested in this problem. The enormous amount of data and information being generated blurs the line between truth and falsehood. We see multiple versions of the truth: some based on what people have actually said, some generated by AI, and all subject to emotionally charged interpretations. Which version you get depends on who you ask. As business leaders, we need to educate our stakeholders on how to critically assess information. We also need to redefine our understanding of truth in the context of this information overload.

As AI continues to evolve and the associated fraud becomes more sophisticated, we must remain vigilant and adaptive. It is essential to invest in education, advanced detection technologies, and robust policy measures to mitigate these risks. We are in a constant race to outsmart cybercriminals, and while our detection methods are advancing, they do not yet keep pace with the rapid progress of AI-powered threats. As we navigate the uncharted waters of AI, we must be willing to face the challenges head-on while embracing the opportunities it offers. The future of AI is a canvas of unlimited possibilities, but it requires a careful and sustainable approach to ensure it benefits all of humanity.”

Human factor

To manage the impact of AI on individuals, we must protect people, leverage AI for its opportunities and anticipate fraud risks. It is important that we prepare people and stimulate critical thinking. Bernadette Wesdorp: “The research indicates that 35% of organizations have not yet deployed AI, while others are already using it, for example for chatbots. You would expect that percentage to be lower by now, because it is sometimes not even known that AI is already embedded in certain applications. With the rapid adoption of GenAI, this percentage will likely drop to zero quickly. You can no longer avoid using AI.”

Navigating AI legislation

With the advent of the European AI Act, it is time for organizations to understand the impact of this legislation and to take stock of where AI is used within the organization. Bernadette: “For Europe, the text has been published, and by the end of the year the law will apply to systems that are outright prohibited. This means that organizations now need to think about their approach and about the impact of the legislation.”
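Purely as an illustration of what such a stocktake could look like in practice (an assumption-laden sketch, not something prescribed by the webinar or by the AI Act), the following Python snippet keeps a minimal register of AI use cases, each with an owner, the organization's role (for example developer or user) and an indicative risk tier, and flags entries that need attention:

# Illustrative sketch only: a minimal AI use-case register an organization might
# keep when taking stock of where AI is used. Field names, tier labels and the
# example entries are assumptions made for the sake of the example.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"
    MINIMAL_RISK = "minimal-risk"

@dataclass
class AIUseCase:
    name: str          # e.g. "customer service chatbot"
    owner: str         # accountable business owner
    role: str          # the organization's role, e.g. "developer" or "user"
    risk_tier: RiskTier
    monitored: bool    # is the system under continuous monitoring?

inventory = [
    AIUseCase("customer service chatbot", "Operations", "user", RiskTier.LIMITED_RISK, True),
    AIUseCase("fraud risk scoring model", "Finance", "developer", RiskTier.MINIMAL_RISK, False),
]

# Flag anything prohibited, high-risk, or not yet under monitoring for follow-up.
needs_review = [
    u for u in inventory
    if u.risk_tier in (RiskTier.PROHIBITED, RiskTier.HIGH_RISK) or not u.monitored
]
for use_case in needs_review:
    print(f"Review needed: {use_case.name} ({use_case.risk_tier.value})")

A register like this is only a starting point; the classification itself requires legal and compliance judgement.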

People must perform the final check - knowledge enhancement and helping to generate the best output are crucial.
Bernadette Wesdorp
EY Netherlands Financial Services AI Leader; Director, Financial Services Privacy, EY Advisory Netherlands LLP

While the legislation will not directly help combat AI-related fraud, it is important to know what your organization is doing. Bernadette: “Fraud risk models are not considered high-risk systems, which means that the impact of the AI Act on these models is limited. I look at Responsible AI through the three R's: Regulation, Reputation, and Realisation. Regulation is about the AI Act in Europe, Reputation about ethics and risks to individuals, and Realisation about the value AI can add to an organization.

The audience's answers to the survey questions about the increase in AI-driven fraud within organizations suggest that the risk is being underestimated. With the advent of GenAI, it becomes easier to commit fraud, for example by generating personalized phishing emails. This increases the chance of success and therefore the return on investment for criminals. Accountants are continuously expanding their audits with AI to counter AI risks and analyze patterns. At EY, there is close attention to AI and to the importance of the human factor. Knowledge enhancement and helping to generate the best output are crucial. People must perform the final check.

Retraining employees, combined with insights from our global audits, gives our organization the opportunity to significantly improve the quality of our audits. By adding sector and industry expertise and the experience of people familiar with the industry, we are now able to perform much more effective audits than ten years ago. This represents a huge opportunity for us and enables us to ask our clients more in-depth questions.”
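To make the idea of AI-assisted pattern analysis a little more concrete, the following Python sketch flags unusual journal entries with an off-the-shelf anomaly detector from scikit-learn; it is an illustrative assumption, not EY's actual audit tooling, and the features, data and contamination rate are invented:

# Illustrative sketch only: flagging unusual journal entries with an
# unsupervised anomaly detector. Features, data and threshold are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy features per entry: [amount, hour_of_posting, days_to_period_end]
normal_entries = rng.normal(loc=[500.0, 14.0, 10.0], scale=[200.0, 3.0, 5.0], size=(995, 3))
unusual_entries = np.array([[49_900.0, 23.0, 0.0]] * 5)   # large, posted late, right at period close
entries = np.vstack([normal_entries, unusual_entries])

model = IsolationForest(contamination=0.01, random_state=0)
flags = model.fit_predict(entries)   # -1 marks potential outliers

# Hand the flagged entries to a human reviewer; the model only prioritizes.
for idx in np.where(flags == -1)[0]:
    amount, hour, days = entries[idx]
    print(f"Check entry {idx}: amount {amount:,.0f}, posted at {hour:.0f}h, {days:.0f} days before close")

In line with the point above, the model only prioritizes: the flagged entries still go to a person for the final check.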

Understanding AI activities

Identity and access management, in other words who has access to which systems, is not a new issue, but AI brings both new and existing risks to light. Organizations must be aware of their current measures and of what may additionally be needed. Simple measures such as two-factor authentication can prevent a great deal of fraud and other problems, despite the desire for ease of use and quick access (a brief sketch of such a measure follows at the end of this section). Bernadette: “AI is a multidisciplinary theme that affects all facets of an organization, from data and IT to legislation, compliance, and legal aspects. It is crucial to tackle AI-related fraud from different angles, for example through a multidisciplinary AI board or ethics board that evaluates AI initiatives. Organizations must understand how they deploy AI and continuously monitor it, given the rapid developments and the different roles within AI legislation, such as developer or user. Insight into one's own AI activities is incredibly important.

With the arrival of the AI Act, it is time to take action and assess the impact on your organization. Often there is already a good governance structure in place. It is important to identify where the shortcomings are and what still needs to be done. Take stock of what your organization is already doing and build on this to integrate both new risks and opportunities. You don't have to start from scratch, but it is essential to investigate which steps you need to take now. It is crucial that the responsibility for AI is taken at the top of the organization, for example by the board of directors. The most important thing is how you respond to unforeseen events; full preparation is not possible. Focus on the future, learn from experiences, and ensure a quick response.

It is essential to refine our 'AI antenna,' which means we must critically assess whether we can trust the information we receive. As directors, who constantly make decisions, we must be vigilant about the reliability of the data presented to us: is it true or not? This critical attitude is not only important for ourselves but also something to share and emphasize within our organizations. The human element is indispensable alongside AI, which can both combat fraud and add value. It is an era of opportunities, but also of vigilance and responsibility.”
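As a small, illustrative footnote to the two-factor authentication point above (not part of the webinar), the following Python sketch shows time-based one-time passwords using the open-source pyotp package; the account name and issuer are placeholders:

# Illustrative sketch only: time-based one-time passwords (TOTP), a common
# second factor. Requires the third-party pyotp package (pip install pyotp).
import pyotp

# Enrolment: generate a per-user secret once, store it server-side, and let the
# user add it to an authenticator app (for example via a provisioning QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:", totp.provisioning_uri(name="user@example.com", issuer_name="ExampleOrg"))

# Login: the user submits the current 6-digit code alongside their password.
submitted_code = totp.now()   # in practice this comes from the user's device
print("Second factor accepted:", totp.verify(submitted_code))

The secret must of course be stored and transmitted securely; the sketch only shows the mechanics of enrolment and verification.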

Focus op Fraude: AI

Rewatch the webinar. Participants: Auke de Bos, Sebastian Kortmann, Diana Matroos, Aad Lensen, Igor Mikhalev, Bernadette Wesdorp, Tom de Kuijper.

Summary

The rapid development of AI presents opportunities as well as risks of fraud. Financial leaders need to delve into AI developments because of their impact on daily responsibilities. The 'Focus on Fraud' webinar, in which Diana Matroos and Auke de Bos spoke with AI experts, emphasized the need for awareness and deeper knowledge of AI; new regulation and the human factor in AI use were central topics.
