
How to navigate global trends in Artificial Intelligence regulation

By EY India

Multidisciplinary professional services organization

6 minute read 27 Sep 2023

Resources

  • The Artificial Intelligence (AI) global regulatory landscape (pdf)

AI’s potential to create positive human impact will depend on a responsible, human-centered approach that focuses on creating value for all.

In brief
  • Legislators are developing distinctly different policy approaches to regulating AI.
  • EY research has identified five common trends in AI oversight.
  • Companies can take several actions to stay ahead of the rapidly evolving AI regulatory landscape.

This article was written by Nicola Morini Bianzino, EY Global Chief Technology Officer; Marie-Laure Delarue, Global Vice Chair, Assurance; Shawn Maher, EY Global Vice Chair, Public Policy; and Ansgar Koene, EY Global AI Ethics and Regulatory Leader; with contributions by Katie Kummer, Deputy Global Vice Chair, Public Policy; and Fatima Hassan-Szlamka, Associate Director, Global Public Policy.

The accelerating capabilities of Generative Artificial Intelligence (GenAI), including large language models (LLMs), as well as systems using real-time geolocation data, facial recognition and advanced cognitive processing, have pushed AI regulation to the top of policymakers’ inboxes.

It isn’t simple. In Europe, for example, while some member countries want to liberalize the use of facial recognition by their police forces, the EU Parliament wants to impose tight restrictions as part of the AI Act.1 In another debate on AI legislation, the Indian Ministry of Electronics and IT published a strong statement in April, opting against AI regulation and stating that India “is implementing necessary policies and infrastructure measures to cultivate a robust AI sector, but does not intend to introduce legislation to regulate its growth.”2 Yet in May, the IT Minister announced that India is planning to regulate AI platforms like ChatGPT and is “considering a regulatory framework for AI, which includes areas related to bias of algorithms and copyrights.”3 Similarly, while the US is not likely to pass new federal AI legislation any time soon, regulators like the Federal Trade Commission (FTC) have responded to public concerns about the impact of GenAI by opening expansive investigations into some AI platforms.4

AI is transforming a diverse range of industries, from finance and manufacturing to agriculture and healthcare, enhancing their operations and reshaping the nature of work. AI enables smarter fleet management and logistics, optimizes energy forecasting, makes more efficient use of hospital beds through patient-data analysis and predictive modeling, improves quality control in advanced manufacturing, and creates personalized consumer experiences. It is also being adopted by governments, which see its ability to deliver better service to citizens at lower cost to taxpayers. Global private-sector investment in AI is now 18 times higher than it was in 2013.5 AI is potentially a powerful driver of economic growth and a key enabler of public services.

However, the risks and unintended consequences of GenAI are also real. A text-generation engine that can convincingly imitate a range of registers is open to misuse; voice-imitation software can mimic an individual’s speech patterns well enough to convince a bank, workplace or friend; chatbots can be used to cheat on tests. AI platforms can reinforce and perpetuate historical human biases (e.g., based on gender, race or sexual orientation), undermine personal rights, compromise data security, produce misinformation and disinformation, destabilize the financial system and cause other forms of disruption globally. The stakes are high.

Legislators, regulators and standard setters are starting to develop frameworks to maximize AI’s benefits to society while mitigating its risks. These frameworks need to be resilient, transparent and equitable. To provide a snapshot of the evolving regulatory landscape, the EY organization (EY) has analyzed the regulatory approaches of eight jurisdictions: Canada, China, the European Union (EU), Japan, Korea, Singapore, the United Kingdom (UK) and the United States (US). The rules and policy initiatives were sourced from the Organization for Economic Co-operation and Development (OECD) AI policy observatory6 and are listed in the appendix to the full report.


Five regulatory trends in Artificial Intelligence

Although each jurisdiction has taken a different regulatory approach, in line with its own cultural norms and legislative context, there are five areas of cohesion that unite under the broad principle of mitigating the potential harms of AI while enabling its use for the economic and social benefit of citizens. These areas of unity provide strong fundamentals on which detailed regulations can be built.

  1. Core principles: The AI regulation and guidance under consideration are consistent with the core principles for AI as defined by the OECD and endorsed by the G20.7 These include respect for human rights, sustainability, transparency and strong risk management.
  2. Risk-based approach: These jurisdictions are taking a risk-based approach to AI regulation, tailoring their rules to the perceived risks AI poses to core values like privacy, non-discrimination, transparency and security. This tailoring follows the principle that compliance obligations should be proportionate to the level of risk: low-risk systems carry few or no obligations, while high-risk systems carry significant, strict ones (a minimal sketch of this tiering follows the list).
  3. Sector-agnostic and sector-specific: Because of the varying use cases of AI, some jurisdictions are focusing on the need for sector-specific rules, in addition to sector-agnostic regulation.
  4. Policy alignment: Jurisdictions are undertaking AI-related rulemaking within the context of other digital policy priorities such as cybersecurity, data privacy and intellectual property protection, with the EU taking the most comprehensive approach.
  5. Private-sector collaboration: Many of these jurisdictions are using regulatory sandboxes as a tool for the private sector to collaborate with policymakers to develop rules that meet the core objective of promoting safe and ethical AI, as well as to consider the implications of higher-risk innovation associated with AI where closer oversight may be appropriate.
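
To make the proportionality principle in trend 2 concrete, here is a minimal, hypothetical Python sketch of risk tiering. The tier names are loosely modeled on the EU AI Act’s categories, and the obligation lists are illustrative assumptions, not the text of any regulation.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers, loosely modeled on the EU AI Act's categories."""
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4

# Hypothetical mapping from risk tier to compliance obligations; actual
# obligations vary by jurisdiction and are defined in the regulations themselves.
OBLIGATIONS = {
    RiskTier.MINIMAL: [],
    RiskTier.LIMITED: ["transparency notice to users"],
    RiskTier.HIGH: [
        "risk management system",
        "data governance and bias testing",
        "human oversight",
        "conformity assessment before deployment",
    ],
    RiskTier.UNACCEPTABLE: ["prohibited: do not deploy"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the compliance obligations proportionate to a system's risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for tier in RiskTier:
        print(tier.name, "->", obligations_for(tier) or ["no specific obligations"])
```

The point of the sketch is the shape of the rule, not its content: obligations scale with risk, so a low-risk system passes through with little overhead while a high-risk one triggers the full set of controls.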

Further considerations on AI for policymakers

Other factors to consider in AI policy development include:

  • Ensuring regulators have access to sufficient subject matter expertise to successfully implement, monitor and enforce these policies
  • Ensuring policy clarity on whether the intent of rulemaking is to regulate risks arising from the technology itself (e.g., properties such as natural language processing or facial recognition), from how the AI technology is used (e.g., the application of AI in hiring processes), or both
  • Examining the extent to which risk management policies and procedures, as well as the responsibility for compliance, should apply to third-party vendors supplying AI-related products and services

In addition, policymakers should, to the extent possible, engage in multilateral processes to make AI rules interoperable and comparable across jurisdictions, in order to minimize the risks of regulatory arbitrage, which are particularly significant for rules governing a transnational technology like AI.

Action steps for companies

For company leaders, understanding the core principles underlying AI rules, even where those rules do not yet apply to them, can help instill trust among customers and regulators in their use of AI, and thereby potentially provide a competitive advantage in the marketplace. It can also help companies anticipate the governance needs and compliance requirements that may apply to their development and use of AI, making them more agile.

Based on the identified trends, there are at least three actions businesses can take now to remain a step ahead of the rapidly evolving AI regulatory landscape.

  1. Understand the AI regulations already in effect in the markets where you operate, and align your internal AI policies with those regulations and any associated supervisory standards.
  2. Establish robust and clear governance and risk management structures and protocols, as well as, where appropriate, accountability mechanisms, to enhance how you manage AI technologies (a minimal inventory sketch follows this list).
  3. Engage in dialogue with public sector officials and others to better understand the evolving regulatory landscape, as well as to provide information and insights that might be useful to policymakers.
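
As a starting point for action steps 1 and 2, the hypothetical Python sketch below shows one way to inventory AI systems and flag basic governance gaps. The field names, the REGULATED_MARKETS set and the gap checks are assumptions made for illustration; this is not a compliance tool or a description of any jurisdiction’s requirements.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Hypothetical record for one AI system in a company-wide inventory."""
    name: str
    use_case: str
    markets: set[str]  # jurisdictions where the system is deployed
    risk_tier: str     # e.g., "minimal", "limited", "high"; empty if unassessed
    owner: str         # accountable business owner; empty if unassigned

# Illustrative placeholder for jurisdictions whose AI rules the company has mapped.
REGULATED_MARKETS = {"EU", "Canada", "China"}

def compliance_gaps(inventory: list[AISystem]) -> list[str]:
    """Flag systems deployed in regulated markets that lack a risk tier or a
    named owner, both preconditions for aligning internal policy with local rules."""
    gaps = []
    for system in inventory:
        exposed = system.markets & REGULATED_MARKETS
        if exposed and not system.risk_tier:
            gaps.append(f"{system.name}: no risk tier assigned for {sorted(exposed)}")
        if exposed and not system.owner:
            gaps.append(f"{system.name}: no accountable owner for {sorted(exposed)}")
    return gaps

if __name__ == "__main__":
    demo = [AISystem("resume-screener", "hiring", {"EU", "US"}, "", "")]
    print("\n".join(compliance_gaps(demo)) or "no gaps found")
```

Even a simple inventory like this makes the governance conversation concrete: it forces each system to have an owner, a risk tier and a known regulatory footprint before more detailed obligations are layered on.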

For governance approaches to strike the right balance between government oversight and innovation, it’s important that companies, policymakers and other stakeholders engage in open conversations. All these parties are testing the waters and working to understand the new possibilities AI enables. New rules will be needed. Fortunately, as our review shows, there is wide agreement among countries on the foundational principles to govern the use of AI. At this unique moment of possibility and peril, now is the time to cooperate on turning those principles into practice.

Summary

As GenAI begins to transform industries, policymakers around the world are developing rules aimed at maximizing the opportunities of the technology while mitigating its risks.
