Chapter 1
Navigate the regulatory maze
From ethical principles to tangible policies
As the adoption of AI accelerates, permeating products and services across both private and public sectors, legislators and regulatory bodies worldwide are working hard to keep pace. Countries have been quick to recognise AI as a catalyst for economic growth, but governments also acknowledge its potential impact on citizens, society and our broader environment, as well as the importance of adapting or augmenting existing regulatory frameworks to safeguard established rights.
In the wake of intense public discourse between 2016 and 2019, a global consensus has emerged among governments, businesses and NGOs on the core ethical principles guiding AI usage. The AI Principles of the Organisation for Economic Co-operation and Development (OECD), adopted by the G20 in 2019, exemplify this agreement.4 In an historic move, all 193 UNESCO Member States endorsed the first-ever global standard-setting instrument on AI ethics in November 2021.5
Now, leading nations and international organisations are diligently translating these principles into actionable regulatory approaches. By early 2023, trailblazers in AI regulation, including the EU, US, UK, Canada, Japan, South Korea, Singapore and China, had either proposed new legislation or published comprehensive guidelines to govern this transformative technology.
Chapter 2
Striking the right balance
How can governments create regulatory objectives without stifling innovation?
Given AI's vast array of application areas and its potential impact on citizens and society, it's crucial to strike a balance between sector-agnostic baselines and sector-specific rulemaking to address different needs and contexts. The question is, what’s the right balance?
The pattern is decidedly more sector-agnostic in jurisdictions such as the US, EU, Canada, Japan, Singapore and China, where policy initiatives establish overarching regulatory objectives, whilst additional sectoral work creates or amends regulations in areas such as medical devices, industrial machinery, public sector AI usage, agriculture, food safety, financial services and internet information services. For instance, the US's Blueprint for an AI Bill of Rights, the EU's AI Act and China's Ethical Norms for New Generation AI each lay a sector-agnostic foundation for policy.7,8,9
The primary mechanism for maximising cross-sector coherence within these proposals is the ‘risk-based approach’ to AI regulation. A leading example is the EU’s AI Act, which adjusts the degree of regulatory compliance required based on the classification of risk: whilst most AI poses little or no risk, high-risk systems, such as those used in critical national infrastructure or in safety-related applications, will be subject to the strictest obligations.10
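To make the risk-based approach concrete, the minimal Python sketch below shows one way the Act's four published tiers, unacceptable, high, limited and minimal, might be encoded. The tier names follow the Act itself; the example use cases and the lookup function are illustrative assumptions, not legal classifications.

from enum import Enum

class RiskTier(Enum):
    """The four risk tiers set out in the EU AI Act."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strictest obligations: conformity assessment, logging, human oversight"
    LIMITED = "transparency obligations, e.g. disclosing AI-generated content"
    MINIMAL = "little or no additional obligation"

# Illustrative examples only; real classification requires legal analysis
# of the Act's prohibited-practice list and its high-risk annex.
EXAMPLE_TIERS = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "safety component in critical national infrastructure": RiskTier.HIGH,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def describe_obligations(use_case: str) -> str:
    """Look up an example use case and summarise the obligations its tier carries."""
    tier = EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} ({tier.value})"

for case in EXAMPLE_TIERS:
    print(describe_obligations(case))

Recording the tier alongside each use case gives later governance steps an explicit hook for deciding how much control a given system needs.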
In contrast, the UK’s pro-innovation approach to AI regulation shifts the balance towards sector-based regulation, with additional coordination from government to support regulators on issues requiring cross-cutting collaboration, such as monitoring and evaluating the framework’s effectiveness, assessing risks across the economy, and providing education and awareness to give clarity to businesses.11 The UK’s approach recognises that regulation is not always the most effective way to support responsible innovation; instead, regulation is aligned with and supplemented by a variety of tools for trustworthy AI, such as assurance techniques, voluntary guidance and technical standards.
Challenges faced by businesses
In the face of the shifting regulatory landscape, businesses must confront several challenges as they integrate AI technologies into their operations:
Keeping up with technology changes. As generative AI technologies like GPT-4 continue to advance, businesses must question their underlying assumptions about existing AI risks, which are likely to have been based on discrete use cases and data.
Keeping up with regulatory changes. Businesses must stay informed and agile as they adapt to the ever-changing AI regulatory environment, which can be a daunting task given the speed at which new policies and guidelines are introduced.
Allocating resources for compliance. Ensuring that organisations remain within the boundaries of various AI regulations can be resource-intensive, requiring businesses to allocate time, personnel, finances or independent reviewers to meet a diverse set of requirements.
Combining innovation with ethical considerations. Companies must recognise that ethical design drives growth and innovation because systems that adhere to ethical principles and regulations tend to be higher performing whilst also protecting customers and society.
Managing potential liabilities arising from generative AI use. As organisations further integrate AI into business operations, companies must navigate the potential legal liabilities and reputational risks that may arise from deploying these technologies.
Navigating different ethical regimes as well as cross-border legal and regulatory requirements. For businesses operating internationally, remaining sensitive to and complying with ‘softer’ cultural norms as well as myriad cross-border legal and regulatory requirements can be a complex and challenging undertaking.
Chapter 3
Turn principles and policies into trust
A principles-based framework can help organisations create common ethical standards.
In today's rapidly evolving technology landscape, creating trusted AI systems urgently requires organisations to implement a flexible, principles-based approach. Such a framework would offer a systematic way for businesses to ensure that their AI systems adhere to the common ethical standards and best practices demanded by governments, whilst providing clear actions for dealing with the tailored requirements of particular jurisdictions or sector-specific regulators.
Seven steps for operationalising trusted AI:
Establish a consistent ethical framework.
Develop an ethical framework tailored to your organisation, drawing on existing principles established by the business, the OECD's AI Principles, or an independent reviewer as a foundation. This framework should provide clear guidance on ethical goals, considerations and boundaries within the context of the company and the industry sector in which it operates.
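As a sketch of how such guidance can be made usable by later review steps, the Python fragment below encodes principles as structured data. The principle names trace to the OECD AI Principles; the fields and review questions are assumptions to be tailored by each organisation.

from dataclasses import dataclass, field

@dataclass
class EthicalPrinciple:
    """One principle in the framework, traced back to its source."""
    name: str
    source: str                        # e.g. the OECD AI Principle it draws on
    review_questions: list[str] = field(default_factory=list)

# Illustrative entries: the names follow the OECD AI Principles,
# the review questions are assumptions.
FRAMEWORK = [
    EthicalPrinciple(
        "Transparency",
        "OECD AI Principle: transparency and explainability",
        ["Can affected users find out that AI contributed to a decision?"],
    ),
    EthicalPrinciple(
        "Accountability",
        "OECD AI Principle: accountability",
        ["Does every deployed system have a named, accountable owner?"],
    ),
]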
Create a cross-functional team.
Assemble a diverse, multi-disciplinary team with representation from various areas, such as domain experts, ethicists, data scientists, IT, legal, human resources, technology risk and compliance. This team will oversee the implementation of your ethical framework, allowing the business to align AI technologies, including generative AI, with pertinent values, such as inclusivity, transparency, robustness and accountability, ultimately fostering trust and driving positive planetary impact.
Build an inventory of current AI systems.
The risk and internal audit functions in many organisations remain largely unaware of the scale at which AI systems are deployed across the enterprise. Creating a baseline inventory of AI systems and their data, together with a consistent framework for assessing the inherent risk of each AI use case, should guide the level of governance and control required to mitigate that risk and maximise value. Available guidance in this area is largely based on draft regulation that seeks to protect human beings and the environment; organisations must not forget to consider commercial risk as well.
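A minimal sketch of what one inventory record and a first-pass inherent-risk rule might look like is shown below; every field name and scoring criterion is an illustrative assumption rather than a prescribed schema.

from dataclasses import dataclass
from enum import IntEnum

class InherentRisk(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class AISystemRecord:
    """One row in the enterprise AI inventory; field names are illustrative."""
    system_id: str
    business_owner: str
    purpose: str
    data_categories: list[str]            # e.g. ["customer PII", "transaction history"]
    affects_individuals: bool
    commercial_criticality: InherentRisk  # commercial risk, not only regulatory

def inherent_risk(record: AISystemRecord) -> InherentRisk:
    """A deliberately crude first-pass rule, to be replaced by agreed criteria:
    anything touching individuals or personal data is treated as high risk;
    otherwise commercial criticality drives the score."""
    if record.affects_individuals or "customer PII" in record.data_categories:
        return InherentRisk.HIGH
    return record.commercial_criticality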
Develop clear AI auditing procedures.
Create a set of guidelines that translate your ethical framework into practical, actionable steps for AI developers and engineers, as well as those who use AI to partially or fully automate their activities. These guidelines should encompass the entire AI lifecycle, from design to deployment, addressing data collection, model development, performance monitoring and third-party risks.
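One way to make such guidelines checkable rather than purely descriptive is to key them to the lifecycle stages named above, as in this illustrative sketch; the individual checks are assumptions, not an authoritative audit programme.

# Lifecycle stages mirror the step above; the checks themselves are assumptions.
AUDIT_CHECKLIST = {
    "design": [
        "Intended purpose, users and failure modes documented",
        "Ethical framework review completed and signed off",
    ],
    "data collection": [
        "Lawful basis and provenance of training data recorded",
        "Known sources of bias logged with mitigations",
    ],
    "model development": [
        "Evaluation metrics agreed with the business owner",
    ],
    "deployment": [
        "Human oversight and rollback path defined",
    ],
    "performance monitoring": [
        "Drift and degradation alerts configured",
    ],
    "third parties": [
        "Vendor model provenance and licence terms reviewed",
    ],
}

def outstanding_checks(completed: set[str]) -> list[str]:
    """Return every checklist item not yet marked complete."""
    return [check
            for checks in AUDIT_CHECKLIST.values()
            for check in checks
            if check not in completed]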
Integrate ethics into AI development.
Embed ethical considerations into every stage of the AI development process, ensuring that developers, engineers, product owners and users understand the legal and ethical considerations of the AI they are building or buying and their responsibility to apply appropriate safeguards. This might include implementing ethical checkpoints or gate-based reviews at crucial development milestones and incorporating ethics-based metrics and KPIs to evaluate AI performance and impact on business outcomes.
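As an illustration of a gate-based review, the sketch below compares measured metrics against agreed thresholds before a release proceeds. The metric names and threshold values are assumptions; in practice, gates would be agreed with risk, legal and business owners.

# Assumed gate: a minimum accuracy and a maximum fairness gap between groups.
GATE = {
    "min_accuracy": 0.90,
    "max_demographic_parity_gap": 0.05,
}

def passes_gate(metrics: dict) -> tuple[bool, list[str]]:
    """Compare measured metrics with the gate; return pass/fail plus reasons."""
    failures = []
    if metrics.get("accuracy", 0.0) < GATE["min_accuracy"]:
        failures.append("accuracy below agreed minimum")
    if metrics.get("demographic_parity_gap", 1.0) > GATE["max_demographic_parity_gap"]:
        failures.append("fairness gap above agreed maximum")
    return (not failures, failures)

ok, reasons = passes_gate({"accuracy": 0.93, "demographic_parity_gap": 0.08})
print(ok, reasons)  # False ['fairness gap above agreed maximum']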
Build awareness and training.
Ensure that everyone in the organisation, from business leaders to back-office professionals, is aware of AI and the ethical principles associated with its development and use. In our experience, although ethical frameworks are essential, they can sometimes fail to become properly embedded and operationalised when leadership is not fully appreciative of the risks.
Monitor and continuously improve.
Consider an independent, regular audit of AI systems to assess their ethical performance, addressing any shortcomings or adverse effects. Maintain a central inventory of AI systems to support risk management and regulatory compliance. Additionally, gather feedback from stakeholders and users to refine the AI auditing guidelines, ensuring that the organisation’s ethical framework remains relevant and up to date.
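A minimal sketch of the regular-audit idea: flag inventory entries whose last ethical review is older than a cadence set by risk level. The cadences, field names and risk labels are assumptions to be set by each organisation.

from datetime import date, timedelta

# Assumed re-audit cadences, keyed by the risk level from the inventory step.
REAUDIT_INTERVAL = {
    "HIGH": timedelta(days=90),
    "MEDIUM": timedelta(days=180),
    "LOW": timedelta(days=365),
}

def overdue_for_audit(last_audit: date, risk_level: str, today: date) -> bool:
    """True when a system's last ethical audit is older than its cadence allows."""
    return today - last_audit > REAUDIT_INTERVAL[risk_level]

print(overdue_for_audit(date(2023, 1, 1), "HIGH", today=date(2023, 6, 1)))  # True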
Summary
In the face of a patchwork of proposed regulations and the rise of generative AI, businesses face the daunting challenge of building trust in their AI-driven products and services. This requires a proactive approach to managing risk and a culture of responsibility. A principles-based framework for trusted AI offers a flexible solution to navigating the complexities of AI ethics and regulation.
By adopting such a framework, organisations can demonstrate their commitment to transparency, accountability and fairness, and drive AI-powered innovation that benefits stakeholders and shapes a more equitable future.
GPT-4 was used for help with wording, formatting, and styling throughout this work.