Chapter 1
Embedding trust into every facet of AI
Principles designed to foster confidence
The first step in minimizing the risks of AI is to promote awareness of them at the executive level as well as among the designers, architects and developers who build AI systems.
Then, the organization must commit to proactively designing trust into every facet of the AI system from day one. This trust should extend to the strategic purpose of the system, the integrity of data collection and management, the governance of model training and the rigor of techniques used to monitor system and algorithmic performance.
Adopting a set of core principles to guide AI-related design, decisions, investments and future innovations will help organizations cultivate the necessary confidence and discipline as these technologies evolve.
Remember, AI is constantly changing, both in how organizations use it and in what the technology itself can do.
In our ongoing dialogues with clients, regulators and other stakeholders, three core principles have consistently emerged:
- Purposeful design: Design and build systems that purposefully integrate the right balance of robotic, intelligent and autonomous capabilities to advance well-defined business goals, mindful of context, constraints, readiness and risks.
- Agile governance: Track emergent issues across social, regulatory, reputational and ethical domains to inform processes that govern the integrity of a system, its uses, architecture and embedded components, data sourcing and management, model training and monitoring.
- Vigilant supervision: Continuously fine-tune, curate and monitor systems to achieve reliability in performance, identify and remediate bias, and promote transparency and inclusiveness.
What makes these principles specific to AI? It’s the qualifiers in each one: purposeful, agile and vigilant. These characteristics address the unique facets of AI that can pose the greatest challenges.
For example, the use of AI in historically “human-only” areas is challenging the conventional design process. After all, the whole point of AI is to incorporate and, in effect, automate aspects of human judgment and decision-making.
Similarly, as the technologies and applications of AI are evolving at breakneck speed, governance must be sufficiently agile to keep pace with its expanding capabilities and potential impacts. And lastly, while all new innovations thrive with monitoring and supervision, the sheer stakes at play, plus the ongoing, dynamic “learning” nature of AI (which means it continues to change after it has been put in place) require more vigilance than organizations have typically adopted.
With these guiding principles at the core, the organization can then move purposefully to assess each AI project against a series of conditions or criteria. Evaluating each AI project against these conditions, which extend beyond those used for legacy technology, brings much-needed discipline to the process of considering the broader contexts and potential impacts of AI.
Assessing AI risks
Let’s look at four conditions that you can use to assess the risk exposure of an AI initiative:
- Ethics — The AI system needs to comply with ethical and social norms, including corporate values. This includes the human behavior in designing, developing and operating AI, as well as the behavior of AI as a virtual agent. This condition, more than any other, introduces considerations that have historically not been mainstream for traditional technology, including moral behavior, respect, fairness, bias and transparency.
- Social responsibility — The potential societal impact of the AI system should be carefully considered, including its impact on the financial, physical and mental well-being of humans and our natural environment. For example, potential impacts might include workforce disruption, skills retraining, discrimination and environmental effects.
- Accountability and “explainability” — The AI system should have a clear line of accountability to an individual. Also, the AI operator should be able to explain the AI system’s decision framework and how it works. This is more than simply being transparent; this is about demonstrating a clear grasp of how AI will use and interpret data, what decisions it will make with it, how it may evolve and the consistency of its decisions across subgroups. Not only does this support compliance with laws, regulations and social norms, it also flags potential gaps in essential safeguards.
- Reliability — Of course, the AI system should be reliable and perform as intended. This involves testing the functionality and decision framework of the AI system to detect unintended outcomes, system degradation or operational shifts — not just during the initial training or modelling but also throughout its ongoing “learning” and evolution.
Taking the time to assess a proposed AI initiative against these criteria before proceeding can help flag potential deficiencies so you can mitigate potential risks before they arise.
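In practice, this pre-launch assessment can be as simple as a structured scorecard covering the four conditions above. The sketch below is purely illustrative; the condition names, the 0-5 scoring scale and the deficiency threshold are assumptions for the example, not a prescribed methodology:

```python
from dataclasses import dataclass, field

# The four risk conditions discussed above.
CONDITIONS = ["ethics", "social_responsibility", "accountability", "reliability"]

@dataclass
class AIProjectAssessment:
    """Records reviewer scores (0-5, hypothetical scale) per risk condition."""
    name: str
    scores: dict = field(default_factory=dict)

    def flag_deficiencies(self, threshold: int = 3) -> list:
        """Return the conditions whose score falls below the threshold."""
        return [c for c in CONDITIONS if self.scores.get(c, 0) < threshold]

# Example: a hypothetical loan-approval model scores low on
# social responsibility, flagging it for remediation before launch.
assessment = AIProjectAssessment(
    name="loan-approval-model",
    scores={"ethics": 4, "social_responsibility": 2,
            "accountability": 5, "reliability": 3},
)
print(assessment.flag_deficiencies())  # ['social_responsibility']
```

The value of even a toy scorecard like this is that it forces an explicit, reviewable judgment on each condition before the project proceeds.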
Chapter 2
Taking a holistic view of AI risks
Understand risk to unlock attributes of trusted AI
Having met these conditions for AI confidence, the organization can then put the next layer of checks and balances into action.
To truly achieve and sustain trust in AI, an organization must understand, govern, fine-tune and protect all of the components embedded within and around the AI system. These components can include data sources, sensors, firmware, software, hardware, user interfaces and networks, as well as human operators and users.
This holistic view requires a deeper understanding of the unique risks across the whole AI chain. We have developed a framework to help enterprises explore the risks that go beyond the underlying mathematics and algorithms of AI and extend to the systems in which AI is embedded.
Our unique “systems view” enables the organization to develop five key attributes of a trusted AI ecosystem:
- Transparency: From the outset, end users must know and understand when they are interacting with AI. They must be given appropriate notification and be provided with an opportunity to (a) select their level of interaction and (b) give (or refuse) informed consent for any data captured and used.
- “Explainability”: The concept of explainability is growing in influence and importance in the AI discipline. Simply put, it means the organization should be able to clearly explain the AI system; that is, the system shouldn’t outpace the ability of the humans to explain its training and learning methods, as well as the decision criteria it uses. These criteria should be documented and readily available for human operators to review, challenge and validate throughout the life of the AI system as it continues to “learn.”
- Bias: Inherent biases in AI may be inadvertent, but they can be highly damaging both to AI outcomes and trust in the system. Biases may be rooted in the composition of the development team, or the data and training/learning methods, or elsewhere in the design and implementation process. These biases must be identified and addressed through the entire AI design chain.
- Resiliency: The data used by the AI system components and the algorithms themselves must be secured against the evolving threats of unauthorized access, corruption and attack.
- Performance: The AI’s outcomes should be aligned with stakeholder expectations and perform at a desired level of precision and consistency.
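To make the bias attribute concrete, one simple and widely used check is the demographic parity gap: the spread in positive-outcome rates across subgroups. The sketch below is illustrative only; the sample data is invented, and treating a large gap as evidence of bias is a simplifying assumption (real fairness audits combine several complementary metrics):

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Difference between the highest and lowest positive-outcome
    rates across subgroups; a large gap suggests possible bias."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Invented example: group "a" receives positive outcomes at 0.75,
# group "b" at 0.25, so the gap is 0.5.
gap, rates = demographic_parity_gap(
    predictions=[1, 1, 0, 1, 0, 0, 1, 0],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
print(round(gap, 2))  # 0.5
```

A check like this belongs in the monitoring loop, not just in pre-launch testing, because the gap can widen as the system continues to learn from new data.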
Those organizations that anchor their AI strategy and systems in these guiding principles and key attributes will be better positioned for success in their AI investments. Achieving this state of trusted AI takes not only a shift in mindset toward more purposeful AI design and governance, but also specific tactics designed to build that trust.
Chapter 3
Leading tactics for managing risk and building trust
Emerging AI governance practices
With the increasing impact AI is having on business operations, boards need to understand how AI technologies will impact their organization’s business strategy, culture, operating model and sector. They need to consider how their dashboards are changing and how they can evaluate the sufficiency of management’s governance over AI, including ethical, societal and functional impacts.
To truly apply trusted AI principles, organizations need the right governance in place.
Let’s explore some of the leading tactics that we have observed with our clients to help build a trusted AI ecosystem:
AI ethics board — A multi-disciplinary advisory board, reporting to and/or governed by the board of directors, can provide independent guidance on ethical considerations in AI development and capture perspectives that go beyond a purely technological focus. Advisors should be drawn from ethics, law, philosophy, privacy, regulations and science to provide a diversity of perspectives and insights on issues and impacts that may have been overlooked by the development team.
AI design standards — Design policies and standards for the development of AI, including a code of conduct and design principles, help define the AI governance and accountability mechanisms. They can also enable management to identify what is and is not acceptable in AI implementation. For example, these standards could help the organization define whether or not it will develop autonomous agents that could physically harm humans.
AI inventory and impact assessment — Conducting a regular inventory of all AI algorithms can reveal any orphan AI technologies being developed without appropriate oversight or governance. In turn, each algorithm in the inventory should be assessed to flag potential risks and evaluate the impact on different stakeholders.
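A minimal inventory can be a structured record per algorithm, flagging entries that lack an accountable owner or a documented governance process. The record fields below are hypothetical, chosen only to illustrate the idea:

```python
from dataclasses import dataclass

@dataclass
class AlgorithmRecord:
    """Hypothetical inventory entry for one AI algorithm."""
    name: str
    owner: str           # accountable individual or team ("" if none)
    has_oversight: bool  # covered by a documented governance process?
    stakeholders: tuple  # groups whose outcomes the model affects

def orphaned(inventory):
    """Algorithms with no accountable owner or no governance oversight."""
    return [r.name for r in inventory
            if not r.owner or not r.has_oversight]

# Invented example: the resume screener has no owner and no oversight,
# so the inventory review surfaces it as an orphan technology.
inventory = [
    AlgorithmRecord("churn-model", "data-science", True, ("customers",)),
    AlgorithmRecord("resume-screener", "", False, ("applicants",)),
]
print(orphaned(inventory))  # ['resume-screener']
```

Once each algorithm is in the inventory, its `stakeholders` field gives the starting point for the impact assessment the paragraph describes.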
Validation tools — Validation tools and techniques can help make certain that the algorithms are performing as intended and are producing accurate, fair and unbiased outcomes. These tools can also be used to track changes to the algorithm’s decision framework and should evolve as new data science techniques become available.
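One simple validation technique is to compare a deployed model's output distribution against the baseline recorded when it was validated. This sketch assumes a binary classifier and a hypothetical tolerance; production validation suites track many more signals (accuracy, calibration, per-subgroup rates):

```python
def output_shift(baseline_preds, current_preds, tolerance=0.05):
    """Compare the positive-prediction rate of a deployed model against
    its validated baseline; a shift beyond the tolerance signals that
    the model's decision framework may have drifted."""
    base_rate = sum(baseline_preds) / len(baseline_preds)
    curr_rate = sum(current_preds) / len(current_preds)
    shift = abs(curr_rate - base_rate)
    return shift, shift > tolerance

# Invented example: the positive rate has moved from 0.5 to 0.75,
# exceeding the (assumed) 5% tolerance and triggering review.
shift, drifted = output_shift([1, 0, 1, 0, 1, 0, 1, 0],
                              [1, 1, 1, 0, 1, 1, 1, 0])
print(round(shift, 2), drifted)  # 0.25 True
```

Because an AI system continues to learn after deployment, a check like this should run on a schedule rather than once at release.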
Awareness training — Educating executives and AI developers on the potential legal and ethical considerations around AI and their responsibility to safeguard users’ rights, freedoms and interests is an important component of building trust in AI.
Independent audits — Regular independent AI ethical and design audits by a third party are valuable in testing and validating AI systems. Applying a range of assessment frameworks and testing methods, these audits assess the system against existing AI and technology policies and standards. They also evaluate the governance model and controls across the entire AI life cycle. Given that AI is still in its infancy, this rigorous approach to testing is critically important for safeguarding against unintended outcomes.
A foundation of trust to enable a confident future
As AI and its technologies continue to evolve at an astonishing rate — and as we find new and innovative uses for them — it is more important than ever for organizations to embed the principles and attributes of trust into their AI ecosystem from the very start.
Those who embrace leading practices in ethical design and governance will be better equipped to mitigate risks, safeguard against harmful outcomes and, most importantly, sustain the essential confidence that their stakeholders seek. Enabled by the advantages of trusted AI, these organizations will be better positioned to reap the potential rewards of this tremendously exciting, yet still largely uncharted journey.
What questions should leaders be asking?
- How can my organization minimize risks in our AI journey while still enabling us to harness the full potential of these exciting new technologies?
- How can my organization use these technologies to augment human intelligence and unlock innovation?
- What steps can we take to build our AI strategy and systems on a foundation of trust and accountability?
Summary
The potential of AI to transform our world is tremendous, but the risks are significant, complex and fast-evolving. Those who embed the principles of trust in AI from the start are better positioned to reap AI’s greatest rewards.