Question 1: Why is AI different from other technologies in terms of trust?
Unlike other technologies, AI adapts on its own, learning through use, so the decisions it makes today may differ from those it makes tomorrow. Those changes must be continuously monitored to validate that its decisions remain appropriate, high quality, and aligned with corporate values.
For instance, risk can be introduced when AI systems are trained using historical data. Consider how that applies to hiring decisions. Does the historical data account for biases that women and minorities have faced? Do the algorithms reproduce past mistakes even though governance processes were implemented to prevent them? Does the system prevent unfairness and comply with laws?
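One concrete way to probe the fairness question above is to compare selection rates across groups, as in the "four-fifths rule" used in US employment-selection guidance (a group's rate should be at least 80% of the highest group's rate). The sketch below is a minimal, hypothetical illustration of that check; the data, group names, and threshold are assumptions, not a complete bias audit.

```python
# Minimal bias audit sketch: compare hiring selection rates across groups
# using the "four-fifths rule" (each group's rate >= 80% of the best rate).
# All records and group names below are hypothetical illustration data.

from collections import defaultdict

def selection_rates(records):
    """Return the hire rate per group from (group, hired) pairs."""
    hired = defaultdict(int)
    total = defaultdict(int)
    for group, was_hired in records:
        total[group] += 1
        hired[group] += int(was_hired)
    return {g: hired[g] / total[g] for g in total}

def four_fifths_violations(rates, threshold=0.8):
    """Flag groups whose selection rate falls below threshold * best rate."""
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = selection_rates(records)
print(rates)                          # group_a: 0.75, group_b: 0.25
print(four_fifths_violations(rates))  # group_b flagged: 0.25 < 0.8 * 0.75
```

A check like this only surfaces a disparity; deciding whether the disparity reflects bias, and what to do about it, still requires human governance.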
“AI can be a great tool to augment humans, but we must understand its limitations,” says Nigel Duffy, former EY Global AI Leader. “The best answer from AI may still not be appropriate based on cultural and corporate values.”
AI’s decisions must be aligned with corporate values, as well as broader ethical and social norms, yet humans’ ethical standards are based on many things: our families, our cultures, our religions, our communities. And development teams are often mostly composed of men who are white or Asian, instead of reflecting our diverse world. Do their personal values reflect the specific corporate values we want applied in these situations?
But we also need to ask ourselves whether these systems are doing what we expect them to do. AI use is spreading, yet few organizations have mature capabilities to monitor performance. Poorly managed automated decision systems have come close to destroying companies outright: Knight Capital, for example, lost more than $400 million in under an hour in 2012 when a runaway trading system went unchecked. A company can torpedo itself in a day.
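The monitoring capability described above can be as simple as a guardrail that halts automated decisions when their behavior drifts from an expected baseline. The sketch below is one hypothetical way to do that; the baseline rate, tolerance, and window size are illustrative assumptions, not a production design.

```python
# Minimal monitoring sketch: halt an automated decision system when its
# recent approval rate drifts too far from a baseline. The thresholds and
# window size here are hypothetical.

from collections import deque

class DriftGuard:
    def __init__(self, baseline_rate, tolerance=0.15, window=100):
        self.baseline = baseline_rate    # expected long-run approval rate
        self.tolerance = tolerance       # allowed deviation before halting
        self.recent = deque(maxlen=window)
        self.halted = False

    def record(self, approved):
        """Record one decision; halt once the windowed rate drifts out of bounds."""
        self.recent.append(int(approved))
        if len(self.recent) == self.recent.maxlen:
            rate = sum(self.recent) / len(self.recent)
            if abs(rate - self.baseline) > self.tolerance:
                self.halted = True
        return self.halted

guard = DriftGuard(baseline_rate=0.30, tolerance=0.15, window=50)
# Simulate a runaway system that suddenly starts approving everything.
for _ in range(50):
    guard.record(True)
print(guard.halted)  # True: windowed rate 1.0 is far outside 0.30 +/- 0.15
```

A tripwire like this cannot judge whether individual decisions are correct, but it catches the runaway-system failure mode before it compounds for a full day.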