1. Plan around risk as well as rewards
The potential gains from AI are so attractive that companies often rush to deploy the technology, especially when pilot projects suggest that significant results are within reach. Instead of rushing in, companies should first take an inventory of where AI might be used, focusing on potential benefits in areas that align with existing corporate strategy. Reputational risks must then be evaluated for each use case, alongside financial and execution risks.
Ideally, this should take place before AI is deployed. Companies already using the technology should audit the risks of current deployments and pause them if necessary.
2. Invest in data
An AI model is only as good as the data that goes into it. There is no point in hiring world-class data scientists if you lack meaningful datasets on which they can train models. AI projects consist of many parts, from data quality frameworks to machine learning operations (MLOps) and change management, so the strategy should account for all of them rather than focus on building a few quick proofs of concept (POCs). Data ingestion must also be ongoing, so that models capture both fast-moving and slow-moving signals and always reflect up-to-date information.
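To make the data quality point concrete, here is a minimal sketch in Python of an automated quality gate that could run before each retraining cycle. The file name, column names and thresholds are hypothetical, and a production framework would include many more checks (distribution shifts, duplicates, referential integrity).

```python
# A minimal data quality gate: block retraining if incoming data fails
# basic checks. All names and thresholds here are illustrative.
import pandas as pd

REQUIRED_COLUMNS = {"customer_id", "income", "loan_amount", "approved", "ingested_at"}
MAX_NULL_FRACTION = 0.05   # tolerate at most 5% missing values per column
MAX_STALENESS_DAYS = 7     # treat data older than a week as stale

def passes_quality_gate(df: pd.DataFrame) -> bool:
    # 1. Schema check: every expected column must be present.
    if not REQUIRED_COLUMNS.issubset(df.columns):
        return False
    # 2. Completeness check: no column may exceed the null threshold.
    if (df[list(REQUIRED_COLUMNS)].isna().mean() > MAX_NULL_FRACTION).any():
        return False
    # 3. Freshness check: the newest record must be recent enough,
    #    so both fast- and slow-moving signals stay up to date.
    age_days = (pd.Timestamp.now() - df["ingested_at"].max()).days
    return age_days <= MAX_STALENESS_DAYS

df = pd.read_csv("loan_applications.csv", parse_dates=["ingested_at"])
if passes_quality_gate(df):
    print("Data passed the quality gate; safe to retrain.")
else:
    print("Data failed the quality gate; halt retraining and alert the data team.")
```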
3. Use transparent models
AI may make a different decision today than it did yesterday as it learns and adapts to new inputs. But business decisions, such as awarding promotions or deciding who is granted a loan, must be explainable, justifiable and auditable. Otherwise, a company will be unable to quantify the risks or justify the outcomes to customers, employees and regulators. Ultimately, managers still need to be held accountable for the decisions they make.
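One practical way to keep such decisions explainable is to prefer inherently interpretable models where the stakes demand it. The sketch below, with hypothetical feature names and synthetic training data, shows how a logistic regression exposes its reasoning through its coefficients; where a more complex model is unavoidable, post-hoc explanation tools such as SHAP can provide a similar audit trail.

```python
# A transparent baseline: a logistic regression whose coefficients can be
# read directly, so every decision is traceable to the input features.
# Feature names are placeholders and the training data is synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

FEATURES = ["income", "debt_ratio", "years_employed", "prior_defaults"]

X, y = make_classification(n_samples=1_000, n_features=len(FEATURES), random_state=0)
model = LogisticRegression().fit(X, y)

# Each coefficient shows how strongly, and in which direction, a feature
# pushes the loan decision: an audit trail a black-box model cannot offer.
for name, coef in zip(FEATURES, model.coef_[0]):
    print(f"{name:>15}: {coef:+.3f}")
```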
For both the models and the underlying data, it is essential that people’s privacy is protected and that the model remains robust against unforeseen incidents. The best practice is to embed AI solutions within established cybersecurity frameworks.
4. Audit before deployment — and after
Ideally, an AI model should be peer-reviewed and audited before it is deployed. The role of the AI auditor, who examines and signs off on a model or recommends improvements to mitigate risk, is likely to become more common. Once deployed, a model must be monitored and re-evaluated on an ongoing basis, using the best available tools and techniques to ensure its outputs continue to deliver against corporate objectives.
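As one illustration of continuous monitoring, the sketch below uses a two-sample Kolmogorov-Smirnov test to flag when the distribution of a model's live scores drifts away from what was observed at validation time. The data is synthetic and the threshold illustrative; a real pipeline would also track input features and business KPIs.

```python
# A minimal drift monitor: compare this week's production scores against
# the scores seen at validation time. A significant shift is a signal to
# re-audit the model. The data and threshold below are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=42)
reference_scores = rng.beta(2, 5, size=5_000)  # stand-in for validation-time scores
live_scores = rng.beta(2, 4, size=5_000)       # stand-in for production scores

statistic, p_value = ks_2samp(reference_scores, live_scores)
if p_value < 0.01:
    print(f"Score distribution has drifted (KS = {statistic:.3f}); trigger a re-audit.")
else:
    print("No significant drift detected; continue routine monitoring.")
```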
5. Hire diverse teams
Even a perfectly designed AI may reach conclusions that are unacceptable to a company on ethical grounds. Models must therefore be subjected to more than technical checks, and a range of disciplines should be involved in designing and evaluating AI applications. A business line leader or designer, for example, may bring different insights than a data scientist or a data engineer.
Teams should reflect a diversity of roles and backgrounds, including gender, religion and race. This increases the chance of bias being identified early on and of designing a data collection process that is truly representative.
MENA companies have much to gain from deploying AI more widely, as long as they do so wisely. That means gathering usable data, using transparent models, and building interdisciplinary teams.
Summary
MENA companies are halfway toward successfully embedding AI in decision-making. The key focus now should be improving current processes, particularly around trustworthy data, transparent algorithms and careful setup, since these largely determine whether an AI model will deliver accurate results.