Progress brings risk
While AI holds significant promise for improving public safety and reducing costs, citizens and governments also need to consider its potential risks. The main risks in this area are:
- Imperfect technology. AI isn’t perfect, and much of it doesn’t yet meet the standards needed to use it with confidence in the public safety realm. One frequently cited example is Amazon’s facial recognition software, which in 2018 incorrectly matched 28 members of the US Congress to criminal mugshots. A facial recognition trial at London’s Notting Hill Carnival in 2017 was accurate only 2% of the time.
These examples show that, at this point in AI’s development, governments and public safety departments can’t simply “turn on” AI and take its findings as fact. Moreover, while AI analysis can avoid some human error and bias, it also compounds any errors that are ingrained in a system’s programming. Machine learning can absorb the biases of its designers and codify them through learned associations built on inaccurate foundational premises. Governments need to know that algorithms are fair and trustworthy before they apply AI.
- Civil liberty and privacy concerns. Human rights groups globally have expressed their concerns about AI’s potential to erode civil liberties and citizen privacy, particularly around public safety. Organizations from police forces to airlines can currently collect and use all kinds of personal data without the knowledge or permission of citizens. Every day, individuals cede control of their data profile, often without realizing it, through activities such as commercial DNA testing, tax filings, and priority security screening lines at airports. Governments can then use that data to, for example, assess someone’s ability to post bail while awaiting trial. They can even “rank” citizens, as in China’s controversial social credit score program.
In a recently released report, the American Civil Liberties Union put forward some worrying potential scenarios. For example, a sheriff might receive a daily list of citizens who appear to be under the influence in public. The indicator could be something like changes to gait, speech or other patterns caught on surveillance cameras.
So while AI can bring big benefits for society, it can also erode presently accepted standards of privacy and civil liberties.
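The bias-compounding mechanism described above can be made concrete with a small, hypothetical sketch. The data, the neighborhoods, and the “risk model” below are all invented for illustration: a naive model trained on arrest records from uneven patrolling simply learns the sampling bias, and allocating future patrols by its scores would generate still more records from the over-patrolled area, reinforcing the bias in a feedback loop.

```python
from collections import Counter

# Hypothetical historical arrest records: neighborhood "A" was patrolled far
# more heavily than "B", so it appears in the data far more often -- a
# sampling bias, not evidence of different underlying behavior.
historical_arrests = ["A"] * 80 + ["B"] * 20

# A naive "risk model" that just learns base rates from the biased data.
counts = Counter(historical_arrests)
total = sum(counts.values())
risk_score = {hood: counts[hood] / total for hood in counts}

print(risk_score)  # {'A': 0.8, 'B': 0.2}

# Feedback loop: if patrols are now allocated in proportion to these scores,
# neighborhood A generates even more arrest records, and retraining on that
# data pushes its score higher still -- the model codifies the original bias.
```

The point of the sketch is that nothing in the code is malicious; the distortion comes entirely from the data-collection process, which is why the safeguards discussed below matter.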
Factors governments need to consider
To guard against erosions of civil liberties and citizen privacy, governments will need to apply a set of fundamental principles for the use of AI in public safety. These should include:
- Building safeguards to protect privacy and prevent biases
- Ensuring AI efforts are consistent with applicable legal principles
- Creating trust by communicating with communities in an active and transparent way
- Including human insights and judgment in the final analysis of AI activities
If governments adhere to these principles, they can more effectively manage AI’s transformative impact and improve their citizens’ lives while protecting their privacy.
Summary
AI can play an important role in areas of public safety as far-ranging as narcotics, crime deterrence, natural disaster response and crowd control. But as governments explore the promises AI offers, they must also consider its risks to civil liberty and privacy, as well as its mixed record on accuracy and bias.