1. Many lawmakers are concerned with the implications of AI for national security
Many lawmakers are concerned with the implications of AI for national security, including the pace of adoption by the US defense and intelligence communities and how AI is being used by geopolitical adversaries. For example, congressional hearings¹ have examined² barriers to the Department of Defense (DoD) adopting AI technologies and considered risks from adversarial AI. There have also been calls for guidelines to govern the responsible use of AI in military operations, including weapons systems, to avoid unintended actions.³
Establishing and maintaining a competitive advantage on the global stage is a top priority of many lawmakers. Launching a bipartisan initiative to develop AI regulation, Senate Majority Leader Chuck Schumer (D-NY) expressed⁴ the need for the “U.S. to stay ahead of China and shape and leverage this powerful technology.”
2. Policymakers have raised concerns about AI’s potential impact on jobs
Many policymakers have raised concerns about AI’s potential impact on jobs, particularly in areas where workers could eventually be replaced, and about who should bear the costs of displacement and retraining. In a new world powered by AI, there are also questions about how to train a workforce to adapt to rapidly evolving technology and whether AI-reliant companies should be regulated and taxed differently than companies staffed by humans. While concerns about the impacts of technology on workers are not new, the pace at which companies are adopting AI is unparalleled, creating additional challenges and pressure.
3. Policymakers are focused on the risk AI technologies carry in making discriminatory decisions
Bias issues have been examined in several congressional hearings on AI and will remain a key concern as regulatory approaches are considered. Policymakers recognize that AI technologies, like human decision-makers, can make discriminatory decisions, and that these systems are only as effective as the data sets and algorithms they are built upon and the large language models that underpin them. In congressional hearings⁵, policymakers have expressed concern about the potential for AI to discriminate and have heard testimony about facial recognition software misidentifying individuals, particularly members of minority groups.
A report⁶ from the National Institute of Standards and Technology (NIST) provides an “initial socio-technical framing for AI bias” that focuses on mitigation through appropriate representation in AI data sets; testing, evaluation, validation, and verification of AI systems; and the impacts of human factors (including societal and historical biases).
4. Some policymakers are focused on the need for consumers to understand how and why AI technologies work