On Monday, President Joe Biden issued an executive order on AI that outlines the federal government’s first comprehensive regulations on generative AI systems. The order includes testing mandates for advanced AI models to ensure they can’t be used for creating weapons, suggestions for watermarking AI-generated media, and provisions addressing privacy and job displacement.
In the United States, an executive order allows the president to direct the operations of the federal government. Using his authority to set terms for government contracts, Biden aims to influence AI standards by stipulating that federal agencies may only enter into contracts with companies that comply with the government’s newly outlined AI regulations. In effect, the approach leverages the federal government’s purchasing power to drive compliance with the new standards.
As of press time Monday, the White House had not yet released the full text of the executive order, but from the Fact Sheet authored by the administration and from reporting on drafts of the order by Politico and The New York Times, we can piece together a picture of its contents. Some parts of the order reflect positions first laid out in Biden’s 2022 “AI Bill of Rights” guidelines, which we covered last October.
Amid fears of existential AI harms that made big news earlier this year, the executive order includes a notable focus on AI safety and security. For the first time, developers of powerful AI systems that pose risks to national security, economic stability, or public health will be required to notify the federal government when training such a model. Under the Defense Production Act, they will also have to share safety test results and other critical information with the US government before making those models public.
Moreover, the National Institute of Standards and Technology (NIST) and the Department of Homeland Security will develop and implement standards for “red team” testing, aimed at ensuring that AI systems are safe and secure before public release. Those efforts may be easier said than done, however, because what counts as a “foundation model” or a “risk” remains open to interpretation.
The order also suggests, but doesn’t mandate, the watermarking of photos, videos, and audio produced by AI. This reflects growing concerns about the potential for AI-generated deepfakes and disinformation, particularly in the context of the upcoming 2024 presidential campaign. To help keep official communications free of AI meddling, the Fact Sheet says federal agencies will develop and use tools to “make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.”
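The Fact Sheet doesn’t spell out how such authentication tools would work, but digital signatures are one plausible building block. As a minimal sketch (our illustration, not a scheme described in the order), an agency could sign each outgoing message with a private key so that anyone holding the matching public key can verify that the text hasn’t been altered:

```python
# Illustrative sketch only, not a mechanism specified in the order.
# Verifying message authenticity with an Ed25519 digital signature
# (requires the third-party "cryptography" package).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The agency generates a key pair once and publishes the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The agency signs each outgoing communication with its private key.
message = b"Official statement from the agency"
signature = private_key.sign(message)

# Anyone holding the public key can check that the message is unaltered;
# verify() raises InvalidSignature if the message or signature is bad.
try:
    public_key.verify(signature, message)
    print("Message is authentic")
except InvalidSignature:
    print("Message was tampered with")
```

Real-world provenance schemes for media, such as the C2PA standard some companies have already adopted, combine signatures like these with metadata embedded in the file itself, though robustly watermarking AI-generated images, video, and audio remains an open research problem.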
Under the order, several agencies are directed to establish safety standards for the use of AI. For instance, the Department of Health and Human Services is tasked with developing standards for AI in health care, while the Department of Labor and the National Economic Council are to study AI’s impact on the job market and potential job displacement. While the order itself can’t prevent job losses due to AI advancements, the administration appears to be taking initial steps to understand and possibly mitigate the socioeconomic impact of AI adoption. According to the Fact Sheet, these studies aim to inform future policy decisions that could offer a safety net for workers in industries most likely to be affected by AI.