
The Biden Administration’s Approach to AI, Explained

The government already regulates artificial intelligence, but future federal policy will depend on how Congress understands its risks.

OpenAI and ChatGPT logos. (Photo by LIONEL BONAVENTURE/AFP via Getty Images)

Congress seems ready to act on new challenges posed by artificial intelligence platforms such as OpenAI’s ChatGPT, with some suggesting creation of a new, AI-specific federal agency. 

Three congressional committees held hearings last week on “generative” AI, which produces responses to prompts in the form of text, images, and audio, or combinations thereof.

But it’s not as if the federal government is currently doing nothing. Agencies across the government already interact with AI, and the Biden administration is laying the groundwork for future action.

How is the federal government regulating AI now?

For narrow uses of artificial intelligence, the agencies’ approach is fairly straightforward: The Food and Drug Administration, for example, regulates algorithmic medical devices (such as diagnostic tools), the Department of Transportation regulates self-driving car technology, and the Department of Defense regulates lethal autonomous weapons such as armed drones.

Recent Biden administration decisions suggest more of the same. On April 25, for instance, the leaders of four government agencies—the Federal Trade Commission (FTC), the Civil Rights Division of the Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), and the Equal Employment Opportunity Commission (EEOC)—issued a joint statement emphasizing that their “existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices.” In the case of the FTC and the CFPB, that authority means going after unfair and deceptive business practices and products; and for the DOJ and the EEOC, it means protecting people from discrimination.

What is the federal government considering for the future?

With all of this alphabet soup activity, a new agency dedicated to AI regulation could be redundant. In a federal government with over 2 million employees, “somebody’s got some authority to do something on AI,” said Adam Thierer, a senior fellow at the R Street Institute. “It’s not like we’re dealing with a complete policy vacuum.”

But the “foundation models” that power generative AI tools pose a different set of potential risks. Trained on unprecedented amounts of data, they “can be easily adapted to perform a wide range of downstream tasks,” according to the Stanford Institute for Human-Centered Artificial Intelligence.

That adaptability and range could contribute to a variety of policy problems. At last Tuesday’s hearing, Sen. Richard Blumenthal listed “weaponized disinformation, housing discrimination, harassment of women, and impersonation, fraud, voice cloning, deep fakes … the displacement of millions of workers, the loss of huge numbers of jobs.”

With problems like these in mind, Congress and the White House want to do more—and the administration has indicated it might push the bounds of its already broad statutory authority if Congress doesn’t act. Last month, the National Telecommunications and Information Administration (NTIA), a division of the Department of Commerce, requested public comment on AI accountability policy, including the possibility of audits and assessments of AI systems.

These impact assessments (undertaken on the front end of AI development) or audits (on the back end) would be meant to “help provide assurance that an AI system is trustworthy,” according to NTIA’s website, which also favorably compares the proposed processes to financial audits. But Thierer worries that the National Environmental Policy Act (NEPA)—a 1970 law that has ironically hampered the transition to cleaner energy infrastructure—is a better analog.

“In the world of finance, if you’re auditing the books, then the numbers either add up or they don’t,” he noted. In contrast, deciding whether an algorithm is unfairly biased is more subjective. Whether the government has the resources and personnel to competently implement and enforce AI audits or assessments is also an open question, given that about half of federal agencies failed to submit inventories of their own AI use cases even when a 2020 executive order required them to do so.

But beyond deepfakes, foundation models present more existential risks as well. Instead of narrow algorithms prescribing the wrong dose of a drug, bombing the wrong house, or crashing a Tesla, a single foundation model with enough computing power could come up with new cancer treatments or write a TV show—or synthesize a biological weapon and trick human agents into releasing it.

What about accounting for future AI development?

OpenAI CEO Sam Altman, a witness at a Senate Judiciary Committee hearing last week, is especially concerned about the implications of “frontier models” approaching artificial general intelligence (AGI)—hypothetical future AI systems that could think or reason in a way that matches or exceeds human capacity. Last week, he took to Twitter to reiterate that he thinks only models with certain capabilities should be regulated, heading off critics who say he wants industry regulation in order to keep his company on top.

“We need at some point to demarcate, ‘here be dragons,’” said Samuel Hammond, an economist at the Foundation for American Innovation. That wouldn’t mean embracing AI regulation uncritically—an approval process modeled on the FDA, as AI researcher Gary Marcus proposed in Tuesday’s hearing, could “crush the entire ecosystem outside of the biggest players,” Hammond said. But it could mean the government taking a more active role, not only in regulation, but also in research and development. Instead of the FDA, Hammond thinks we should look to early NASA or the Manhattan Project.

“The whole benefit of a new agency would be to have something that’s more nimble and can update more quickly than the U.S. government can, because the changes that are going to happen over the next five to 10 years are going to make it very difficult for the government to respond otherwise,” he said.

Price St. Clair is a former reporter for The Dispatch.
