The Gaps Between the Law and Artificial Intelligence

The burgeoning AI industry is raising questions for which federal law has few answers.

A visitor watches an animated screen at the Mobile World Congress, the telecom industry's biggest annual gathering, in Barcelona. (Photo by JOSEP LAGO/AFP via Getty Images)

Artificial intelligence has rocketed to the fore of American consciousness in the past few months, thanks to the launches of generative AI programs like ChatGPT and Midjourney. But as AI use has grown, so have the regulatory questions around it.

The Senate Homeland Security and Governmental Affairs Committee is holding a hearing Wednesday to address some of these questions. But the consequences of the lack of AI-specific policy have already started to play out in the legal arena.

Last month, for example, the U.S. Copyright Office rejected an attempt to copyright an AI-generated piece of art because it lacked the “human authorship” necessary for protection. The copyright request was filed on behalf of an AI program, and while a human seeking copyright protection for a work generated through AI might have a different result, the review board noted that copyright law does not outline rules for non-human authorship. But AI will almost certainly play a bigger role in content production in the future, which will spur more such requests.

Similar questions abound in other areas: Self-driving cars raise liability questions, while AI voice cloning and deepfake software raise questions about what rights individuals have to their own likenesses. In foreign policy, lawmakers will have to determine how best to navigate the minefield of American businesses providing AI research and technology to states unfriendly to American interests.

“With ChatGPT and everything, as AI has kind of come into the center of the cultural zeitgeist, people seem to be strangely sorting themselves into two camps,” Matthew Mittelsteadt, a research fellow at George Mason University’s Mercatus Center, told The Dispatch. “They’re either super hyped about the technology and they think it can have no downsides and it’s going to save us all, and then on the other end, there’s the people who are completely fearful.”

Both camps are “misrepresenting the tech as it looks today,” Mittelsteadt said. 

He’s geared some of his work toward encouraging lawmakers to study up on AI as they look to craft policy. “Just as policymakers need a working knowledge of economics, they need a working understanding of AI,” he wrote in a recent paper meant to be an AI introduction for lawmakers. “Why? Because AI is likely to affect all policy domains.”

Some elected officials have already taken that message to heart. Democratic Rep. Don Beyer of Virginia has been taking classes at George Mason University, working toward a master’s degree in the subject. Republican Rep. Jay Obernolte of California leads the AI caucus in the House of Representatives and earned a master’s degree in AI before entering politics.

Perhaps because in-depth understanding of AI remains rare among the general public, the federal government has done little to address the subject. The Biden administration has published an AI bill of rights, which outlines five principles the president believes should guide artificial intelligence development and policy and “protect the American public in the age of artificial intelligence.”

But little has materialized policy-wise, though there have been rumblings of late, in addition to Wednesday’s hearing. Axios reported that the Federal Trade Commission is considering new AI rules, while Democratic California Rep. Ted Lieu has called for a new government agency to oversee artificial intelligence.

States and cities have already begun to address AI policy on a piecemeal basis. In 2021, New York City passed a law forbidding employers from using AI to make hiring decisions without an independent bias audit. Virginia has banned the dissemination of deepfake pornography, while California has gone further and banned the dissemination and creation of such content. A 2022 Connecticut privacy law allows consumers to opt out of data profiling by automated systems.

States have set a good example for the federal government to follow in singling out individual issues, according to Mittelsteadt.

“The reality of it is that AI is not a monolithic thing, and that sort of approach, I think, is misguided,” said Mittelsteadt. “Thankfully at the state level, we’ve seen a much more application-specific view in the regulations that we have seen.”

To get Congress to pursue application-specific decisions, Mittelsteadt recommends a simple first step: Get off Twitter.

“I think most people tend to look at Twitter and look at the funny outputs that these systems are creating, the politically charged outputs,” said Mittelsteadt. “And I think in most cases, lawmakers actually haven’t touched the technology.”

Alec Dent is a former culture editor and staff writer for The Dispatch.
