The States’ Approach to Artificial Intelligence, Explained

Privacy concerns take precedence in the rush to regulate AI.

(Photo Illustration by Jonathan Raa/NurPhoto via Getty Images)

Public interest in and concern about artificial intelligence continue to grow, and not just at the federal level. In fact, some state and local governments are moving faster than policymakers in Washington, D.C.

Will this increasingly complex patchwork of regulations protect consumers, slow down AI innovation—or both?

What are the states doing now?

Generative AI—the kind that generates responses to prompts in the form of text, images, and audio—has dominated news coverage since the advent of OpenAI’s ChatGPT last year. Although ChatGPT may have played a role in spurring lawmakers to action, the recent spate of legislation in the states has more to do with older forms of AI and broader concerns about data privacy on the internet. (Some 25 percent of American businesses have already deployed AI in some form, according to a 2022 report by IBM.)

“Now we’re just seeing publicly accessible and commonly used forms of artificial intelligence generate more steam around these issues,” said Tom Romanoff, director of the Technology Project at the Bipartisan Policy Center. “A lot of what you see at the state level is actually what people have been talking about for close to 10 years now.”

AI’s effects on employment in particular have drawn states’ attention. The algorithms that drive many online advertisements and job boards used by recruiters and applicants alike can lead to discriminatory outcomes. For example, “personalized job boards like ZipRecruiter aim to automatically learn recruiters’ preferences and use those predictions to solicit similar applicants,” Harvard Business Review reported in 2019. “If the system notices that recruiters happen to interact more frequently with white men, it may well find proxies for those characteristics (like being named Jared or playing high school lacrosse) and replicate that pattern.”
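
To make that mechanism concrete, here is a minimal sketch in Python of how a ranker trained on recruiter clicks can end up favoring proxy features. The data, feature names, and scoring rule are all invented for illustration; no real job board works exactly this way.

    from collections import defaultdict

    # Hypothetical click history: (candidate features, recruiter clicked?).
    # "lacrosse" and "named_jared" stand in for proxy features that can
    # correlate with protected characteristics without naming them.
    history = [
        ({"lacrosse": 1, "named_jared": 1}, True),
        ({"lacrosse": 1, "named_jared": 0}, True),
        ({"lacrosse": 0, "named_jared": 1}, True),
        ({"lacrosse": 0, "named_jared": 0}, False),
        ({"lacrosse": 0, "named_jared": 0}, False),
    ]

    # "Training": weight each feature by how often it co-occurs with a click.
    weights = defaultdict(float)
    for features, clicked in history:
        if clicked:
            for name, value in features.items():
                weights[name] += value

    def score(candidate):
        # The model never sees race or sex, yet the learned weights on
        # proxy features reproduce whatever skew was in the clicks.
        return sum(weights[name] * value for name, value in candidate.items())

    pool = [
        {"lacrosse": 1, "named_jared": 0},
        {"lacrosse": 0, "named_jared": 0},
    ]
    # The candidate with the proxy feature is ranked first.
    print(sorted(pool, key=score, reverse=True))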

As is true at the federal level, all 50 states already have employment discrimination laws that could be brought to bear on algorithmic discrimination, but local and state governments have also pursued further regulation. New York City, for instance, will begin enforcing new rules requiring companies to inform job candidates when an “automated employment decision tool” is being used and to subject those tools to annual bias audits. An AI bill that recently died in committee in California—AB 331—would have required transparency and impact assessments and created a private right of action so people could sue over AI-driven discrimination.
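
What would a bias audit like New York’s actually measure? A common disparity metric is the impact ratio: each group’s selection rate divided by the highest group’s rate. The Python sketch below is a rough illustration only; the group names, counts, and the four-fifths (0.8) reference threshold are hypothetical, and New York’s rules spell out the required methodology in detail.

    def impact_ratios(outcomes):
        """outcomes maps group name -> (selected_count, total_count)."""
        rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
        top = max(rates.values())
        return {g: rate / top for g, rate in rates.items()}

    # Hypothetical audit data for a screening tool.
    audit = impact_ratios({
        "group_a": (48, 100),  # 48 percent selected
        "group_b": (30, 100),  # 30 percent selected
    })

    for group, ratio in audit.items():
        # The four-fifths rule of thumb from U.S. employment law is a
        # common reference point here, not a hard statutory line.
        flag = "review" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")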

Other state-level activity relevant to AI has been rolled into broader, “omnibus” consumer privacy laws, said Goli Mahdavi, counsel at Bryan Cave Leighton Paisner in San Francisco. Such laws aim to give consumers more knowledge and control over how tech companies collect and use their personal data. California was an early adopter, but eight other states have followed its lead in passing data privacy laws, including four so far this year. 

What is the patchwork problem?

The most recent state data privacy bill, in Montana, was signed earlier this month and allows people to opt out of “profiling in furtherance of solely automated decisions”—a precedent that could serve as a reference point for future laws governing AI.
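
As a rough sketch of what honoring that kind of opt-out could look like in practice, the hypothetical Python below routes opted-out users to human review rather than a solely automated decision. The names and threshold are invented; the Montana statute prescribes no particular implementation.

    from dataclasses import dataclass

    @dataclass
    class User:
        user_id: str
        opted_out_of_profiling: bool  # hypothetical consent flag

    def decide(user: User, model_score: float) -> str:
        if user.opted_out_of_profiling:
            # No solely automated decision for this user: a person
            # must be in the loop.
            return "queue_for_human_review"
        return "approve" if model_score >= 0.5 else "deny"

    print(decide(User("u1", True), 0.9))   # queue_for_human_review
    print(decide(User("u2", False), 0.9))  # approve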

“The silver lining is that there are a lot of parallels with data privacy compliance work and AI compliance work,” Mahdavi said. “I think businesses can take some comfort in the fact that their AI compliance can be layered on to their existing data privacy compliance frameworks.”

Broadly written laws can be burdensome, even for small- and medium-sized businesses outside of the tech sector. The proposed Stop Discrimination by Algorithms Act in Washington, D.C., would apply to any organization that “possesses or controls personal information on more than 25,000 District residents,” requiring annual audits and raising the possibility of civil lawsuits.

“Even some of the dry cleaners in the area of D.C., they’ve got way more customers than 25,000,” said Jordan Crenshaw, senior vice president of the U.S. Chamber of Commerce’s Technology Engagement Center. He added that if the bill becomes law, the requirements would be “incredibly onerous on small businesses.”

The inconsistency of the rules among the states could become a concern—especially given that “data does not adhere to spatial boundaries,” Romanoff said. One risk is that the strictest state’s rules become a de facto national standard, one the biggest national tech companies can absorb while smaller businesses struggle to keep up with compliance costs. Another is that different states’ requirements end up in direct conflict.

For example, if one state were to determine that companies cannot collect certain kinds of user data while a neighboring state required the collection of that data for impact assessments, a company that operated in both states would be forced to pick sides—or at least to spend more money on compliance.

That in turn could hurt American competitiveness abroad. “Having a patchwork of regulations, especially with as complex a technology as artificial intelligence, that would make it difficult to comply across state lines would put us at a unique disadvantage to both China and to Europe,” Crenshaw said.

What comes next?

Federal legislation on consumer privacy and other AI-adjacent issues would protect against the patchwork problem, business advocates say. The basic framework for such legislation already exists: The American Data Privacy and Protection Act (ADPPA), which would limit tech companies’ ability to collect and sell their users’ data, has bipartisan support. But it was cut from last year’s omnibus spending bill, in large part because of opposition from California lawmakers who wanted their state’s consumer privacy law to be exempt from federal preemption.

With that history in mind, the death in committee of California’s broad AI bill, AB 331, could signal to federal lawmakers that their next effort may not meet the same pushback as the ADPPA.

“California’s failure to advance a broad AI bill this year could potentially clear the path for Congress to act to regulate AI without fear of obstruction from the California congressional delegation,” Mahdavi said. And it could provide state legislators, in California and elsewhere, an opportunity to step back and deliberate: “We want to shape up a law that’s going to be future-proofed to a certain extent.”

While a national privacy law could lay the foundation for future technology policy, including regulation of generative AI as it evolves, state legislatures “should be considering the impact of their regulations from the perspective of the unknown unknowns,” Romanoff said. “I don’t think it’s an option for them to just wait for the federal government to do something.”

Price St. Clair is a former reporter for The Dispatch.
