Welcome back to Techne! Today is August 1, so Happy Rabbit’s Day! For some reason, my family still follows the old tradition of saying “rabbit” on the first day of the month for good luck. So I hope you have a productive month!
The Four Fault Lines in AI Policy
For a while in my life, I thought I wanted to be a communications professor. I even pursued a master’s in new media and communication studies for two years. That time resulted in some bittersweet memories—I never finished this graduate degree, opting instead to study economics later—but one of the best things it gave me was a proper liberal arts education.
I read Foucault, Habermas, Bauman, and countless other scholars (and charlatans). I studied the history of media and how democracy interacts with technology—the classics for tech policy. And especially relevant to today’s fights over artificial intelligence, I was also exposed to what were then the latest technical methods: semantic analysis, early natural language processing, word count methods, and latent analysis, among others.
So for the last decade, I have been following developments in AI from afar, reading a technical paper here and there, and writing about the issue when it intersects with public policy. Back in 2019, as the AI conversation centered on predictions of massive job losses, I observed the following:
The conflict over the competing methodologies points to a much deeper problem that policymakers should understand. Not only is there a lack of consensus on the best way to model AI-based labor changes, but more important, there is no consensus as to the best policy path to help us prepare for these changes.
That world no longer exists. What’s changed since 2019 is that a new interest group that focuses on AI has cropped up. In a series of posts, Politico’s Brendan Bordelon has reported on how “a small army of adherents to ‘effective altruism’ has descended on the nation’s capital and is dominating how the White House, Congress and think tanks approach the technology.” Andrew Marantz called them “AI doomsayers” in The New Yorker. They are connected to tech, often have ties to the effective altruism (EA) movement or the rationalist movement, are singularly focused on AI, and are relatively new to policy in general.
In the past year, I’ve had probably a dozen meetings with people loosely affiliated with this group. Today’s edition of Techne is a high-level report of sorts from these conversations. Maybe it’s because I just picked up his new book, but I’m reminded of a phrase economist Glenn Loury sometimes repeats: “The sky isn’t falling, the tectonic plates are shifting.”
Here are four fault lines in AI policy.
- The two cultures of D.C. and San Francisco.
AI policy often echoes the misunderstood Kipling line: “Oh, East is East, and West is West, and never the twain shall meet.” In the East—in Washington, D.C., statehouses, and other centers of political power—the AI conversation is driven by questions of regulatory scope, legislative action, law, and litigation. In the West—in Silicon Valley, Palo Alto, and other tech hubs—it is driven by questions of safety, risk, and alignment. D.C. and San Francisco inhabit two different AI cultures.
There is a common trope that policymakers don’t understand tech. But the reverse is even more true: Those in tech often aren’t legally conversant. Only once in those dozen or so conversations did the other person know about, for example, the First Amendment problems with all AI regulation, and that’s because he had read my work on the topic. As I said in my piece, “Would It Even Be Constitutional to Pause AI?”
Discussions surrounding the AI pause idea have similarly neglected the essential legal foundations. In September, the Effective Altruism Forum held a symposium on the AI pause. While there were many insightful arguments, underscoring the ethical, societal, and safety considerations inherent in the continued advancement of AI, there was no discussion on the legal underpinnings that would implement a ban. The Forum has been one of the primary outlets for the AI safety community, along with Less Wrong, and yet, when searching both sites for the key legal cases that might interact with an AI pause, nothing comes up.
The problem is quite serious. California’s proposed SB 1047, which would regulate the most advanced AI models, likely violates, at a minimum, the First Amendment, the Stored Communications Act, and the dormant Commerce Clause. (The May 9 edition of Techne was all about SB 1047, by the way!) And yet, few seem to care that the bill will probably not survive the courts. A lack of legal understanding is a very odd blind spot to have when trying to enact federal and state policy.
- AI timelines and probabilities.
What’s been most surprising about these conversations is that existential risk, or x-risk, is the prime motivator for nearly everyone.
If you’re in the know, you know x-risk is the worry that an AI agent might go rogue and cause astronomically large negative consequences for humanity, such as human extinction or permanent global totalitarianism. Sometimes this is expressed as p(doom), the probability of doom.
And the p(doom) origin stories are typically similar: I worked on or close to AI, saw what it was capable of, watched its capabilities grow, learned about x-risk, and now I want guardrails.
Most people I encountered had very concrete dates for when they thought artificial general intelligence (AGI) would be achieved. In practice, however, we tended to discuss whether forecasters are correctly estimating this event. Metaculus, a popular forecasting platform, currently predicts AGI will be achieved on May 24, 2033.
Beyond AGI is the notion of an artificial superintelligence (ASI). Philosopher Nick Bostrom popularized the term, defining it in a 1997 paper as intelligence “much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” But writer Eliezer Yudkowsky (Yud for short) took this idea and ran with it. In what he dubbed the “hard takeoff” scenario, Yud explained that AI might reach a point where “recursive self-improvement” is possible with “an exactly right law of diminishing returns that lets the system fly through” progress. In this scenario, when “AI go FOOM,” there is a discontinuity, in the way that “the advent of human intelligence was a discontinuity with the past.” However, progress between AGI and ASI might not occur via a hard takeoff (or “FOOM”). ASI might take longer, from perhaps 2029 to 2045 in what is known as a “soft takeoff.” Or, it might not be possible to achieve at all.
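To see what hangs on the takeoff question, here is a toy model of my own (not Yudkowsky’s): capability compounds through self-improvement, and everything depends on an assumed returns-to-improvement exponent that nobody actually knows. All numbers are illustrative.

```python
# Toy model (my own, not Yudkowsky's) of recursive self-improvement.
# The single assumed parameter, returns_exponent, decides whether growth
# looks like a "hard takeoff" or flattens out. All numbers are illustrative.

def capability_trajectory(returns_exponent, steps=30, start=1.0, rate=0.1):
    """Each step, the system improves itself in proportion to its current
    capability raised to returns_exponent. Above 1.0, gains compound on
    gains; below 1.0, returns diminish and growth levels off."""
    capability = start
    path = [capability]
    for _ in range(steps):
        capability += rate * (capability ** returns_exponent)
        path.append(capability)
    return path

if __name__ == "__main__":
    soft = capability_trajectory(returns_exponent=0.5)   # diminishing returns
    hard = capability_trajectory(returns_exponent=1.5)   # increasing returns
    print(f"After 30 steps with diminishing returns: {soft[-1]:.1f}")
    print(f"After 30 steps with increasing returns:  {hard[-1]:,.0f}")
```

The point is not the specific numbers but that the “discontinuity” in the hard takeoff story rests entirely on a parameter we cannot currently measure.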
- The construction of x-risk.
Broadly speaking, all of the conversations tended to follow a common line of questioning:
- When will AGI occur? Is your prediction faster or slower than the markets and everyone else?
- Do you think ASI is possible? If so, when will that occur? Will we experience FOOM or a soft takeoff?
- What is the relationship between FOOM and x-risk? Does FOOM mean higher x-risk?
Most everyone I talked with thought that these forecasts were underestimating how long it would take to get to AGI. But more importantly, they strongly disagreed about the relationship between all of these timelines and x-risk. There seems to be a common assumption that a shorter gap between AGI and ASI necessarily means a higher risk of doom.
Color me skeptical. I tend to agree with economist Tyler Cowen, who wrote,
When people predict a high degree of existential risk from AGI, I don’t actually think “arguing back” on their chosen terms is the correct response. Radical agnosticism is the correct response, where all specific scenarios are pretty unlikely.
He continued,
Existential risk from AI is indeed a distant possibility, just like every other future you might be trying to imagine. All the possibilities are distant, I cannot stress that enough. The mere fact that AGI risk can be put on a par with those other also distant possibilities simply should not impress you very much.
And there are a lot of possibilities. So I tend to take the advice of math professor Noah Giansiracusa, who warned that “so many people rush to state their p(AI doom) without defining what the heck this is. A probability estimate is meaningless if the event is not well defined.” Defining x-risk is critically important.
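Giansiracusa’s point is easy to make concrete. Here is a trivial sketch, with scenario labels and probabilities I invented purely for illustration, showing how the same underlying beliefs produce very different p(doom) numbers depending on what counts as “doom.”

```python
# Hypothetical scenario weights, invented for illustration only.
scenarios = {
    "human extinction": 0.01,
    "permanent global totalitarianism": 0.02,
    "severe but recoverable disruption": 0.10,
    "broadly good outcomes": 0.87,
}

# Two different definitions of "doom" over the same beliefs.
narrow_doom = {"human extinction"}
broad_doom = {"human extinction", "permanent global totalitarianism",
              "severe but recoverable disruption"}

p_narrow = sum(p for outcome, p in scenarios.items() if outcome in narrow_doom)
p_broad = sum(p for outcome, p in scenarios.items() if outcome in broad_doom)

print(f"p(doom), narrow definition: {p_narrow:.2f}")  # 0.01
print(f"p(doom), broad definition:  {p_broad:.2f}")   # 0.13
```

That is an order-of-magnitude spread that comes purely from the definition, before anyone has argued about the underlying probabilities.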
- “We’ve got to get ahead of it.”
I was recently at an event at the Bipartisan Policy Center, a Washington think tank, listening to a talk from Sen. Amy Klobuchar about AI deep fakes. She offered a call for action that I have heard over and over in my discussions: “We’ve got to get ahead of it.”
Nathan Calvin, senior policy counsel at the Center for AI Safety Action Fund and a supporter of California’s SB 1047, which I discussed previously in Techne, framed the issue in a similar way, saying,
AI is poised to fuel profound advancements that will improve our quality of life but the industry’s potential is hamstrung by a lack of public trust. The common sense safety standards for AI developers in this legislation will help ensure society gets the best AI has to offer while reducing risks that it will cause catastrophic harm.
Generally, this notion is known as the “precautionary principle.” Economists Kenneth Arrow and Anthony Fisher formalized the idea in a 1974 paper showing that risk-neutral societies should favor precaution, since waiting preserves flexibility in the decision space. But Avinash Dixit and Robert Pindyck added a significant caveat in 1994: Preserving that flexibility can come at the expense of potential returns, since not making a decision has a cost of its own. The same logic applies to innovation. There is a clear time value to innovation that often isn’t properly accounted for in treatments of the precautionary principle. There is an opportunity cost embedded in the precautionary principle.
Arrow and Fisher’s research, the precautionary principle more generally, and most everything that follows are all built on an assumption of risk neutrality.
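Here is a minimal sketch of that trade-off, with payoffs and probabilities I made up. Waiting to learn whether a technology is harmful preserves flexibility, but the forgone benefits while waiting are the opportunity cost the precautionary principle tends to hide.

```python
# Minimal sketch of the option value of waiting vs. the cost of delay.
# All payoffs and probabilities are hypothetical, not from Arrow-Fisher
# or Dixit-Pindyck; both strategies are evaluated risk-neutrally.

def deploy_now(benefit, p_harm, harm_cost, periods):
    """Deploy immediately: collect the benefit every period, but bear the
    expected cost of harm the whole time."""
    return periods * (benefit - p_harm * harm_cost)

def wait_and_learn(benefit, p_harm, periods):
    """Wait one period to learn whether the technology is harmful, then
    deploy only if it is safe. The forgone first-period benefit is the
    opportunity cost of precaution."""
    return (1 - p_harm) * (periods - 1) * benefit

if __name__ == "__main__":
    for p in (0.01, 0.10, 0.30):
        now = deploy_now(benefit=10, p_harm=p, harm_cost=50, periods=10)
        wait = wait_and_learn(benefit=10, p_harm=p, periods=10)
        print(f"p(harm)={p:.2f}  deploy now: {now:6.1f}  wait and learn: {wait:6.1f}")
```

With these invented numbers, precaution pays only when harm is reasonably likely; when it is unlikely, the forgone benefit dominates. That is the time value of innovation at work.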
I tend to think that we should be more culturally tolerant of risk. If, for example, advanced AI reduces mortality, then we should be willing to bear even large existential risks. I also tend to care a lot more about growth. By hesitating to adopt new technologies or approaches because of uncertainty about their long-term consequences, societies may forgo potential gains in efficiency, productivity, and quality of life. Apple recently rolled out its latest update to the iPhone operating system but didn’t include its AI features in the European Union because of the bloc’s strict regulations. Bad laws are a real threat.
The challenge becomes striking a balance between prudence and progress. But for what it’s worth, we should be pressing the pedal on progress.
And then there is everything else.
Of course, there’s a lot more than just these four fault lines.
For one, I tend to find that most people overestimate just how easy it will be to implement an AI system. Again, I’m skeptical because it’s not easy to transition to new production methods, as I have explained in Techne before. One new report from Upwork found that “Nearly half (47%) of workers using AI say they have no idea how to achieve the productivity gains their employers expect. Over three in four (77%) say AI tools have decreased their productivity and added to their workload in at least one way.” When I pressed this point in conversations, about half of the people I talked to said that AI would be frictionlessly adopted. That seems wrong.
People also seem to be split on open source. Some thought it just exacerbated x-risk, while others thought it could be a useful corrective. For my own part, I’m fairly pro-open source because I think it is part of the project of searching for safe AI systems. And in an odd alignment of interests, Sen. J.D. Vance, Federal Trade Commission Chair Lina Khan, and the National Telecommunications and Information Administration (NTIA) at the Department of Commerce have all been supportive of open source on competition grounds. For a fuller treatment of this idea, check out analyst Adam Thierer’s article explaining why regulators “are misguided in efforts to restrict open-source AI.”
In an upcoming edition of Techne, I’ll cover some of the other issues in AI that seemingly aren’t being talked about, like efforts to automate property taxes, how deep fakes might change evidence in court cases, and the cost of AI regulation.
Until next week,
🚀 Will
Notes and Quotes
- In some good news for a change, no guinea worm cases were detected in the first three months of 2024. If this continues, guinea worm could soon become the third disease in history to be eradicated globally, after smallpox and rinderpest.
- The 5th Circuit last week ruled against the Federal Communications Commission’s Universal Service Fund, deeming the $8 billion subsidy program unconstitutional. The USF supports rural broadband infrastructure and provides internet discounts for schools, libraries, health care centers, and low-income households. Broadband groups denounced the ruling.
- The federal government has filed its response in TikTok v. Garland, the case TikTok brought to block the law that would ban the platform. There are a lot of redactions in the government’s filing. Oral arguments before the D.C. Circuit are scheduled for September 16. To recap some of my thoughts on the case, check out this Techne edition from earlier this year.
- I just learned about dark oxygen: “Scientists have discovered that metallic nodules on the seafloor produce their own oxygen in the dark depths of the Pacific Ocean. These polymetallic nodules, generating electricity like AA batteries, challenge the belief that only photosynthetic organisms create oxygen, potentially altering our understanding of how life began on Earth.”
- The Federal Trade Commission is serving eight companies—including Mastercard, JPMorgan Chase, and Accenture—with requests for information as part of an investigation into so-called “surveillance pricing.” The agency is exploring how artificial intelligence is used to change pricing rapidly based on data about customer behavior and characteristics.
- Nike’s market capitalization fell $24 billion when it reported earnings last month. CEO John Donahoe, appointed in 2020, made three key decisions that have likely contributed to the company’s decline: eliminating product categories, ending Nike’s long-standing wholesale relationships, and transitioning to a predominantly online marketing model.
- The list of diseases Ozempic treats continues to grow. The drug appears effective for Type 2 diabetes, obesity, cardiovascular disease, metabolic liver disease, kidney disease, inflammation-related conditions, neurodegenerative diseases (Alzheimer’s and Parkinson’s), addictions, and psychiatric disorders, and it could even treat infertility.
- Low-income households are dropping their internet service just two months after the Affordable Connectivity Program (ACP) ended. The program, which provided $30 monthly broadband discounts to qualifying households, lapsed in May after Congress failed to allocate additional funding. Broadband provider Charter Communications recently reported losing 154,000 subscribers, 100,000 of whom were receiving the ACP benefit.
- Utility Constellation Energy is in talks with Pennsylvania lawmakers to help fund a partial restart of the Three Mile Island power facility. In other energy news, The Nuclear Company is “a new startup with ambitious aims of spurring the construction of fleets of new nuclear power plants in the U.S.”
- Florida’s high-speed rail line reported a financial loss of $116 million for the first quarter of 2024.
- A federal court ruled that border officials must obtain a warrant based on probable cause before searching travelers’ electronic devices.
- Mariam Kvaratskhelia advocates for the U.S. to work toward surpassing China in space policy rather than pursuing cooperation.
- This comment on the Marginal Revolution blog got me thinking: “I don’t think you all are considering how much an ice free Arctic Ocean will change international trade routes.” The commenter seems to think we should be more bullish on Anchorage.
AI Roundup
- Nick Whitaker outlined four potential priorities for the GOP’s AI policy: 1) retaining and investing in the U.S.’s strategic lead, 2) protecting against AI threats from external actors, 3) building state capacity for AI, and 4) protecting human integrity and dignity.
- Terence Parr and Jeremy Howard explain the math behind deep learning in this paper.
- Sen. John Hickenlooper introduced the “Validation and Evaluation for Trustworthy Artificial Intelligence (VET AI) Act,” which calls on the National Institute of Standards and Technology to develop AI guidelines.
- The AI arms race narrative is gaining traction among tech moguls, with Sundar Pichai and Mark Zuckerberg emphasizing the dangers of underinvesting in AI.
- Computer scientists are working on a large language model (LLM) approach that can offer causal estimates. It uses a “novel family of causal effect estimators built with LLMs that operate over datasets of unstructured text.”
- Here’s a fascinating thread by Neil Chilson, walking through U.S. and EU approaches to the AI ecosystem.
Research and Reports
- A new working paper finds: “The empirical evidence relating to concentration trends, markup trends, and the effects of mergers does not actually show a widespread decline in competition. Nor does it provide a basis for dramatic changes in antitrust policy. To the contrary, in many respects the evidence indicates that the observed changes in many industries are likely to reflect competition in action.”
- A study shows that the decline in nuclear power plant growth after Chernobyl led to increased fossil fuel lobbying in the U.S. and U.K., exploiting nuclear fears. This shift resulted in greater air pollution because of reduced nuclear investment. The study estimates this caused a loss of 141 million expected life years in the U.S., 33 million in the U.K., and 318 million globally.