Welcome back to Techne! In 1969, the BBC aired the documentary series “Civilisation: A Personal View by Kenneth Clark,” which outlined the history of Western art, architecture, and philosophy since the Dark Ages. Narrated by art historian Kenneth Clark, the series is a product of its time, and for that reason it is fascinating. The first five episodes can be viewed on YouTube.
Notes and Quotes
- Google has signed a deal with geothermal energy company Fervo Energy and Nevada utility NV Energy to run data centers on renewable energy generated from underground heat. This collaboration aligns with Google’s target of operating exclusively on carbon-free sources by 2030.
- Amid calls for giving kids screen-free school experiences, a straightforward reform could end government subsidies for internet access on school buses. In this article, Joel Thayer and Nathan Leamer make the case for the Eyes on the Board Act.
- Gregory Brew, an analyst at the Eurasia Group, has a great thread on X debunking internet talk of an old U.S.-Saudi deal ending and causing the dollar to lose its status as the world’s leading currency.
- The International Center for Law and Economics, where I used to work, recently published “Dynamic Competition in Broadband Markets: A 2024 Update.” The report finds that broadband has evolved rapidly in the U.S. since 2021, with more homes connected, faster internet speeds, lower prices, and more active competition among providers.
- Leopold Aschenbrenner’s “Situational Awareness: The Decade Ahead” has been making the rounds. Formerly of OpenAI, Aschenbrenner’s whitepaper looks at the big picture of artificial general intelligence: the fast progress in deep learning, the massive technological leaps happening, the international situation with different countries racing for AI superiority, and the efforts underway to actually achieve general human-level AI.
- In the latest update of the city-building game, Cities: Skylines 2, the developers addressed players’ concerns about high rent by eliminating landlords entirely. Rent calculations now follow a new formula based on land value, building level, lot size, and space multiplier, with upkeep costs shared among renters.
- In 2022, the European Union adopted the Digital Services Act that imposes due process, transparency, and due diligence requirements for the content on social media platforms. However, the end result hasn’t been the removal of “illegal” content, but rather the suppression of speech, according to a report authored by the Future of Free Speech, a think tank at Vanderbilt University: “Legal online speech made up most of the removed content from posts on Facebook and YouTube in France, Germany, and Sweden. Of the deleted comments examined across platforms and countries, between 87.5% and 99.7%, depending on the sample, were legally permissible.”
- Brian Potter, who writes the incomparable Construction Physics Substack, released an explainer detailing the complexities of building AI data centers in the U.S.
- Mining firm Rare Earths Norway claims to have discovered Europe’s largest deposit of rare earth elements, a major find. This will bolster Europe’s efforts to reduce reliance on China’s dominance in the rare earths market.
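The rent mechanic described in the Cities: Skylines 2 item above can be sketched roughly as follows. To be clear, the function, variable names, and arithmetic here are assumptions for illustration only, not the game’s actual code; the patch notes only list the factors involved.

```python
# Hypothetical sketch of the described rent formula: a base charge scaled by
# the four listed factors, plus an equal share of the building's upkeep.
def monthly_rent(land_value: float, building_level: int, lot_size: float,
                 space_multiplier: float, upkeep: float,
                 num_renters: int) -> float:
    """Rent per household under the patch's described inputs."""
    base = land_value * building_level * lot_size * space_multiplier
    return base + upkeep / num_renters

# Higher land value raises rent; more renters dilute the shared upkeep.
rent_a = monthly_rent(2.0, 3, 10.0, 0.5, upkeep=600.0, num_renters=6)
rent_b = monthly_rent(2.0, 3, 10.0, 0.5, upkeep=600.0, num_renters=12)
```

The notable design point is the last term: with landlords removed, upkeep is split evenly among renters, so denser buildings mechanically lower each household’s rent.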
The Lurking Dangers in State-Level AI Regulation
An ordinary bill signing became extraordinary last month when Colorado Gov. Jared Polis used the occasion to deliver a detailed critique of the bill he was signing, Senate Bill 24-205. The bill is one of the first pieces of legislation that regulates cutting-edge artificial intelligence (AI) systems. Still, Polis isn’t totally convinced that it’s ready for prime time: “Today, with reservations, I signed Senate Bill 24-205, ‘Concerning Consumer Protections in Interactions with Artificial Intelligence Systems.’ This is an important conversation to have, and in signing this bill I hope that it furthers the conversation, especially at the national level.”
It is a shame that Polis didn’t veto the bill, though, because I think he was ultimately right when he said:
I am concerned about the impact this law may have on an industry that is fueling critical technological advancements across our state for consumers and enterprises alike. Government regulation that is applied at the state level in a patchwork across the country can have the effect to tamper innovation and deter competition in an open market.
Polis has a long pedigree in tech. He started and then sold two different tech companies, including American Information Systems (AIS), which he founded in college. After selling an online florist business, he helped to found Techstars, the respected startup accelerator based in Colorado. In 2008, he was elected to Congress, representing Colorado’s 2nd District for five terms.
I first became aware of Polis in 2009 while interning in Washington, D.C., and I still rank him as one of the few leaders who truly understand technology regulation. I also respect him because he was the only Democratic member of the Liberty Caucus, made up of libertarians and conservatives. So it’s sad to see him not put up a fight against the bill, especially since the signing letter explains the problems that plague every AI bill:
- AI is regulated, regardless of intent. The bill deviates from traditional laws by regulating the outcomes of AI system use, regardless of the developers’ or deployers’ intent. This is a shift from focusing solely on intentional discriminatory conduct.
- The compliance regime is complex. The bill imposes a detailed and possibly burdensome compliance framework on all developers and deployers of AI in Colorado, which may be particularly challenging for small deployers.
- AI systems will now have reporting requirements. New reporting obligations require notifying consumers about how AI was used in significant decisions, the types of data processed, and the data sources. These requirements could be cumbersome and complex to implement.
- Consumer rights to appeal decisions mean costly human review. The bill requires deployers to allow consumers to correct data inputs and appeal decisions made by AI, necessitating potentially costly and resource-intensive human review processes.
- The bill will negatively impact innovation and competition. Polis expressed concern about the potential adverse effects on the AI industry, which is critical for technological advancements in Colorado. He noted that state-level regulation might stifle innovation and deter competition due to a patchwork of varying laws across the country.
Polis ended by encouraging the legislature “to significantly improve on this before it takes effect” in 2026. Rarely do you find a governor denouncing a bill as they sign it, but Polis did just that. To me, it is a deeply telling signifier of what’s coming: AI will probably be regulated at the state level.
One of the biggest problems plaguing AI policy today is that state policymakers are taking the wrong lessons from recent history. In pushing his bill that would regulate AI, California state Sen. Scott Wiener said that there is “an opportunity to apply hard lessons learned over the last decade, as we’ve seen the consequences of allowing the unchecked growth of new technology without evaluating, understanding, or mitigating the risks.”
But regulating AI isn’t like regulating privacy, which is where the lessons were supposedly learned. Privacy law establishes a bundle of rights over personal data. But AI will affect each industry differently, and problems will arise sector by sector. In a highly dynamic situation like this, it makes much more sense to rely upon the systems already in place to police bad behavior, like the consumer protection power of the Federal Trade Commission, the product recall authority of the National Highway Traffic Safety Administration (NHTSA), and the myriad other governance mechanisms that exist, which I laid out here.
Still, state legislatures have been incredibly active on AI this year. According to the state government affairs firm Multistate, there were at least 762 bills on AI filed in the states this past year. I was closely watching California’s Senate Bill 1047, which I have written about previously, the Colorado bill, and Connecticut’s SB 2.
Connecticut’s SB 2 passed the Senate but stalled after Gov. Ned Lamont threatened to veto it. In explaining why he was going to veto the bill, Lamont told CT Insider, “I’m just not convinced that you want 50 states each doing their own thing. I’m not convinced you want Connecticut to be the first one to do it,” adding that the bill is too much too soon.
What’s telling is how Connecticut state Sen. James Maroney, the bill’s author, explained the effects of the bill in comments made to the CT Mirror: “We know that nothing we’re doing here today is going to change the world. The world has always been changing and it always will be.” He added, “Today we’re doing our piece to make sure the world is changing for the good.”
Maroney expresses an increasingly common viewpoint when it comes to all kinds of data regulation, especially regarding AI: These bills aren’t all that costly and, even if they have problems, they are still a good thing to do.
I disagree on both counts.
Falling back on the precautionary principle
Instead of a wait-and-see approach, Wiener and others are pushing for a precautionary approach. This notion was formalized in economics by Kenneth Arrow and Anthony C. Fisher, who showed that the irreversibility of some future events means that risk-neutral societies should favor precaution, since it preserves more flexibility in the future decision space. Their 1974 paper kicked off the economic discussion of the precautionary principle.
But a significant caveat to this line of logic was added by Avinash K. Dixit and Robert S. Pindyck in 1994: preserving flexibility can come at the expense of potential returns, since not making a decision has a cost of its own. The same logic applies to innovation. There is a clear time value to innovation that treatments of the precautionary principle often fail to account for. The precautionary principle carries an embedded opportunity cost, and that cost falls on innovation. And to be sure, there are costs involved with AI legislation.
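The Arrow-Fisher and Dixit-Pindyck tradeoff can be made concrete with a stylized two-period example. All of the numbers below are illustrative assumptions, not figures from either paper: an innovation pays `r` per period, turns out harmful (payoff `-h`) with probability `p`, and deployment is irreversible.

```python
def payoff(act_now: bool, p: float, r: float, h: float, d: float) -> float:
    """Expected payoff of deploying now vs. waiting one period to learn
    whether the innovation is safe (d is the per-period discount factor)."""
    if act_now:
        # Irreversible: collect r in both periods if safe, lose h if not.
        return (1 - p) * (r + d * r) + p * (-h)
    # Waiting forgoes the first-period payoff but avoids the bad state.
    return d * (1 - p) * r

p, h, d = 0.3, 40.0, 0.95
small_payoff = (payoff(True, p, 10.0, h, d), payoff(False, p, 10.0, h, d))
big_payoff = (payoff(True, p, 30.0, h, d), payoff(False, p, 30.0, h, d))
```

With a small per-period return, precaution (waiting) wins, which is Arrow and Fisher’s point; with a large one, the forgone first-period return dominates, which is the Dixit-Pindyck caveat: the opportunity cost of precaution grows with the time value of the innovation.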
Hawaii’s Senate Bill 2572 is the purest expression of the precautionary approach, referencing the idea in the bill’s text:
… it is crucial that the State adhere to the precautionary principle, which requires the government to take preventive action in the face of uncertainty; shifts the burden of proof to those who want to undertake an innovation to show that it does not cause harm; and holds that regulation is required whenever an activity creates a substantial possible risk to health, safety, or the environment, even if the supporting evidence is speculative. In the context of artificial intelligence and products, it is essential to strike a balance between fostering innovation and safeguarding the well-being of the State’s residents by adopting and enforcing proactive and precautionary regulation to prevent potentially severe societal-scale risks and harms, require affirmative proof of safety by artificial intelligence developers, and prioritize public welfare over private gain.
It is an impossible standard to meet, as regulatory analyst and noted author Adam Thierer explained, because no technology can ever be proven perfectly safe before deployment. But more to the point, I think it shows how fast AI policy is moving and in what direction. It is not a positive future.
States are considering big and aggressive bills that will be costly for a technology that is only now being developed. There is real danger if each state crafts its own rules. The result will be a chaotic landscape where compliance becomes a nightmare for developers.
There are policymakers who think, “Nothing we’re doing here today is going to change the world,” but that to me is exactly the wrong attitude. Stopping AI innovation now is very much a possibility with bad state bills.
Until next week,
🚀 Will
AI Roundup
- AI research company Anthropic released a guide to red teaming that covers which methods work best in particular situations and the pros and cons of different approaches. It is meant to serve as a guide for other companies and to help policymakers understand how AI testing works.
- A paper by the University of Glasgow’s Michael Townsen Hicks, James Humphries, and Joe Slater titled “ChatGPT is Bullshit” argues that large language models often produce outputs that are false or inaccurate, not because they are intentionally deceptive but because they are fundamentally indifferent to the truth.
- House Majority Leader Steve Scalise said he doesn’t think any new AI legislation should be passed: “We want to make sure we don’t have government getting in the way of the innovation that’s happening that’s allowed America to be dominant in the technology industry, and we want to continue to be able to hold that advantage going forward.”
- Despite no official comments from Apple or OpenAI, sources suggest their partnership involves no direct financial compensation. Instead, it is viewed as a mutually beneficial arrangement: Apple gains access to OpenAI’s advanced chatbot technology and OpenAI benefits from the exposure of having its brand and AI capabilities promoted to hundreds of millions of Apple device users.
Research and Reports
- A report by Arizona State University researcher Charles Perreault and doctoral graduate Jonathan Paige finds that humans began to solve problems via technical tools around 600,000 years ago thanks to social learning.