Quick Take: Proactive vs. Reactive AI Legislation
When should lawmakers act quickly, and when should they wait?
“The optimal amount of crime is not zero.”
There’s a popular saying among economists: “The optimal amount of crime is not zero.” The idea is that, while crime has obvious costs to society, law enforcement also has costs to society. These include both direct costs (it’s expensive to hire police officers and detectives) and indirect, intangible costs (like the cost of losing your privacy in a surveillance state). And, as with many things, there are diminishing marginal returns to law enforcement. Catching the 10th-percentile-competent criminals is fairly easy; catching the 90th-percentile-competent criminals takes a lot more effort. Stopping all crime in a society might be literally impossible, and even if it’s technically possible, the cost of catching the last few criminals will likely exceed the cost of the crimes themselves. Therefore, the optimal amount of any given crime is the point at which the marginal cost of extra law enforcement exactly matches the marginal cost of the crime it would prevent. And for almost any crime, that optimal amount is somewhere above zero.
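To make that marginal logic concrete (using my own illustrative notation, not anything from the article quoted below): write C_e(E) for the cost of enforcement at effort level E, and C_c(E) for the cost of the crime that still occurs at that effort level. Society wants to minimize the total, C_e(E) + C_c(E), and the optimum E* sits where

C_e′(E*) = −C_c′(E*)

that is, keep adding enforcement until one more dollar of policing prevents exactly one more dollar of crime. For ordinary crimes, that crossover arrives well before crime reaches zero. But if the harm on the other side of the ledger is effectively unbounded, the balance never tips back toward leniency, which is where the exceptions below come in.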
But note that I said almost. There are a few extreme kinds of crime that are so destructive that it’s worth spending resources to make sure they never happen, not even once. Take, for instance, the crime of unauthorized detonation of a nuclear bomb. If even a single bad actor gains access to a nuke and detonates it where they’re not supposed to, the consequences would be devastating. It could kill millions of people and cause permanent environmental damage. It’s therefore justified for the government to proactively police the supply of plutonium and other fissile materials to ensure that this never happens.
I would also add “building a misaligned artificial superintelligence” to this list. If even a single misaligned superintelligence is created, it could literally drive humanity extinct. It’s worth spending lots of resources to mitigate this risk, because the downsides of us failing — complete death for ourselves, and the loss of all future generations of humanity — outweigh nearly anything that we could possibly spend on risk mitigation. The optimal number of evil robots trying to take over the world really is zero.
Proactive and Reactive Legislation
That’s what I was thinking when I read Dean’s recent article about AI law. Dean writes:

Where, in the dozens and dozens of pages of the European Union’s AI Act, might one find any reference to model sycophancy as a risk? Where might one find it in Colorado’s sweeping SB 205 statute, or in California’s now-vetoed SB 1047 (or the currently pending SB 53)? Nowhere. Despite unfathomable man hours having been poured into each of those laws, not once in any of them is the issue of sycophancy even mentioned. Meanwhile, for all the ink spilled (and laws passed) about AI being used to manipulate elections, there are precious few examples of such a thing ever happening.
And for all the millions of dollars spent on AI safety non-profits, advocacy organizations, and other research efforts, almost none of it has been devoted specifically to the issues implicated in Raine. Indeed, we have more robust methods for measuring and mitigating LLM-enabled bioweapon development risk than we do for the rather more mundane, but far more visceral, issue of how a chatbot should deal with a mentally troubled teenager. This critique applies to my own writing as well.
Regulators and policy researchers, eyeing the world from the top down, consistently fail to differentiate between actual risks and their own neuroses. The think tank conference table is where hypothetical risks are speculated about; it is on the judges’ bench that actual harms are contemplated and adjudicated.
This is all true. Our initial reaction to news of lawmakers trying to proactively regulate emerging technologies should be skepticism. Legislators are prone to over-emphasizing imagined harms while missing real harms, and inadvertently smothering the real benefits of technological innovation. Regulation should almost never be proactive. If a technology must be regulated, that regulation should happen after the technology is deployed, and it should be narrowly tailored to real harms that the technology has caused rather than imagined harms cooked up by neurotic politicians.
I agree with Dean that the EU’s AI Act and Colorado’s SB 205 were both badly misguided pieces of legislation. When it comes to the “mundane” risks of artificial intelligence (things like privacy violations, copyright violations, algorithmic discrimination, and model sycophancy), there’s really no need for this kind of heavy-handed, pre-deployment regulation. Instead, we can use the tools that our society already has for dealing with corporate harms: public pressure, tort liability, and (if necessary) post-deployment regulation. We don’t need to figure everything out ahead of time; we can solve problems once they emerge, just as we’ve done with previous waves of emerging technologies.
With that said, however, existential risks are different. By the time we realize something is wrong, it may already be too late. You can’t litigate your way out of a robot killing you after the robot has already killed you. If the optimal level of a crime is zero, then that level can only be achieved through proactive efforts. When it comes to existential risks, reactive legislation simply isn’t enough.
SB 1047 didn’t cover model sycophancy, because it wasn’t trying to cover model sycophancy. Neither does SB 53, nor does New York’s pending RAISE Act. What these laws are trying to do is mitigate existential risks, and they apply only to frontier model development. I think this is exactly the right legislative approach, and I commend leaders like California’s Scott Wiener and New York’s Alex Bores for adopting it. Lawmakers should proactively deal with AI existential risks, while only reactively dealing with other kinds of AI risk.