Quick Take: Google's AI Security Is Shockingly Lax
I’ve recently been reading Situational Awareness by Leopold Aschenbrenner, which I highly recommend. I’ll probably write a full book review at some point, but for today, I’d like to highlight one quote in particular:
We’re miles away from sufficient security to protect [algorithmic] weights today. Google DeepMind (perhaps the AI lab that has the best security of any of them, given Google infrastructure) at least straight-up admits this. Their Frontier Safety Framework outlines security levels 0, 1, 2, 3, and 4 (~1.5 being what you’d need to defend against well-resourced terrorist groups or cybercriminals, 3 being what you’d need to defend against the North Koreas of the world, and 4 being what you’d need to have even a shot of defending against priority efforts by the most capable state actors). They admit to being at level 0 (only the most banal and basic measures). If we got AGI and superintelligence soon, we’d literally deliver it to terrorist groups and every crazy dictator out there!
I decided to fact-check this claim, and it holds up. Here is the relevant portion of Google’s Frontier Safety Framework (FSF):
Now, to translate this into normal-person-speak: Google DeepMind, one of America’s leading artificial intelligence developers, has security so lax that your average high school robotics team could hack into it. And Google even admits this! If a better-resourced group, like the Chinese government, wanted to steal Google’s secrets, it would be about as easy as taking candy from a baby.
This is truly insane, and I’m surprised that more people aren’t talking about it. Google’s AI is increasingly being used not only for direct commercial purposes but also in industry, in agriculture, and even by the military. The fact that such crucial technology is almost completely unguarded against bad actors should be a cause for concern for every American. And if Aschenbrenner is right that Google’s security is better than that of the other major AI developers, then we should be doubly concerned.