(Epistemic status: highly speculative.)
(For the purposes of this article, I’m taking it for granted that we’ll be able to achieve artificial superintelligence (ASI) — an AI system that can exceed humans across all cognitive domains — by 2030. I know that’s still a controversial claim to many people, but in the AI world, “ASI by 2030” is actually a rather cold take by now. If you’re still not convinced of short ASI timelines, then this is not the article for you.)
Consider the following table:
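| | Aligned ASI | Misaligned ASI |
| --- | --- | --- |
| **American creators** | The Good Outcome | The Bad Outcome |
| **Chinese creators** | ??? | The Bad Outcome |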
If humans invent a misaligned superintelligence — that is, a superintelligence that doesn’t do what we want it to do, and instead pursues its own goals at the expense of humanity — then it really doesn’t matter which humans have done the inventing. A misaligned superintelligence would quickly kill us all, whether that superintelligence is Chinese or American in origin. This is The Bad Outcome.
If a superintelligent AI is aligned to its creators’ wishes, and those creators are American, then I am very optimistic about humanity’s future. I love my country, and I think that part of what makes America so great is our commitment to personal liberty. With ASI, each person will have a hyper-intelligent personal assistant and a vastly improved selection of goods and services at their disposal, with the freedom to use that new technology however they wish. Each person will essentially be able to create their own utopia. The U.S. government will take necessary steps to protect against the worst abuses, as it does with any new technology, but otherwise will leave the people of the world free to pursue their own transhumanist wishes. This is The Good Outcome.
(I suppose we could also envision a scenario where the U.S. government turns evil and uses AI to implement some kind of dystopian subjugation. I admit that this is possible and worth considering, but I view it as unlikely. The United States takes a generally laissez-faire approach to how its citizens live their lives — at least compared to most other countries — and I see no obvious reason why that approach would change with the advent of ASI. If any country can be trusted to usher in the AI revolution, it is the United States.)
But what about that other quadrant? What if ASI is aligned to its creators’ wishes, but those creators are working for the People’s Republic of China? We know that the CCP is intent on increasing its own power internationally and maintaining its stranglehold on power internally, and it would surely use ASI to further those goals. Yet, what exactly would the CCP do with all that power?
If you view the Chinese government as a mostly cynical entity, which wants power for its own sake and has few ideological commitments, then there’s actually not much to worry about. The CCP could easily use the benefits of ASI to bribe the people of the world into submission, much like it currently uses the benefits of economic growth to bribe the people of China into submission. Imagine this: you have practically unlimited wealth, unlimited access to new consumer goods, you can go on vacation all the time, and you never have to work another day in your life. The only caveats are that you can’t criticize the government and that you have to salute the Chinese flag once a day.
Maybe if you’re a bleeding-heart libertarian, then no amount of material comfort could ever buy your submission to tyranny. But I think that most people on Earth would accept this deal. Most people are not very political. Most people are content to stay out of government and avoid controversies, as long as they have personal safety and a decent quality of life. For most people, this is A Fine Outcome.
But what if the Chinese government is actually ideological? What if they’re not content just to rule over a world of sheep, but actually want to herd those sheep in a particular (possibly nefarious) direction? For instance, after gaining absolute power, the CCP might try to “Sinicize” the world. They might force everybody to learn Mandarin and stop speaking any other language. They might force everybody to read 100 pages of Xi Jinping Thought every day. They might use Men In Black-style memory wipers to make everybody forget about the Tiananmen Square Massacre. They might blow up every church and mosque and synagogue in the world, and use the space to erect giant neon statues of Mao Zedong and Confucius.
So which is it? Are the CCP’s plans for the future more like WALL-E or 1984?
Unfortunately for us, the Chinese government has been mostly quiet on the matter. In 2017, the State Council of China released the “Next Generation Artificial Intelligence Development Plan,” emphasizing AI as a strategic technology integral to international competition. But this plan, and the similar plans released since, mostly talk about how AI can enhance economic productivity and Chinese power on the world stage, not about what China intends to do with that power.
Perhaps we can glean some insights into China’s plans for world AI domination by looking at the regulations it has enacted on AI so far. In 2023, China adopted its Interim Measures for the Management of Generative Artificial Intelligence Services, which require AI service providers to comply with:
"Upholding socialist core values" – including not generating any prohibited content, such as content that incites "subversion of the state power or the overthrow of the socialist system, endangers national security and interests, damages the national image, incites splitting the country, undermines national unity and social stability, advocates terrorism, extremism, ethnic hatred and discrimination, violence, pornography, and false and harmful information"
(Let it be known: the porn-watchers of the world will have much to lose if China wins the AI arms race.)
Or maybe not. There is more than 15 years’ worth of academic international-relations scholarship on the question of whether China should be considered a revisionist power. That is, do we view China as a power that seeks to upend (revise) the international order, or as a power that merely wishes to assert its own sovereignty within that order? I don’t pretend to be an expert on this literature, but there are scholars who credibly argue that China falls into the latter category. Ever since Mao’s death, China has largely abandoned the notion of exporting its vision of socialism to the rest of the world. To the contrary, China’s public statements often emphasize the need to respect national sovereignty and international law. If you trust what the Chinese government says about its own intentions, then it will use ASI to ensure its own sovereignty over China but will not use ASI to dominate other countries.
Personally, I don’t trust the Chinese government. That regime routinely censors the truth, commits grave human rights abuses against dissidents and religious minorities (up to and including waging cultural genocide against Uyghurs in concentration camps), and violates treaty obligations, like its obligation to respect the legal independence of Hong Kong. Why should I trust that, if it obtains absolute power through ASI, it will use that power in a wise and beneficial way?
I’m not saying that a Chinese ASI would automatically lead to dystopia. Maybe the benefits of ASI are so great that even with Chinese control, the technology will be a net benefit to humanity. But is that really a risk you want to take?
At this point, I want to introduce a flowchart, outlining why I believe what I believe and where I stand in the broader AI policy ecosystem:
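```
Q1: Will ASI be aligned with its creators’ wishes? (What’s the real p(doom)?)
│
├─ Misalignment is extremely likely (>50%)
│   └─ Q2: How feasible is it to stop ASI development?
│       ├─ Not feasible → accept your fate, or fight for “Death With Dignity”
│       └─ Feasible → hard decelerationism (e.g., PauseAI)
│
├─ Misalignment is possible, but not inevitable
│   └─ Q3: Does it matter whether America or China builds ASI first?
│       ├─ No → soft decelerationism
│       └─ Yes → defensive accelerationism, or d/acc (my position)
│
└─ Misalignment risk is negligible → accelerate as fast as possible
```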
We start with the most fundamental question: What are the odds of ASI being aligned with its creators’ wishes? Or, to flip it around: What’s the real p(doom)?
If you believe that ASI misalignment is extremely likely (>50%), then obviously you’re not a fan of the rapid AI progress we’re seeing right now. Creating a misaligned ASI would be suicidal, the thinking goes, so AI progress should be stopped by any means necessary. The question then naturally arises: How feasible is it to stop ASI development?
If you believe that misalignment risk is extremely likely, and also believe that humanity will develop ASI anyway, then I don’t envy you. Such a worldview implies that you and everybody you love have only a few years left to live before AI inevitably kills you all, and there’s nothing you can do about it. Facing this impossible situation, you might as well give up on trying to change anything, accept your fate, and live the (short) rest of your life as happily as possible before civilization comes to an end. Or you could take Eliezer Yudkowsky’s approach of “Death With Dignity” — that is, fight as hard as possible to stop the AI takeover, not because you think you’ll win, but to give the rational portion of humanity one heroic last stand against our species’ inevitable foolishness and self-destruction.
If you’re a bit more optimistic about the ability of organized political activism to stop AI progress, then you might consider joining PauseAI. The thinking here goes that if enough voices in the AI community speak out against AI risks, and enough members of the public demand a stop to AI development, then we can stop the coming apocalypse.
If you believe that misalignment is possible but not inevitable, then you’ll have to strike a much more careful balance. You’ll want to promote progress to reap the many benefits that ASI could bring, but you’ll also want safeguards in place to prevent misalignment. Like all centrist policy positions, this approach will leave you with enemies on both sides, and there will be times when it is unclear whether a more accelerationist or more decelerationist stance is warranted.
I’ve found that among the “AI can be good, but we should still care about AI safety” crowd, opinions on what to do largely divide over what one thinks about China. If you believe that Chinese AI is not very threatening — or at least, no more threatening than American AI — then there’s no reason to push for faster AI development in the West. Rather, we should slow down to give ourselves more time for technical alignment research before pushing forward with more capabilities. And instead of antagonizing China through militaristic actions and rhetoric, we should try as hard as possible to work with China and place international regulations on AI. I call this approach “soft decelerationism” because it’s not as extreme as the Yudkowskian doomer position. Soft decels may want to pause AI progress altogether, or merely slow its rate; they may even promote the same policies as the hard decels, but their justification sounds less like “This will kill everybody” and more like “This probably won’t kill everybody, but the risk is high enough that greater protections are nonetheless justified.”
If you do believe that it matters whether America or China achieves ASI first, then you’ll have an even harder path to navigate. You’ll have to promote Western AI progress to ensure that we can beat China, while still promoting AI safety and alignment measures to defend against potential risks. This is what Vitalik Buterin calls “defensive accelerationism,” or d/acc for short. It is also the worldview outlined by Leopold Aschenbrenner in Situational Awareness, and the approach that I believe in and have spent my time advocating for. See my previous writings if you wish to hear this worldview in greater detail:
Why I Support AI Safety
Finally, there’s the camp that doesn’t believe in ASI misalignment at all. According to this crowd, the idea that AI could kill us all and trigger an apocalypse is nothing more than manufactured sci-fi paranoia. They believe that the risks of catastrophic outcomes from AI are so low, at least compared to the benefits, that any talk of “AI safety” is a needless distraction from the work of advancing civilization. Therefore, their policy proposal is fairly simple: Accelerate as fast as possible, critics be damned.
Now of course, I might just be wasting my time by caring about any of this. As one writer put it recently:

> Yet I think that the arguments about doom are mostly of historical interest at this point. If the reports on the cost of training DeepSeek are real, the cat is already out of the bag. There is practically no way to shut down AI, or at least there isn’t the political will to do so. We can only hope that the alignment problem is solvable and the people at the cutting edge of this technology are proceeding wisely.
This is probably true, and whether AI kills us or not is probably outside of any normal person’s control at this point. Unless you’re Sam Altman or you’re on the Politburo Standing Committee of the CCP, your opinion will have essentially zero sway over what the leading AI labs and governments of the world do over the next few years. That being said, I don’t want to act like I’m helpless. I want to act like my opinions and actions matter in shaping the course of world events, whether they really do or not. Besides, the easiest way to lose your power is to believe that you don’t have any. If every person who cares about AI safety just gives up out of defeatism, then the risk of misalignment could become significantly higher.
So I’m not going to give up. I’m going to keep fighting the good fight. I will fight, with every ounce of my strength and energy, to make sure that America builds the world’s best AIs, and that those AIs are safe. The alternatives — a world of doom or Chinese domination — are too grim to allow.
It should go without saying that I’m directing this article toward a Western audience which agrees with Western values and is concerned about the rise of an authoritarian China. If you’re a Chinese nationalist who wants China to win the AI arms race, then the same logic applies in reverse.