Why Are We Rushing to Build the AGI Manhattan Project?
Some critical reflection is warranted.
“Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” — Jurassic Park
Practically every day, it seems like more and more people are talking about a Manhattan Project for AGI — a U.S. government program to speed-run the development of artificial general intelligence that can surpass humans across all domains. It’s an intriguing proposal, harkening both to futuristic visions of a world where humans are fully replaced and to memories of the golden age of Oppenheimer and Einstein. And there is plenty of evidence that the incoming Trump administration takes the idea very seriously. Trump has repeatedly spoken of the need to “compete with China and other countries on artificial intelligence” and “win the A.I. arms race with China.” Several of Trump’s closest allies have reportedly endorsed the idea of an AGI Manhattan Project. And perhaps most tellingly, Donald Trump recently appointed Jacob Helberg as Under Secretary of State, a man best known for spearheading the recent U.S.-China Economic and Security Review Commission report, which advocates for an AGI Manhattan Project.
In this context, it is worthwhile to ask: Why? Why exactly is this idea — fringe until just a few weeks ago — suddenly receiving so much support? Why are we rushing into this? I can offer several reasons:
Reason 1: Desire for Fame and Historical Importance
AGI, if invented, will be perhaps the most important invention in all of history, with the potential to create a utopian, post-scarcity society where humans no longer have to perform labor because machines can do everything for us. It follows, then, that the people who create this technology will be venerated for all time as harbingers of this wonderful new age. Like Prometheus bringing fire to humanity, or Newton bringing us the laws of physics, or the original Manhattan Project scientists ushering in the nuclear age, today’s generation of AI scientists can leave an epoch-defining historical legacy.
And honestly, isn’t that cool?
Some men are born with an innate desire to be magnificent — to go out and make their mark on the world, to shine bright and accomplish extraordinary things. And this ambition is especially present in the kind of person who rises through the echelons of power in Washington, DC or Silicon Valley. After all, why would you go through all the sacrifice of making a name for yourself in such stressful and competitive environments if you didn’t have some urge to reshape the world in your image? I think that many scientists and policymakers saw Oppenheimer and thought to themselves, “Gee, I sure hope somebody makes a movie like that about me someday!”
Hell, even if AGI goes badly, a bad legacy is still a legacy. Would you want to go down as one of the people who ended human civilization in a robot uprising? Certain people, if answering honestly, would say yes. Playing such a central role in the story of humanity, even as the villain, is a tantalizing offer.
Reason 2: Conformity and Careerism
Imagine that you’re an advisor to the President, sitting at a table with the Commander-in-Chief and dozens of his other advisors. The President asks for guidance about whether we need to start an AGI Manhattan Project to compete with China. As the advisors take turns speaking, they all say some version of the same answer. “Well of course; America has to stay ahead!” “This is a matter of national security. We can’t show any weakness!” “Now is not the time to go soft on China.” “The CCP is attempting to outsmart and outcompete us at every turn. We must win this race!” Then, eyes turn to you. You have misgivings about the Project. You’re not sure that the Chinese government is even interested in AGI, and you’re worried about what could happen if the U.S. government accidentally produces a misaligned AI. You fear that this Project might only exacerbate, rather than alleviate, national security risks. Yet, you know that this is a minority opinion. You don’t want your boss and colleagues to think less of you, and you certainly don’t want to come off as some kind of Chinese sympathizer. Do you share your concerns, or do you go along with the group?
Some version of the scenario above is likely playing out right now in Washington DC, in the Pentagon, and at Mar-a-Lago. And I fear that conformity and careerism are preventing officials from sharing their real opinions, pushing them instead to promote more hawkish actions and rhetoric than the situation truly justifies.
History provides no shortage of examples where leaders engaging in groupthink have made poor decisions. Particularly in times of war and geopolitical crisis, nationalism and jingoism have a tendency to provoke rash responses in otherwise calm and reasonable people. One of the most infamous examples is the Cuban Missile Crisis, where nearly the entire Joint Chiefs of Staff demanded an American invasion of Cuba — a move which almost certainly would have provoked a Soviet response and possible nuclear war. Without the presence of President Kennedy, who had the courage and wisdom to say no to this demand, it is possible that October 1962 would have been the start of WWIII.
If another existential crisis occurs within the next few years, can we rely on our current leaders to guide us through safely? I am not confident. In particular, I am concerned about what kind of people Trump is likely to surround himself with in the near future. It is well-known that Trump prioritizes personal loyalty over integrity or a record of public service. In a cabinet full of sycophants, will anybody have the courage to tell inconvenient truths? If the China hawks are wrong or overly aggressive, will any countervailing voices be present to push against them?
If supporting the AGI Manhattan Project becomes a mainstream position within Trump-world, which it likely already is, then it may be quite hard for anybody in that world to fight against it.
(For a more detailed account of how pressure to conform to hawkish attitudes on China is affecting American attitudes today, I recommend reading this working paper by political scientists Michael B. Cerny and Rory Truex. After interviewing more than 50 American foreign policy professionals, they found that “many participants perceived a degree of what they referred to as ‘hawkflation’ or ‘groupthink.’ Roughly one fourth of survey respondents noted instances of professional pressure to voice a more hawkish point of view towards China, and many feared being perceived as naïve or compromised by their views on, ties to, and experiences in China.”)
Reason 3: An Opaque China
Much of the impetus and alleged urgency for the AGI Manhattan Project comes from perceptions that Americans have about China. American leaders assume that the Chinese are racing toward AGI, that Chinese leaders have no desire to cooperate with the West on AI, and that the only way to prevent a Chinese AI takeover is if America reaches AGI first. Where do these assumptions come from? Certainly not from anything that the Chinese government has stated on the matter. As others have documented, the Chinese government has given no public indication that they are working toward developing AGI, or that they see AI as a “race” to be “won.” If anything, there is some evidence that Chinese leaders are more cautious around AI and its risks than American leaders are.
There is an obvious objection to this: We can’t just take what the Chinese government says at face value! Obviously, if they’re building AI technologies to outcompete America, then they’ll want to keep that a secret from us as long as possible. They’ll pretend to favor cooperation while secretly working to sabotage us. Since we can’t know for sure what the Chinese are really thinking, we have to assume the worst.
There, I think, lies the crux of the issue. As Westerners, we really don’t know the inner workings of the Chinese government, which are kept deliberately opaque to outsiders. And in the absence of information, people can find ways to justify whatever pre-existing beliefs they have about China. Do you think that the Chinese government is completely evil and hell-bent on America’s destruction? You can probably find some statement to support that position. Do you think that the Chinese government is misunderstood and actually wants peaceful coexistence with the West? You can probably also find some statement to support that position.
Without clear information about China, there is no way to confirm or deny claims made about Chinese intentions. Thus, it is easy for the most sensationalist, nationalist narratives to take hold, whether or not they are true.
Reason 4: Informational Complexity and Easy Answers
We’re dealing with complicated subjects here. Artificial intelligence, international relations, and national security are all fields that experts spend entire careers studying. To come to an informed opinion about any one of these issues is hard enough — let alone all three, at a time when each of them is rapidly changing. But if you know anything about human nature, you’ll know that a lack of reliable information has never stopped people from confidently asserting their opinions anyway. People want assurance in the face of uncertainty, and they want to believe that the situation is under control.
How should we deal with threats to national security, and with the danger that AGI could wreak untold destruction on humanity? One simple answer is: Give more power to the military, and let them figure it out! It’s a straightforward proposal, with a lot of obvious appeal. We did it before, during WWII, and it worked out then. So why not do the same thing today?
I would caution: beware any solutions to complicated problems that seem too simple. While several prominent people have advocated an AGI Manhattan Project — most notably the aforementioned USCC report — none of them have given much detail about what such a Project would look like in practice. As the AI policy researcher Dean W. Ball observes:
You would think, with such a detailed report from such a well-regarded (and federally commissioned) group, there would be a great deal of explanation for a proposal of this magnitude—the very first recommendation the authors make.
But you would be wrong. […] Here are just some of the questions this report could have addressed [but did not]. […]
How useful, or wise, or productive, does a “Manhattan Project” for electricity, or a “Manhattan Project” for computers, sound to you? Does that sound anything like the quest to create a handful of atomic bombs? Does the Manhattan Project seem like an optimal template to think about how to structure the organization that builds AGI? Do nuclear weapons seem like the optimal technological analogy for reasoning about AI? […]
Are we sure that kicking off a “race” to AGI with China is the wisest course of action? Set aside the question of whether such a thing would be the safest course of action—instead, consider that, in a command-and-control race for AGI, it is not obvious that the US wins. […]
And what about practical implementation? Who will lead this initiative? Will the top AI companies be merged so that their resources can be pooled? What interpersonal, organizational, technological, economic, and legal barriers might there be to merging the top AI labs? Given that these companies all have distinct technical approaches to AGI development, who will make the final decision about which approach we should take? Will the many non-US citizens who work at these companies be allowed to work on an American “Manhattan Project?” Will the military release AGI to the public, or will they preserve access to it for the federal government (and various international and corporate partners) alone? […]
In theory, all these questions can be answered. In practice, I suspect some of them cannot be answered quite yet. Some of them are sure to be the subject of vigorous debate. Yet in a nearly 800-page report, precisely none of these questions are asked, and precisely none are answered.
Far from being an answer to all of our questions, the AGI Manhattan Project proposal should really prompt us to ask many more questions of our political leaders. Unfortunately, many people — policymakers and commonfolk alike — may be overwhelmed by the complexity of our current situation, may neglect to ask the relevant questions, and may accept a simple approach, whether or not that approach is the best one.
Reason 5: Maybe it’s a good idea.
This article has been quite critical of the AGI Manhattan Project proposal — or at least, critical of the supporters of such a proposal. But just because an idea is supported for the wrong reasons doesn’t necessarily mean that the idea itself is wrong.
AGI does pose extreme threats to national security, whether those risks come from the Chinese government using AI to impose its will on the West, or from a misaligned AI that could emerge from either China or the West. There is a serious case to be made that the U.S. national security state is the only entity capable of handling these threats in a responsible manner. There is also a serious case to be made that without centralized government control of advanced AI labs, those labs will be ripe for infiltration by Chinese spies and other bad actors. Perhaps, in a sea of mediocre options, an AGI Manhattan Project really is our best shot at mitigating existential risks and reaching a positive AI future.
I should be clear that this article is not staking a claim about whether or not we should start the AGI Manhattan Project. There are legitimate arguments for and against the idea that ought to be discussed. I am scared, however, that our current political and informational climate is not conducive to such a healthy discussion. We are plagued by misaligned incentive structures, by nationalistic and tribal thinking, by a lack of accurate information about China, and by a lack of critical thinking. I pray that our political leaders will be able to make the correct decision in spite of these limitations, but I am not optimistic.
For all the reasons above and more, I predict that the AGI Manhattan Project will come to pass. There is a critical mass of support in favor of the proposal, and there is not yet any major resistance to it. In fact, I predict with 70% confidence that 2025 will see the beginning of this Project. Whether it is a good idea or not, it is coming.


