“Computer” was originally a job title, like a plumber or a carpenter or a baker. A computer was a human being whose job was to do lots of calculations. If you've ever seen the movie Hidden Figures, you'll know that NASA used to hire thousands of human computers to run their numbers. And as you'll also remember from Hidden Figures, those human computers were phased out after the 1960s. Machines became capable of running calculations far cheaper, faster, and more reliably than even the fastest human computers. Before long, the entire computer profession was automated. Nowadays, the calculator app on your phone can do in less than one second the work that an entire team would've needed a whole workday to do. For all intents and purposes, human computers no longer exist.
As I have discussed multiple times on this blog, AI is rapidly developing skills that previously only humans had, and it shows no signs of stopping. There is no cognitive task that AI won't soon be able to do better, faster, and cheaper than you can. Within 10 years (and probably much earlier than that), your job will be taken by a machine.
Whenever I tell this to people, I encounter a lot of resistance to the idea. Skeptics usually sputter out some platitude like “There will always be jobs for humans, just different jobs” or “As long as you’re smart, you'll find something to do.” Yet they give little explanation for why they believe these things.
I’ll go through various objections to the idea that AI will replace us soon, and why I think they are wrong.
“AI progress will plateau.”
It’s been more than 2 years since OpenAI publicly released GPT-3.5, and more than 1 year since they released GPT-4. Both of those models, as well as similar models released around the same time by other corporations, were vast improvements over what came before, creating quite a lot of buzz and investment for what might come next. ChatGPT can write all my essays and generate funny pictures for me—how nifty! Yet, in the time since, generative AI has not had much impact on most people’s lives. These models still lag behind humans in key areas—namely, in having a unique personal tone, in telling the truth instead of hallucinating (i.e., confidently stating false information), and in applying creative thinking to unfamiliar situations. As a result, the promised economic gains from AI have yet to materialize.
Some people, seeing this situation, have declared that we are in an “AI Winter.” We have finished picking the low-hanging fruit, so the thinking goes, and it is impossible to make much more progress in the immediate future. Many technical problems remain unsolved, and compute limits remain a bottleneck. It will take many more years for AI to become substantially better.
My simple rebuttal to the AI skeptics is that they really ought to have more patience. It’s only been about 16 months since GPT-4 came out. I know that in the fast-paced world of tech and finance where quarterly profits reign supreme, that may seem like a lot of time. But in the grand scheme of things, it’s a minor delay that does not necessarily imply a long-term plateau. We have no idea what the leading AI labs are currently developing that they haven’t announced publicly yet. For all we know, the next groundbreaking model release may be just around the corner.
The longer, more technical rebuttal to the AI skeptics comes from Leopold Aschenbrenner in his magnum opus Situational Awareness. In short, he argues that several more orders of magnitude (OOMs) of progress are readily achievable in key areas within the next few years. These include:
Improvements in computing hardware, such as better and more specialized AI chips, will provide several more OOMs of raw compute.
Companies will also likely invest several OOMs more money into buying more compute.
Algorithmic efficiencies and “unhobbling” gains will mean that we get several OOMs more effective compute out of the same hardware.
With such exponential progress across multiple areas, it seems highly unlikely that AI capabilities will just stay where they are indefinitely, or even stay where they are for more than about 2 years. (I highly recommend reading Situational Awareness in full if you’re curious about this. Aschenbrenner is far smarter than I am, and gives far more details to support his argument than I do.)
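To make the compounding concrete, here’s a rough back-of-the-envelope sketch. The numbers below are hypothetical placeholders of my own, not figures from Situational Awareness; the point is just that individually modest OOM gains multiply into an enormous increase in effective compute.

```python
# Rough illustration of how orders-of-magnitude (OOM) gains compound.
# These numbers are hypothetical placeholders, not Aschenbrenner's.

hardware_ooms = 1.5     # better, more specialized chips
investment_ooms = 2.0   # more money spent buying compute
algorithmic_ooms = 2.0  # algorithmic efficiencies and unhobbling

total_ooms = hardware_ooms + investment_ooms + algorithmic_ooms
multiplier = 10 ** total_ooms
print(f"{total_ooms} OOMs combined = roughly {multiplier:,.0f}x the effective compute")
# 5.5 OOMs combined = roughly 316,228x the effective compute
```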
In short, we are quickly hurtling toward the advent of artificial general intelligence (AGI), the point at which AI will be able to perform all cognitive tasks better than humans can. It may not come this year, or next year, or even the year after that. But make no mistake, it is coming. There is every reason to believe that by the end of this decade, AI will be able to do everything that you can, better than you can.1
“AI makes mistakes, so you’ll always need humans to look over the AI.”
This is mostly untrue, and it would be mostly irrelevant even if it were true.
Right now, AI models might make mistakes often enough that they require constant supervision, but that will not remain the case for long. With every new generation of AI models, errors get smoothed out and accuracy improves. Just think of how much better GPT-4 is at solving basic math problems compared to GPT-2. Pretty soon it will be the case (and it already is in some fields) that AI is far more consistent than humans, consistent enough that it almost never makes mistakes. In other words, AI systems will require significantly less oversight than human workers do today.
And to the extent we have overseers, why do those overseers have to be human? If, say, an AI programmer introduces a bug into its code, why can’t it be detected and fixed by another AI system? Presumably, AI overseers (that is, AI models which oversee other AI models) would be able to work much faster than human overseers; plus, AI overseers have the benefit that they would be more familiar with the inner workings of other AI systems than a human would be.
I admit, we should probably still have a human scientist on standby at the nuclear power plant to step in in the event of a power outage or a Chernobyl-Bot 9000 malfunction. But other than in very rare cases, I simply don’t see human overseers being useful for much.
Besides, even if human overseers are necessary in perpetuity, there still won’t be nearly enough overseer jobs to employ most people. I doubt there will be enough overseer jobs to employ even 1% of the humans displaced by automation. Even a large factory run by AI would only need a few dozen overseers at any given moment.
If your job plan is to keep watch over the AI model that just took all of your coworkers’ jobs, then…let’s just say your career path is precarious.
“Machines can automate mechanical tasks, but not creative ones.”
This argument was a lot more common a few years ago, before we had convincing text, image, music, and video generators. But since a few people still believe this line, it’s worth officially debunking.
What does creativity even mean? Different dictionaries will give different definitions, but I personally would define creativity as the ability to think in novel ways to produce a unique output. In other words, if I wrote a story about a lobster committing espionage to steal nuclear secrets from the US government, then that story would be considered creative because (as far as I’m aware) nobody else has ever written that story before. A creative person is somebody who can consistently come up with new ideas, thinking in ways that have never been thought before.
If you define creativity similarly, then I have some news for you. AI already shows remarkable levels of creativity. Multiple teams of researchers have given AI models creativity tests, and the models usually outperform most humans. Generative AI has already won several art and photography competitions against humans. And what about that lobster story I described? Well, you can see for yourself:
Not only was Claude Sonnet able to write that story instantly, but it was even able to write it in Shakespearean verse, and then set the whole thing to music. I doubt I could have done better manually had I tried.
Some people may object that this is not real creativity. Claude Sonnet doesn’t actually know who Shakespeare or Larry the Lobster are; it’s just processing thousands of poems and spy stories from its training data and generating something that looks similar. No real thought, intention, or emotion went into it. It’s just a more complicated version of copying, one that has no bearing on real human art.
But personally, I’m not concerned with the philosophical question of what is “real” art or “real” creativity. I’m much more concerned with the practical questions of how AI art compares, on a qualitative level, with human art. And clearly, AI can (or at least, will soon be able to) produce art that is just as entertaining, compelling, and dare I say beautiful as art made by humans. Only the most talented writers, musicians, and videographers have any kind of creative edge over AI, and that edge is shrinking by the day.
If you depend on selling your creative work to an audience, prepare to be automated very soon. That is already happening with many commissioned artists and stock photographers. You may feebly protest, “B-b-but the AI is just copying from human artists! It’s unethical! It’s not real art!” That may or may not be true, but try explaining that to the people with the money, who are ditching you for the newest DALL-E model.
“Even if AI work is indistinguishable from human work, there is still value to having a human.”
Which is more valuable: a hand-knitted sweater that your grandma gave you, or a sweater mass-produced by a machine in Bangladesh? Most people would probably say the grandma sweater. Even if the texture is the exact same—hell, even if the texture is significantly worse—the emotional value of knowing that your sweater was produced with love by another human being trumps any material value of the machine-made sweater.
Another example of this is art. In 2012, Coachella debuted a hologram of Tupac Shakur to give a concert performance. The experience was in most ways the exact same as if the real Tupac had been there. The hologram looked realistic, and the music came straight from Tupac’s old recordings. Technically, you can say they automated Tupac. Yet, I imagine that attendance and excitement would’ve been a lot higher had the actual Tupac been there, instead of merely a hologram. A large portion of a concert’s appeal comes not from the performance or the music, but from the very celebrity and presence of the human onstage.
Even if an automated product or service is objectively the same or better than a human-made product, consumers aren’t always objective in what they buy. Subjective factors like how much you admire the manufacturer, and whether that manufacturer is made of flesh or silicon, can have a huge impact on the goods you buy and how much you’re willing to pay for those goods. In other words, the human is part of the product.
While I concede that this effect is real, it is not enough to save the vast majority of jobs from automation. Just consider: even if you like humans better than AI, would you really be willing to wait longer, pay more, and suffer worse quality just for the privilege of saying that your product was made by a human? I imagine that for most consumers and most products, the answer is no.
It's the same reason why “Made in America” never really took off. Sure, some patriotic folks are willing to pay more for American goods than for comparable-yet-cheaper foreign goods, but the vast majority of consumers are not. American nationalism wasn't enough to save millions of blue-collar workers from outsourcing, just as anthropocentric bias won't be enough to save millions of human workers from automation.
Willingness to use AI will also increase once it becomes more familiar and less stigmatized. A few years ago, when generative AI art first became widely available, there was a huge movement (at least online) to “Boycott AI Art”. Yet, that movement seems to have fizzled out. Once people realized that AI art was inevitable, and realized how many things they can do with it, they became much more willing to adopt it (or at least not actively oppose it).
Perhaps if you’re a lovable grandma or famous celebrity, your humanity can still give you a leg up over the machines. But for everyone else, just being human simply is not enough.
“Even if automation is technically possible, stubbornness and bureaucracy will prevent it from happening anytime soon.”
We all know that corporations, especially larger ones, can be slow to respond to changing social and economic situations. Any large-scale change in company operations and employment will have to be beta-tested, analyzed by HR, reviewed by corporate lawyers, approved by company management, approved by shareholders, and only then implemented. That process can take several years, and in the meantime, humans will still have a place in the company hierarchy.
Nonetheless, you cannot ignore market pressures forever. In the long run, companies still need to compete with each other. A company that uses AI will make products and services better, faster, and cheaper than a company that relies on humans. Humans need to be paid wages and benefits; AI doesn't. Humans may require hours to write full reports; AI can write them instantly. Humans can only work on one task at a time; an AI system can work on many things simultaneously. Humans can file lawsuits against your company; AI can't. Any company that does not adapt and replace its human workers will simply be outcompeted economically by one that does.
I imagine that jobs in the government and non-profit sectors will be somewhat more resistant to automation, since they are not subject to the same competitive forces. Bureaucrats do have an awful tendency to cling to old practices long after they've become clearly inefficient. Yet even here, I expect that basic technical competence will still win out in the end. Administrators always want to cut costs, and once AI workers become available, humans will be the first liability to be chopped off the budget sheet.
I admit, automation will not be instant. Stubbornness, status-quo bias, anthropocentric bias, and bureaucratic slowness will all contribute to delaying automation until long after it becomes technically possible. But how long exactly? I predict 5 years maximum for most jobs.2 Institutional incompetence may buy you some time, but I doubt it will buy you more than half a decade.
“What about physical labor?”
So far, most discussions of AI have been about generative AI — that is, AI that can create new pieces of digital media, like images and text. Comparatively little progress has been made on AI with physical applications, like robotics. Machines can now win world-class math competitions, but they still can’t mop your floor.
The simple reason for this is that hardware develops much more slowly than software. Hardware is saddled with physical and material constraints; software is not. Hardware requires expensive and lengthy real-world testing; software can be tested relatively cheaply and quickly. Hardware requires comparatively greater supply chain coordination and warehousing than software does. Software can be updated instantly, whereas hardware updates are lengthy and expensive.
It takes a long time for new robotic designs to be economically viable. Because of this, I expect that the very last people to be automated will be manual laborers (janitors, gardeners, house cleaners, cooks, etc.).
But even their time is coming eventually. New AI-powered robotics products are being developed every year, such as automatic lawn mowers and garbage collectors. Plus, once the mechanics and engineers are automated, it logically follows that the rate of hardware development will increase. That will allow us to make even more robotic equipment, which will allow us to automate even more physical tasks.
If you just read all that, and you’re still not convinced that you will be automated within the next decade, then I honestly don’t know what to tell you. If you are still clinging on to some dream of becoming a nurse or a lawyer or an author or even a computer scientist, then you are simply misguided. Someday in the next 10 years, automation will hit you like an uppercut from Mike Tyson. You’ll go flying backward, the speed of change overwhelming you as you were completely unprepared to handle it. As you hit the ground and start spitting blood, it will dawn on you that the life you knew is over. You will feel your broken, throbbing jaw and realize that all your hopes and dreams have come to naught. Your career prospects will be as broken as the teeth falling out of your mouth. Mike Tyson will stand over you, triumphant. The message will be clear: Machines have won.
I am not making any normative claims here. I am not necessarily stating that this is what ought to happen, merely that it is what will happen.
I am also not saying this to scare you or make you think that automation is a bad thing. I am, in fact, quite optimistic about the potential for AI to do good. A world where every job is done more efficiently, and every product is made better and more cheaply, will be a world of vastly increased prosperity. Don’t think, “Oh no, I’ll be unemployed! I’ll be destitute with no way to support myself!” Instead, think, “Hooray, I won’t have to work anymore! Now, with far more wealth and technology in the world, I’ll be able to spend my newfound free time doing whatever I want!”
So Liam, you may be thinking, if I shouldn’t expect to have a career more than 10 years from now, then what should I do with my life? How am I supposed to act, knowing that an AI takeover will happen in just a few years? That is, ultimately, a question you’ll have to decide for yourself based on your own values and preferences. But I’ll offer some options:
Work to promote AI safety. Although I am confident that AI will bring revolutionary changes within the next decade, I do not know whether those changes will be positive or negative. Far more work needs to be done to align superintelligent AI systems with human values and welfare. That is the work I am choosing to take on, and if you care about the future of humanity, I suggest you do the same. Read my blog post about AI safety to learn more:
Live a short-term, hedonistic lifestyle. If the world as we know it is gonna end anyway in a couple of years, then why not go out partying? Why prepare for the long-term future when there basically is no long-term future? I’m not personally suited to a lifestyle of partying every day and living without a moral purpose, but if you are, then I see no reason not to give in to your hedonistic impulses.
Spend more time with friends and family instead of working/studying. We’ve all seen the trope in Hallmark movies where the overworked father neglects to spend time with his wife and children, but then eventually learns to value family over career. It’s cheesy, but it’s a good message. Having a good time and forming lasting memories with loved ones matters more in the end than grinding an extra day at the office. That is especially true if your office will be automated in a few years.
Ignore everything I’ve just told you, and live your life as if AI won’t soon revolutionize everything. This is, unfortunately, the most common reaction I see. When confronted with a chaotic, confusing, and uncertain future, most people prefer to just stick their heads in the sand and pretend that nothing will fundamentally change. While I understand why people do this, I do not recommend it. If you choose to ignore the impact of AI, then you will be in for a rude awakening several years down the line.
We are truly at an unprecedented time in human history. We are fast approaching the point at which our own actions and thoughts will become irrelevant in the face of much smarter and more capable machines. Whether you choose to react to this fact with joy, sadness, hope, anxiety, or indifference is up to you. But whatever you choose, you should do so with a clear-eyed view of what’s to come in the future — a future that will not involve you working a job.
Since I’m such a fan of prediction markets, I’ll give you my official AGI forecast:
5% chance that we’ll get AGI by the end of this year
10% chance that we’ll get AGI sometime in 2025
15% chance that we’ll get AGI sometime in 2026
25% chance that we’ll get AGI sometime in 2027
20% chance that we’ll get AGI sometime in 2028
10% chance that we’ll get AGI sometime in 2029
14% chance that we’ll get AGI sometime in the 2030s
1% chance that we’ll get AGI in 2040 or later, or won’t get it at all
To put this into more specific forecasting terms: 5 years after the advent of AGI, at least 90% of American white-collar jobs will have been automated. (95% confidence)
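For anyone who wants to check the arithmetic, the percentages above sum to 100, and the running total first crosses 50% at 2027. Here’s a minimal sketch of that calculation; it assumes “this year” means 2024, so adjust the labels if you’re reading this later.

```python
# Cumulative probability of AGI by the end of each period, using the
# forecast percentages listed above ("this year" assumed to be 2024).
forecast = [
    ("2024", 5), ("2025", 10), ("2026", 15), ("2027", 25),
    ("2028", 20), ("2029", 10), ("the 2030s", 14), ("2040 or later / never", 1),
]

assert sum(pct for _, pct in forecast) == 100  # percentages sum to 100

cumulative = 0
for period, pct in forecast:
    cumulative += pct
    print(f"P(AGI by end of {period}): {cumulative}%")
# The cumulative probability first passes 50% in 2027 (5 + 10 + 15 + 25 = 55).
```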