AI optimism is a class privilege
A while back, in a slightly earlier era of AI, a project was making the rounds that would read your GitHub profile and generate a personalized roast based on its contents.
It was intended, I assume, as a harmless, lighthearted novelty. I wanted to join in on the fun, so I put my profile in and tried it out.
I didn’t laugh at my roast.
It wasn’t clever, or funny, or even particularly unexpected. A tech-savvy stranger on Fiverr probably could’ve done better.
But more than that: I remember being surprised at how mean it was. Little of what the model produced even felt like a joke; instead, it just read as a slew of very personal insults.
And then I remember being surprised that the artificial cruelty actually affected me.
Despite knowing this was all a soulless (and as it turns out, humorless) machine making a poor attempt at comedy—one that nobody else even saw!—reading those words hurt. AI actually managed to hurt me.
And that was the first time I remember thinking about what AI was going to do to my children.
If I—a grown man with thick skin, hardened by decades of internet usage—can still be susceptible to highly personalized online bullying, what will it be like for my son, when some mean kid inevitably gets their hands on this technology and decides to put it to malicious use?
I thought: oh no, we’re building the perfect bullying tool.
I was never exactly an optimist when it comes to AI. But that was the first time I realized exactly how bleak the future I foresaw actually was.
I’ll use the term “AI optimist” a lot in this post because, while it might not be an entirely accurate description of the group I’m referring to, it’s at least a serviceable label.
That group, to be a bit more descriptive, is made up of people who are excited about AI, both now and in the future—but who are, especially, happy with the current state of AI. That part’s important. You might call them enthusiasts, or even believers, maybe. But in any case, they generally consider the current state of the technology to be good, and either aren’t aware of the bad parts, or aren’t overly concerned with them.
You almost certainly know at least one or two of these people. Maybe you even are one. (If so: I’m not naive enough to think I’ll change your mind with this post, but I hope I’ll at least give you some things to think about.)
It seems to me that to be in this group—to regard AI, as it exists currently, with optimism—requires at least a certain degree of class privilege. (Hence, the somewhat blunted title of this post.)
I had long struggled to put that thought into words: AI optimism is a class privilege. But once it crystallized, I felt as though it brought a great deal into perspective.
So, that’s why I wrote this post: to share that perspective. It’s mine, and you can take it (or not) as you like.
It’s late 2025, and so you don’t need me to tell you how extreme opposing views on AI can be. Everyone has an opinion on AI, and the overwhelming majority fall on one far end of the spectrum or the other. There’s a vast divide between my so-called optimists and pessimists, each fiercely passionate in their own entirely opposite ways.
For my part, you can put me firmly on the pessimist side of the chasm, for many reasons. Some I’ll get into here; others, I’ll mostly pass over, as they’ve been well covered elsewhere.
But for now, suffice to say: when I look around me, at the impact AI is currently having, I see little reason for optimism or enthusiasm—let alone the little-questioned, quasi-religious belief that this fundamentally flawed technology will one imminent day bring about some sort of economic, societal, and/or scientific revolution all on its own.
Come to think of it, “religious” might be a good word to describe how AI optimism feels, from the outside. It has fervent believers, prophecies from prominent figures to be taken on faith, and—of course, as with any religion—a central object of worship which can at all times be offered as The Answer, no matter what the question might happen to be.
In fairness: that’s not all AI optimists. I’m mostly describing the extreme ones.
Even among the more moderate optimists, though—the fairly ordinary people who just seem to like the tech—the enthusiasm has always seemed…disproportionate, let’s say.
It seemed perplexing to me that so many of my peers seemed so eager to be on that other side of the divide. They didn’t seem particularly different from me. In fact, many were my friends, connections, and people I looked up to.
We were looking at the same tech, with the same outcomes, and drawing entirely different conclusions. What was I missing?
The answer eventually hit me:
They see themselves as belonging to a different class than me.
They see themselves as safe.
I concede AI can occasionally be helpful for certain tasks, and I can understand the enthusiasm, as far as that goes. I don’t use it often, but I do use it some. It’s helpful for generating reference images to use as starting points for my illustrations, and occasionally for ideating, as a “rubber duck” of sorts. I also use it once in a while to compensate for my own colorblindness. But mostly, it helps me with code.
Even if my new coding buddy is severely prone to overconfidence and hallucination (and vulnerabilities, and counterproductivity), it’s still admittedly exciting when it makes quick and easy work of tasks that would previously have been time-consuming and/or challenging.
In order to be an AI optimist about this, however, that’s where I would have to stop thinking about it.
I would be forced to ignore what else my little coding buddy is getting up to when I’m not looking; the other impacts he’s having on other people’s lives.
Let’s take layoffs, as an example.
In order to be an AI optimist, it seems to me you have to believe yours is not one of the jobs at risk of being automated or downsized, and that you are not among the millions of workers potentially staring down displacement. After all, how could you feel enthusiastic about something that’s a threat to your own livelihood?
You need to be high enough in the org chart; far enough up the pyramid; advanced enough along the career ladder.
To be an AI optimist, I’m guessing you must not be worried about where your next job is going to come from, or whether you can even find one. The current dire state of the job market, I have to assume, doesn’t scare you. You must feel secure.
Maybe it’s because you’ve already made a name for yourself. Maybe you’re known at conferences, or on podcasts. Maybe you’re just senior enough that you don’t need to worry about being hireable anymore; you can take that for granted.
You definitely aren’t a junior, though, or an intern, or somebody trying to break into the field, or anybody else near the level of the rising tide engulfing entry-level workers across our industry and a wide range of others. Because, infamously, nobody is hiring juniors anymore. They’re the first ones against the wall.
Seems fairly safe to assume you aren’t in that group, if you’re excited about what AI is doing.
You probably aren’t a contractor, either, or working at a consultancy. And for that matter: you almost certainly aren’t an artist, or illustrator, or writer. You probably haven’t watched clients evaporate as their dollars are funneled upwards, somewhere beyond and above you, by the same AI-fueled disruption, knowing that the machine that is bankrupting you is only made possible by plagiarizing from workers just like you.
AI optimism probably means you’re in a position where nobody is taking your work, or tracking your productivity, or trying to measure your impact against arbitrary benchmarks.
It probably means nobody is forcing you to use AI—or especially, to fix something AI-generated, as part of your job. You probably aren’t watching a career you once enjoyed suddenly morph into something very different from what you signed up for, as you shift away from doing fulfilling work and instead spend your days micromanaging an error-prone machine, laboriously correcting its unending mistakes.
To be an AI optimist, you’re probably not a student anymore. Or, for that matter, a teacher. Because beyond wrecking the job market, AI is wrecking education, too.
It’s bleak being in education right now, either as a student or a teacher. Thanks largely to AI, students are looking at the worst job market since 2008, and teachers, they tell us repeatedly and desperately, are working in the most difficult learning environment since 2020.
But I suppose most AI optimists probably aren’t concerned about that. I guess that must be how they can afford the optimism.
That’s the thing about being bullish on AI: to focus on its benefits to you, you’re forced to ignore its costs to others.
AI optimism requires believing that you (and your loved ones) are not among those who will be driven to psychosis, to violence, or even to suicide by LLM usage. At the very least, this means you feel secure in your own mental health; likely, it also means you have a wider and more substantial support system propping up your wellbeing.
(Not to put too fine a point on it, but: those things are otherwise known as privileges.)
AI optimism requires you to believe that, whoever will be impacted by the sprawling data centers, the massive electricity demands, the water consumption, and the other environmental hazards of the AI boom, it won’t be you. Whatever the disaster might be, your house, in your neighborhood, will be safe from it. Probably far from it.
Those people on the news? I guess you must assume you won’t know them.
Dictators, fascists, and malicious state actors are wielding AI as a ruthlessly efficient propaganda machine, disseminating more convincing disinformation than ever, faster than was ever previously possible.
I suppose being an optimist about this technology must mean you believe that ultimately won’t affect you—or at the very least, it’s a worthwhile tradeoff. Whoever those in power are going after, and whoever the victims of violence are, they won’t be too close to you, writing emails more efficiently than ever.
AI is now (horrifyingly) being used in the legal system, too. From facial recognition to criminal justice, AI is increasingly being put in charge of deciding who is guilty and innocent, and what punishment those people might deserve.
This obviously should’ve never happened because, highly predictably, these models are very wrong a far-from-trivial percentage of the time, and highly racist even more often. (Tech is always a mirror of its creators.) This deployment of AI has already had an utterly devastating impact on real people’s lives.
Forgive me, but I can’t imagine being excited that this technology, which is sending innocent people to prison, is helping me save a little time on coding.
I have to imagine such excitement would require me to believe that couldn’t happen to me, or to anybody who matters to me; that the system isn’t made for us.
Or, at the very least: that it’s all undeniably unfortunate, but ultimately, in service to some greater good. Merely a bug to be ironed out.
AI optimism requires you to see the lives of at least some of your fellow humans as worthwhile sacrifices.
To this point, I’ve talked about AI more or less in a vacuum. But it isn’t even a standalone issue; it’s become a part of other technologies, exacerbated their existing problems, and accelerated the damage they’re doing.
Propaganda and disinformation are the easy examples, but the tech is being used for rampant harm even when there’s not necessarily any political intent or purpose behind it.
Recently, for example, Facebook was flooded with AI-generated videos of women being violently strangled. There was no apparent deeper purpose behind this horrifying wave of misogynistic terrorism, however; it just happened to be what the algorithm rewarded. It generated engagement.
Sometimes this effect is more or less benign (see: Shrimp Jesus); other times, a machine working out the most effective way to generate engagement will decide to make videos of vulnerable people (like immigrants, from one recent real example) being horrifically brutalized.
AI isn’t just harmful on its own; it’s a force multiplier for existing harms.
What sort of viewpoint does it take to see this as merely a negligible side effect? A worthy compromise for faster jpegs?
I don’t know. I only know there’s no amount of productivity that could possibly be worth this price to me.
Some might argue there’s another reason for optimism about AI: simply, what it could be.
Forget what AI actually is now; the models will get better, the data centers more energy-efficient, the mistakes rarer, the harms mitigated, and so on, the argument goes, until we have something that truly changes the world for the better; an actual benevolent technology that elevates human existence, in whatever way. Maybe it even leads to “AGI” (actual human-level artificial intelligence; the thing “AI” used to mean before 2022).
I take issue with these predictions for many reasons:
They’re predictions. Anyone can predict anything. Predictions may or may not come true. They often don’t, and even when they do, the effect is often unpredictable. (Regardless: currently, such predictions are based on literally nothing but hype.)
Most of the world-altering promises center on the idea that AI is sentient, which it categorically, factually, is not. Language and statistics can mimic cognition with surprising ease, and the human brain is eager to anthropomorphize anything that vaguely resembles human behavior.
Many experts (including OpenAI’s own researchers) tell us the models are already approaching their realistic ceiling, and that it’s literally impossible to stop them from making things up. (Really: actual people from OpenAI have publicly admitted LLMs will never stop lying. It’s an unfixable bug, because it’s a core component of how LLMs work.)
Literally no new technology in history has ever worked this way. Tech doesn’t free workers; it forces them to do more in the same amount of time, for the same rate of pay (or less). It distributes risk to the many at the bottom, while consolidating benefit to the few at the top, and there has arguably never been a more efficient mechanism for this than AI.
Besides, AI models exist in the consolidated hands of a precious few megacorporations, which are quite obviously giddy at the prospect of doing away with as many of their workers as they possibly can. I’m not sure how that theoretical future is supposed to follow from the obvious current reality. AI will serve (and is already serving) corporate interests first and foremost, and will not serve user or public interest so much as an inch past where the two diverge.
Regardless, even if you naively believe in the tech, you’re still willing to put up with all the harms and dangers of AI until that imagined potential future arrives—which brings us back to the original point.
My final example, like the one from the opening of this post, is a personal one: I have a newborn daughter.
I began writing this post before she was born, and, mostly because of her, I’m now finishing it up several weeks later.
And I can’t shake the thought that I’m welcoming her into a world where so much of the potential malicious misuse of AI could one day be directed at her.
Technology in general has made things like stalking and abuse easier than ever. But AI goes even further. I live knowing AI will allow any creep with an internet connection to create deepfakes of her—up to and including pornography—without her consent or anyone else’s, at barely the click of a button. (If this sounds like a horrifying, disturbed thought: it is! It absolutely is! But this is already happening to countless women, many of whom are not adults.)
To be an AI optimist, I would need to turn away from this. Ignore it. Consider it all just part of the plan; a price to be casually paid.
I guess I would have to hope and believe that my kids probably won’t experience this. Maybe they’ll be in better schools. Better neighborhoods. Have better friends.
Won’t ever go on a date with the wrong guy, or piss off the wrong girl.
AI optimism requires that you believe you and your loved ones are safe from AI.
And broadly, that group can be defined as either those with the requisite class privileges to think of themselves as safe—or at least, those who see themselves as belonging to that class.
The ones who see themselves riding in the self-driving car, and not as the pedestrian it might run over.
The rest of us?
I guess it’s hard to see the convenience as worth the price, when you know you could be among those paying for it.