

In the space of a day, two AI stories broke into the mainstream. They were, in different ways and from different insider perspectives, about the same thing: becoming suddenly, and profoundly, worried about the future.
The first was a resignation letter from Mrinank Sharma, a safety researcher at Anthropic. Sharma, who joined the company in 2023 and briefly led a division within its Safeguards team, issued a warning:
I continuously find myself reckoning with our situation. The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment. We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences.
In his short time working in AI, he wrote, he had “repeatedly seen how hard it is to truly let our values govern our actions.” For now, the self-described “mystic and ecstatic dance DJ” — who happened to be one of the world’s leading AI risk researchers and had clear visibility into frontier AI models — was stepping away from a lucrative job at a leading firm to “explore a poetry degree and devote myself to the practice of courageous speech.”
Anthropic has positioned itself as the model-builder most concerned about safety, which, in the context of AI, encompasses more than keeping a platform or service secure or free of bad actors. To work on “safety” at an AI company can mean many things. It might mean preventing your models from giving bad advice, reproducing harmful bias, becoming too sycophantic, or being deployed in scams. It might mean making sure your coding tools aren’t used to make software viruses or that they can’t be used to engineer actual human viruses as a weapon. It might also mean thinking about more forward-looking questions of risk and alignment: Will these models exceed human capabilities in ways that will be hard to control? Will their actions remain legible to humans? Do they engage in deception, and will they develop or run with priorities of their own? Will those priorities conflict with ours, and will that be, well, a disaster?
Sharma’s departure and apparent disillusionment were read and spread with alarm, shared with mordant captions about how this was all probably nothing and alongside implications that he must have seen nonpublic, classified information that put him over the edge. Maybe so: Sharma wrote on X that he’ll have “more to say” when he’s “ready.”
People leaving regular companies: Time for a change! Excited for my next chapter!
People leaving AI companies: I have gazed into the endless night and there are shapes out there. We must be kind to one another. I am moving on to study philosophy.
— Jack Clark (@jackclarkSF) September 10, 2025
As an Anthropic co-founder half-joked above, anticipating Sharma’s post well in advance, people in roles like this have unusual relationships with their jobs. But Sharma isn’t alone in leaving, or losing, a position like this recently — as a sector within a sector, AI “safety” appears to be collapsing, losing influence as the broader industry goes through a fresh period of acceleration following a brief lull in which tech executives talked nervously of a bubble. On Tuesday, The Wall Street Journal reported that an OpenAI safety executive who had opposed a new “adult mode” and raised questions about how the company was handling young users had been fired by the company, which told her it was due to unrelated “sexual discrimination” against a male employee. Hers was the latest in a long string of safety-adjacent departures. On Wednesday, according to Casey Newton at Platformer, the company disbanded its “mission alignment” team, giving its leader a new job as the company’s “chief futurist.” The same day, OpenAI researcher Zoë Hitzig explained her separate resignation in the New York Times:
This week, OpenAI started testing ads on ChatGPT. I also resigned from the company after spending two years as a researcher helping to shape how A.I. models were built and priced, and guiding early safety policies before standards were set in stone.
I once believed I could help the people building A.I. get ahead of the problems it would create. This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I’d joined to help answer.
Hitzig and Sharma were working for different companies and doing meaningfully distinct jobs, which you can glean from the space between their warnings: from Hitzig, that OpenAI is making the same mistakes that Facebook did with sensitive, seductively monetizable user data and risks becoming a technology that “manipulates the people who use it at no cost” through ads while benefiting only the people who can pay for it; from Sharma, who worked at the company started by alignment-concerned OpenAI exiles, that humanity is in peril and that AI is contributing to a terrifying “poly-crisis.”
For an utterly different take on quitting your big AI job — and a reminder that for all the similarities in underlying AI models, it appears the people building them have quite diverse ideas about what exactly they’re working on — here’s someone from xAI, which has seen its own series of recent departures:
I left xAI a few weeks ago. That was short! IMO, all AI labs are building the exact same thing, and it's boring. I think there's room for more creativity. So, I'm starting something new.
— Vahid Kazemi (@VahidK) February 11, 2026
Then again, from one of the multiple xAI co-founders who left recently:
Last day at xAI.
xAI's mission is push humanity up the Kardashev tech tree. Grateful to have helped cofound at the start. And enormous thanks to @elonmusk for bringing us together on this incredible journey. So proud of what the xAI team has done and will continue to stay close…
— Jimmy Ba (@jimmybajimmyba) February 11, 2026
Anyway, taken together and in context, the departures of Hitzig and Sharma tell a similar story: Employees who were brought in to make AI products safer — or, in some moral or normative sense, better — are feeling either sidelined or inadequate to the task. Reading recent headlines about AI business strategies, it’s easy to see why. Consider this 2023 post from Sam Altman, written a few months after ChatGPT first blew up:
things we need for a good AGI future:
1) the technical ability to align a superintelligence
2) sufficient coordination among most of the leading AGI efforts
3) an effective global regulatory framework including democratic governance
— Sam Altman (@sama) March 30, 2023
Imagine you work in AI alignment or safety; are receptive to the possibility that AGI, or some sort of broadly powerful and disruptive version of artificial-intelligence technology, is imminent; and believe that a mandatory condition of its creation is control, care, and right-minded coordination at corporate, national, and international levels. In 2026, whether your alignment goal is not letting chatbots turn into social-media-like manipulation engines for profit or maintaining control of a technology you worry might get away from us in more fundamental ways, the situation looks pretty bleak. From a position within OpenAI, surrounded by ex-Meta employees working on monetization strategies and engineers charged with winning the AI race at all costs but also with churning out deepfake TikTok clones and chatbots for sex, you might worry that, actually, none of this is being taken seriously and that you now work at just another big tech company — but worse. If you work at Anthropic, which at least still talks about alignment and safety a lot, you might feel slightly conflicted about your CEO’s lengthy, worried manifestos that nonetheless conclude that rapid AI development is governed by the logic of an international arms race and therefore must proceed as quickly as possible. You both might feel as though you — and the rest of us — are accelerating uncontrollably up a curve that’s about to exceed its vertical axis.
Which brings us to the second AI story that broke X containment this week: a long post from an AI entrepreneur, titled Something Big Is Happening. Citing his authority as someone who “lives in this world,” the writer, Matt Shumer, asks readers to think back to February 2020 and says we’re in the beginning phases of something “much, much bigger than Covid.” It’s a come-to-Jesus talk about AI, delivered by someone who says that “the gap between what I’ve been saying” about the technology and “what is actually happening” has gotten “far too big” and that the people he cares about “deserve to hear what is coming, even if it sounds crazy.”
What’s coming, he says, is total economic disruption. Recent advances in AI coding — in the form of tools like Claude Code and Codex — have shocked him, despite years of building AI tools himself, and models now do his job better than he can. AI companies targeted software first, because that’s their business, and as a force multiplier; now, he says, “they’re moving on to everything else,” and the disorientation and shock experienced in his world are “coming to yours.” In the medium term, “nothing that can be done on a computer is safe,” he writes, not “to make you feel helpless” but to make it clear that “the single biggest advantage you can have right now is simply being early.”
The essay went about as viral as something can go on X these days, and it’s worth thinking a little bit about why. X is where the AI industry talks to itself, and from within that conversation — which is informed by the presence of real insiders as well as grifters, consumed by millions of spectators, and shaped by the strange and distorting dynamics of X itself — what Shumer is saying is, if not quite conventional wisdom, the kind of thing that gets discussed a lot. Sometimes conversations revolve around the new essay by Dario Amodei, who runs through the same story with a sense of executive trepidation, or focus on something like a 2024 geopolitical war-gaming exercise from a former alignment researcher at OpenAI. There are gauzy blog posts from Altman about the coming singularity and cryptic tweets from his employees talking about acceleration, velocity, takeoff, and feelings of alienation about how the rest of the world doesn’t yet see what they do. The models’ rapid increase in coding proficiency triggered an industrywide reevaluation, driven in part by rational prediction about utility but, if we’re being honest, significantly by people who can code, using these new models for the first time — and feeling shock and despair when confronted by tools that will clearly change how they do their jobs — and who then go on to tweet about it. In an essay about “Claude Code psychosis,” Jasmine Sun tried to capture some of this common recent experience:
I now get why software engineers were AGI-pilled first — using Claude Code has fundamentally rewired my understanding of what AI can do. I knew in theory about coding agents but wasn’t impressed until I built something. It’s the kind of thing you don’t get until you try …
She also complicated, ahead of time, the sort of straightforward case for AI coding generalization that Shumer would summarize a few weeks later:
The second-order effect of Claude Code was realizing how many of my problems are not software-shaped. Having these new tools did not make me more productive; on the contrary, Claudecrastination probably delayed this post by a week.
This is genuinely fun stuff to think about and experiment with, but the people sharing Shumer’s post mostly weren’t seeing it that way. Instead, it was written and passed along as a necessary, urgent, and awaited work of translation from one world — where, to put it mildly, people are pretty keyed up — to another. To that end, it effectively distilled the multiple crazy-making vibes of the AI community into something potent, portable, and ready for external consumption: the collective episodes of manic acceleration and excitement, which dissipate but also gradually accumulate; the open despair and constant invocations of inevitability by nearby workers; the mutual surveillance for signals and clues about big breakthroughs; and, of course, the legions of trailing hustlers and productivity gurus. This last category is represented at the end of 26-year-old Shumer’s post by an unsatisfying litany of advice — “Lean into what’s hardest to replace”; “Build the habit of adapting” — and the reassurance that, while this all might sound very disruptive, your “dreams just got a lot closer.”
The essay took the increasingly common experience of starting to feel sort of insane from using, thinking about, or just consuming content about AI and bottled it for mass distribution. It was explicitly positioned as a way to let people in on these fears, to shake them out of complacency, and to help them figure out what to do. In practice, and because we’re talking about social media, it seemed most potent and popular among people who were, mostly, already on the same page. This might explain why it has gotten a bit of a pass — as well as a somewhat more muted response from the kinds of core AI insiders whose positions he’s summarizing — on a few things: Shumer’s last encounter with AI virality, which involved tuning a model of his own and being accused of misrepresenting its abilities, followed by an admission that he “got ahead of himself”; the post’s LinkedIn-via-GPT structure, format, and illustration, which all resemble the outputs of popular AI models because, to some extent, they literally were; Shumer’s current start-up being an “AI writing assistant,” placing this essay in a long tradition of maybe-it’s-marketing manifestos by entrepreneurs who understand how you make a name for yourself in an industry that spends so much time online.
I don't know if you've noticed, but there's a wave of mass psychosis rolling through tech Twitter, very similar to what we experienced in spring 2023 and spring 2020 (interesting that the periodicity is exactly 3 years)
But the vibes are much darker now than they were last time
— François Chollet (@fchollet) February 12, 2026
None of this undermines Shumer’s central argument, which is that the technology you’ve been hearing about is, in fact, a very big deal; that progress has been fast; and that it’s time for everyone to get their heads out of the sand, et cetera. (And in a future like this, why wouldn’t the machines write most of the posts about themselves? It didn’t seem to matter this time around!) If you want to argue with it, you might question the current scaling paradigm or talk to a labor economist of a certain persuasion. If you want to complicate it a bit, you might point out the recurring historical tendency in which fears of automation gather into near-future scenarios of total, sudden, and practically unaddressable transformation, manifesting instead as decades of unpredictable, contestable change and, indeed, social and political upheaval. That, too, would be missing the point, which is that the millions of people passing this post around, and others like it, don’t need to be convinced to be worried. They’re already there, no recursive self-improvement, the-whole-world-is-code argument required. They’ve been waiting for an easy way to communicate that the biggest story in the economy makes them feel kind of helpless and left behind in advance and that the people insisting that it doesn’t matter make them feel even worse. They don’t need to be persuaded. They just want to talk about it.
In their own way, the markets are suddenly talking too. Private AI valuations have been ballooning for years, and AI-adjacent tech stocks have been propping up the indexes. In recent weeks, though, seemingly disparate clusters of stocks have gone through rapid, preemptive sell-offs in what analysts are calling the “AI scare trade”: enterprise software, legal services, insurance brokerages, commercial real estate, and even logistics. Each sector fell victim to a slightly different story, of course. Breakthroughs in AI software development presented plausible threats to, for example, SaaS companies, while legal- and financial-research tools from Anthropic read like a declaration of intent. Logistics companies, on the other hand, dumped without much real news at all:
Today it's logistics companies that are getting crushed by the AI scare trade.
But here's what's weird. The company that announced the new AI freight product is not some advanced AI lab. It's some Florida-based penny stock that sells karaoke machines. pic.twitter.com/e9hBmxX5cr
— Joe Weisenthal (@TheStalwart) February 12, 2026
Here, again, the specifics of the argument weren’t the point. Confronted with the question of whether or not it was time to freak out, and whether a rapidly improving general-purpose tool might disrupt a given part of the economy, a critical mass of investors answered, basically, Why not?
At the superheated center of the AI boom, safety and alignment researchers are observing their employers up close, concluding there’s nothing left for them to do and acting on their realization that the industry’s plan for the future does not seem to involve them. Meanwhile, observing from afar, millions of people long ago intuited much of the same thing: that the companies able to raise infinite money on the promise of automating diverse categories of labor are serious, appear to be making early progress, and are charging ahead as fast as they can.
In other words, the animating narrative of the AI industry — the inevitable singularity, rendered first in sci-fi, then in theory, then in mission statements, manifestos, and funding pitches — broke through right away, diffusing into the mainstream well ahead of the technologies these companies would end up building. It’s a compelling narrative that benefits from the impossibility of thinking clearly about intelligence, easy to talk yourself into and hard to reason your way out of. It also, as millions of people seem eager to discuss more openly, feels like a story of helplessness. The AI industry’s foundational story is finally going viral — just for being depressing as hell.