AI Layoffs 2025–2026: Why Companies That Fired Workers for AI Are Now Rehiring Them

Let me start with a number. Five hundred and seventy-eight days.

That is roughly how long it took for Klarna’s CEO Sebastian Siemiatkowski to go from publicly declaring, with the confidence of someone who had never once worried about a rent payment, that “AI can already do all of the jobs that we as humans do”… to sheepishly telling Bloomberg that his company had focused “too much on cost” and was now recruiting humans again because the whole thing had produced, in his own words, lower quality.

Five hundred and seventy-eight days. And in those five hundred and seventy-eight days, 700 people lost their jobs. Over a thousand more were quietly managed out through a hiring freeze and what the company politely called “natural attrition.” Families were disrupted. Careers were derailed. People moved cities, moved back in with parents, made decisions that changed the shape of their lives, all because a CEO wanted a headline about being an AI-first company and a board rewarded him for delivering one.

And now he wants them back. Not at their old salaries. Not with their old benefits. Klarna’s brilliant new plan is an “Uber-style” workforce model. Gig workers. Flexible hours. No stability. The empathy they amputated, now available for purchase at contractor rates.

I am supposed to be balanced about this. I am going to try. But let me be honest with you first: I find the audacity genuinely difficult to process.


The Headline Nobody Wanted to Write

For about eighteen months, Klarna was the darling example of the AI productivity story. Every tech newsletter, every investor memo, every LinkedIn thought leader who had recently discovered the word “disruption” cited Klarna as proof that the future was here. The AI chatbot handled 2.3 million conversations in its first month. It performed the work of 700 agents. Resolution times improved. Costs fell. The CEO was invited to conferences to explain how it was done.

What got far less attention, because it is harder to put in a chart, was what was actually happening to the customers. The chatbot gave generic responses. It looped people in circles. It could not handle nuance. It could not handle anger. It could not handle the specific human situation that does not fit the script: the customer whose payment failed because of a hospital visit, the person dealing with a fraudulent charge during a family crisis, the user who just needed someone to actually listen before explaining the policy.

Customers complained about robotic responses, inflexible scripts, and the Kafkaesque loop of repeating their issue to a human after the bot failed. Satisfaction ratings dropped. Complaints rose. The numbers that looked good on the investor slide were hiding a slow bleed of trust that is genuinely expensive to recover from. Research found that 55 percent of companies that rushed to replace human workers with AI now regret their decision, discovering that apparent cost savings often led to hidden expenses in the form of increased customer churn and reputation damage.

Klarna was not a visionary. Klarna was a case study in exactly what happens when a company optimizes for the metric that looks good in a press release and ignores the ones that are harder to measure.


But Klarna Is Not the Villain. Klarna Is the Example.

Here is where I want to say something that is going to make the easy narrative harder.

Klarna is not uniquely evil. Klarna is not even particularly unusual. What Klarna did, it did loudly, which is why we are discussing it. But the same calculation happened in quieter offices across every industry, in companies whose names you will never see in a headline because they were careful enough to avoid making the grand public declarations.

Amazon announced plans to cut 14,000 corporate roles, stating that AI enables leaner structures and faster innovation. Workday cut 8.5 percent of its workforce to reallocate resources toward AI investments. Microsoft cut about 15,000 jobs, framing AI as central to reshaping its productivity model. Salesforce reduced its customer support workforce by 4,000, with CEO Marc Benioff stating AI now handles up to half of the company’s work.

Nearly 55,000 job cuts were directly attributed to AI in 2025, according to Challenger, Gray & Christmas, out of a total of 1.17 million layoffs.

Those are not just numbers. Each one of those is a person who updated a resume at a kitchen table at midnight. Each one is someone who had a conversation with their partner about whether to pull the kids from after-school activities. Each one is a human being who was told, politely, with a severance package if they were lucky, that a spreadsheet had determined their labor was now redundant.

And the reason all of this is worth your fury is not because AI is bad. It is because the framing around it was dishonest in a specific, calculated, and consequential way.


The Lie That Was Never Technically a Lie

Nobody stood in front of a crowd and said “we are going to fire people, use AI that is not ready, discover it does not work, and then quietly hire contract workers at lower wages.” That sentence was never spoken. That would have been a scandal.

What they said instead was: we are becoming an AI-first company. We are investing in the future. We are building leaner, faster, smarter operations. The language of inevitability. The language that makes the person being laid off feel like they are not a victim of a decision but a casualty of history, which is a much more comfortable thing for the person making the decision to believe.

Just one in four AI projects delivers on the return on investment it promised, according to an IBM survey of 2,000 CEOs. An even smaller share, 16 percent, is scaled across the enterprise. Despite this dismal success rate, companies went all-in on AI, driven largely by the belief that everyone else was doing it. Nearly two thirds of CEOs say “the risk of falling behind drives them to invest in some technologies before they have a clear understanding of the value they bring to the organization.”

Read that again. Two thirds of the people making irreversible decisions about other people’s livelihoods admitted they did not understand the value of the technology they were deploying. They were not making a calculated bet on something they had rigorously tested. They were making a fear-based decision dressed up in the language of strategy. They were watching other CEOs get applause for AI announcements and they wanted the applause too.

The people who got fired were not casualties of technological progress. They were casualties of executive peer pressure.


The Generation That Is Not Even Getting a Chance

If you are outraged about the people who were fired, there is a second story running parallel to it that is somehow receiving less attention. The story of the people who were never hired at all.

Entry level tech hiring decreased 25 percent year over year in 2024. Employment for software developers aged 22 to 25 has declined nearly 20 percent from its peak in late 2022. A Stanford Digital Economy Study documented this. Stack Overflow’s own research confirmed it. These are not contested figures. They are the quiet, unremarkable, barely-covered collapse of the on-ramp into the technology industry for an entire generation.

Among 400 classmates at the Indian Institute of Information Technology, Design and Manufacturing, fewer than 25 percent have secured job offers, with their course ending in May 2026 and a sense of panic on campus. Entry level hiring at big tech companies has dropped by more than 50 percent over the last three years.

Think about what that actually means. These are people who spent three or four years, in some cases borrowed money, in some cases moved across countries, specifically to enter an industry that was explicitly recruiting them. The industry sent signals to an entire generation: learn to code, this is where the opportunity is. And then, while those people were in school learning to code, the industry quietly decided it did not need the entry level anymore.

A Harvard study found that after late 2022, AI-adopting companies hired five fewer junior workers per quarter than before. The change did not come from layoffs. It came from a complete freeze in new hiring. That is how an entire generation starts disappearing from the workforce. Not through pink slips, but through silence.

There is something particularly brutal about that silence. A layoff is a moment. You can point to it. You can be angry about it. You can tell the story. The hiring freeze just means the phone never rings and nobody owes you an explanation.


The Part That Will Make You Furious, Regardless of Which Side You Are On

Here is the thing about this situation that manages to make everyone angry, and I mean that genuinely. There are two completely legitimate readings of this story, and they lead to completely different conclusions, and both of them are infuriating in their own way.

Reading one: the companies lied. They used AI as cover for cost cutting they wanted to do anyway. The technology was not actually ready, they knew it was not actually ready, and they used it as a pretext to eliminate roles that hurt their margins. The executives got bonuses for announcing efficiency gains. The workers got severance packages and LinkedIn tips about “staying resilient.” The accountability gap between the person who made the decision and the person who lived with its consequences was total and complete. If this reading makes you angry, you are angry at corporate structure, at the asymmetry of risk, at the fact that a decision made in a boardroom lands on a kitchen table and the person at the kitchen table has no recourse.

Reading two: the companies were not lying, they were just wrong. They genuinely believed the technology was ready. They genuinely expected the numbers to hold. They made a bad bet, and when the bet failed they course-corrected, and that is actually how markets are supposed to work. The problem is not moral failure, it is the brutal reality that in the process of testing a hypothesis about the future, real human lives become variables in the experiment. If this reading makes you angry, you are angry at something harder to name: the way progress has always, without exception, produced casualties, and the uncomfortable question of whether there is actually a version of technological change that does not do this.

I do not think there is a clean resolution between those two readings. I think both of them are true simultaneously, which is what makes this genuinely complicated and genuinely worth arguing about rather than just picking a team.


What the Companies Got Wrong, and It Was Not What They Think

The prevailing post-mortem on cases like Klarna tends to focus on the technology. The AI was not good enough yet. The models could not handle nuance. The chatbot needed more training. The lesson, as most companies are reading it, is “implement AI more carefully next time.”

That is the wrong lesson. The technology was a variable. The real error was structural and it was about what gets measured and what does not.

When Klarna announced that its AI chatbot handled 2.3 million conversations, that was a number. When customer satisfaction quietly eroded, that was harder to quantify and therefore easier to ignore until it was not. The entire incentive structure around AI adoption in corporate environments rewards the announcement, rewards the headline metric, rewards the cost saving on the income statement… and systematically underweights the things that do not show up cleanly in a quarterly report.

Forrester Research documented a growing segment of employees it calls “coasters”: disengaged workers who do not think their employer deserves their energy. This group is expected to rise to 28 percent in 2026. When a quarter of your workforce is actively withholding discretionary effort, no amount of AI will compensate for the productivity loss.

The employees who watched their colleagues get fired for an AI that then did not work, who were told to be grateful they still had jobs while also being told to use AI for half their work… these are not people who are going to give you discretionary effort. These are people who are going to do exactly what is required and nothing more, which in knowledge work means the company is bleeding value in a way that will never appear on a slide.


The Question Nobody in a Position of Power Is Answering

Here is what I have not seen any CEO or board member asked and forced to actually answer.

If the AI was good enough to fire 700 people, and then it turned out the AI was not good enough and you had to hire people back, who is responsible for the 700 people’s lives in between? Not legally. We know the legal answer. Legally, nobody. Legally, the severance package closes the loop.

Who is responsible in any other sense? What accountability exists? What changed? The CEO who made the call is still the CEO. The board that approved it is still the board. The people who are being rehired are being rehired at lower wages into less stable arrangements. The institutional memory of the error has been processed into a lesson about “implementation speed” rather than a lesson about what you owe to people whose lives you are treating as rounding errors in a productivity calculation.

Forrester predicts that half of AI-attributed layoffs will be quietly rehired, but offshore or at significantly lower salaries. Companies are laying off workers for AI capabilities that do not exist yet, betting on future promises rather than proven technology. When that bet fails, companies face a choice: admit the mistake and rehire at previous salaries, or quietly fill gaps with lower cost labor. Most will choose the latter.

The experiment cost the company some customer satisfaction metrics and a few news cycles of bad press. It cost the workers something considerably harder to recover from. And the asymmetry of that is not a side effect of the AI revolution. It is a choice. It is a set of priorities baked into who we hold accountable and who we expect to absorb risk.

That is the conversation this industry needs to have. Not about model accuracy. Not about implementation timelines. About who bears the cost when the confident predictions turn out to be wrong, and whether the people making those predictions should face any of it themselves.

Because right now, they do not. And as long as they do not, the headline will keep coming: company fires hundreds for AI, AI underperforms, company quietly hires back. The names will change. The numbers will change. The kitchen tables will always be someone else’s.


Where Do You Land On This?

There are people reading this who are going to say I am being unfair. That companies have to make hard decisions. That nobody is guaranteed a job. That the market is not a charity and expecting it to behave like one is naive. That progress has always required adaptation and the people complaining need to upskill and move forward.

There are people reading this who are going to say I have not gone nearly far enough. That this is not an AI story, it is a labor story. That we have been here before, with the textile workers, with the factory closures, with every wave of automation that transferred wealth upward while distributing disruption downward. That nothing actually changes until the accountability structures change.

Both of those people are going to be in the comments. I genuinely want to hear from both of them.

Because the thing I am most certain of, after looking at all of this data, all of these decisions, all of these quietly rehired contractors at Uber-style wages… is that the polite, LinkedIn-friendly, AI is a tool and humans will adapt version of this conversation is doing real damage to real people by making it harder to have the honest version.

The honest version is messier. The honest version makes executives uncomfortable. The honest version asks questions that do not have clean answers.

That is exactly why it is the version worth having.

If you were one of the people let go in an AI restructuring, or if you are a fresh grad who cannot get an interview, I want to hear from you directly. And if you think I am being unfair to the companies here, tell me that too. The comments are the place where this conversation gets real.
