Meta Is Spying on Its Own Employees to Train AI

Every time a Meta employee opened a dropdown menu last week, something was watching. Every keyboard shortcut. Every mouse click. Every tab switch. All of it captured, packaged, and sent into Meta’s AI training pipeline — whether the employee wanted that or not.

This is not a rumor. This is the Model Capability Initiative, or MCI — a surveillance program that Meta disclosed to its U.S. staff through an internal memo posted in a channel belonging to the Meta Superintelligence Labs team. Reuters first broke the story on April 21, 2026, and the details that have emerged since are, to put it plainly, a lot to process.

So let’s go through it. What MCI actually does, which websites are being tracked, what employees think about it, and why the timing of this announcement feels like it was designed to make people uncomfortable.

What Meta Is Actually Collecting

The program installs software on Meta employees’ U.S. work computers. That software records mouse movements, clicks, keystrokes, and occasional screenshots. All of this feeds directly into Meta’s AI training pipeline.
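What does a single captured event even look like? Meta has not published the format, but telemetry like this is typically serialized as a stream of timestamped event records. Here is a minimal sketch in Python; the schema and every field name are my own invention, not Meta's actual format:

```python
# Hypothetical sketch of one captured interaction event.
# The schema is illustrative; Meta has not published MCI's real format.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class InteractionEvent:
    timestamp: float          # seconds since the epoch
    event_type: str           # "click", "keypress", "scroll", "screenshot"
    app: str                  # foreground application
    url_domain: str | None    # domain of the active page, if any
    x: int | None = None      # cursor position, for mouse events
    y: int | None = None
    key: str | None = None    # key identifier, for keyboard events

event = InteractionEvent(
    timestamp=time.time(),
    event_type="click",
    app="chrome",
    url_domain="github.com",
    x=412,
    y=208,
)
print(json.dumps(asdict(event)))  # one record headed for the training pipeline
```

Multiply a record like that by every click and keystroke, eight hours a day, across tens of thousands of U.S. employees, and the scale of the dataset becomes clear.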

The stated reason is logical enough on the surface. Meta is building AI agents — the kind that can navigate a computer on their own, click through menus, use apps, fill forms. To teach a model how to do that, you need data. Lots of it. Real data showing how actual humans use computers in actual work situations. “Our models need real examples of how people actually use them — things like mouse movements, clicking buttons, and navigating dropdown menus,” a Meta spokesperson said in a statement.

And look, that part is not wrong. This is a real problem in AI agent development. Alexandr Wang, former CEO of Scale AI and now head of Meta’s Superintelligence Labs, said it plainly in a 2024 interview: “There’s no pool of really valuable agent data that’s just sitting around anywhere.” That was the gap. MCI is apparently the answer Meta landed on.

But here is where the details get interesting.

CNBC got access to internal messages and reported that the list of sites being monitored includes Google, LinkedIn, Wikipedia, GitHub, Slack, and Atlassian. Meta’s own apps too — Threads, Manus, and others. The list was still changing at the time of reporting. And the original version apparently included OpenAI’s ChatGPT and Anthropic’s Claude, though those were later removed. Why those two got pulled off the list was not explained publicly — you can speculate about competitive sensitivity or legal risk, but nobody said.
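Mechanically, a shifting site list like that usually works as a plain allowlist gate in the capture client: an event only gets logged if the active page's domain is on the list. A rough sketch, using the domains from CNBC's reporting — the code itself is my illustration, not Meta's implementation:

```python
# Hypothetical allowlist gate. The domains mirror CNBC's reported
# list; the logic is illustrative, not Meta's actual code.
MONITORED_DOMAINS = {
    "google.com", "linkedin.com", "wikipedia.org",
    "github.com", "slack.com", "atlassian.com",
}

def should_capture(url_domain: str) -> bool:
    """True if the domain (or any parent domain) is on the monitoring list."""
    parts = url_domain.lower().split(".")
    # Check "gist.github.com", then "github.com", then "com".
    return any(".".join(parts[i:]) in MONITORED_DOMAINS for i in range(len(parts)))

print(should_capture("gist.github.com"))  # True
print(should_capture("chat.openai.com"))  # False: reportedly removed from the list
```

The operational detail worth noticing is that adding or dropping a site is a one-line change to that set, which is consistent with the list still changing at the time of reporting.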

Think about what that means for a minute. Meta employees doing their daily work — searching LinkedIn, reading a GitHub thread, checking a Wikipedia article — are generating training data for AI systems. Every click is a data point. Every time someone scrolls down a Slack channel and then scrolls back up because they missed something, that scroll pattern is being logged. The hesitation before clicking the wrong button and then correcting it — that’s in there too. The model is learning not just what the right move is, but how humans fumble their way to it.

That, actually, is the whole point. AI agents trained on clean, scripted data tend to be brittle in real-world use. They don’t know how to recover from mistakes the way humans do, because they’ve never seen humans make mistakes and recover. MCI is trying to capture the messy reality of computer use, not a sanitized version of it. That’s a legitimate engineering insight. The problem is everything around how they implemented it.
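To see why the fumbling matters, it helps to know how computer-use agents are typically trained: as supervised (state, action) pairs extracted from human trajectories, an approach usually called behavior cloning. A scripted demo contains only the optimal path; real usage contains the detour and the correction. A toy illustration, with invented event data:

```python
# Toy behavior-cloning setup with invented data. The point: the
# misclick-and-recover sequence survives into the training pairs.
trajectory = [
    {"screen": "settings_menu", "action": "click", "target": "Privacy"},   # misclick
    {"screen": "privacy_page",  "action": "click", "target": "Back"},      # notice the mistake
    {"screen": "settings_menu", "action": "click", "target": "Security"},  # recover
    {"screen": "security_page", "action": "click", "target": "Enable 2FA"},
]

# Behavior cloning: learn to predict each action from the screen state
# the human was looking at when they took it.
training_pairs = [(step["screen"], (step["action"], step["target"]))
                  for step in trajectory]

for state, action in training_pairs:
    print(f"state={state!r} -> action={action}")

# A scripted demo of "enable 2FA" would contain only the last two steps.
# The wrong turn and the backtrack are the recovery behavior that
# MCI-style data captures and clean datasets never do.
```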

No Opt-Out. That’s Not a Typo.

This is the part that really set off internal alarms. Meta’s CTO Andrew Bosworth told employees directly in the memo: “There is no option to opt out of this on your work provided laptop.”

That’s it. No choice. You use the laptop, you’re in the dataset.

Multiple employees, according to CNBC's reporting, called the program "dystopian" in internal messages. Others raised more specific worries: that MCI could accidentally capture passwords typed into work apps, confidential product development details, and personal information about employees' immigration status, health, or family members. Employees protested the program on internal forums, and The Register picked up on those internal reactions within days.

Meta says safeguards are in place to protect sensitive content, and that the data will not be used for performance reviews or any other purpose beyond AI training. Whether employees believe that is a different question. “Safeguards are in place” is the kind of sentence that sounds reassuring until you start asking: what safeguards, exactly? Who reviews them? What happens when something sensitive gets captured anyway? These questions were not answered in the memo.
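For what it's worth, here is what a minimal safeguard of the kind Meta is gesturing at might look like: a redaction pass that masks keystrokes aimed at fields that look sensitive. Everything in this sketch is assumption; Meta has said nothing about how its safeguards actually work:

```python
# Purely hypothetical redaction pass. Meta has not described its
# safeguards; this sketches one obvious approach and its obvious gap.
SENSITIVE_HINTS = ("password", "passwd", "ssn", "passport", "token")

def redact(events: list[dict]) -> list[dict]:
    """Mask key values typed into fields whose names look sensitive."""
    cleaned = []
    for event in events:
        field = event.get("target_field", "").lower()
        if event["event_type"] == "keypress" and any(h in field for h in SENSITIVE_HINTS):
            event = {**event, "key": "[REDACTED]"}
        cleaned.append(event)
    return cleaned

sample = [
    {"event_type": "keypress", "target_field": "username", "key": "a"},
    {"event_type": "keypress", "target_field": "password", "key": "x"},
]
print(redact(sample))
# The username keystroke passes through untouched; the password one is masked.
```

The gap is visible right in the code: a secret typed into a field the tool does not recognize as sensitive sails straight through. That is exactly the capture scenario employees raised.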

Cornell researchers raised consent and compensation questions almost immediately after the story broke — specifically around whether it is ethically acceptable to use worker behavior as AI training data without meaningful consent or additional pay. These are not fringe concerns. These go to something real: who owns the data generated by your work? The employer? The employee? Both?

The legal answer in the U.S., broadly, is “the employer.” When you use a company-issued laptop on company time, the company has wide latitude over what it monitors. Most employment agreements include some version of language that says company devices are subject to monitoring. But there’s a difference between “we can monitor your device for security” and “we are actively harvesting your behavioral data to train commercial AI systems.” Courts have not cleanly drawn that line yet. MCI might eventually force them to.

The compensation question is also not going away. If Meta’s employees are generating training data — the raw material for products Meta will sell or use commercially — is that labor? Should it be compensated separately? These questions sound abstract until you frame it differently: if Meta had paid contractors at Scale AI to simulate the same computer-use behaviors, it would have cost money. Instead, it’s getting the same data from salaried employees who did not agree to that arrangement when they took the job.

The Timing Is Hard to Ignore

Here is the part that made this story blow up the way it did.

MCI was disclosed to Meta staff on April 21, 2026. Two days later, Meta announced it would be cutting roughly 8,000 employees — about 10% of its global workforce — beginning May 20. The company also canceled plans to fill 6,000 open positions. That's 14,000 positions in total: 8,000 jobs eliminated and 6,000 that will now never exist, all in one announcement.

So the sequence, as far as employees experienced it: we are monitoring your every computer action to train our AI. Also, we are cutting 14,000 jobs.

This is the third wave of 2026 layoffs at Meta. January cut more than 1,000 Reality Labs positions. March cut another 700 across sales, recruiting, global operations, and other divisions. The May round is described as structural — not performance-based, according to the internal HR memo from Janelle Gale. Teams are being reorganized into AI-focused "pods." New job categories are being created with titles like "AI builder," "AI pod lead," and "AI org lead."

The Next Web described this as a “stark juxtaposition” and I think that’s right: Meta is asking remaining employees to generate the training data that will teach AI systems to replicate computer-use patterns, while simultaneously laying off the employees whose patterns the AI will eventually replace.

Bosworth himself, in the same memo that announced the surveillance program, sketched out a future at Meta where AI agents “primarily do the work” while human employees “direct, review and help them improve.” That’s not ambiguous. The goal is explicit: fewer humans doing the work, more AI. The surveillance program is the bridge between where Meta’s agents are now and where Zuckerberg wants them.

There’s also the stock option situation happening at the same time, which The Register covered. Meta filed SEC disclosures revealing a new stock option program for senior leadership tied to reaching a $9 trillion market cap by 2031. That’s roughly six times Meta’s current valuation. Options packages worth up to $921 million each for certain executives. So: employees are being surveilled, thousands are being let go, and leadership is being offered nine-figure incentive packages. That context is not irrelevant to how rank-and-file workers are receiving all of this.

What Exactly Is Meta Racing Toward

To understand why Meta is doing this, you have to understand how far behind they feel right now. OpenAI, Anthropic, and Google are all moving fast on AI agents — software that can browse the web, use apps, and complete multi-step tasks without a human doing each individual step. Meta wants in on this. Badly.

Mark Zuckerberg committed up to $135 billion in capital expenditure for 2026 alone. Meta acquired a 49% stake in Scale AI last year for more than $14 billion. Alexandr Wang, who built Scale’s business partly on harvesting workflow data from contractors, now leads the superintelligence team. The intellectual lineage of MCI goes directly back to Scale AI’s core business model: you get the data by watching people work.

OpenAI, for what it’s worth, has been doing adjacent things. In January 2026, they were reported to be asking third-party contractors — through training data firm Handshake AI — to upload real work products from previous jobs. Actual PowerPoints, actual spreadsheets, with instructions to scrub confidential stuff first. That raised its own concerns. But there’s a difference between asking contractors to voluntarily submit old work and silently installing monitoring software on current employees’ laptops with no opt-out.

Meta's headcount has also been whipsawing. The company ended 2025 with 78,865 employees, up from the post-2023 lows when Zuckerberg's "year of efficiency" cut more than 20,000 jobs. Now it is cutting again — deeper, in some ways, than it rehired. As noted above, the May round is already the third wave of cuts this year.

What Critics Are Saying

The reaction outside Meta has been mostly alarm, with a small number of defenders.

Privacy advocates are pointing at GDPR immediately. Meta’s European employees are apparently not included in MCI — which is itself telling. The EU’s General Data Protection Regulation requires explicit consent for collecting personal data and sets strict limits on what employers can monitor. Meta apparently looked at those requirements and decided the program was not worth the legal fight in Europe. But in the U.S., where employee privacy protections are much weaker, it went ahead.

That asymmetry bothers people. If the same data collection is too risky to do in France or Germany, why is it acceptable in California or New York? The answer is “because U.S. law allows it” — but that’s a legal answer, not an ethical one.

If MCI ever gets extended to European employees, or if Meta’s data handling for U.S. workers touches EU-protected data in any way, the legal exposure gets complicated fast. European regulators have shown they are willing to move against Meta specifically. The GDPR fine history is long.

Researchers at Cornell were among the first to publicly frame the consent and compensation questions. Labor scholars are picking this up too. The core argument: when you take a job, you implicitly agree that your work output belongs to the company. But does that extend to the behavioral data generated by how you produce that work? Your typing cadence, your navigation patterns, your error-and-recovery sequences — is that “work,” or is it something else? The law has not caught up to this yet.

From the tech side, some people genuinely defend the approach. If you accept that AI agents need computer-use training data, and that authentic human behavior is the best source, then watching employees work is more efficient than paying contractors to simulate work. The data is real. The workflows are real. The awkward, roundabout ways that actual knowledge workers navigate their actual tools — that’s what the model needs to learn.

But the defense weakens when you add the no-opt-out part. And it weakens further when the data collection is happening at the same time as mass layoffs. Asking workers to trust your safeguards is harder when those same workers just found out 14,000 colleagues are being let go. That’s a trust context problem. Meta created it by announcing both things in the same week.

The Bigger Pattern Here

MCI is not happening in isolation. Across the tech industry right now, the same dynamic is playing out in different forms.

AI companies need data. The easy data — the internet, public datasets, licensed books — is mostly used up, legally contested, or not specific enough for the kind of agent tasks these companies want to build. So the hunt for data has moved inward. Into organizations. Into actual work. Into the laptops of actual employees doing actual jobs.

This is, in a way, the logical endpoint of something that started a long time ago. Every time you’ve used Google Docs, Microsoft Word Online, Salesforce, or Slack, those companies have been learning from your behavior to improve their products. The difference with MCI is that Meta is being explicit about it, it has no user-facing consent mechanism, and the explicit goal is to build AI agents that can replace the workers being monitored.

The Platformer newsletter put it well: having your every click and scroll monitored has been part of the deal for users of Facebook and Instagram for years. Now it’s part of the deal for employees too. The company that built its business on behavioral surveillance of users is applying the same model internally.

That last part is the uncomfortable truth of this whole story. If the goal were just “make our tools better for employees,” this would still be controversial but more defensible. The stated goal is to build agents that “primarily do the work.” The data being collected is training data for systems whose express purpose is to reduce human labor. The employees generating that data know this. That’s why internal messages described MCI as dystopian. It’s not paranoia. It’s reading the memo.

And the industry is moving in this direction broadly. OpenAI paid Handshake AI contractors to upload old work documents. Microsoft is embedding Copilot into Office and learning from every Word document, Excel formula, and Outlook draft. Google has similar programs. The surveillance of white-collar work is becoming standard infrastructure for AI development. Meta is just being more explicit about it than the others — which, depending on how you look at it, is either honest or careless.

The Hacker News thread on this story had a comment that stuck: “Once you cross the line that IP going to AI providers is acceptable, your thought workers are assets, not people.” That’s probably too bleak as a general statement. But for this specific situation? It’s hard to argue with.

What Happens Next

Meta will report its Q1 2026 earnings on Thursday, April 30. The MCI story and the layoff announcement will both be live in the room when analysts ask questions. It will be interesting to see how Zuckerberg frames the surveillance program in that context — whether it gets presented as a technical necessity, a competitive requirement, or something that barely gets mentioned at all.

The GDPR exposure is the clearest near-term risk. If EU regulators decide to look into whether MCI data collection involves EU-connected data in any way, or if they use this as a signal to tighten cross-border data rules for AI training, Meta could be looking at fines and forced changes to the program.

The consent question is slower-moving but maybe more important long-term. If Cornell or similar researchers push these arguments into legal challenges — or if a current or former employee files a complaint — MCI could become a test case for how much behavioral data employers can extract from workers without additional compensation or meaningful opt-out rights.

And then there is the morale question, which is harder to quantify but real. The employees who survived three rounds of layoffs in 2026 are now also learning their every keystroke is training data for the AI systems that may replace them. That is a difficult thing to sit with every morning when you open your laptop. How long before the most skilled people — the ones with options — start quietly looking elsewhere?

The Data Hunger Is Not Going Away

The hard truth is that MCI exists because AI training data is genuinely scarce for the specific task Meta is trying to solve. Computer-use agents need computer-use data. And nobody has figured out a clean, ethical, scalable way to get it without surveillance.

That’s not a defense of MCI as implemented. The no-opt-out rule, the lack of additional compensation, the simultaneous layoff announcement, the vague safeguard promises — all of that is legitimately worth criticizing. But the underlying problem MCI is trying to solve is real, and every major AI company is working on some version of this.

The question of who owns the behavioral data generated at work — how you type, how you navigate, how you switch between tasks — is going to become a major legal and ethical fight over the next few years. Meta just made it current.

For now, every Meta employee in the U.S. who opens a work laptop is generating training data. Some of them probably opened a dropdown menu this morning without thinking about it. The model noticed.

Where This Leaves Everyone

The MCI program is not going to disappear quietly. The GDPR questions alone will keep it alive in tech policy circles for months. The layoff timing made it a news story rather than a policy footnote. And the fact that employees openly called it dystopian in internal messages — and that those messages were leaked within days — suggests that trust inside Meta right now is not great.

Zuckerberg said on the most recent earnings call that “projects that used to require big teams” are now being done by “a single very talented person.” That framing is going to define how this whole story lands historically. Was MCI a smart technical play to get agent training data? Or was it the moment a company decided its remaining employees were most valuable as a dataset?

Both things can be true. That’s what makes this uncomfortable to write about, and probably uncomfortable to read. The tech is real. The need for data is real. The surveillance is real. And the employees being watched are also the ones being let go.

That combination is something new. At least, it’s new enough that nobody has a good name for it yet.
