## Overview
Meta has implemented a mandatory employee monitoring program called the Model Capability Initiative (MCI) that captures detailed computer usage data—including keystrokes, mouse movements, clicks, and periodic screenshots—from U.S.-based staff. The data is used to train internal AI systems. The software runs silently across designated work applications such as Google services, GitHub, Slack, and Atlassian products. Participation is required, with no option to opt out.
The initiative was disclosed in an internal memo from Meta Superintelligence Labs and reported by Reuters on April 21. Employees see a pop-up prompting them to enable the tool, but refusal is not permitted. According to internal communications, the goal is to generate high-quality interaction data from skilled knowledge workers to improve AI agent performance.
## What the program does
The MCI tool collects:

- Mouse movements
- Clicks
- Keystrokes
- Periodic screenshots

It operates only within approved work applications and websites, including:

- Google
- GitHub
- Slack
- Atlassian products
According to an internal memo cited by CNBC, the software accesses only on-screen content and does not open files or attachments. However, it can capture any visible text, including passwords, product development details, and personal information related to health or immigration status if entered during work sessions.
Employees concerned about privacy are advised to avoid conducting personal activities on company devices. Meta has not provided technical documentation on data retention periods, encryption standards, or access controls for the collected data.
## Leadership justification and context
At a company-wide town hall on Thursday, CEO Mark Zuckerberg defended the program by arguing that Meta’s employees produce more valuable training data than contract workers typically used by competing AI firms. According to The Information, he stated: "One basic insight and hypothesis that we have is that a lot of data generation across the field is done by these contract companies. In general, the average intelligence of the people who are at this company is significantly higher than the average set of people that you can get to do tasks if you're working through these contractors."
Zuckerberg also confirmed that Meta plans to lay off approximately 8,000 employees—about 10% of its global workforce—beginning May 20. He attributed the cuts to rising AI infrastructure costs, stating that the company has two major cost centers: compute infrastructure and people-related expenses. Chief People Officer Janelle Gale did not rule out additional reductions.
CTO Andrew Bosworth separately announced an expansion of internal data collection under a rebranded initiative formerly known as "AI for Work," now called the Agent Transformation Accelerator. In a memo, Bosworth outlined a long-term vision in which AI agents "primarily do the work" while human employees shift to roles involving direction, review, and improvement of AI outputs.
## Internal backlash and concerns
The rollout has sparked significant internal criticism. Employees have raised concerns on internal message boards about:

- The breadth of monitoring
- Risks of exposing sensitive corporate information
- Potential capture of personal data
- Lack of transparency around data usage
Business Insider reported employee discomfort and questions about whether opting out was possible; Meta has confirmed it is not. The absence of an opt-out mechanism, combined with the timing of the AI-driven layoffs, has intensified unease: workers are effectively being asked to generate training data for systems that may reduce or eliminate their roles.
While Meta asserts that the tool does not access file contents or attachments, its ability to capture real-time screen activity means any unredacted information visible during a work session is potentially logged. This includes code snippets, private messages, medical leave forms, visa documentation, and other confidential materials.
The program reflects a broader industry trend of using employee behavior as implicit training input for AI workflows. However, Meta’s approach stands out due to the lack of consent mechanisms and the explicit linkage between data collection and workforce reduction plans.
