
In 2018, 4,000 Google employees killed a Pentagon contract. In 2026, Google signed a bigger one. Now the AI researchers are unionising

Google's reversal on AI-powered military contracts has sparked a union drive among researchers seeking to protect their work from being co-opted for surveillance and warfare, eight years after a similar contract was abandoned in the face of employee backlash. The new contract, reportedly worth billions, has reignited concerns about the ethics of AI development and the role of corporate researchers in shaping military technology. AI ethics guidelines, once touted as a solution, now seem insufficient to address the issue. AI-assisted, human-reviewed.

Workers at Google DeepMind’s UK offices voted in April to join the Communication Workers Union and Unite the Union, with 98 percent of ballots cast in favour. They sent a letter to management this week requesting formal recognition of the unions as their official representatives. If recognition is granted, DeepMind would become the first frontier AI laboratory in the world with unionised workers. The vote was not primarily about pay, benefits, or working conditions. It was about the Pentagon.

Background

In 2018, four thousand Google employees signed a petition against Project Maven, a Pentagon contract that used the company’s AI to analyse drone surveillance footage. Google did not renew the contract. It published a set of AI principles pledging not to develop weapons or surveillance technology that violates international norms. It built an AI ethics team.

Eight years later, Google has signed a classified AI deal with the Pentagon for “any lawful governmental purpose,” removed its weapons pledge from its published principles, and fired the leaders of the ethics team it created in response to Project Maven. The researchers who built the AI now being offered to the military have responded by voting to unionise.

The union demands

The union demands are specific: an end to the use of Google AI by the Israeli military and the US military, the restoration of the company’s scrapped commitment not to build AI weapons or surveillance tools, the creation of an independent ethics oversight body, and the individual right for researchers to refuse to contribute to projects on moral grounds. These are governance demands, imposed from below because the governance structures that were supposed to exist from above — the AI principles, the ethics board, the internal review processes — were dismantled or overridden when they conflicted with revenue.

The Pentagon deal

Google signed the classified Pentagon deal for “any lawful purpose” while withdrawing from a $100 million drone swarm competition after an internal ethics review, a contradiction that researchers described as incoherent. The classified deal gives the Pentagon access to Google’s AI models on air-gapped networks where Google cannot monitor what queries are run, what outputs are generated, or what decisions are made. DeepMind research scientist Alex Turner criticised the agreement publicly, posting that Google “can’t veto usage” and is relying on “aspirational language with no legal restrictions.” The contract includes advisory guardrails discouraging mass surveillance and autonomous weapons without human oversight, but the government can request adjustments to safety settings, and on a classified network there is no independent verification that any guardrail is honoured.

Google’s deal is reportedly more permissive than OpenAI’s, under which OpenAI retains “full discretion” over its safety mechanisms. Only Anthropic refused to grant the Pentagon unrestricted access, insisting that its models not be used for autonomous weapons.
