Workers at Google DeepMind’s UK offices voted in April to join the Communication Workers Union and Unite the Union, with 98 percent of ballots cast in favour. They sent a letter to management this week requesting formal recognition of the unions as their official representatives. If recognised, DeepMind would become the first frontier AI laboratory in the world to have unionised workers. The vote was not primarily about pay, benefits, or working conditions. It was about the Pentagon.
Background
In 2018, four thousand Google employees signed a petition against Project Maven, a Pentagon contract that used the company’s AI to analyse drone surveillance footage. Google did not renew the contract. It published a set of AI principles pledging not to develop weapons or surveillance technology that violates international norms. It built an AI ethics team.
Eight years later, Google has signed a classified AI deal with the Pentagon for “any lawful governmental purpose,” removed its weapons pledge from its published principles, and fired the leaders of the ethics team it created in response to Project Maven. The researchers who built the AI now being offered to the military have responded by voting to unionise.
The union demands
The union demands are specific: an end to the use of Google AI by the Israeli military and the US military, the restoration of the company’s scrapped commitment not to build AI weapons or surveillance tools, the creation of an independent ethics oversight body, and the individual right for researchers to refuse to contribute to projects on moral grounds. These are governance demands, imposed from below because the governance structures that were supposed to exist from above — the AI principles, the ethics board, the internal review processes — were dismantled or overridden when they conflicted with revenue.
The Pentagon deal
Google signed the classified Pentagon deal for “any lawful governmental purpose” even as it withdrew from a $100 million drone swarm competition after an internal ethics review, a juxtaposition that researchers described as incoherent. The classified deal gives the Pentagon access to Google’s AI models on air-gapped networks where Google cannot monitor what queries are run, what outputs are generated, or what decisions are made. DeepMind research scientist Alex Turner criticised the agreement publicly, posting that Google “can’t veto usage” and is relying on “aspirational language with no legal restrictions.” The contract includes advisory guardrails discouraging mass surveillance and autonomous weapons without human oversight, but the government can request adjustments to safety settings, and on a classified network, there is no independent verification that any guardrail is honoured.
Google’s deal is reportedly more permissive than OpenAI’s, which retains “full discretion” over its safety mechanisms. Only Anthropic refused to grant the Pentagon unrestricted access, insisting that its models not be used for autonomous weapons.