Earlier this year, nearly 200 workers inside Google DeepMind, the company’s AI division, signed a letter calling on the tech giant to drop its contracts with military organizations, according to a copy of the document reviewed by TIME and five people with knowledge of the matter. The letter circulated amid growing concern inside the AI lab that its technology is being sold to militaries engaged in warfare, in what the workers say is a violation of Google’s own AI rules.
The letter is a sign of a growing dispute within Google between at least some workers in its AI division—which has pledged to never work on military technology—and its Cloud business, which has contracts to sell Google services, including AI developed inside DeepMind, to several governments and militaries including those of Israel and the United States. The signatures represent some 5% of DeepMind’s overall headcount—a small portion to be sure, but a significant level of worker unease for an industry where top machine learning talent is in high demand.
The DeepMind letter, dated May 16 of this year, begins by stating that workers are “concerned by recent reports of Google’s contracts with military organizations.” It does not refer to any specific militaries by name—saying “we emphasize that this letter is not about the geopolitics of any particular conflict.” But it links out to an April report in TIME which revealed that Google has a direct contract to supply cloud computing and AI services to the Israeli Ministry of Defense, under a wider contract with Israel called Project Nimbus. The letter also links to other stories alleging that the Israeli military uses AI to carry out mass surveillance and target selection for its bombing campaign in Gaza, and that Israeli weapons firms are required by the government to buy cloud services from Google and Amazon.
“Any involvement with military and weapon manufacturing impacts our position as leaders in ethical and responsible AI, and goes against our mission statement and stated AI Principles,” the letter that circulated inside Google DeepMind says. (Those principles state the company will not pursue applications of AI that are likely to cause “overall harm,” contribute to weapons or other technologies whose “principal purpose or implementation” is to cause injury, or build technologies “whose purpose contravenes widely accepted principles of international law and human rights.”) The letter says its signatories are concerned with “ensuring that Google’s AI Principles are upheld,” and adds: “We believe [DeepMind’s] leadership shares our concerns.”
A Google spokesperson told TIME: “When developing AI technologies and making them available to customers, we comply with our AI Principles, which outline our commitment to developing technology responsibly. We have been very clear that the Nimbus contract is for workloads running on our commercial cloud by Israeli government ministries, who agree to comply with our Terms of Service and Acceptable Use Policy. This work is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.”
The letter calls on DeepMind’s leaders to investigate allegations that militaries and weapons manufacturers are Google Cloud users; terminate access to DeepMind technology for military users; and set up a new governance body responsible for preventing DeepMind technology from being used by military clients in the future. Three months on from the letter’s circulation, Google has done none of those things, according to four people with knowledge of the matter. “We have received no meaningful response from leadership,” one said, “and we are growing increasingly frustrated.”
When DeepMind was acquired by Google in 2014, the lab’s leaders extracted a major promise from the search giant: that their AI technology would never be used for military or surveillance purposes. For many years the London-based lab operated with a high degree of independence from Google’s California headquarters. But as the AI race heated up, DeepMind was drawn more tightly into Google proper. A bid by the lab’s leaders in 2021 to secure more autonomy failed, and in 2023 it merged with Google’s other AI team—Google Brain—bringing it closer to the heart of the tech giant. An independent ethics board that DeepMind leaders hoped would govern the uses of the AI lab’s technology ultimately met only once, and was soon replaced by an umbrella Google ethics policy: the AI Principles. While those principles promise that Google will not develop AI that is likely to cause “overall harm,” they explicitly allow the company to develop technologies that may cause harm if it concludes “that the benefits substantially outweigh the risks.” And they do not rule out selling Google’s AI to military clients.
As a result, DeepMind technology has been bundled into Google’s Cloud software and sold to militaries and governments, including Israel and its Ministry of Defense. “While DeepMind may have been unhappy to work on military AI or defense contracts in the past, I do think this isn’t really our decision any more,” one DeepMind employee told TIME in April, asking not to be named because they were not authorized to speak publicly. Several Google workers told TIME in April that for privacy reasons, the company has limited insights into government customers’ use of its infrastructure, meaning that it may be difficult, if not impossible, for Google to check if its acceptable use policy—which forbids users from using its products to engage in “violence that can cause death, serious harm, or injury”—is being broken.
Google says that Project Nimbus, its contract with Israel, is not “directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.” But that response “does not deny the allegations that its technology enables any form of violence or enables surveillance violating internationally accepted norms,” according to the letter that circulated within DeepMind in May. Google’s statement on Project Nimbus “is so specifically unspecific that we are all none the wiser on what it actually means,” one of the letter’s signatories told TIME.
At a DeepMind town hall event in June, executives were asked to respond to the letter, according to three people with knowledge of the matter. DeepMind’s chief operating officer Lila Ibrahim answered the question. She told employees that DeepMind would not design or deploy any AI applications for weaponry or mass surveillance, and that Google Cloud customers were legally bound by the company’s terms of service and acceptable use policy, according to a set of notes taken during the meeting that were reviewed by TIME. Ibrahim added that she was proud of Google’s track record of advancing safe and responsible AI, and that it was the reason she chose to join, and stay at, the company.
Write to Billy Perrigo at billy.perrigo@time.com