Around 200 employees at Google DeepMind, Google’s AI research division, have raised alarms about the company’s military contracts. According to Time, they have asked Google to stop working with military organizations, arguing that this use of their technology violates the company’s own AI Principles.
A quick reminder: when Google acquired DeepMind in 2014, the lab was promised that its AI technology would not be used for military or surveillance purposes. DeepMind operated somewhat independently for years, but as competition in AI intensified, it became more tightly integrated with Google. In 2023, it merged with Google Brain, bringing it closer to Google’s main operations.
The controversial contracts: Project Nimbus & more
The issue centers on Google’s cloud services, including AI developed by DeepMind, which have been sold to governments and militaries. The most controversial contract is Project Nimbus, a deal with the Israeli government. The contract has drawn criticism because it supports the Israeli military, which is conducting operations in Gaza.
DeepMind employees worry that working with military organizations conflicts with Google’s AI Principles, which commit the company to developing AI ethically and responsibly: AI should not cause harm or be used for weapons or for surveillance that violates human rights.
The employees’ letter
On May 16, 2024, the employees sent a letter voicing their concerns. The letter states that, while it does not target any specific conflict or government, the signatories are troubled by the use of AI in military settings, and it argues that such involvement undermines DeepMind’s commitment to ethical AI.
The letter requests three main actions:
- Investigation: Examine how military organizations use DeepMind’s technology.
- Termination: Cut off military clients’ access to DeepMind technology.
- Governance: Establish a new body to prevent future military use of DeepMind’s technology.
Company response and frustrations
Google responded that it abides by its AI Principles and that Project Nimbus involves providing cloud services to Israeli government ministries, not directly to the military. Google says these services are not intended for sensitive military or classified workloads.
However, the letter’s signatories argue that Google’s response is too vague and does not address their concerns about the company’s technology supporting surveillance and violence. They are frustrated by the lack of action on the letter and feel that leadership has not meaningfully engaged with their concerns.
At a June town hall meeting, DeepMind’s Chief Operating Officer, Lila Ibrahim, assured employees that DeepMind would not develop AI for weapons or mass surveillance. She emphasized Google’s commitment to responsible AI, which she said was the reason she joined and stayed with the company.
This situation at Google DeepMind reflects a larger debate in the tech industry about using advanced technologies in military applications. As AI technology continues to advance, companies like Google face the challenge of balancing business interests with ethical responsibilities. The outcome of this dispute could set important guidelines for how AI is developed and used in the future.