# News

Google Quietly Removes AI Weapons Ban, Raising Alarms on Ethics and Accountability

Date: February 05, 2025

Google has abandoned its long-standing pledge not to develop artificial intelligence for use in weapons or surveillance, a sharp about-face in the technology giant’s ethical stance.

The quiet removal of this restriction from Google’s public AI ethics statement has prompted widespread alarm among industry insiders, ethicists, and human rights groups, who argue that the move opens the door for Google to develop AI weapons technologies for military use, with potentially deadly consequences.

Google’s updated AI principles no longer contain the "AI applications we will not pursue" section, in which the company pledged not to build AI for weapons or "technologies that cause or are likely to cause overall harm."

The move, first reported by Bloomberg and subsequently covered by CNN, comes amid the growing integration of AI into national security and military affairs worldwide. Other tech giants, including Amazon and Meta, have also recently rolled back diversity, inclusion, and ethics programs.

Google’s AI Ethics Shift Raises Concerns Over Military and Surveillance Partnerships

Google’s shift in policy comes as competition in AI technology accelerates, particularly among major powers like the United States and China. The company now frames its AI development within the context of national security and global competitiveness. In a blog post addressing the change, Google executives James Manyika and Demis Hassabis wrote:

“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights.”

Critics argue that this vague language gives Google enough flexibility to engage in defense contracts and surveillance work that the company once explicitly opposed. The shift also raises questions about Google’s willingness to collaborate with military agencies after distancing itself from controversial projects.

Back in 2018, Google employees revolted against the company’s involvement in Project Maven, a Pentagon initiative using AI to analyze drone footage. More than 4,000 workers signed a petition demanding a commitment to avoid “warfare technology,” and several employees resigned in protest. Amid the backlash, Google announced it would not renew the Pentagon contract and unveiled its AI principles, which explicitly ruled out AI for weapons and harmful surveillance.

Now, with those restrictions removed, experts fear the company is backtracking on its commitments. Margaret Mitchell, a former co-lead of Google’s ethical AI team and now an executive at AI startup Hugging Face, criticized the move:

"Having that removed is erasing the work that so many people in the ethical AI space and the activist space as well had don“e at Google, and more problematically, it means Google will probably now work on deploying technology directly that can kill people.”

At a time when AI’s role in warfare is expanding, with Pentagon officials acknowledging that AI models are accelerating military decision-making, Google’s decision to erase its previous commitments could have profound implications. The shift suggests that AI weapons projects may no longer be off-limits at Google, raising ethical concerns about accountability and transparency.

By Arpit Dubey
