On February 4, 2025, Google announced an update to its ethical principles concerning artificial intelligence (AI), marking a decisive turning point in its policy.
The company removed a key commitment made in 2018, under which it had pledged not to develop AI for weapons or for surveillance. This change comes as AI becomes increasingly prevalent across sectors, including defense and national security.
The group justifies this shift by citing a radical change in the technological landscape since 2018. At the time, following internal protests, Google had committed not to participate in military projects like Project Maven, a Pentagon initiative that used AI to analyze drone surveillance footage. However, with the acceleration of AI usage, particularly by authoritarian regimes, Google has concluded that democracies cannot afford to fall behind in this global technological race.
A revision under geopolitical pressure
Google's change in stance comes in an increasingly tense geopolitical context. The global competition for AI leadership is at the heart of concerns, especially in the face of massive investments by countries in military-related AI research.
James Manyika (VP of Research) and Demis Hassabis (head of Google DeepMind) emphasized that democracies must remain at the forefront of this technological revolution, guided by values like freedom and respect for human rights, while strengthening national security.
This update also follows U.S. President Donald Trump's revocation of an executive order issued by his predecessor Joe Biden that regulated the use of AI in national security contexts. The revocation reduced transparency requirements for companies developing potentially risky technologies, and the looser framework has allowed tech giants, including Google, to deepen their collaboration with governments on military and security projects.
A commercial and competitive strategy
Some experts and observers perceive this revision of Google's principles as an attempt to align its practices with those of its competitors, such as Microsoft and Amazon, which have already signed contracts with the Pentagon.
With the military AI market booming, Google appears to have chosen to lift the restrictions that limited its development in this field rather than risk losing market share. The group thus seeks to ensure that its products can be used in defense-related sectors while adhering to national security principles.
This evolution comes after Google faced internal protests in 2023 over an AI project intended for the Israeli military. Despite those protests, the company continued to deepen its ties with governments, eager not to fall behind in the global AI race, which has become a crucial strategic lever amid growing geopolitical rivalries.
Impact on employees and company culture
In 2018, Google's ethical policy was heavily influenced by internal protests from over 3,000 employees. They had opposed Project Maven, the Pentagon program using AI to analyze drone footage, fearing that the collaboration would open the door to lethal uses of AI. Since then, the group had promised not to develop military technologies, but that promise is now giving way under the pressure of geopolitics and commercial stakes.
Today, internal resistance seems more subdued. Most employees appear less inclined to challenge the leadership, especially after the post-Covid layoffs. The merging of commercial and political interests seems to have eroded employees' earlier activism, and Google's leadership, despite taking a major turn, remains in line with the broader trend of tech companies working hand in hand with governments, particularly as AI becomes a pillar of national security.
A decisive turning point for Google
Google's reversal, by eliminating the ban on military use of its technologies, marks the end of an era where the firm strived to maintain an image of social responsibility and alignment with strong ethical values. By removing this section from its principles, Google seems to be embarking on a more pragmatic path, influenced by global political and commercial dynamics.
This change raises questions about the future of ethics in technology, as the boundaries between technological progress and national security become increasingly blurred. Google, once perceived as a model of technological responsibility, now appears to conform to a logic where innovation and collaboration with governments are paramount, even if it means sacrificing some established ethical principles.