17h05 ▪ 5 min read
Deals between American AI giants and the Pentagon are no longer just an ethical debate. They are becoming a matter of public trust, digital sovereignty and even international competition.


In brief
- AI-Pentagon deals are weakening trust in American tech giants.
- Google faces internal pressure after more than 600 employees mobilized.
- Europe sees one more reason to accelerate its digital sovereignty.
The unease no longer comes only from activists
The trust crisis around American military AI is widening. It now affects users, tech employees and several foreign governments. The clearest signal comes from ChatGPT: in the United States, app uninstalls jumped by 295% in one day after the announcement of a deal between OpenAI and the U.S. Department of Defense, according to Sensor Tower data reported by TechCrunch. Downloads fell by 13% on the same day.
This figure does not mean ChatGPT is collapsing. It signals something else. Part of the public is starting to treat AI as a political choice. Installing an app, keeping it or deleting it becomes a way of voting with a phone.
The same reaction can be seen at Google. More than 600 employees asked Sundar Pichai to reject classified military uses of the company’s AI. Their concern is simple: once models are placed inside secret networks, they escape public scrutiny and even the internal control of the teams that build them.
Google reopens an old wound
Google has already lived through this scenario. In 2018, Project Maven triggered an internal revolt. The company was working with the Pentagon on drone image analysis. Employee pressure eventually pushed Google not to renew the contract. That episode still lingers inside the company.
The new case therefore wakes up an old ghost. According to available information, the deal would allow the Pentagon to use Google’s models for any “lawful government purpose.” The contract also states that AI should not be used for domestic mass surveillance or autonomous weapons without proper human supervision. But the sensitive point comes right after: Google would not have veto power over the government’s lawful operational decisions.
This is where doubt starts to grow. Safeguards exist on paper, but their real scope remains unclear. The core issue is this grey area: companies promise limits, while the state keeps control over the legal use of these systems. For a technology as powerful as AI, that nuance carries real weight.
Europe watches the matter with suspicion
The debate now goes beyond Google’s offices. In Europe, these deals strengthen an old concern: can sensitive data be trusted to American companies when those same companies also work with the U.S. national security apparatus?
This question is not theoretical. The CLOUD Act allows U.S. authorities to request certain data from American companies, even when that data is stored outside the United States. This is one of the reasons pushing European countries to seek local alternatives. France has already announced the transfer of its Health Data Hub from Microsoft Azure to Scaleway, a French cloud provider.
But Europe is moving forward with one leg shorter than the other. Scaleway, OVHCloud, Qwant and Ecosia show that another path exists. Yet these players remain tiny compared with AWS, Google Cloud, Azure or Google Search. Digital sovereignty cannot be declared in a press release. It requires servers, talent, budgets and, above all, daily use.
AI is becoming an infrastructure of power
The real issue is therefore not only military. It touches the very nature of AI. These models are no longer simple assistants used to write an email or summarize a document. They are becoming infrastructure layers. They can organize information, speed up decision-making, support planning operations and filter sensitive data.
In this context, every contract with the Pentagon changes public perception. OpenAI, Google, Anthropic and xAI are no longer just productivity brands. They are entering the strategic arena. And when a company enters that arena, marketing promises are no longer enough.
The central question becomes blunt: who controls AI when its uses become classified? Engineers? Companies? Governments? Or no one in any clear way? As long as this answer remains vague, uninstalls, internal letters and Europe’s search for alternatives will continue. Doubt does not need a large model to spread.

Fascinated by Bitcoin since 2017, Evariste has continuously researched the subject. While his initial interest was in trading, he now actively seeks to understand all advances centered on cryptocurrencies. As an editor, he strives to consistently deliver high-quality work that reflects the state of the sector as a whole.
DISCLAIMER
The views, thoughts, and opinions expressed in this article belong solely to the author, and should not be taken as investment advice. Do your own research before making any investment decisions.