Google and Microsoft Back Anthropic After Pentagon Blacklist — A Major AI Power Struggle Begins
The U.S. Department of Defense officially labeled AI company Anthropic as a “supply chain risk.” Within hours, federal agencies were told to stop using the company’s technology for defense purposes. Government partnerships that once looked strong were suddenly uncertain.
But what happened next surprised many people across the tech industry.
Instead of stepping away from Anthropic, some of the world’s biggest technology companies publicly said they would continue working with it. Google, Microsoft, and Amazon made it clear that Anthropic’s AI models would remain available to their customers — just not for defense projects.
This moment shows how complicated the AI world has become. It is not only about technology anymore. It is about power, security, politics, and the future of innovation.
A Decision That Shocked the AI Industry
Anthropic is one of the most respected artificial intelligence companies in the United States. Its AI system, Claude, competes directly with other advanced models like ChatGPT and Google’s Gemini.
The company became widely known for building AI systems focused on safety and responsible development. Many businesses and developers use Claude through cloud platforms to build AI-powered tools and services.
But recently the Pentagon made a major move. The Department of Defense declared Anthropic a supply chain risk after the company refused to agree to certain terms requested by the government.
Soon after that decision, President Donald Trump instructed federal agencies to stop using Anthropic’s technology. Defense Secretary Pete Hegseth confirmed that the Defense Department would gradually end its work with the company over the next six months.
The announcement created immediate concern in the tech world. Government contracts are extremely important for AI companies. Losing defense partnerships could affect reputation, future deals, and investor confidence.
For a moment, many people wondered whether Anthropic’s position in the AI industry was about to collapse.
Google Sends a Clear Message
Then Google stepped in.
A spokesperson for Google confirmed that the company would continue offering Anthropic’s artificial intelligence models through Google Cloud for non-defense-related projects. The company explained that the Pentagon’s designation does not prevent it from working with Anthropic on commercial services.
This statement was very important because Google is not just a casual partner of Anthropic. The relationship between the two companies runs deep.
Anthropic’s Claude AI models are available on Google Cloud through the Vertex AI platform. Thousands of businesses use this system to build applications powered by artificial intelligence.
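For context, here is a minimal sketch of what that integration looks like from a developer’s side, using Anthropic’s Python SDK for Vertex AI. The project ID, region, and model version string below are placeholders, not values from this article — the actual model IDs available to a given Google Cloud project should be checked in the Vertex AI Model Garden.

```python
# Minimal sketch: calling a Claude model on Google Cloud's Vertex AI
# via Anthropic's Python SDK (pip install "anthropic[vertex]").
# PROJECT_ID, REGION, and MODEL are placeholders.

PROJECT_ID = "my-gcp-project"        # hypothetical project ID
REGION = "us-east5"                  # a region where Claude models are offered
MODEL = "claude-sonnet-4@20250514"   # placeholder model version string


def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Assemble the keyword arguments for a Messages API call."""
    return {
        "model": MODEL,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }


def ask_claude(prompt: str) -> str:
    """Send the prompt to Claude on Vertex AI (requires GCP credentials)."""
    from anthropic import AnthropicVertex  # imported lazily: needs the SDK

    client = AnthropicVertex(project_id=PROJECT_ID, region=REGION)
    message = client.messages.create(**build_request(prompt))
    return message.content[0].text


if __name__ == "__main__":
    print(ask_claude("Summarize today's AI news in one sentence."))
```

Because the model is served through the cloud provider rather than directly by Anthropic, this is exactly the kind of commercial usage Google, Microsoft, and Amazon said would continue.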
Google has also invested heavily in the company. In January 2025, Google agreed to invest another $1 billion into Anthropic, adding to its previous $2 billion investment. That means Google has billions of dollars tied to the success of Anthropic.
Even more interesting is the technical partnership between the companies. Anthropic trains its AI models using Google Cloud infrastructure. Recently, the partnership expanded further, giving Anthropic access to as many as one million of Google’s custom Tensor Processing Units, also known as TPUs.
That level of computing power is enormous and shows how closely the companies are connected.
So when Google said it would continue working with Anthropic, the message to the tech industry was clear: this issue is limited to defense work, not commercial AI development.
Microsoft and Amazon Follow the Same Path
Microsoft was the first company to respond after the Pentagon’s decision. The tech giant told customers that Anthropic’s technology would still be available on its platforms, except for projects related to the Department of Defense.
According to Microsoft, its legal team reviewed the government’s designation and concluded that there is no restriction on providing Anthropic’s products to general cloud customers.
Shortly after Microsoft’s statement, Amazon also confirmed it would continue offering Anthropic AI through its cloud services while excluding defense work.
This was a significant moment for the technology industry. The three largest cloud computing providers in the world — Google Cloud, Microsoft Azure, and Amazon Web Services — all publicly confirmed they would keep supporting Anthropic’s technology for businesses and developers.
In the competitive world of artificial intelligence, that kind of support sends a powerful signal.
It means the industry still trusts the company’s technology and believes the issue is mainly political and regulatory rather than technical.
The Emotional Side of the Story
Behind this corporate drama is a human story as well.
Anthropic CEO Dario Amodei spoke publicly after the Pentagon’s decision. He confirmed that the U.S. government labeled the company a supply chain risk and said the company has no choice but to challenge the decision in court.
For a technology company, reputation is everything. Being labeled a supply chain risk by the government can affect trust, partnerships, and investor confidence.
From Amodei’s perspective, the company must defend itself. The legal battle ahead could become one of the most important court cases related to artificial intelligence regulation.
The outcome might influence how governments around the world deal with AI companies in the future.
AI Is Now Part of Global Politics
Another surprising detail added even more tension to the story. Reports confirmed that Anthropic’s AI models were used by the United States during its latest attack on Iran.
This information shows how deeply artificial intelligence has already entered the world of national security and military operations.
Just a few years ago, AI chatbots were mainly used for writing emails, answering questions, and helping with coding. Today, those same technologies are connected to intelligence analysis, defense planning, and military strategy.
That reality changes everything.
When AI becomes part of military systems, governments start paying much closer attention to who builds the technology and how it is controlled.
The Pentagon’s decision may be one of the first examples of this new reality.
What This Means for Businesses and Developers
For companies building products with artificial intelligence, the biggest fear was a complete shutdown of Anthropic services.
If that had happened, thousands of startups and software companies would have needed to quickly rebuild their AI infrastructure using other providers.
Fortunately, that scenario did not happen.
Because Google, Microsoft, and Amazon continue supporting Anthropic’s models, businesses can keep using Claude AI without disruption.
Developers can still build AI applications, chatbots, data tools, and automation systems using the same technology they relied on before.
This stability helps prevent panic in the AI ecosystem.
The AI Competition Is Heating Up
The situation also highlights the intense competition between major AI companies.
Some defense technology firms have already told employees to stop using Anthropic’s Claude models and switch to other AI systems. Many of those alternatives come from OpenAI, one of Anthropic’s biggest rivals.
The race to dominate artificial intelligence is already fierce. Companies like OpenAI, Google DeepMind, Anthropic, and others are competing for the same customers, talent, and government contracts.
Government decisions can easily shift the balance of power in this competition.
Even small regulatory changes can influence billions of dollars in investments.
A Defining Moment for AI Regulation
This entire situation may mark the beginning of a new chapter in how artificial intelligence is regulated in the United States.
For years, the tech industry has moved much faster than government policy. AI companies built powerful systems while regulators tried to understand the technology.
Now that AI is being used in defense operations and national security systems, governments are starting to step in more aggressively.
The Pentagon’s decision shows that supply chain security is becoming a serious issue in the AI world.
At the same time, the support from Google, Microsoft, and Amazon shows that the commercial technology sector still wants to protect innovation and keep AI development moving forward.
The balance between regulation and innovation will likely shape the future of artificial intelligence.
The Story Is Far From Over
Anthropic’s legal challenge could take months or even years. During that time, the debate about AI security, government control, and corporate responsibility will continue.
Investors, developers, and policymakers are watching closely.
Artificial intelligence is quickly becoming one of the most powerful technologies ever created. Decisions about how it is used, regulated, and controlled will affect economies, governments, and everyday life.
For now, one thing is clear.
Google, Microsoft, and Amazon have decided that Anthropic still belongs in the AI ecosystem.
And the battle over the future of artificial intelligence is only just beginning.
Disclaimer
The information in this article is for informational purposes only and is based on publicly available news and reports at the time of writing. We do not guarantee complete accuracy or updates as situations may change. This content is not financial, legal, or professional advice. All company names, trademarks, and brands mentioned belong to their respective owners. Readers should conduct their own research before making any decisions based on this information.
