The Global AI Power Shift

Anthropic Defies the Pentagon, China Doubles US AI Usage, OpenAI Expands to London, and Google Goes Viral Again

Published: February 27, 2026 | By the Kersai Research Team | Reading Time: ~20 minutes


1. The Week Everything Changed

If you only follow AI news casually, this week probably looked like business as usual: a few product launches, some earnings numbers, the usual lab drama.

It was not business as usual.

In the space of five days, four separate stories broke that — taken together — signal a fundamental restructuring of global AI power. The United States is no longer the unchallenged centre of the AI universe. The companies that built their reputations on ethical leadership are being forced to choose between their principles and their contracts. And the infrastructure race is accelerating faster than any analyst predicted twelve months ago.

Here is what actually happened this week — and what it means for every business, government, and individual navigating the AI era.


2. Anthropic Defies the Pentagon — With Hours to the Deadline

2.1 How we got here

To understand why today’s Anthropic-Pentagon standoff matters, you need to understand what Anthropic was built to be.

In late 2020, Dario Amodei — then VP of Research at OpenAI — walked out along with his sister Daniela and a group of senior researchers. Their reason: OpenAI was moving too fast and too recklessly. In 2021 they founded Anthropic on a specific, binding promise: that it would never deploy AI it couldn’t guarantee was safe, and that it would never weaponise AI or use it to surveil citizens.

That wasn’t just PR. It was later formalised in a 2023 document — the Responsible Scaling Policy — that gave Anthropic its identity, its talent pool, and much of its enterprise trust.

For four years, that identity held.

Then the Pentagon came knocking.

2.2 The contract, the Venezuela incident, and the ultimatum

In late 2025, Anthropic signed a contract worth up to $200 million with the US Department of Defense, channelled through Palantir Technologies. The arrangement gave the US military access to Claude for what was described as “all lawful purposes.”

The first major friction point came when US military forces conducted a raid in Venezuela resulting in the capture of President Nicolás Maduro. Anthropic became aware that Claude may have been used in the planning or execution of the operation. The company asked Palantir questions about how its AI had been deployed.

The Pentagon viewed this as a private company attempting to audit a classified sovereign military operation. Anthropic viewed it as reasonable oversight of its own technology.

The standoff escalated rapidly.

On February 24, Defense Secretary Pete Hegseth personally summoned Dario Amodei to the Pentagon and issued a blunt ultimatum:

  • Remove all safety guardrails on Claude for military and intelligence use.
  • Specifically drop Anthropic’s prohibitions on AI-operated autonomous weapons and AI-enabled mass surveillance of US citizens.
  • Sign the agreement by 5:00pm on Friday, February 27.

Failure to comply would result in:

  1. Loss of the $200M Pentagon contract.
  2. Invocation of the Defense Production Act — Korean War-era emergency powers that give the government authority to legally compel private companies to serve national security requirements.
  3. Placement on a supply chain risk blacklist that would shut Anthropic out of all federal government contracts permanently.

2.3 Anthropic’s response: “We cannot accede”

The New York Times headline on February 26 said it clearly: “Anthropic Says It Cannot ‘Accede’ to Pentagon in Talks Over AI.”

The day before today’s deadline, the Economic Times reported that Anthropic had rejected the Pentagon’s “final offer” — a revised compromise wording that Amodei’s team concluded still contained loopholes that could permit mass domestic surveillance and autonomous weapons deployment.

Anthropic’s position, as articulated across multiple statements this week:

  • The company is not walking away from the negotiations.
  • It will continue supplying Claude for legitimate military use cases.
  • It will not sign any agreement that creates legal pathways — explicit or through loopholes — for autonomous lethal weapons or mass civilian surveillance.

Meanwhile, the Pentagon has already begun preparing consequences. Reports indicate Hegseth’s office has asked Boeing and Lockheed Martin to audit their supply chain relationships with Anthropic, a standard precursor to formal government contract exclusion.

2.4 The safety policy context

One detail makes this story even more significant. Just one day after Hegseth issued his ultimatum, Anthropic quietly announced it was replacing its Responsible Scaling Policy — the hard pause mechanism — with a new, softer “Frontier Safety Roadmap” of flexible, self-assessed goals.

The company maintains the two events are unrelated. The coincidence of timing has not escaped the attention of critics, regulators, or competitors.

2.5 What makes Anthropic different from every other lab

The context that elevates this from a business dispute to a civilisational moment is this:

Every other major AI lab has already complied.

  • OpenAI removed its military-use prohibitions in January 2024.
  • Google DeepMind followed shortly after.
  • xAI (Grok) was built with deep US government alignment from day one.

Anthropic is the last major AI lab with active military contracts that is attempting to maintain any independent ethical framework over how its technology is deployed. Whether Amodei holds that line — or whether the Defense Production Act ultimately forces compliance — will define what the relationship between AI companies and governments looks like for the next decade.


3. China’s AI Just Doubled America’s — And Almost Nobody Noticed

3.1 The OpenRouter data

While the Washington drama dominated headlines, a dataset published this week contained arguably the more important geopolitical story.

OpenRouter — the world’s largest multi-model AI API routing platform, used by over five million developers globally — released its weekly usage statistics for the period of February 16–22, 2026.

The numbers were stunning:

  Region                        Weekly Token Usage      3-Week Growth
  China (top models)            5.16 trillion tokens    +127%
  United States (top models)    2.70 trillion tokens    +18%

Chinese AI models are now being used at nearly double the rate of US models on the world’s most widely used developer platform — and the gap is widening: Chinese usage grew 127% over three weeks, against 18% for US models.

3.2 The models leading the surge

The top four models on OpenRouter this week are all Chinese:

  1. MiniMax M2.5 — developed by Shanghai-based MiniMax, now one of the most-used models globally for long-context reasoning.
  2. Kimi K2.5 (Moonshot AI) — strong performance in coding and multi-step reasoning, open-weights version freely available.
  3. Zhipu GLM-5 — from Zhipu AI, a Tsinghua University spin-out, with strong multilingual capabilities.
  4. DeepSeek V3.2 — the latest iteration of the model family that sent shockwaves through global AI markets in January 2025, when DeepSeek R1 matched OpenAI’s o1 on reasoning benchmarks at a fraction of the training cost.

3.3 Why developers are voting with their feet

These models share several characteristics that are driving their adoption:

  • Open weights: Unlike GPT-4 or Claude, most of these models allow developers to download, modify, and self-host them — removing dependency on any single API provider.
  • Cost: Open-weight models run at infrastructure cost only, with no per-token API charges. For high-volume production use cases, this is often 10–50× cheaper than US commercial APIs.
  • Performance: Multiple independent benchmarks now show MiniMax M2.5 and DeepSeek V3.2 matching or exceeding GPT-4o and Claude Sonnet on core tasks including coding, reasoning, and summarisation.

The developer community is not choosing Chinese models out of political preference. It is choosing them because they are cheaper, open, and increasingly better for the workloads that matter.
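The cost gap described above is easy to make concrete. The sketch below compares a metered commercial API against self-hosted open-weight inference for a high-volume workload; every price and the workload size are hypothetical placeholders chosen only to illustrate how the 10–50× range can arise, not quotes from any provider.

```python
# Illustrative cost comparison: metered commercial API vs. self-hosted
# open-weight inference. All figures are hypothetical placeholders.

def monthly_api_cost(tokens_per_month: float, usd_per_million_tokens: float) -> float:
    """Cost of a commercial API billed at a flat per-token rate."""
    return tokens_per_month / 1_000_000 * usd_per_million_tokens

def monthly_selfhost_cost(gpu_hours: float, usd_per_gpu_hour: float) -> float:
    """Cost of self-hosting an open-weight model: infrastructure only,
    no per-token charge."""
    return gpu_hours * usd_per_gpu_hour

# Hypothetical workload: 2 billion tokens/month.
api = monthly_api_cost(2_000_000_000, usd_per_million_tokens=10.0)      # $20,000
selfhost = monthly_selfhost_cost(gpu_hours=720, usd_per_gpu_hour=2.0)   # $1,440

print(f"API: ${api:,.0f}  self-host: ${selfhost:,.0f}  ratio: {api / selfhost:.1f}x")
```

Under these assumed numbers the self-hosted route comes out roughly 14× cheaper; real ratios depend entirely on utilisation, model size, and negotiated rates.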

3.4 The strategic implications

For the United States government — which has spent the last three years attempting to maintain AI leadership through chip export controls, investment restrictions, and regulatory pressure on Chinese tech companies — the OpenRouter data is a significant blow.

The logic behind those policies was that restricting Chinese access to Nvidia GPUs would slow Chinese AI development enough to preserve US model superiority. The February usage data suggests that strategy has not achieved its intended outcome.

Chinese labs have:

  • Built competitive frontier models despite export controls.
  • Open-sourced those models globally, making them freely available to every developer in the world.
  • Driven developer adoption at a pace that now exceeds US model usage by nearly 2:1 on a major global platform.

The AI race is no longer a US-led competition with China chasing. It is a genuinely contested global race — and in at least one critical metric, China is currently ahead.


4. OpenAI Plants Its Flag in London

4.1 The announcement

On February 26, OpenAI named London as its largest research hub outside San Francisco.

The expansion goes well beyond a sales office. According to reporting by Wired and UK Tech News, London-based OpenAI researchers will own key components of frontier model development — including safety, reliability, and alignment research. These are not support functions. They are core research roles at the frontier of what OpenAI builds.

OpenAI CEO Sam Altman visited London to make the announcement personally, describing the expansion as a signal of the company’s long-term commitment to the UK and to building AI “safely and for everyone.”

4.2 The talent war context

London’s emergence as OpenAI’s second research capital is the result of several converging factors:

  • DeepMind heritage: London has been home to DeepMind since its founding in 2010 (Google acquired it in 2014). The city has the deepest concentration of world-class AI researchers outside the Bay Area, including many of the original architects of AlphaGo, AlphaFold, and Gemini.
  • UK government alignment: Prime Minister Keir Starmer has made AI a central pillar of UK economic policy, offering regulatory clarity, talent visa reform, and public investment signals that make the UK an attractive base for US AI companies.
  • Post-Brexit talent dynamics: While Brexit complicated some aspects of UK tech, it also created space for companies to build distinct UK research identities outside the EU regulatory framework — a strategic advantage as EU AI regulation becomes more restrictive.

OpenAI’s London hub puts it in direct competition with Google DeepMind for the same talent pool in the same city. For UK AI researchers, this is an extraordinary moment: the world’s two leading AI organisations are now both actively recruiting in London at the frontier model level.

4.3 The broader signal

OpenAI’s London expansion is also a hedge.

With the US political and regulatory environment becoming increasingly unpredictable — as the Anthropic-Pentagon confrontation illustrates — having a major research hub in a stable, English-speaking democracy with strong IP law and a government that actively wants AI investment is a sensible risk management move.

The AI research community is watching whether the US government’s aggressive posture toward AI companies becomes a structural driver pushing talent and research capacity offshore.


5. Google Goes Viral Again: Nano Banana 2

5.1 The launch

On February 26, Google launched Nano Banana 2 — officially Gemini 3.1 Flash Image — as the default image generation model across the entire Gemini platform, for both free and paid users globally.

The upgrade represents a generational leap over the original Nano Banana model that went viral in August 2025 — particularly in India, where it generated tens of millions of images within days of launch, briefly overwhelming Google’s infrastructure.

5.2 What’s new

Nano Banana 2’s headline improvements:

  • 4K resolution output — a significant jump from the 1024×1024 ceiling of most consumer AI image tools.
  • 5-character consistency — the model can generate and maintain the visual identity of up to five distinct characters across multiple frames or scenes, enabling short-form storytelling and storyboarding at consumer level.
  • 14-object fidelity — precise rendering of complex scenes with up to 14 distinct objects without the spatial confusion and “melting” artifacts that plague most current image models.
  • Real-time knowledge integration — the model can incorporate current events, recent visual styles, and up-to-date cultural references without requiring specific prompting.
  • Dramatically faster generation — Google reports 3× speed improvement over Nano Banana 1, with most images rendering in under two seconds on standard consumer hardware.

5.3 The competitive context

Nano Banana 2 arrives at a moment when the consumer AI image generation market is wide open for consolidation.

  • Midjourney remains the premium creative tool of choice for professional designers, but its subscription pricing and steep learning curve limit mainstream adoption.
  • DALL-E 3 (via ChatGPT) is widely used but has fallen behind on resolution and stylistic flexibility.
  • Stable Diffusion and its open-source derivatives remain popular among developers and power users, but require technical knowledge to run effectively.

Google’s advantage is distribution. Gemini is already installed on billions of Android devices and integrated into Google Workspace, Google Search, and Google Photos. Nano Banana 2 does not need to win on product features alone — it needs to be good enough that the 2 billion people who already use Google products choose to generate images there rather than opening a separate app.

Based on early developer and user feedback across Reddit, X, and LinkedIn, “good enough” understates it significantly. Multiple independent comparisons this week show Nano Banana 2 producing outputs competitive with Midjourney v7 on photorealism, and exceeding it on consistency and speed.

5.4 The Gemini API for developers

Beyond the consumer product, Google also released the Nano Banana 2 model via the Gemini API for developers, making it available for integration into third-party applications immediately. This positions Google to capture the enterprise and developer market for image generation — a segment currently dominated by Stability AI’s commercial API and Midjourney’s nascent API offering.

For businesses building products that require image generation — from e-commerce to marketing automation to social content tools — Nano Banana 2 via the Gemini API is now the benchmark against which all alternatives will be measured.
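For teams evaluating that integration, the request shape is simple enough to sketch with the standard library alone. The payload below follows the published format of Google's generativelanguage REST endpoint; the model identifier is a placeholder (check Google's current model list for the real Nano Banana 2 name), and field names should be verified against the live API documentation before production use.

```python
# Sketch of calling an image-capable Gemini model via the public
# generativelanguage REST endpoint, stdlib only. The model id below is a
# PLACEHOLDER, not a confirmed identifier.
import json
import urllib.request

ENDPOINT = ("https://generativelanguage.googleapis.com/v1beta/"
            "models/{model}:generateContent?key={key}")

def build_request(prompt: str) -> bytes:
    """Assemble a generateContent payload requesting an image response."""
    body = {
        "contents": [{"parts": [{"text": prompt}]}],
        "generationConfig": {"responseModalities": ["TEXT", "IMAGE"]},
    }
    return json.dumps(body).encode("utf-8")

def generate_image(prompt: str, model: str, api_key: str) -> dict:
    """POST the request and return the decoded JSON response.
    Requires a real API key and network access."""
    req = urllib.request.Request(
        ENDPOINT.format(model=model, key=api_key),
        data=build_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

In the REST response, inline image bytes come back base64-encoded inside the candidate's content parts; official SDKs wrap this plumbing, so the sketch is mainly useful for understanding what crosses the wire.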


6. Four Stories, One Shift

Each of these stories would be significant on its own. Together they describe a single, coherent transformation:

The AI world is becoming multipolar, contested, and consequential in ways that the “move fast and iterate” ethos of the early 2020s was never designed to handle.

6.1 The ethics layer is under maximum pressure

Anthropic’s Pentagon standoff is not an isolated dispute between one company and one government agency. It is the most visible test yet of whether AI companies can maintain ethical frameworks in the face of state power.

If Anthropic ultimately complies — under threat of the Defense Production Act — the message to every AI lab, every regulator, and every enterprise buyer is clear: no private ethical commitment can survive a government ultimatum backed by legal force.

If Anthropic holds its line and accepts the consequences — losing the Pentagon contract, facing potential blacklisting — it sets a different kind of precedent: that principled refusal is possible, and that there is an enterprise market willing to pay for AI it trusts not to be weaponised without oversight.

Either outcome will shape the next decade of AI governance.

6.2 The US-China AI race has a new scoreboard

The OpenRouter data introduces a metric that has been largely absent from AI geopolitical analysis: production developer usage, not benchmark performance or training compute.

Benchmarks can be optimised. Compute can be classified. But when five million developers choose which models to build their products on, they are expressing a clear preference based on cost, quality, and openness.

By that measure, Chinese AI has not just caught up. It has pulled ahead — and the gap is widening at triple-digit rates over a matter of weeks.

6.3 The talent war is going global

OpenAI’s London expansion confirms that the most important resource in AI — frontier research talent — is globally distributed and actively contested. The era of Silicon Valley as the sole centre of AI gravity is ending.

This has implications for regulation, for national AI strategies, and for where the next generation of breakthrough research will originate.

6.4 The consumer AI market is entering a consolidation phase

Google’s Nano Banana 2 launch is the latest evidence that the consumer AI market — long fragmented across dozens of specialised tools — is beginning to consolidate around distribution-advantaged platforms.

Users do not want ten AI apps. They want the AI capabilities they need to be present in the tools they already use. Google, Microsoft (Copilot), and Apple (Intelligence) are all making the same bet: the winner in consumer AI is not the best model, but the best-integrated model.


7. What This Means for Businesses

7.1 Your AI vendor landscape is less stable than you think

The Anthropic-Pentagon dispute illustrates that AI vendors — even the most safety-focused ones — operate under government pressure that is not always disclosed publicly. Enterprise AI governance frameworks need to include explicit assessment of:

  • Government contract exposure of your AI vendors.
  • The scope of any classified or national-security obligations those vendors may be subject to.
  • Whether your vendor’s ethical commitments are binding, self-assessed, or already compromised.

7.2 Chinese open-source models deserve serious evaluation

If your enterprise AI strategy is built entirely on US commercial APIs, the OpenRouter data should prompt a review. Chinese open-source models — particularly DeepSeek V3.2, Kimi K2.5, and MiniMax M2.5 — are now production-grade options for a wide range of enterprise workloads, at dramatically lower cost.

The geopolitical considerations are real and should be factored in. But so should the performance and economics. A comprehensive AI vendor strategy in 2026 evaluates all available options on merit — then applies appropriate risk filters.

7.3 The image generation opportunity is now enterprise-grade

Google Nano Banana 2 via the Gemini API signals that AI image generation has crossed from creative novelty into enterprise infrastructure. For businesses in e-commerce, marketing, publishing, education, and content production, the question is no longer whether to integrate AI image generation — it is which provider, at what quality tier, with what governance guardrails.

7.4 Build for a multipolar AI world

The single most important strategic principle to take from this week’s events:

Do not build your AI strategy around a single provider, a single geography, or a single regulatory environment.

The US-China dynamic is accelerating. Government pressure on AI labs is intensifying. The talent and research base is globalising. The enterprises that will navigate this most effectively are those that have built flexible, multi-vendor, multi-model AI architectures that can adapt as the landscape shifts — because it will keep shifting.
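The multi-vendor principle above can be reduced to a thin routing layer that tries providers in preference order and falls through on failure. The sketch below is illustrative only: the provider callables are stubs standing in for real SDK clients, and a production router would add retries, timeouts, and provider-specific error handling.

```python
# Minimal sketch of a multi-provider fallback router. Each provider is a
# callable stand-in for a real client (commercial API, self-hosted
# open-weight model, etc.).
from typing import Callable

Provider = Callable[[str], str]

def route(prompt: str, providers: list[tuple[str, Provider]]) -> tuple[str, str]:
    """Return (provider_name, completion) from the first provider that succeeds."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # real code would catch provider-specific errors
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))

# Stub providers for illustration.
def flaky_primary(prompt: str) -> str:
    raise TimeoutError("provider unavailable")

def stable_fallback(prompt: str) -> str:
    return f"answer to: {prompt}"

name, answer = route("summarise Q1 revenue", [
    ("primary", flaky_primary),
    ("fallback", stable_fallback),
])
print(name, answer)  # fallback answer to: summarise Q1 revenue
```

The design point is that swapping or reordering providers is a configuration change, not a rewrite — which is exactly the flexibility a shifting vendor landscape demands.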


8. FAQ

Did Anthropic sign the Pentagon agreement?

As of the time of publication (February 27, 2026 — the deadline day), Anthropic has not signed. The company rejected the Pentagon’s “final offer” on February 26, stating that the proposed wording still contained loopholes permitting mass surveillance and autonomous weapons use. Talks are continuing, but Anthropic has stated it cannot accede to the core demands. The Pentagon has threatened to invoke the Defense Production Act and place Anthropic on a supply chain risk blacklist.

What is the Defense Production Act and why is the Pentagon threatening to use it against Anthropic?

The Defense Production Act is a Korean War-era US law that gives the federal government authority to compel private companies to prioritise national security requirements above normal commercial operations. The Pentagon is threatening to invoke it to legally force Anthropic to remove its safety guardrails on Claude for military and intelligence use. The same act has previously been used to shape semiconductor manufacturing supply chains — the same supply chains Anthropic depends on for its Nvidia GPU access.

How did China’s AI models overtake the US in developer usage?

According to OpenRouter’s weekly usage data for February 16–22, 2026, Chinese AI models processed 5.16 trillion tokens versus 2.70 trillion for US models — nearly double. The top four models on the platform are all Chinese: MiniMax M2.5, Kimi K2.5 (Moonshot), Zhipu GLM-5, and DeepSeek V3.2. Chinese models have driven this growth primarily through open-source availability, lower cost, and competitive benchmark performance — not through access to restricted US infrastructure.

What is OpenAI’s London hub and why does it matter?

OpenAI has named London as its largest research hub outside San Francisco, with local researchers owning key components of frontier model development including safety and alignment research. The expansion puts OpenAI in direct competition with Google DeepMind for London’s world-class AI talent pool and signals a broader globalisation of frontier AI research beyond Silicon Valley.

What is Nano Banana 2 and how does it compare to Midjourney?

Nano Banana 2 (Gemini 3.1 Flash Image) is Google’s latest AI image generation model, launched February 26 as the default across all Gemini products globally. It supports 4K resolution, 5-character consistency, 14-object fidelity, and generates images in under two seconds. Early independent comparisons show it competitive with or exceeding Midjourney v7 on photorealism and consistency, while being free for Gemini users and available via API for developers.

Should enterprises consider Chinese AI models despite geopolitical risks?

This requires case-by-case assessment. Chinese open-source models like DeepSeek V3.2 and MiniMax M2.5 offer genuine performance and cost advantages for a range of enterprise workloads. The geopolitical risks — including potential data sovereignty concerns, supply chain dependencies, and regulatory exposure — are real and must be factored into procurement decisions. A responsible enterprise AI strategy evaluates these models on merit and applies appropriate risk filters, rather than excluding them categorically or adopting them without due diligence.


This article was researched and written by the Kersai Research Team. Kersai is a global AI consultancy firm dedicated to helping enterprises confidently navigate the rapidly evolving artificial intelligence landscape — from cutting-edge strategic insights to practical, large-scale AI implementation. To learn more, visit kersai.com.
