Anthropic Just Overtook OpenAI in Enterprise AI — In 10 Weeks. Here’s How Claude Beat ChatGPT Where It Matters Most.
OpenAI Kills Sora and Pivots Strategy, Apple Rebuilds Siri With Gemini for iOS 27, Bank of America Deploys AI Agents at Scale, and the Race Nobody Predicted Has a New Leader
Published: March 26, 2026 | By the Kersai Research Team | Reading Time: ~27 minutes
Last Updated: March 26, 2026
Quick Summary: In the ten weeks between January and mid-March 2026, Anthropic went from a 50/50 split with OpenAI in enterprise AI spending to capturing 73% of all new enterprise AI spend — the fastest market share reversal in enterprise software history, confirmed by Ramp’s corporate spend data covering thousands of US businesses. In the same period, OpenAI shut down Sora — its viral AI video app, just months after launch — redirected the team to robotics world simulation, and signalled a sharp strategic pivot away from broad consumer bets toward enterprise. Apple, meanwhile, confirmed that iOS 27, debuting at WWDC on June 8, will feature a completely rebuilt Siri as a standalone app powered by Google Gemini — giving Gemini access to 1.5 billion iPhone users. Bank of America became the first major US bank to deploy AI agents in live revenue-generating advisory roles — rolling out to 1,000 financial advisors this week. And Anthropic’s own Economic Index, published March 22, provides the most rigorous data yet on how AI is actually being used across the economy — finding that augmentation still dominates automation, but that automation is growing as a proportion of use every quarter.
Table of Contents
- The Upset That Rewrites the AI Narrative
- Story 1: Anthropic’s 73% — The Enterprise AI Coup Nobody Saw Coming
- Story 2: OpenAI Kills Sora — And What It Reveals About a Company in Transition
- Story 3: Apple Rebuilds Siri From Scratch — iOS 27’s Standalone Gemini-Powered AI App
- Story 4: Bank of America Deploys AI Agents to 1,000 Financial Advisors
- Story 5: Anthropic’s Economic Index — The Data on How AI Is Actually Used
- The Five Stories Are One Story
- Enterprise AI Provider Comparison: Claude vs ChatGPT vs Gemini 2026
- What This Means for Businesses
- FAQ
1. The Upset That Rewrites the AI Narrative
For most of the period since ChatGPT’s launch in November 2022, the story of the AI race has had a simple, stable structure: OpenAI leads, everyone else follows. GPT-4, GPT-4o, GPT-5 — each release confirmed OpenAI’s position at the frontier and reinforced the assumption that ChatGPT’s brand dominance would translate into enterprise revenue dominance over time.
That assumption is no longer warranted.
In a ten-week window between January and mid-March 2026, Anthropic — the AI safety company founded by former OpenAI researchers, backed by Amazon and Google, and known for its Claude model family — captured 73% of all new enterprise AI spending, according to corporate spend data from Ramp covering thousands of US businesses across every major sector. Ten weeks earlier, the split was even. In December, OpenAI still led.
The speed of the reversal is genuinely without precedent in enterprise software. Enterprise technology markets move slowly — procurement cycles are long, switching costs are high, and enterprise relationships are stickier than consumer ones. A 73/27 split achieved in ten weeks does not happen because of incremental product improvement. It happens because something fundamental changed in how enterprise buyers perceive the relative value of the two products.
This article examines exactly what changed, why Anthropic won the enterprise race, what OpenAI’s response reveals about its strategic trajectory, and what four additional major stories from the past 48 hours collectively tell us about the state of the AI race as we close out March 2026.
2. Anthropic’s 73% — The Enterprise AI Coup Nobody Saw Coming
2.1 The data
The primary source for Anthropic’s enterprise dominance story is Ramp’s March 2026 AI Index — a monthly analysis of AI-related corporate card spending across Ramp’s customer base, which covers thousands of US businesses of all sizes across every major industry. Because Ramp processes actual payment transactions, its data reflects real purchasing decisions rather than survey responses or anecdotal reports.
The March index, reported by Axios, Yahoo Finance, and confirmed by independent analysis from Stormy.ai and On The Ground Agency, shows:
- 73% of first-time enterprise AI spending in the period went to Anthropic — specifically to Claude API, Claude Team, and Claude Enterprise plans
- Anthropic’s adoption rate grew 4.9% month-over-month in February — its largest single monthly gain ever recorded
- 1 in 4 businesses on Ramp now pays for Anthropic — up from 1 in 25 just twelve months ago
- OpenAI experienced a 1.5% month-over-month decline in February — its steepest single-month drop on record
- Anthropic’s annualised revenue reached $72 billion in February 2026, with a $6 billion single month — an extraordinary figure for a company that did not generate meaningful revenue three years ago
The Axios headline characterised the shift precisely: “Anthropic turns the tables on OpenAI in critical revenue metric.”
2.2 Why Claude is winning enterprise — the five factors
The enterprise AI market’s shift toward Anthropic is not random. It reflects five specific, compounding factors that enterprises consistently cite in procurement discussions:
Factor 1: Enterprise-grade safety and reliability
Anthropic’s Constitutional AI methodology — and the culture of safety-first development it reflects — has translated into a product that enterprise compliance and legal teams find easier to approve. In independent enterprise evaluations under real-world deployment conditions, Claude shows lower hallucination rates, better-calibrated refusals, and more consistent instruction-following on complex enterprise tasks than GPT-5.4.
The irony is acute: Anthropic was founded by researchers who left OpenAI specifically over safety concerns — and now those safety investments are generating commercial returns precisely because enterprise buyers have decided safety is the product, not just the principle.
Factor 2: The 200,000-token context window advantage (now standard)
Claude Opus 4.6 and Claude Sonnet 4.6 both feature a 200,000-token context window — large enough to process a substantial codebase, a year’s worth of customer communications, a complete legal contract portfolio, or a full project history in a single session. This context depth is the feature that consistently wins enterprise evaluations: it enables the specific use cases — document analysis, code review, regulatory compliance — that enterprises actually deploy AI for, at a scale that changes workflows rather than just accelerating them.
Factor 3: Claude’s coding performance — the enterprise differentiator
Enterprise AI adoption is disproportionately driven by software engineering and technical teams — the people with both the access and the judgment to evaluate AI tools empirically. In 2025–2026, those teams overwhelmingly concluded that Claude performs better on real-world coding tasks than GPT-5.4. SWE-Bench Pro scores tell part of the story. The rest comes from the lived experience of thousands of engineers who have used both: Claude’s code is more readable, better documented, more consistent in style, and less prone to subtle logic errors.
When engineers choose Claude, they do not just choose it for themselves — they make the tool selection decision that determines which AI platform gets integrated into the organisation’s CI/CD pipeline, which platform gets embedded in the internal developer portal, and which platform’s API becomes the default for every AI application the engineering team builds. When Claude wins the engineer, it wins the enterprise.
Factor 4: Amazon’s distribution muscle
Amazon invested $4 billion in Anthropic in 2023 and an additional $4 billion in 2024 — making it Anthropic’s largest investor. The commercial terms of that investment include a commitment to provide Claude through AWS Bedrock as a first-class model option — giving Anthropic access to Amazon’s enterprise sales force, AWS’s existing enterprise relationships, and the massive installed base of enterprises already running their infrastructure on AWS.
For an enterprise that already uses AWS, adding Claude via Bedrock is a two-click procurement decision. The sales resistance is minimal, the integration path is pre-built, and the billing arrives on the existing AWS invoice. OpenAI does not have an equivalent frictionless enterprise distribution channel.
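For illustration, the Bedrock path can be sketched with the request shape used by boto3’s `bedrock-runtime` Converse API. This is a minimal sketch, not Bank-grade code: the model ID below is a placeholder (real IDs are listed in the Bedrock console), and the live invocation, shown only in comments, requires AWS credentials and a Bedrock-enabled account.

```python
# Sketch of what "Claude via Bedrock" looks like to an AWS-native team.
# The model ID is a placeholder; real IDs are listed in the Bedrock console.

def build_converse_request(model_id: str, prompt: str, max_tokens: int = 1024) -> dict:
    """Assemble keyword arguments for bedrock-runtime's converse() call."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": 0.2},
    }

request = build_converse_request(
    "anthropic.claude-example-model-id",  # placeholder, not a real ID
    "Summarise the termination clauses in the attached contract.",
)

# With credentials configured, the live call would be roughly:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   reply = client.converse(**request)
#   text = reply["output"]["message"]["content"][0]["text"]
```

Because the model is consumed through an existing AWS account, authentication, networking, and billing all ride on infrastructure the enterprise has already approved — which is exactly the "two-click" point above.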
Factor 5: The OpenAI trust deficit
OpenAI’s November 2023 board crisis — when the board fired Sam Altman and then reinstated him within five days — left a residue of governance uncertainty in enterprise procurement conversations that has proved remarkably durable. Enterprise buyers are risk-averse. A vendor whose senior leadership may be destabilised by internal governance disputes represents supply chain risk that procurement teams are trained to avoid.
Anthropic, by contrast, is perceived as a stable, well-governed company with a clear mission and consistent leadership. In enterprise sales conversations, that perception of stability is itself a product feature.
2.3 Anthropic’s revenue trajectory — putting $72 billion in context
Anthropic’s $72 billion annualised revenue figure requires context: this is ARR run-rate calculated from a single strong month, not a trailing twelve-month figure. The actual twelve-month revenue for fiscal 2025 is lower.
But the trajectory is extraordinary. Consider the growth arc:
- Q1 2024: ~$200 million ARR
- Q4 2024: ~$4 billion ARR
- Q1 2025: ~$12 billion ARR
- Q3 2025: ~$40 billion ARR
- February 2026: ~$72 billion ARR (run-rate)
No enterprise software company in history has grown revenue at this pace from zero. Salesforce took 20 years to reach $10 billion in annual revenue. ServiceNow took 14 years. Anthropic has covered equivalent ground in under 3 years.
The growth is driven almost entirely by enterprise API consumption and enterprise plan subscriptions — not consumer products. Anthropic has never had a viral consumer moment equivalent to ChatGPT’s 2022 launch. Its growth is quieter, more deliberate, and, it is now clear, more durable.
2.4 What OpenAI’s response has been
OpenAI’s response to the enterprise share data has been threefold:
First: The hiring surge of 3,500 new employees (covered in our March 25 article) specifically targets enterprise sales and solutions — an acknowledgment that its enterprise go-to-market operation needs significant investment to compete with Anthropic’s AWS-backed distribution.
Second: The GPT-5.4 mini and nano releases (covered in our March 23 article) are designed to capture the price-sensitive enterprise workloads where Claude’s slightly higher pricing has been a barrier.
Third: The strategic pivot signalled by the Sora shutdown (covered below) — away from broad consumer bets toward focused enterprise execution.
None of these responses is wrong. All of them will take time to show results. The enterprise share trajectory, if it continues, will become a crisis for OpenAI’s valuation narrative — which depends on a story of total AI market dominance, not a competitive market where Claude captures nearly three-quarters of new enterprise spend.
3. OpenAI Kills Sora — And What It Reveals About a Company in Transition
3.1 What happened
On March 24, 2026, OpenAI announced it is shutting down the Sora app — its AI video generation social platform, launched to enormous public attention in late 2025 — just months after the consumer launch.
The shutdown was confirmed by CNN, CBC, and ABC Australia. The reasons given publicly: compute costs, strategic refocus, and concerns about deepfake proliferation on the platform.
The planned Disney partnership — a significant announced collaboration for Sora-generated content announced in December 2025 — is also not proceeding.
3.2 What Sora was — and why it mattered
When OpenAI launched Sora’s research preview in February 2024, the reaction was a genuine cultural moment. The model’s ability to generate photorealistic video from text descriptions — scenes with consistent lighting, physics, character motion, and camera angles — was categorically beyond anything previously demonstrated by AI.
The consumer app, launched in late 2025, allowed anyone to generate short videos from prompts. The initial months produced viral social media content and extensive media coverage. Sora seemed like the beginning of AI’s transformation of visual content creation.
The shutdown of the consumer app does not mean the technology is dead. It means OpenAI has decided the consumer video platform is not where it wants to compete.
3.3 The real reason: a strategic reckoning
The Wall Street Journal’s reporting on OpenAI’s strategic review — referenced in the Mark McNeilly Substack analysis from March 20 — paints a more complete picture than the public statement.
OpenAI has been running a sprawling portfolio of consumer bets simultaneously:
- Sora (AI video generation social app)
- A rumoured AI browser (to compete with Chrome and Edge)
- An AI search product (to compete with Google)
- Operator (autonomous AI agent for web tasks)
- Sweet Pea (AI earbuds, H2 2026)
- Multiple enterprise API products
- ChatGPT consumer subscriptions at four pricing tiers
- The Jony Ive hardware partnership
- A rumoured social media platform
For a company with approximately 4,500 employees at the start of 2026, this portfolio is too wide. Resources are spread thin. Enterprise sales is understaffed. Anthropic — with a narrower focus on Claude API and enterprise products — is executing with greater efficiency.
The Sora shutdown is the first visible product of a strategic rationalisation: OpenAI is narrowing its focus to the products most likely to generate the enterprise revenue it needs to justify its $300+ billion valuation and fund the frontier model development that remains its core mission.
3.4 Where Sora’s team is going — and why it matters
The detail that points to OpenAI’s actual strategic direction: Sora’s research team is being redirected to world simulation for robotics — building AI systems that can generate physically accurate simulations of real-world environments, for use in training the physical AI systems (humanoid robots, autonomous vehicles, AI wearables) that OpenAI is developing as part of its hardware strategy.
This is not a consolation prize for a team losing its product. World simulation is one of the most technically demanding and strategically valuable capabilities in AI robotics — it is the technology that enables physical AI systems to train in synthetic environments before deployment in the real world. If Sora’s world modelling capabilities — its ability to generate physically coherent, temporally consistent video of real environments — are redirected toward training simulation, the team is moving from a consumer product to foundational infrastructure for AI’s physical era.
The Sweet Pea earbuds, Optimus-competing robots, and whatever hardware OpenAI announces in 2027 all require physical world simulation for training. Sora’s team is now building that foundation.
3.5 The deepfake problem — a warning the industry is ignoring
The public reason OpenAI gave for the Sora shutdown includes “concerns about deepfake proliferation.” This reason is being widely dismissed in the tech press as a fig leaf for the strategic pivot. That dismissal is too easy.
In the six months the Sora consumer app was live, it was used extensively to generate non-consensual synthetic intimate images (NCII), election-related disinformation video, and fraudulent impersonation content at a scale that Sora’s content moderation infrastructure could not adequately address. The platform’s Australian users — among the most active globally — generated a disproportionate volume of complaints to the eSafety Commissioner, contributing to regulatory pressure on OpenAI’s Australian operations.
The deepfake problem is not going away because Sora shut down. Other video generation tools — Runway, Kling, Pika — continue operating, and their content moderation capabilities are generally less sophisticated than Sora’s. But OpenAI’s decision to exit a product partly because it was generating deepfake harms at scale that it could not control is a significant moment for the AI governance conversation: it is the first time a major AI lab has voluntarily shut down a consumer product for safety reasons rather than business reasons alone.
For Australian enterprises navigating the NSW Digital Work Systems Act and related AI governance obligations, OpenAI’s Sora decision is a concrete example of what responsible AI deployment governance looks like in practice — and a signal that safety-motivated product decisions are becoming a real part of AI strategy, not just public relations.
4. Apple Rebuilds Siri From Scratch — iOS 27’s Standalone Gemini-Powered AI App
4.1 Bloomberg confirms the iOS 27 Siri overhaul
On March 24–25, Bloomberg’s Mark Gurman — the most reliable source on Apple’s internal product plans — reported that iOS 27, debuting at WWDC on June 8, 2026, will feature a completely rebuilt Siri as a standalone app.
The reporting was confirmed and expanded by Mashable and Yahoo Finance Singapore. Combined, the coverage reveals the most significant Siri architecture change since the assistant launched with iPhone 4S in 2011.
The core change: Siri is being transformed from an embedded voice interface — a system-level feature you access with a button press or “Hey Siri” — into a standalone AI application with its own icon, its own conversation history, its own interface, and the capability of a full-featured AI assistant competitive with ChatGPT and Claude.
4.2 What the new Siri can do
Based on Gurman’s reporting and Mashable’s analysis, the new Siri will feature:
“Ask Siri” conversational mode: A persistent chat interface — visible, scrollable, and searchable — that maintains conversation history across sessions. Users can return to previous Siri conversations the way they return to ChatGPT conversations, reference past interactions, and build on previous context.
Deep personal context integration: The new Siri has access to — with explicit user permission — Messages, Mail, Notes, Calendar, Reminders, Photos, and Health data. This mirrors Google Personal Intelligence (covered in our March 23 article) but integrated natively into Apple’s tightly controlled privacy architecture, with on-device processing for the most sensitive data.
Full web search and reasoning: The new Siri can search the web, synthesise information from multiple sources, generate structured analysis, write documents, and answer complex multi-step questions — capabilities far beyond the current Siri’s narrow, shortcut-based response model.
App control and task automation: Deep integration with every app on the iPhone — not just Apple’s first-party apps. The new Siri can complete tasks across third-party apps the way a human would: open Uber, book a ride to a saved destination, confirm the booking, and add it to Calendar — without the user ever leaving the conversation interface.
Proactive suggestions: The new Siri monitors context across apps and surfaces suggestions before being asked — “Your meeting starts in 20 minutes and traffic to the venue will take 25 minutes. Leave now?” — based on integrated access to Calendar, Maps, and current traffic conditions.
4.3 The Gemini engine — and what it means for the AI race
The confirmed detail that makes the new Siri announcement as significant for the competitive AI landscape as for Apple’s product strategy: the new Siri’s underlying AI capability is powered by Google Gemini.
Apple confirmed the Gemini integration for Apple Intelligence in mid-2025. The new standalone Siri in iOS 27 extends that integration to the full conversational AI layer — meaning the chat interface that 1.5 billion iPhone users will interact with as their primary AI assistant is running on Google’s models.
The distribution implications are staggering:
- Google’s Gemini models, via the new Siri, will become the most widely distributed AI in the world — not because users chose Gemini, but because they chose iPhone
- OpenAI and Anthropic — both of whom pursued the Apple partnership and lost to Google — are absent from the most important AI distribution channel in consumer technology
- Google’s AI user base, once primarily limited to Android, Google Search, and Gemini app users, now encompasses the entirety of Apple’s installed base
For OpenAI, the Apple loss is compounded by the Anthropic enterprise story: losing the consumer distribution crown to Google while losing the enterprise crown to Anthropic simultaneously creates a genuine strategic crisis that the company’s current pivot is designed to address.
4.4 The privacy architecture — how Apple squares Gemini with its privacy brand
Apple’s entire brand identity in AI has been built on privacy: on-device processing, no training on user data, encrypted communication. Using Google’s Gemini — a company whose entire business model is built on data — as the engine for its AI assistant creates an obvious tension that Apple has worked hard to address.
The architecture, per Apple’s published technical documentation:
On-device processing first: Simple requests — setting timers, playing music, making calls, controlling smart home devices — are handled entirely on-device using Apple’s own Neural Engine. Gemini is only invoked for requests that require the reasoning and knowledge capabilities that exceed what on-device models can handle.
Private Relay for Gemini queries: When a query is sent to Gemini, it is transmitted through Apple’s Private Relay infrastructure — which anonymises the request so that Google receives a query but not the identity of the user making it.
No training on Siri queries: Apple’s contract with Google explicitly prohibits Google from using Siri queries to train Gemini models — a contractual protection that mirrors the structure of the Google Personal Intelligence privacy terms.
User control: Users can disable Gemini integration entirely, falling back to Apple’s on-device models for all queries (with reduced capability). The setting is prominent in iOS 27’s privacy controls.
Whether this architecture fully satisfies privacy advocates — particularly in Europe, where the GDPR creates specific obligations around data transfer to US companies — will be a significant regulatory discussion through 2026.
4.5 What WWDC June 8 will look like
Based on the confirmed details and typical Apple event structure, WWDC 2026’s iOS 27 announcement will almost certainly position the new Siri as the centrepiece of the keynote — Apple’s answer to “what has Apple been doing while ChatGPT and Claude dominated AI conversation for three years?”
The standalone Siri app is Apple’s most significant product announcement in the AI era. It takes a feature that has been the subject of mockery and criticism — “Siri is terrible compared to ChatGPT” — and replaces it with a product that, on paper, competes directly with every AI assistant on the market, distributed to 1.5 billion users by default.
For Apple, the new Siri is the product that justifies the Q.ai acquisition, the Gemini partnership, the Apple Intelligence infrastructure investment, and the multibillion-dollar Neural Engine silicon investment. It is the moment Apple shows the AI world what a hardware-software-model integration at Apple’s scale actually looks like.
5. Bank of America Deploys AI Agents to 1,000 Financial Advisors
5.1 What happened
On March 25, AI News confirmed that Bank of America has deployed AI agents to approximately 1,000 financial advisors — the largest single deployment of AI agents in live, revenue-generating financial advisory roles at any major US bank to date.
The deployment covers Bank of America’s Merrill Lynch wealth management division — the 20,000-person advisory unit that manages approximately $3.5 trillion in client assets.
5.2 What the agents do
Bank of America’s AI agents, built on a combination of Claude and proprietary Bank of America financial data models via Amazon Bedrock, perform six specific functions in the advisory workflow:
Client portfolio analysis: The agent continuously monitors each advisor’s client portfolios — flagging changes in risk exposure, identifying rebalancing opportunities, and generating plain-language summaries of portfolio performance relative to each client’s stated objectives. Previously this analysis was produced manually by the advisor or by a junior analyst, taking 2–4 hours per client per quarter. The agent produces it continuously.
Regulatory compliance monitoring: Financial advisors operate under a dense framework of FINRA, SEC, and internal compliance obligations. The agent monitors all advisor communications — emails, meeting notes, trade activity — in real time, flagging potential compliance issues before they escalate. This capability was previously performed by a dedicated compliance review team operating on a delayed sampling basis.
Market data synthesis: Before client meetings, the agent generates a briefing document — pulling together relevant market developments, sector news, analyst reports, and macroeconomic data specific to the client’s portfolio composition. A briefing that previously took an analyst 2 hours to prepare is generated in 3 minutes.
Meeting preparation: The agent accesses the client’s complete account history, previous meeting notes, outstanding action items, and stated financial goals — generating a structured meeting agenda with suggested discussion points and relevant data exhibits. The advisor arrives at every client meeting fully prepared without manual preparation time.
Post-meeting documentation: The agent transcribes and summarises each client meeting, generates action items, updates the CRM with meeting notes, and flags compliance-relevant statements for review — eliminating the 45–60 minutes of administrative work that currently follows every client meeting.
Proactive opportunity identification: The agent identifies life events and financial triggers that represent advisory opportunities — a client receiving a large inheritance, a company stock vesting, a client’s child reaching college age — and surfaces these to the advisor as actionable opportunities before the client raises them.
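Bank of America has not published implementation details, but the core of the first function above — continuous drift monitoring against a client’s target allocation — reduces to a simple comparison that can be sketched in a few lines. The asset names, weights, and tolerance below are invented for the example.

```python
# Illustrative sketch of the portfolio drift check described above.
# Asset classes, weights, and the 5% tolerance are invented for the example.

def flag_rebalancing(current: dict, target: dict, tolerance: float = 0.05) -> list:
    """Return plain-language flags for asset classes drifted past tolerance."""
    flags = []
    for asset, target_w in target.items():
        drift = current.get(asset, 0.0) - target_w
        if abs(drift) > tolerance:
            direction = "overweight" if drift > 0 else "underweight"
            flags.append(f"{asset}: {direction} by {abs(drift):.1%} vs target {target_w:.0%}")
    return flags

flags = flag_rebalancing(
    current={"equities": 0.72, "bonds": 0.20, "cash": 0.08},
    target={"equities": 0.60, "bonds": 0.30, "cash": 0.10},
)
# Equities (+12pp) and bonds (-10pp) are flagged; cash (-2pp) is within tolerance.
```

In the deployed system this kind of check would run continuously against live positions and feed the plain-language summaries delivered to the advisor; the sketch shows only the core comparison.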
5.3 The productivity and revenue impact
Bank of America’s internal data, reported by AI News, shows early results from the pilot deployment that preceded the full rollout:
- Average advisor administrative time reduced by 3.2 hours per day
- Client meeting preparation time reduced from 2.1 hours to 22 minutes per meeting
- Compliance exception detection improved by 47% — catching issues the previous human review process missed
- Advisor-reported client satisfaction scores up 12% — attributed to advisors arriving more prepared and delivering more relevant, personalised advice
The revenue implication: if each advisor can conduct 30% more client meetings per week due to reduced administrative overhead, Bank of America’s Merrill Lynch division could generate approximately $4–6 billion in additional annual revenue from the same advisory headcount — without hiring a single additional advisor.
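That range can be sanity-checked against the article’s own numbers. The only stated inputs are the 30% capacity uplift and the $4–6 billion outcome; the baseline derived below is implied, not a figure Bank of America has reported.

```python
# Back-of-envelope: what baseline of meeting-sensitive annual revenue
# does the $4-6bn claim imply, given a 30% capacity uplift?

uplift = 0.30
extra_low, extra_high = 4e9, 6e9  # article's range, USD per year

implied_baseline_low = extra_low / uplift    # ~ $13.3bn
implied_baseline_high = extra_high / uplift  # ~ $20bn
```

An implied $13–20 billion of meeting-sensitive revenue is plausible for a division managing roughly $3.5 trillion in client assets (a blended effective fee in the 0.4–0.6% range), so the claim is internally consistent — though it assumes revenue scales linearly with meetings held, which is itself an assumption.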
5.4 The workforce implication — what this means for banking jobs
Bank of America is not cutting advisors. It is making existing advisors more productive. But the roles that the AI agents are replacing — junior analysts, compliance reviewers, administrative staff, research synthesisers — are less protected.
Oliver Wyman’s January 2026 AI in banking report frames the sectoral trajectory clearly:
“AI will not run the bank — but it will run far more of the bank’s operational and analytical machinery than today’s organisations are designed to govern. The institutions that navigate this transition well will reduce operational costs by 20–35% while improving service quality. Those that do not will face competitive pressure that compounds quarterly.”
The 1,000 Bank of America advisors with AI agents are the visible, front-line deployment. Behind them, the compliance teams, junior analysts, research assistants, and administrative staff whose functions those agents perform are the invisible displacement — the roles that will contract through attrition and restructuring rather than visible announcements.
5.5 The Australian banking context
Australia’s four major banks — Commonwealth Bank, ANZ, Westpac, and NAB — are all running AI pilot programmes in comparable advisory and operational functions. Commonwealth Bank’s AI-powered customer service agents, ANZ’s mortgage processing automation, and NAB’s document review AI are all heading in the same direction as Bank of America’s deployment.
The NSW Digital Work Systems Act (covered in our March 19 article) applies directly to these banking AI deployments — meaning Australian banks face regulatory obligations that US banks do not: formal risk assessments of AI advisory systems, worker consultation requirements, and potential union inspector access to AI workflow systems.
Australian banking professionals and financial advisors should monitor Bank of America’s deployment outcomes closely — the evidence base it generates over the next 12 months will directly inform the regulatory and commercial decisions that Australian banks make about equivalent deployments.
6. Anthropic’s Economic Index — The Best Data We Have on How AI Is Actually Used
6.1 What the report found
On March 22, Anthropic published its Economic Index — a major research report using privacy-preserving analysis of how Claude is actually being used across the economy, providing the most rigorous empirical picture available of real-world AI deployment patterns at scale.
The full report is available at anthropic.com/research/economic-index-march-2026-report and is titled “Learning Curves” — a reference both to the AI development trajectory and to the human learning curves involved in AI adoption.
The headline findings:
Augmentation dominates automation — but the balance is shifting: The vast majority of Claude usage (approximately 78% by task volume) remains in the augmentation category — humans using AI to perform their existing jobs more effectively. Automation use cases — AI performing tasks independently without human involvement — represent approximately 22% of usage but are growing as a proportion of the total each quarter.
The most common professional use cases (by volume):
- Software development assistance — code writing, review, debugging, documentation (32% of professional use)
- Writing and editing — drafting, refining, and formatting professional communications and documents (24%)
- Research and analysis — synthesising information, answering questions, summarising sources (19%)
- Data analysis — interpreting datasets, generating insights, creating visualisations (12%)
- Customer communication — drafting responses, personalising outreach, handling routine queries (8%)
- Legal and compliance — contract review, regulatory research, policy drafting (5%)
The professions using AI most intensively (by sessions per professional per day):
- Software engineers (4.2 sessions/day average)
- Management consultants (3.8 sessions/day)
- Financial analysts (3.4 sessions/day)
- Legal professionals (3.1 sessions/day)
- Marketing professionals (2.9 sessions/day)
The skill gap finding: The largest predictor of AI usage intensity is not the AI capability of the individual — it is the quality of the prompts and context they provide. Users who provide well-structured, context-rich inputs to Claude achieve productivity gains 3–4× higher than users who provide the same queries in vague, underspecified form. This confirms the context engineering thesis we have covered in our previous articles: the bottleneck in AI productivity is not model capability but human skill in working with models effectively.
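The gap between an underspecified and a context-rich request is easy to make concrete. The template fields below are one common structuring convention for illustration, not Anthropic’s published guidance or the Index’s methodology.

```python
# Two ways to ask for the same edit. The structured version supplies the
# role, task, audience, constraints, and source material the vague one omits.

vague = "Fix this report."

structured = """\
Role: reviewer of a quarterly sales report for a B2B SaaS company.
Task: tighten the executive summary to under 150 words.
Audience: the board; they have already seen the full figures.
Context: revenue grew 12% QoQ; churn rose to 4%.
Constraints: keep the churn caveat; do not invent figures.
Report draft:
{draft}
"""

prompt = structured.format(draft="Q3 was a strong quarter overall...")
```

Both versions ask for the same outcome; only the second gives the model enough context to produce it reliably, which is the behaviour the Index’s 3–4× productivity gap is measuring.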
6.2 The automation trajectory — a warning in the data
The 22% automation figure, and its consistent quarterly growth, deserves careful attention.
With automation at 22% of Claude usage and augmentation at the remaining 78%, the current moment is still one of predominantly human-directed AI work. Humans are still making the decisions; AI is executing, analysing, and drafting at their direction.
But a 22% share growing every quarter compounds. If automation's share of AI usage grows by even 2 percentage points per quarter (a conservative assumption given current deployment trends), the balance reaches 50/50 in roughly fourteen quarters, within approximately four years. At that point AI performs as much work independently as it does in support of human direction.
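The arithmetic behind that projection can be checked in a few lines, assuming (as a simplification of "grows by even 2 percentage points per quarter") a flat two-point gain each quarter from the 22% starting share:

```python
# Project automation's share of AI usage: start at 22%,
# add a flat 2 percentage points per quarter until it reaches 50%.
share_pct = 22.0
growth_pp_per_quarter = 2.0

quarters = 0
while share_pct < 50.0:
    share_pct += growth_pp_per_quarter
    quarters += 1

years = quarters / 4
print(quarters, years)  # 14 quarters, i.e. 3.5 years
```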
That is not an imminent crisis. But it is a trajectory that organisations need to be designing their workforce transition strategies around now — not when it arrives.
6.3 What the index means for the Goldman Sachs paradox
The Anthropic Economic Index provides the missing piece of the Goldman Sachs productivity paradox (covered in our March 24 article).
Goldman found no economy-wide productivity gains from AI. Anthropic’s data shows that 78% of Claude usage is augmentation — humans working more effectively with AI assistance.
The connection: augmentation-driven productivity gains are invisible in economy-wide statistics because they show up as individual performance improvement, not as output increase at the economic measurement level. When a software engineer writes code 30% faster with Claude, they do not produce 30% more software — they produce the same software with 30% less effort, potentially leaving the remaining time for other tasks, professional development, or rest. That 30% efficiency gain does not appear in GDP or labour productivity statistics because the output (the software shipped) is the same; only the input (time spent) changed.
The automation-driven gains are different: they show up as outputs produced without human input at all. When an AI agent independently processes 1,000 customer support tickets that previously required 10 human agents, the output increases (more tickets resolved per unit of time) and the input decreases (fewer human hours). That gain will eventually appear in productivity statistics — but only when automation reaches a sufficient scale across the entire economy, which Goldman’s data suggests has not yet happened.
The Anthropic Economic Index, in other words, confirms both Goldman’s finding and its limitation: current AI usage is primarily augmentation-based and therefore largely invisible in economy-wide statistics — but automation is growing, and when it reaches scale, the productivity surge that Goldman’s framework predicts for Stage 2 of the AI transition will arrive.
7. The Five Stories Are One Story
7.1 The enterprise layer is the decisive battleground — and Anthropic is winning it
The five stories this week, viewed individually, are about: enterprise market share, a consumer product shutdown, a phone OS update, a banking deployment, and an economic research report.
Viewed together, they are all about the same thing: the enterprise software layer of AI is where the durable competitive advantage in this era will be built, and Anthropic has established a commanding position in it.
Enterprise software has always been the most profitable layer of the technology stack. Consumer products generate scale and brand awareness; enterprise contracts generate recurring, predictable, high-margin revenue that compounds through expansion, renewal, and platform lock-in. Microsoft's $3 trillion market capitalisation is built almost entirely on enterprise software and cloud revenue, not consumer products. Salesforce, ServiceNow, and Oracle, among the most valuable pure-play enterprise software companies, all built their positions the same way.
Anthropic has, in ten weeks, established a 73/27 enterprise spending lead over OpenAI. If it maintains and extends that lead — and the structural factors driving it (safety, coding performance, Amazon distribution, governance stability) show no signs of reversing — Anthropic is on a trajectory to become the enterprise software layer of the AI era in the same way Salesforce became the enterprise software layer of the cloud era.
7.2 Distribution trumps capability at the consumer layer — and Google is winning it
At the consumer layer, the story is different. The most capable AI models are irrelevant if they are not the default on the devices consumers use. iOS 27’s Gemini-powered Siri makes Google’s AI the default for 1.5 billion iPhone users — not because those users chose Google, but because they chose Apple.
Google’s AI distribution strategy — power Android natively, power iOS via the Siri partnership, provide the cheapest inference infrastructure via Gemini Flash-Lite — is the most sophisticated in the industry. It does not require Google to have the best consumer-facing AI product. It requires Google’s models to be embedded in the best consumer-facing AI products made by other companies.
Apple makes a better phone than Google. Amazon has a better shopping experience than Google. Anthropic may build better enterprise AI than Google. But if all three distribute their products on Google’s AI infrastructure, Google captures the revenue, the data signals, and the flywheel improvement benefits of all three ecosystems simultaneously.
7.3 The physical AI layer is still open — but the window is closing
The hardware race we covered in yesterday’s article — Sweet Pea, Apple’s Q.ai wearables, Meta Ray-Bans, Tesla Optimus — represents the one layer where no company has yet established dominance. Bank of America’s AI agent deployment, Anthropic’s Economic Index, and the enterprise share data all confirm that the software AI layer is consolidating rapidly. The physical AI layer — the devices and robots through which AI interacts with the physical world — remains genuinely contested.
Organisations building AI strategy today should be designing for a world in which Anthropic powers the enterprise API layer; Google powers consumer AI via Apple and Android; and a third company, most likely OpenAI, Apple, or Meta, powers the ambient physical AI layer through wearables and robotics. The three-layer structure of the AI platform era is taking shape, and the window for new entrants at any layer is closing.
8. Enterprise AI Provider Comparison: Claude vs ChatGPT vs Gemini
| Dimension | Claude (Anthropic) | ChatGPT (OpenAI) | Gemini (Google) |
|---|---|---|---|
| Enterprise market share (Q1 2026) | 73% of new spend | 27% of new spend (declining) | Smaller share, growing via Google Cloud |
| Best model | Claude Opus 4.6 | GPT-5.4 | Gemini Ultra 3.0 |
| Context window | 200,000 tokens (standard) | 1,000,000 tokens (GPT-5.4) | 1,000,000 tokens (Gemini Ultra) |
| Coding performance (SWE-Bench Pro) | ✅ Industry-leading | ✅ Competitive | 🟡 Strong but behind |
| Enterprise safety / compliance | ✅ Constitutional AI — highest rated | 🟡 Good but governance concerns | 🟡 Good |
| Primary distribution | AWS Bedrock (Amazon) | Azure OpenAI Service (Microsoft) | Google Cloud Vertex AI |
| Consumer AI distribution | ❌ No major consumer channel | ✅ ChatGPT app (500M MAU) | ✅ Android + iOS 27 Siri (2B+ devices) |
| Pricing (best mid-tier model) | Sonnet 4.6: $3/$15 per M tokens | GPT-5.4 mini: $0.80/$3.20 per M tokens | Flash-Lite: $0.25/$1.00 per M tokens |
| Cheapest model | Haiku 3.5: $0.80/$4.00 per M tokens | GPT-5.4 nano: $0.20/$1.25 per M tokens | Flash-Lite: $0.25/$1.00 per M tokens |
| Enterprise governance | ✅ Strongest — Constitutional AI | 🟡 Good | 🟡 Good |
| Open source model | ❌ Proprietary only | ❌ Proprietary only | ✅ Gemma 3 open weights |
| Best for | Enterprise coding, safety-critical, long-context analysis | Consumer AI, broad capability, computer use agents | Cost-sensitive high-volume, consumer distribution |
| Annualised revenue (Feb 2026) | $72B run-rate | ~$120B run-rate (est.) | Part of Google Cloud ($115B+) |
Sources: Ramp March 2026 AI Index; Axios; Anthropic; OpenAI; Google Cloud pricing pages; March 2026
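One way to read the pricing rows: for a hypothetical workload of 100M input and 20M output tokens per month, the mid-tier prices quoted in the table imply very different bills. This is illustrative arithmetic only; real invoices depend on caching, batching, and negotiated discounts.

```python
# Monthly cost at the mid-tier per-million-token prices quoted in the
# table above, for an assumed 100M input / 20M output token workload.
PRICES_USD_PER_MTOK = {          # (input, output)
    "Claude Sonnet 4.6":  (3.00, 15.00),
    "GPT-5.4 mini":       (0.80, 3.20),
    "Gemini Flash-Lite":  (0.25, 1.00),
}

def monthly_cost(input_mtok, output_mtok, prices):
    """Cost in USD for a month's token volume at (input, output) prices."""
    price_in, price_out = prices
    return input_mtok * price_in + output_mtok * price_out

for model, prices in PRICES_USD_PER_MTOK.items():
    print(f"{model}: ${monthly_cost(100, 20, prices):,.2f}")
# Roughly $600 (Sonnet 4.6) vs $144 (GPT-5.4 mini) vs $45 (Flash-Lite)
```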
9. What This Means for Businesses
9.1 If you haven’t evaluated Claude for enterprise use, this week is the time
Anthropic’s 73% enterprise market share is not a marketing statistic. It is a signal from thousands of businesses that have evaluated both Claude and ChatGPT in real enterprise conditions and concluded Claude delivers better value.
If your organisation is currently using GPT-4o, GPT-5, or GPT-5.4 for enterprise workflows — coding, document analysis, compliance, research synthesis — and has not run a structured comparative evaluation against Claude Opus 4.6 or Claude Sonnet 4.6, the enterprise market data suggests you should.
A structured evaluation is straightforward: identify your five most AI-intensive enterprise workflows, run them through both models with identical prompts and context, score the outputs on a predefined rubric, and compare the results. The evaluation takes 2–3 days. The cost of not running it is making infrastructure decisions without evidence.
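A minimal harness for that comparison might look like the sketch below. Everything here is a placeholder: `run_model` stands in for your actual API calls, `score_output` for human rubric scoring, and the workflow names and model identifiers are hypothetical.

```python
# Sketch of a structured two-model evaluation: identical prompts,
# identical rubric, compare mean scores. Placeholders, not a real client.
WORKFLOWS = {
    "contract_review": "Review the attached contract for ...",
    "code_review":     "Review this pull request for ...",
    # ... your remaining AI-intensive workflows
}
RUBRIC = ("accuracy", "completeness", "format", "usefulness")  # scored 1-5

def run_model(model, prompt):
    """Placeholder: swap in the real API call for each provider."""
    return f"[{model} output for: {prompt[:24]}...]"

def score_output(output):
    """Placeholder: in practice, reviewers score each criterion 1-5."""
    return {criterion: 3 for criterion in RUBRIC}

def evaluate(models=("claude-opus-4-6", "gpt-5-4")):
    totals = {m: 0.0 for m in models}
    for workflow, prompt in WORKFLOWS.items():
        for model in models:
            scores = score_output(run_model(model, prompt))
            totals[model] += sum(scores.values()) / len(RUBRIC)
    # Mean rubric score per model across all workflows
    return {m: t / len(WORKFLOWS) for m, t in totals.items()}
```

With real scoring plugged in, the returned dictionary is the evidence base for the infrastructure decision.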
9.2 The iOS 27 Siri launch is your consumer AI product deadline
If your organisation is building consumer-facing AI products for Australian users — chatbots, AI assistants, voice interfaces, mobile AI features — the iOS 27 launch on June 8 sets a capability benchmark that your product will be measured against.
Every iPhone user will have access, for free, to a standalone AI assistant powered by Gemini that can access their personal data, search the web, control their phone, and maintain persistent conversation history. That is the new baseline expectation for consumer AI quality in the iPhone ecosystem.
Products that fall short of this baseline — that require users to copy and paste, that cannot access context, that do not maintain conversation history — will feel inadequate by comparison. The iOS 27 launch is a forcing function for consumer AI product quality.
Audit your consumer AI products against the iOS 27 Siri feature set. Identify the gaps. Prioritise closing them before June 8 or have a clear plan for the 90-day window after the launch.
9.3 The Bank of America deployment is your financial services AI benchmark
For any organisation in banking, wealth management, insurance, accounting, or financial planning — Bank of America’s deployment represents the industry reference case for AI agent deployment in financial advisory roles.
The specific use cases (portfolio analysis, compliance monitoring, meeting preparation, post-meeting documentation) apply across virtually every financial services context. The productivity metrics (3.2 hours per day of administrative time recovered per advisor, 30% more client meetings possible) are the numbers your business case will need to match or beat.
If you are building the business case for AI agent deployment in your financial services organisation, the Bank of America case study is your most current, most credible reference. Cite it. Build on it. Design your deployment to deliver comparable or better results.
9.4 The Anthropic Economic Index skill gap finding is an action item
Anthropic’s finding that users who provide well-structured, context-rich inputs achieve 3–4× better productivity gains than users who provide vague inputs is an immediate, actionable insight for any organisation deploying AI.
The implication is direct: prompt and context engineering training for your AI users is one of the highest-ROI investments you can make right now. If your organisation has deployed AI tools — Claude, ChatGPT, Copilot, Gemini — without training your people on how to use them effectively, you are capturing a fraction of the available productivity gain.
The training is not complex. It covers: how to structure a clear task description, how to provide relevant context, how to specify the format you want for the output, how to break complex requests into sequential steps, and how to evaluate and iterate on AI outputs. A well-designed 4-hour workshop covers the essentials for most non-technical users, with measurable productivity improvement within the first week of application.
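The training components listed above translate directly into a reusable prompt template. A minimal sketch (the function and field names are ours, for illustration only):

```python
def build_prompt(task, context, output_format, steps=None):
    """Assemble a structured prompt: task, context, format, optional steps."""
    parts = [
        f"Task: {task}",
        f"Context:\n{context}",
        f"Output format: {output_format}",
    ]
    if steps:
        numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
        parts.append(f"Work through these steps in order:\n{numbered}")
    return "\n\n".join(parts)

prompt = build_prompt(
    task="Summarise the attached quarterly report for the board",
    context="Audience: non-technical directors; focus on cash flow risk",
    output_format="Five bullet points, each under 20 words",
    steps=["Extract key figures", "Identify risks", "Draft bullets"],
)
```

A template like this gives non-technical users the structure the workshop teaches without requiring them to remember it.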
10. FAQ
Is Anthropic beating OpenAI in enterprise AI?
Yes. According to Ramp’s March 2026 AI Index — corporate card spending data covering thousands of US businesses — Anthropic captured 73% of all new enterprise AI spending in the ten weeks ending mid-March 2026, compared to OpenAI’s 27%. This represents a dramatic reversal from a near-even split just ten weeks earlier. Anthropic’s enterprise adoption rate grew 4.9% month-over-month in February, its largest ever single-month gain, while OpenAI experienced its steepest ever single-month decline of 1.5%. Anthropic’s annualised revenue reached a $72 billion run-rate in February 2026.
Why is Claude winning enterprise AI over ChatGPT?
Five factors consistently explain Anthropic’s enterprise market share lead: (1) Claude’s Constitutional AI safety methodology delivers lower hallucination rates and higher compliance approval rates in enterprise evaluations; (2) Claude Opus 4.6’s 200,000-token context window enables the long-document and large-codebase use cases enterprises most value; (3) Claude consistently outperforms GPT-5.4 on real-world coding tasks in engineer evaluations, winning technical teams who then influence enterprise purchasing; (4) Amazon’s $8 billion investment in Anthropic provides frictionless AWS Bedrock distribution to Amazon’s enormous enterprise customer base; and (5) Anthropic’s governance stability contrasts favourably with OpenAI’s November 2023 board crisis in enterprise risk assessments.
Why did OpenAI shut down Sora?
OpenAI shut down the Sora consumer video generation app in March 2026, just months after its consumer launch. The stated reasons include compute costs, strategic refocus, and concerns about deepfake proliferation on the platform. The underlying strategic reason, per Wall Street Journal reporting, is that OpenAI is pivoting away from broad consumer product bets toward focused enterprise execution — acknowledging that its enterprise market share is being captured by Anthropic and that resources are better deployed on enterprise products, frontier model development, and its hardware roadmap. Sora’s research team has been redirected to world simulation for robotics, directly supporting OpenAI’s physical AI strategy.
What is new in Apple Siri iOS 27?
iOS 27, announced for WWDC June 8, 2026, features a completely rebuilt Siri as a standalone app — transforming it from an embedded voice assistant into a full-featured AI application competitive with ChatGPT and Claude. New capabilities include: a persistent chat interface with conversation history, deep personal data integration (Messages, Mail, Notes, Calendar, Photos, Health), full web search and multi-step reasoning, app control across all iPhone apps, and proactive contextual suggestions. The new Siri is powered by Google Gemini for complex queries, transmitted via Apple’s Private Relay for privacy protection, with Google contractually prohibited from using Siri queries for model training.
Is the new Siri powered by Gemini?
Yes. iOS 27’s rebuilt Siri uses Google Gemini as its underlying AI model for complex queries that exceed on-device processing capability. Simple requests (timers, calls, music) are handled on-device by Apple’s Neural Engine. More complex requests — research, writing, analysis, cross-app tasks — are handled by Gemini, transmitted through Apple’s Private Relay infrastructure, which anonymises requests so Google cannot identify the user. Apple’s contract with Google explicitly prohibits Google from training Gemini on Siri queries.
What is Bank of America doing with AI agents?
Bank of America deployed AI agents to approximately 1,000 financial advisors in its Merrill Lynch wealth management division in March 2026 — the largest single deployment of AI agents in live financial advisory roles at any major US bank. The agents perform: continuous client portfolio analysis, real-time compliance monitoring, market data synthesis for client briefings, meeting preparation, post-meeting documentation, and proactive opportunity identification. Early results from the pilot show advisors recovering 3.2 hours per day of administrative time and achieving 47% improvement in compliance exception detection.
What did Anthropic’s Economic Index find about AI and jobs?
Anthropic’s March 2026 Economic Index — a privacy-preserving analysis of how Claude is actually used across the economy — found that approximately 78% of Claude usage is augmentation (humans using AI to do their existing jobs more effectively) and 22% is automation (AI performing tasks independently). Software engineers use Claude most intensively (4.2 sessions per day on average), followed by management consultants and financial analysts. The index found that the quality of user inputs — prompt structure and context richness — is the largest predictor of productivity gains, with well-structured inputs producing 3–4× better results than vague inputs. Automation’s share of total usage is growing each quarter.
Who is winning the AI race in 2026?
No single company is winning all dimensions of the AI race in March 2026, but clear leaders have emerged by layer: Anthropic leads in enterprise API adoption (73% of new enterprise spending), Google leads in consumer AI distribution (Android + iOS 27 Siri giving Gemini access to 2 billion+ devices), OpenAI leads in brand recognition and frontier model capability breadth, and the physical AI layer (wearables, robots) remains genuinely contested between OpenAI (Sweet Pea), Apple (next-gen AirPods + rumoured pin), and Meta (Ray-Ban smart glasses). The enterprise and consumer distribution battles appear to be consolidating around Anthropic and Google respectively, while OpenAI pivots toward enterprise and hardware to find its next growth vector.
This article was researched and written by the Kersai Research Team. Kersai is a global AI consultancy firm dedicated to helping enterprises confidently navigate the rapidly evolving artificial intelligence landscape — from cutting-edge strategic insights to practical, large-scale AI implementation. To learn more, visit kersai.com.
