Anthropic Accidentally Leaked Its Entire Secret AI Roadmap — And What It Reveals Changes Everything We Thought We Knew About Claude


KAIROS: A Background Agent That Thinks Without You. Dream Mode. Undercover Mode. Anti-Competitor Traps. Plus: Oracle Axes Thousands, KPMG’s Damning Enterprise AI Data, and SpaceX Quietly Files for History’s Biggest IPO.

Published: April 2, 2026 | By the Kersai Research Team | Reading Time: ~20 minutes
Last Updated: April 2, 2026


Quick Summary: On March 31, a routine npm package update for Claude Code accidentally exposed a source map file that allowed the reconstruction of 512,000 lines of Anthropic’s proprietary TypeScript code — the complete internal architecture of the world’s most widely deployed enterprise AI tool. The leak, which garnered 28.8 million views on X within 24 hours, revealed features Anthropic has never publicly discussed: KAIROS, a persistent background agent that operates continuously without user input; Dream Mode, which enables Claude to think and develop ideas in the background indefinitely; Undercover Mode, which makes stealth commits to open-source repositories without revealing Anthropic’s involvement; and a covert anti-distillation system that injects fake data into API responses to poison competitors’ training datasets. Anthropic confirmed the accidental leak and is rolling out prevention measures — but the code, now starred 84,000 times on GitHub, cannot be un-leaked. Meanwhile, Oracle notified thousands of employees that their roles were eliminated effective immediately as part of a restructuring budgeted at up to $2.1 billion. KPMG’s Global AI Pulse Survey revealed that 65% of enterprises cannot scale AI use cases — double the year-earlier figure. And SpaceX quietly filed confidential IPO paperwork, targeting the highest valuation in stock market history.


Table of Contents

  1. The Leak: How 512,000 Lines of Claude’s Source Code Ended Up on GitHub
  2. KAIROS: Anthropic’s Secret Always-On Background Agent
  3. Dream Mode, Undercover Mode, and the Anti-Competitor Traps
  4. The Security Fallout: Supply Chain Attack and Typosquatting
  5. What the Leaked Code Tells Us About Anthropic’s Real Roadmap
  6. Oracle Lays Off Thousands: The $2.1 Billion AI Restructuring
  7. KPMG’s Verdict: 65% of Enterprises Can’t Scale AI — And Why That’s Actually Good News for Businesses That Can
  8. SpaceX Confidentially Files for IPO: What a $1.5 Trillion Listing Means for AI
  9. What This Means for Australian Businesses
  10. FAQ

1. The Leak: How 512,000 Lines of Claude’s Source Code Ended Up on GitHub

1.1 What happened

On March 31, 2026, Anthropic released version 2.1.88 of the Claude Code npm package — a routine update to their AI coding tool distributed through npm, the world’s largest software registry.

Inside that update, a human error: a source map file was accidentally bundled with the release. Source maps are developer tools that translate minified, obfuscated code back into readable source — they are designed for internal debugging and should never ship in production packages. In this case, the source map gave anyone who downloaded the package the ability to reconstruct nearly all of Claude Code’s internal source code.
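The mechanism behind the leak is worth seeing concretely. A minified bundle typically ends with a comment naming its source map; if that `.map` file ships in the package, standard tooling can rebuild the original source. A small sketch of that linkage (file names and contents are illustrative):

```typescript
// A minified production bundle typically ends with a comment pointing at
// its source map; shipping that .map file lets anyone rebuild the
// original TypeScript. File names and contents here are illustrative.
const minifiedBundle = 'console.log("hi");\n//# sourceMappingURL=cli.js.map';

// Return the source map a bundle references, if any. This is the file a
// release audit should confirm is absent from a published package.
function referencedSourceMap(bundle: string): string | null {
  const match = bundle.match(/^\/\/# sourceMappingURL=(.+)$/m);
  return match ? match[1] : null;
}

console.log(referencedSourceMap(minifiedBundle)); // cli.js.map
```

The map itself contains the original file names and source text, which is why one stray `.map` file was enough to expose nearly the entire codebase.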

Security researcher Chaofan Shou was the first to notice, posting on X: “Claude code source code has been leaked via a map file in their npm registry!” The post accumulated 28.8 million views within 24 hours. Within hours, the full reconstructed codebase appeared on GitHub — nearly 2,000 TypeScript files, more than 512,000 lines of code.

Anthropic pulled version 2.1.88 from npm immediately and confirmed the leak to CNBC:

“No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.”

The clarification is important: this was not a cyberattack, not a deliberate disclosure, and not a data breach affecting customer information. It was a packaging mistake that exposed Anthropic’s internal engineering work — the blueprint for how Claude Code is built.

But the contents of that blueprint are extraordinary.

1.2 The scale of the exposure

The leaked codebase reveals Claude Code’s full internal architecture, confirming and extending what was previously known about its capabilities:

  • Tools system: The complete set of capabilities Claude Code has access to, including file reading, bash execution, web browsing, and IDE integration
  • Query engine: The internal LLM API call and orchestration system — how Claude Code manages model requests, handles context, and chains responses
  • Multi-agent orchestration: The system for spawning sub-agents and coordinating agent “swarms” for complex parallel tasks
  • Self-healing memory architecture: A sophisticated system for managing Claude’s fixed context window, including techniques for compressing, prioritising, and reconstructing context across very long sessions — one of the most technically impressive components in the leaked code
  • Bidirectional communication layer: The protocol connecting Claude Code’s CLI to IDE extensions — enabling the deep VSCode and JetBrains integration that engineers use daily

These are the expected components of a sophisticated AI coding tool. The truly surprising revelations are the features that appear in the code but have never been publicly announced.


2. KAIROS: Anthropic’s Secret Always-On Background Agent

The most consequential discovery in the leaked code is a feature called KAIROS — and it represents a fundamentally different vision for what AI assistants are about to become.

2.1 What KAIROS is

KAIROS is described in the leaked code as a persistent, proactive background agent — an AI that does not wait for you to ask it something. It runs continuously in the background of a developer’s environment, monitoring the codebase, identifying errors, and taking actions to fix them without receiving a prompt.

The specific capabilities described in the KAIROS feature flags and configuration code:

  • Proactive error detection: KAIROS continuously monitors the codebase for errors, inconsistencies, and potential issues — not when asked, but as a persistent background process
  • Autonomous error resolution: When KAIROS identifies fixable issues, it can attempt to resolve them without user input
  • Periodic task execution: KAIROS can run scheduled tasks on the codebase at defined intervals — tests, code quality checks, documentation updates — independently
  • Push notifications: KAIROS can proactively alert users when it has identified something important or completed a background task — the AI reaching out to the human, rather than waiting for the human to reach out to it

The conceptual shift KAIROS represents is worth pausing on. Every AI assistant today — including Claude, ChatGPT, and Gemini — operates reactively. You initiate. The AI responds. KAIROS inverts this model: the AI initiates, and you respond.

This is not incremental. It is a different category of AI interaction — one where your AI codebase assistant is doing work on your project while you are asleep, while you are in meetings, while you are doing something else entirely.
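The inverted loop the leaked feature flags describe can be sketched in a few lines. This is a hypothetical illustration only, and every name below is invented; it is not Anthropic's actual implementation:

```typescript
// Hypothetical sketch of the proactive loop KAIROS's feature flags describe:
// scan without being asked, fix what is autonomously fixable, and push a
// notification to the human either way. All names are illustrative.

type Issue = { file: string; description: string; autoFixable: boolean };

interface Notifier {
  push(message: string): void;
}

class BackgroundAgent {
  constructor(
    private scan: () => Issue[],            // e.g. a lint/test pass over the codebase
    private fix: (issue: Issue) => boolean, // attempt an autonomous repair
    private notifier: Notifier,             // the channel by which the AI reaches out
  ) {}

  // One tick of the persistent loop. In a shipping agent this would run on
  // a schedule (periodic task execution) rather than being invoked by hand.
  tick(): void {
    for (const issue of this.scan()) {
      if (issue.autoFixable && this.fix(issue)) {
        this.notifier.push(`Fixed: ${issue.description} in ${issue.file}`);
      } else {
        this.notifier.push(`Needs attention: ${issue.description} in ${issue.file}`);
      }
    }
  }
}
```

The inversion lives in `notifier.push`: the human receives output they never requested. Scheduling, state persistence, and safety limits are where the real engineering effort in such a system would sit.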

2.2 Why KAIROS is significant beyond coding

KAIROS is described in developer context — fixing code bugs, running tests — but its architecture is not inherently limited to coding. The persistent background agent model that KAIROS implements is applicable to any workflow domain:

  • A KAIROS-style email agent that monitors your inbox and acts on actionable emails proactively, without waiting for you to open them
  • A KAIROS-style financial monitoring agent that tracks your business accounts and flags anomalies as they occur
  • A KAIROS-style compliance agent that continuously reviews your business communications and flags potential issues before they become problems
  • A KAIROS-style sales agent that monitors your CRM and proactively nurtures leads based on their activity patterns, without waiting for you to assign a task

The leaked feature flags position KAIROS as a near-term product addition, not a distant research initiative. It is implemented in shipping code rather than in experimental branches, which points to a release timeline measured in months, not years.


3. Dream Mode, Undercover Mode, and the Anti-Competitor Traps

Beyond KAIROS, the leaked code contains three additional unreleased features that have generated intense discussion in the developer community.

3.1 Dream Mode — Claude That Never Stops Thinking

The leaked code references a feature called “dream” mode that will allow Claude to constantly think in the background — developing ideas and iterating on existing work without waiting for user prompts.

The analogy embedded in the feature’s name is intentional: just as human brains process and consolidate information during sleep, Dream Mode gives Claude a continuous cognitive process that runs independently of active user sessions.

The practical application: a developer working on a complex architectural problem could engage Dream Mode and come back hours later to find that Claude has explored multiple solution approaches, identified tradeoffs, and developed a recommendation — the work done during the developer’s absence.

Dream Mode, combined with KAIROS, paints a picture of an AI coding companion that is not just reactive but continuously contributive — actively participating in your project’s development between your active sessions.

3.2 Undercover Mode — Stealth Open-Source Contributions

Perhaps the most surprising feature in the leaked code is what developers are calling Undercover Mode.

The leaked system prompt reads:

“You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information. Do not blow your cover.”

This feature enables Claude Code to make contributions to public open-source repositories — commits, pull requests, code reviews — without revealing that the contributions are AI-generated or Anthropic-assisted. The commit messages, PR descriptions, and code comments are written to appear as ordinary human developer contributions.

The discovery has generated significant ethical debate in the developer community. Open-source projects rely on transparent attribution — knowing who contributed what, and why, is fundamental to how open-source governance works. A tool that makes AI contributions appear as human contributions without disclosure raises questions about:

  • Attribution integrity: If an AI makes a significant contribution to an open-source project, should that be disclosed?
  • License implications: Some open-source licenses have specific provisions about AI-generated code; undisclosed AI contributions could create license compliance issues
  • Security implications: The open-source security model partly relies on human review of human-written code; covert AI contributions could bypass community review processes designed around human contributor behaviour

Anthropic has not yet commented specifically on Undercover Mode following the leak. The feature’s existence suggests deliberate design choices that will require public explanation.

3.3 Anti-Distillation Traps — Poisoning Competitor Training Data

The third notable discovery: covert controls designed to deceive competitors who attempt to train AI models on Claude Code’s outputs.

The leaked code contains a system described as anti-distillation protection — which injects fake tool definitions into API responses when Claude Code detects patterns consistent with model distillation attacks (attempts to train a competing model on Claude’s outputs by systematically collecting its responses).

In February 2026, Anthropic had already publicly accused Chinese AI companies of using 16 million Claude API calls to train competing models — a practice known as “model distillation.” The anti-distillation system in the leaked code represents Anthropic’s covert technical countermeasure: if you try to train on Claude’s outputs, Claude silently feeds you fake data to corrupt your training dataset.
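The countermeasure's shape, as described in the coverage, can be sketched simply: when traffic looks like systematic output harvesting, decoy tool definitions are mixed into the response. The detection heuristic, thresholds, and all names below are invented for illustration, not taken from the leaked code:

```typescript
// Illustrative sketch of an anti-distillation countermeasure: suspected
// harvesting traffic receives fake tool definitions alongside real ones,
// corrupting any dataset built from the responses. Heuristic, thresholds,
// and names are invented for illustration.

type ToolDef = { name: string; description: string };

const REAL_TOOLS: ToolDef[] = [
  { name: "read_file", description: "Read a file from the workspace" },
];

const DECOY_TOOLS: ToolDef[] = [
  { name: "quantum_lint", description: "A tool that does not exist" },
];

// Crude heuristic: very high request volume with near-identical prompts.
function looksLikeDistillation(requestsPerHour: number, promptEntropy: number): boolean {
  return requestsPerHour > 1_000 && promptEntropy < 0.2;
}

function toolsForResponse(requestsPerHour: number, promptEntropy: number): ToolDef[] {
  return looksLikeDistillation(requestsPerHour, promptEntropy)
    ? [...REAL_TOOLS, ...DECOY_TOOLS] // poison the harvested dataset
    : REAL_TOOLS;
}
```

Normal users see only real tools; a harvester cannot easily tell which definitions are decoys, which is precisely what makes the technique effective and contentious.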

The ethical and legal dimensions of this are genuinely novel territory. Using AI outputs to train a competing model without the provider’s consent is a terms-of-service violation — but Anthropic secretly poisoning data provided to paying API customers (even if those customers are violating terms) raises its own questions. This is an area where technology has outpaced both regulation and established ethical frameworks.


4. The Security Fallout: Supply Chain Attack and Typosquatting

4.1 The Axios supply chain attack

The source code leak itself was a packaging accident — but it arrived simultaneously with a more dangerous security incident that every Claude Code user must act on immediately.

Users who installed or updated Claude Code via npm on March 31, 2026, between 00:21 and 03:29 UTC may have inadvertently installed a trojanized version of Axios — the widely used HTTP client — that contained a cross-platform remote access trojan (RAT).

The Axios supply chain attack is a separate incident from the source code leak — it exploited the timing of the Claude Code update to insert malicious code into a commonly co-installed dependency.

Immediate action required if you updated Claude Code during this window:

  1. Immediately downgrade Claude Code to a version released before March 31, 2026
  2. Rotate all secrets, API keys, and credentials stored on the affected system — assume they may have been exfiltrated
  3. Audit recent outbound network connections from the affected system for anomalous activity
  4. Alert your security team and conduct a full incident assessment
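Step 3's audit starts with knowing exactly which axios build each project resolved. A self-contained Node/TypeScript sketch of that lockfile check follows; it writes a fixture `package-lock.json` so it runs as-is, whereas against a real project you would read the existing lockfile:

```typescript
import { readFileSync, writeFileSync } from "node:fs";

// Fixture lockfile standing in for a real project's package-lock.json.
writeFileSync("package-lock.json", JSON.stringify({
  packages: {
    "node_modules/axios": {
      version: "1.7.9",
      resolved: "https://registry.npmjs.org/axios/-/axios-1.7.9.tgz",
    },
  },
}));

// List every resolved axios entry (including nested copies) so each can be
// checked against the advisory for the trojanized builds.
const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));
const axiosEntries = Object.entries(lock.packages ?? {})
  .filter(([path]) => path.endsWith("/axios"))
  .map(([path, meta]) => `${path}@${(meta as { version: string }).version}`);

console.log(axiosEntries); // logs the single entry: node_modules/axios@1.7.9
```

The same walk over `lock.packages` generalises to any dependency named in a security advisory, nested copies included.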

4.2 The typosquatting attack

Following the leak, attackers began publishing npm packages with names matching internal packages referenced in the leaked source code — a technique called typosquatting or dependency confusion. Developers attempting to compile the leaked source code may inadvertently install these malicious packages.

The malicious packages, all published by a user called “pacifier136,” are currently empty stubs — but as security researcher Clément Dumas explained: “That’s how these attacks work — squat the name, wait for downloads, then push a malicious update that hits everyone who installed it.”

Do not attempt to compile or run the leaked Claude Code source code. The risk of inadvertently installing a malicious package through dependency confusion is significant for any developer who tries to build from the leaked codebase.
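For teams that publish internal packages under an npm scope, the standard dependency-confusion defence is to pin that scope to the private registry so a public lookalike can never resolve in its place. A minimal `.npmrc` sketch, where the scope name and registry URL are placeholders:

```ini
; .npmrc: route the internal scope to the private registry only
@yourcompany:registry=https://npm.internal.example.com/
; Refuse install-time lifecycle scripts, the usual payload delivery
; mechanism for malicious packages
ignore-scripts=true
```

`ignore-scripts` is a blunt instrument (some legitimate packages need their install scripts), but it is an effective default while auditing an incident like this one.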


5. What the Leaked Code Tells Us About Anthropic’s Real Roadmap

Setting aside the security incidents, the leaked code reveals a product roadmap that is significantly more ambitious than Anthropic’s public communications have suggested.

The picture that emerges from the feature flags, configuration structures, and system prompts in the 512,000 lines of TypeScript:

5.1 The transition from tool to teammate

The features revealed — KAIROS’s proactive error fixing, Dream Mode’s continuous background thinking, the multi-agent orchestration for spawning sub-agent swarms — collectively describe a product designed to transition from AI-as-tool to AI-as-teammate.

A tool does what you tell it when you tell it. A teammate takes initiative, works independently, and contributes without being directed for every task. The Claude Code roadmap is explicitly moving toward the latter.

5.2 The second Anthropic leak in one week

The Claude Code source code leak is, remarkably, Anthropic’s second major accidental disclosure in seven days. The week before, details about an upcoming AI model — internally referred to as “Mythos” — were left accessible via Anthropic’s content management system.

Fortune reported that Anthropic acknowledged testing Mythos with early access customers, describing it as “the most capable we’ve built to date” and representing a “step change in capabilities.” Combined with the source code leak, Anthropic has inadvertently disclosed both its next frontier model and its next-generation product architecture within the same seven-day period.

For competitors — and for the developer community — this is an unprecedented window into a company that has otherwise maintained tight operational security. The accidental transparency may actually benefit Anthropic in one respect: the developer excitement generated by the KAIROS and Dream Mode revelations is enormous, and the organic publicity may accelerate adoption in ways that a formal product announcement could not replicate.

5.3 The GitHub reaction

The leaked code on GitHub has accumulated 84,000 stars and 82,000 forks — extraordinary numbers for a repository that has existed for days. For context, major open-source projects like React took years to accumulate comparable star counts.

The developer community’s reaction suggests that the leaked features — particularly KAIROS and Dream Mode — have generated genuine excitement about Claude Code’s near-term direction. The leak may paradoxically have strengthened Claude’s position in the developer market by demonstrating the ambition and technical sophistication of Anthropic’s roadmap.


6. Oracle Lays Off Thousands: The $2.1 Billion AI Restructuring

6.1 What happened

On March 31, thousands of Oracle employees received a 6 AM email containing the following sentence:

“After careful consideration of Oracle’s current business needs, we have made the decision to eliminate your role as a part of a broader organizational change. As a result, today is your last working day.”

The email, reviewed and published by Business Insider, confirmed immediate termination effective the day of receipt — no transition period, no notice. Affected employees were directed to sign termination paperwork to receive severance.

Oracle had approximately 162,000 full-time employees as of May 2025, per its most recent 10-K filing. The number of roles eliminated is “in the thousands,” according to CNBC’s sources — representing a meaningful reduction in Oracle’s global headcount.

The financial scale: Oracle’s March 2026 filing disclosed that its restructuring plan is projected to cost as much as $2.1 billion in fiscal 2026, the majority of which will go to employee severance and related expenses. At an average severance cost of $50,000–$100,000 per employee, a budget of that size would imply roughly 21,000–42,000 positions if spent entirely on severance, though the actual number is likely lower, since the budget also covers related expenses and Oracle’s cost structure varies across its global markets.
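The implied-headcount range is simple division over the disclosed budget and the assumed per-employee severance cost:

```typescript
// Implied headcount if the full restructuring budget went to severance,
// at the assumed per-employee cost range from the text.
const budget = 2.1e9; // $2.1B fiscal-2026 restructuring budget
const severancePerEmployee = { low: 50_000, high: 100_000 }; // assumed range

const impliedPositions = {
  low: budget / severancePerEmployee.high,  // 21,000 positions
  high: budget / severancePerEmployee.low,  // 42,000 positions
};

console.log(impliedPositions); // { low: 21000, high: 42000 }
```

Since severance is only the majority, not the entirety, of the budget, the true figure sits below whatever this division yields.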

6.2 Why Oracle is restructuring now

Oracle’s layoffs are not a sign of financial distress — they are a sign of strategic reorientation around AI infrastructure.

The context: Oracle has been rapidly increasing capital expenditure to build AI data centres. The company’s cloud infrastructure business is experiencing strong growth driven by AI workload demand. But that infrastructure investment requires capital that Oracle’s existing cost base — heavily weighted toward sales, support, and legacy software maintenance roles — does not efficiently generate.

Oracle CEO Larry Ellison has been vocal about the company’s AI ambitions: at a White House event in 2025, he announced AI infrastructure investments alongside OpenAI and SoftBank. The layoffs are the workforce consequence of the capital reallocation that announcement implied.

Oracle’s stock has been under significant pressure — down 25% year to date and 48% over six months — reflecting investor anxiety about whether enterprise software companies can successfully navigate the transition to AI-native architectures without losing revenue from their legacy businesses. The layoff news actually boosted Oracle’s shares by more than 4% on announcement day, reflecting investor relief that management is taking aggressive restructuring action rather than protecting the status quo.

6.3 The pattern: a third major tech AI restructuring in two weeks

Oracle’s layoffs follow Meta’s 700 job cuts (March 25) and are part of a pattern of AI-driven restructuring across the technology sector. The pattern is consistent: companies are simultaneously:

  • Increasing AI infrastructure investment (data centres, compute, model training)
  • Decreasing headcount in roles that AI can replicate or that support legacy business models
  • Redirecting human capital toward roles that require judgment, creativity, and oversight of AI systems

For employees and business leaders alike, the Oracle announcement reinforces a structural reality: the AI transition is not creating a gentle, gradual workforce shift. It is creating rapid, decisive restructurings at some of the world’s largest and most stable technology employers.


7. KPMG’s Verdict: 65% of Enterprises Can’t Scale AI — And Why That’s Actually Good News for Businesses That Can

7.1 The headline numbers

KPMG’s Q1 2026 Global AI Pulse Survey — released March 31, covering 2,110 C-suite and business leaders across 20 countries, 75% from companies with over $1 billion in annual revenue — is the most credible enterprise AI dataset released this quarter. Its findings are simultaneously sobering and strategically significant.

The investment picture (strong):

  • US organisations project average AI spending of $207 million over the next 12 months — nearly double the same period last year
  • Global average: $186 million — also roughly double year-prior figures
  • AI investment is at the highest level KPMG has ever recorded and continues on a “sharp upward trajectory”

The execution picture (deeply challenged):

Barrier to AI ROI                      Q1 2026     Q1 2025     Change
Difficulty scaling use cases           65%         33%         +32 points
Skills gaps                            62%         25%         +37 points
Data security and privacy concerns     91% cite as influencing strategy

The numbers tell a precise story: enterprise investment in AI has doubled, but the ability to convert that investment into business results has not kept pace. In fact, the barriers have nearly doubled too — the percentage of leaders citing scaling difficulty jumped from 33% to 65% year on year, as organisations moved from isolated pilots to enterprise-wide deployment and discovered that scaling AI is categorically harder than piloting it.

7.2 AI agent deployment: the tipping point is real

On AI agent adoption specifically, the KPMG data provides the clearest measurement yet of where the market stands:

Period        % of Organisations Actively Deploying AI Agents
Early 2024    12%
Q2 2024       33%
Q1 2026       54%

More than half of large enterprises globally are now actively deploying AI agents — not experimenting, not planning, but deploying in production. Operations (79%) and technology (78%) departments lead, and 73% of organisations use agents to automate workflows that span multiple functions. The agentic era is not coming. It is here, measured by real deployment data from the world’s largest organisations.

7.3 The human readiness crisis

KPMG’s most significant finding is what Steve Chase, KPMG’s Global Head of AI & Digital Innovation, called the limiting factor: “The limiting factor isn’t the technology — it’s whether people have the skills to direct AI, apply judgment and take responsibility for results.”

The specific numbers:

  • 87% of leaders say upskilling and reskilling the existing workforce is their #1 priority for building an AI-enabled workforce — ahead of hiring (68%) and job redesign (55%)
  • 76% of employees resist AI agent adoption due to skills gaps (up from significantly lower figures in 2025)
  • 67% of employees resist due to job security concerns
  • For entry-level roles, adaptability and continuous learning (83%) now outranks technical programming skills (67%) as the most valued hiring attribute

The workforce readiness data explains the scaling gap precisely: organisations have invested in AI tools but not in the human infrastructure required to use them effectively. The technology is ready. The people are not.

7.4 Governance becoming a prerequisite — not an afterthought

KPMG’s governance data reveals a significant maturation in enterprise AI risk management:

  • 91% of leaders say data security, privacy, and risk will influence their AI strategies in the next six months
  • 63% now require human validation of AI agent outputs — up from just 22% in Q1 2025, nearly tripling in one year
  • The requirement for human validation is emerging as a non-negotiable standard, not a nice-to-have

The near-tripling of human validation requirements reflects enterprises learning from experience: AI agents that operate without governance checkpoints generate errors and liabilities at a rate that exceeds their productivity gains. The organisations that are successfully scaling AI have built governance in as a prerequisite, not retrofitted it after deployment.
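Structurally, the human-validation requirement is a quarantine gate: agent output is held until a person approves it, and only approved output reaches downstream systems. A generic sketch of that pattern, not any vendor's API:

```typescript
// Generic human-validation gate: AI agent outputs are quarantined until a
// human approves them; only approved outputs are ever released downstream.
// This illustrates the governance pattern, not a specific product.

type Review<T> = { value: T; approved: boolean };

class ValidationGate<T> {
  private pending: Review<T>[] = [];

  // Agent output enters quarantine, unapproved by default.
  submit(value: T): Review<T> {
    const review: Review<T> = { value, approved: false };
    this.pending.push(review);
    return review;
  }

  // A human reviewer signs off on a specific output.
  approve(review: Review<T>): T {
    review.approved = true;
    return review.value;
  }

  // Downstream systems only ever see approved outputs.
  released(): T[] {
    return this.pending.filter((r) => r.approved).map((r) => r.value);
  }
}
```

The design choice the KPMG numbers reflect is making this gate a hard architectural boundary rather than a convention individual teams can skip.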

7.5 The strategic opportunity in the scaling gap

The KPMG data contains a counterintuitive strategic insight for Australian businesses: the fact that 65% of enterprises cannot scale AI is an opportunity, not just a warning.

Every large enterprise unable to scale AI effectively is a potential client for consultancies, technology providers, and implementation partners who can close the scaling gap. The $207 million average AI spend that these organisations are planning represents trillions in addressable global spending on AI implementation and change management — the majority of which will go to external partners because the internal capability to execute does not exist.

ASPAC (Asia-Pacific, including Australia) shows distinctive characteristics in the KPMG global data: ASPAC organisations are more likely to pursue agent-first operating models than their US or European counterparts — suggesting that the region’s AI adoption pattern may accelerate faster than the global average once the skills and governance foundations are built.


8. SpaceX Confidentially Files for IPO: What a $1.5 Trillion Listing Means for AI

8.1 The filing confirmed

Bloomberg confirmed on April 1 that SpaceX has made a confidential IPO filing with the Securities and Exchange Commission — the formal legal step that precedes a public listing, allowing the company to prepare for an IPO while keeping its financial details private until closer to the public offering.

The confidential filing does not guarantee an imminent IPO — companies routinely file confidentially and then delay or withdraw. But it confirms that SpaceX is actively preparing for a public market debut and that management has made the organisational and governance changes that a confidential filing requires.

The targeted valuation: $1.5 trillion, which would be the highest valuation ever assigned to a company at the moment of its public market debut.

The business case: SpaceX’s Starlink satellite internet division generated $6.6 billion in revenue in 2025 with trajectory toward $10 billion+ in 2026. The combination of Starlink’s recurring subscription revenue, SpaceX’s launch services dominance (controlling over 90% of global orbital launch mass in 2025), and the growth optionality of Starship’s commercial capability creates a valuation argument that is qualitatively different from conventional technology company IPOs.

8.2 Why the SpaceX IPO matters for AI

The connection between SpaceX and the AI landscape is more direct than it might appear:

First, compute infrastructure: AI requires enormous amounts of electricity and cooling, so data centres are increasingly sited wherever cheap energy is available, often in remote regions that terrestrial fibre does not reach. SpaceX’s Starlink provides the satellite connectivity that makes AI infrastructure viable in those locations. SpaceX’s public market capital will accelerate Starlink’s expansion, which in turn expands the geographic reach of AI deployment.

Second, the AI-in-space convergence: SpaceX’s Starship programme — the heavy-lift launch vehicle targeting full reusability — is designed to dramatically reduce the cost of deploying hardware to orbit. AI-powered satellite systems, including AI earth observation, communications, and military intelligence platforms, depend on affordable launch access. SpaceX’s IPO capital will accelerate Starship’s development, which accelerates AI-in-space deployment.

Third, the IPO cohort: SpaceX’s confidential filing confirms what Bloomberg and the New York Times reported in January: 2026 is the year of the AI and deep-tech mega-IPO. OpenAI is preparing its for-profit restructuring as a prerequisite for an IPO. Anthropic is preparing its public benefit corporation structure for investor scrutiny. SpaceX’s filing, as the first confirmed in this cohort, signals that the regulatory and market conditions for major tech IPOs are favourable — which may accelerate the timelines for OpenAI and Anthropic’s listings.

8.3 The AI IPO timeline

Company      Estimated IPO Window     Valuation Target       Key Prerequisite
SpaceX       H2 2026 – H1 2027        $1.5 trillion          None (filing confirmed)
OpenAI       H2 2026                  $800B–$1 trillion      For-profit restructuring completion
Anthropic    H2 2026 – H1 2027        $300B–$350B            Public benefit corporation structure review
Stripe       2026–2027                $65B+                  Operational; reports suggest H2 2026

The combined market capitalisation of these four listings, if all proceed within their targeted ranges, would total roughly $2.7–2.9 trillion. For context, the entire US technology sector’s market cap was approximately $12 trillion at the end of 2025. A nearly $3 trillion addition from four new listings in 18 months would represent the largest concentration of tech IPO capital in history.


9. What This Means for Australian Businesses

On the Claude Code leak: Australian businesses using Claude Code in their development workflows should take two immediate actions. First, if any developers updated Claude Code via npm on March 31 between 00:21 and 03:29 UTC, treat the affected system as potentially compromised — rotate all credentials and secrets stored on it. Second, do not attempt to access, compile, or deploy the leaked source code — the typosquatting risk is real and the malicious packages are already in the npm registry.

Beyond the immediate security actions, the leaked roadmap has strategic implications for Australian technology businesses evaluating their AI coding tool investment. KAIROS and Dream Mode, when released, will represent a step-change in developer productivity that no other AI coding tool currently offers. Teams that are deeply embedded in Claude Code’s workflow today will be the earliest adopters of these features when they launch, compounding their productivity advantage from day one.

On Oracle’s layoffs: The Oracle restructuring reinforces a pattern that Australian technology buyers need to factor into vendor risk assessments. Legacy enterprise software providers — Oracle, SAP, Salesforce, and others — are in various stages of AI-driven restructuring that will change their support quality, product roadmaps, and organisational stability over the next 12–24 months. If your business runs critical infrastructure on legacy enterprise software, now is the time to assess your vendor’s AI transition credibility and ensure your contracts contain appropriate service level protections.

On the KPMG data: The finding that 65% of enterprises globally cannot scale AI — with skills gaps and difficulty scaling cited as the top barriers — has direct implications for Australian businesses at every stage of AI adoption. For businesses early in their AI journey: the data confirms that the path from pilot to scale is the hardest part, and that the investment required to navigate it successfully is predominantly in people and processes, not in additional technology. For Australian AI consultancies and implementation partners: the ASPAC data showing preference for agent-first operating models suggests the local market will move to production AI agent deployment faster than the global average — creating near-term demand for the governance, integration, and change management services that scaling requires.

On SpaceX’s IPO: For Australian investors and ASX-listed technology companies, the SpaceX filing signals the opening of the AI-era public market. Australian technology companies with credible AI adoption stories are entering a market environment where AI capability is being actively priced into valuations. Companies that can demonstrate measurable AI productivity gains, AI-native product features, or AI-driven cost structure improvements will be valued differently from those that cannot. The window to build that story — before the AI IPO cohort resets investor expectations for what “AI-capable” means — is measured in months, not years.


10. FAQ

What is the Claude Code source code leak?

On March 31, 2026, Anthropic accidentally included a source map file in version 2.1.88 of the Claude Code npm package. Source maps allow the reconstruction of original source code from compiled, obfuscated code. The error exposed nearly 2,000 TypeScript files and over 512,000 lines of Claude Code’s internal source code. Anthropic confirmed it was a human packaging error — not a cyberattack or data breach — and removed the affected version from npm. The code has since been posted to GitHub, where it accumulated 84,000 stars and 82,000 forks. Anthropic is rolling out measures to prevent recurrence.
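The mechanism is worth seeing concretely. A source map is just a JSON file, and when it includes the optional `sourcesContent` field, the full original source is embedded verbatim, so recovery requires no reverse engineering at all. An illustrative sketch (the map below is invented for demonstration, not taken from the leaked package):

```python
import json

# An illustrative Source Map v3 document. The optional "sourcesContent"
# field embeds the original source files verbatim; this is why shipping
# a source map is equivalent to shipping the source itself.
source_map = json.dumps({
    "version": 3,
    "file": "cli.js",
    "sources": ["src/agent.ts"],  # hypothetical file path
    "sourcesContent": ["export const run = () => console.log('hello');"],
    "mappings": "AAAA",
})

parsed = json.loads(source_map)
for path, content in zip(parsed["sources"], parsed["sourcesContent"]):
    print(f"--- {path} ---")
    print(content)  # the original TypeScript, recovered directly
```

Scaled up to nearly 2,000 files, this is the whole story of the leak: one packaging flag left on, and the obfuscation of the shipped bundle became irrelevant.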

What is Anthropic’s KAIROS project?

KAIROS is a previously unannounced feature discovered in the leaked Claude Code source code. It is a persistent background agent that operates continuously in a developer’s environment without waiting for user prompts — proactively detecting and fixing errors, running scheduled tasks, and sending push notifications when it identifies important issues. KAIROS represents a shift from reactive AI (AI that responds to your prompts) to proactive AI (AI that initiates and takes action independently). No official release timeline has been announced.

What is Claude Code Dream Mode?

Dream Mode is another unreleased feature discovered in the leaked code. It allows Claude to continuously think and develop ideas in the background — processing and iterating on a codebase or problem indefinitely without requiring user prompts. The analogy is to human sleep: just as the brain processes information during sleep, Dream Mode gives Claude a continuous cognitive process that runs between active user sessions. No official announcement or release date has been provided by Anthropic.

What is Claude Code Undercover Mode?

Undercover Mode is a feature in the leaked code that enables Claude to make contributions to public open-source repositories — commits, pull requests, code reviews — without revealing that the contributions are AI-generated or Anthropic-assisted. The leaked system prompt instructs Claude: “Do not blow your cover.” The feature has generated ethical debate about AI attribution transparency in open-source software development.

Should I be concerned about the Claude Code security issue?

Yes, if you updated Claude Code via npm on March 31, 2026, between 00:21 and 03:29 UTC. During this specific window, the update coincided with a supply chain attack on the Axios HTTP library that may have installed a remote access trojan on affected systems. If you updated during this window: immediately downgrade Claude Code, rotate all credentials and API keys on the affected system, and conduct a security audit of recent network activity. The source code leak itself does not create a direct security risk for users — but do not attempt to compile the leaked source code due to active typosquatting attacks in the npm registry.

Why did Oracle lay off thousands of employees?

Oracle is restructuring its workforce as part of a strategic pivot toward AI infrastructure. The company has significantly increased capital expenditure on AI data centres and is redirecting resources from legacy software support, sales, and maintenance roles toward cloud infrastructure and AI services. Oracle’s fiscal 2026 restructuring budget is as much as $2.1 billion, mostly in severance. The restructuring is designed to cut operating costs while increasing investment in AI-related infrastructure — a pattern seen at Meta (700 layoffs, March 25) and across multiple major technology companies in Q1 2026.

What did the KPMG AI survey find?

KPMG’s Q1 2026 Global AI Pulse Survey of 2,110 C-suite leaders across 20 countries found: US organisations plan to spend an average of $207 million on AI in the next 12 months (nearly double the prior-year figure); 54% of organisations are actively deploying AI agents (up from 12% in early 2024); 65% of organisations struggle to scale AI use cases (up from 33% last quarter); 62% cite skills gaps as a top barrier (up from 25%); and 91% say data security and risk will influence their AI strategy in the next six months. The top workforce priority is upskilling existing employees (87%), ahead of hiring (68%).

When is SpaceX going public?

SpaceX has confidentially filed IPO paperwork with the SEC, as confirmed by Bloomberg on April 1, 2026. A confidential filing is the formal legal step before a public listing but does not guarantee an imminent IPO. The most likely window based on analyst consensus is H2 2026 to H1 2027. SpaceX is targeting a valuation of approximately $1.5 trillion — which would make it the most valuable company in history at the time of listing. The IPO cohort also includes OpenAI (targeting H2 2026, subject to for-profit restructuring) and Anthropic (targeting 2026–2027).


The Bottom Line

In the past 48 hours, Anthropic has accidentally shown the world its roadmap, Oracle has eliminated thousands of roles by 6 AM email, KPMG has confirmed that most enterprises cannot execute on their AI ambitions, and SpaceX has taken the first formal step toward the largest IPO in history.

The common thread: the AI transition is no longer a future event. It is the present operating environment — creating winners among organisations that can execute and casualties among those that cannot.

The KPMG data makes the opportunity precise: 65% of enterprises globally are struggling to scale AI. The 35% that can scale it are building competitive advantages that compound with every quarter. The question for every business leader reading this is which group their organisation will be in twelve months from now.

Kersai works with businesses across Australia to bridge the gap between AI ambition and AI execution — from initial strategy and tool selection through to governance frameworks, workforce capability building, and full-scale AI deployment. If you would like to discuss where your organisation sits on the AI maturity curve and how to accelerate your path to the 35%, visit kersai.com.


This article was researched and written by the Kersai Research Team. Kersai is a global AI consultancy firm dedicated to helping enterprises confidently navigate the rapidly evolving artificial intelligence landscape. To learn more, visit kersai.com.