OpenAI’s Secret “Spud” Model Is Coming in April, JPMorgan Is Spying on Its Bankers, Meta Laid Off 700, and Claude Is Confirmed in the Iran War Kill Chain


The Last Week of March 2026 Produced More Consequential AI Developments Than Most Months. Here Is Everything That Happened and Why It Matters.

Published: March 31, 2026 | By the Kersai Research Team | Reading Time: ~20 minutes
Last Updated: March 31, 2026


Quick Summary: OpenAI has completed pre-training of a model internally codenamed “Spud” — described by CEO Sam Altman as moving “faster than any of us expected” — targeting a mid-to-late April 2026 release, with OpenAI’s internal team framing the work as “AGI Deployment” rather than another incremental ChatGPT upgrade. JPMorgan has begun using AI to monitor the keystrokes, video calls, and calendar data of its junior investment bankers — triggering a global workplace surveillance debate. Meta laid off 700 employees across sales, recruiting, and Reality Labs while simultaneously restructuring teams into AI-native “pods” whose members are rebranded as “AI builders.” Claude — Anthropic’s AI model — has been confirmed as part of the US military’s active targeting workflow in the Iran conflict, deployed through Palantir’s Maven Smart System. Boston Dynamics’ humanoid Atlas robot is sold out through the end of 2026 at a unit price of $130,000–$320,000, with ~$200,000 the most commonly cited figure. And China has now produced more top-tier AI researchers than the United States for the first time in history — a structural talent shift with decade-long implications for the global AI race.


Table of Contents

  1. OpenAI “Spud” — The Model They Killed Sora For
  2. JPMorgan’s AI Surveillance — The Keystroke Tracking That Sparked a Global Debate
  3. Meta’s 700 Layoffs and the “AI Builder Pod” Restructure
  4. Claude in the Iran War Kill Chain — The Most Uncomfortable AI Story of 2026
  5. Boston Dynamics Atlas: Sold Out at $200K — The Physical AI Era Has Arrived
  6. China Overtakes the US in AI Talent — The Most Underreported Story of the Week
  7. The Through-Line: What These Six Stories Are Really Telling Us
  8. What This Means for Australian Businesses

1. OpenAI “Spud” — The Model They Killed Sora For

1.1 What we know

On March 24, journalists Stephanie Palazzolo and Amir Efrati at The Information broke the story: OpenAI has completed the pre-training phase of a new model internally codenamed “Spud.”

The details, confirmed by multiple follow-up sources including Geeky Gadgets, MindStudio, and The Neuron Daily:

  • Pre-training completed in mid-March 2026
  • CEO Sam Altman wrote to staff that “things are moving faster than any of us expected”
  • Target release window: mid-to-late April 2026
  • OpenAI’s internal team has begun referring to their work as “AGI Deployment” — not model development
  • The Sora video app was shut down in part to free up GPU computing capacity for Spud’s training and deployment
  • Three features confirmed by internal sources: rebuilt, genuinely human-like audio; unified native multimodality; and a consolidated AI “super app” interface

1.2 Is Spud GPT-6?

The naming question is genuinely uncertain. The most credible analysis as of this week:

  • ~2% probability that OpenAI officially calls this GPT-6 (per analyst consensus reported by The Information)
  • Most likely outcomes: GPT-5.5, a new naming convention entirely, or a product-level name rather than a model version number
  • LifeArchitect.ai’s model tracking database lists Spud’s internal name as GPT-6 — but OpenAI’s public naming decisions rarely mirror internal codenames
  • The “AGI Deployment” framing is significant: it suggests OpenAI believes Spud represents a qualitative capability threshold rather than an incremental improvement

The specific capabilities behind the “AGI Deployment” framing have not been publicly confirmed. But sources familiar with the model describe three features: a fundamentally rebuilt audio interface said to sound genuinely human rather than synthetic; native multimodality that unifies text, image, audio, and computer use in a single coherent interface rather than separate tool modes; and an interface redesign that consolidates OpenAI’s scattered products — ChatGPT, Operator, DALL-E, Voice Mode, Deep Research — into one super app.

1.3 The strategic context — OpenAI’s response to the enterprise crisis

Spud’s timing is not coincidental. It arrives as Anthropic’s Claude has captured 73% of new enterprise AI spending — a fact that represents an existential threat to OpenAI’s valuation narrative. The $110 billion fundraise at an $840 billion valuation that OpenAI completed in March 2026 depends entirely on a story of AI market dominance. A world in which Claude leads enterprise and Spud hasn’t shipped yet is not the story OpenAI’s investors paid for.

Spud is OpenAI’s answer to the enterprise crisis. If it delivers on the “AGI Deployment” framing — genuine qualitative capability improvements, not another incremental benchmark improvement — it has the potential to reset the competitive landscape before the enterprise spend gap widens further.

The specific trigger for watching: whether Spud’s coding performance closes the gap with Claude Opus 4.6 on SWE-bench Verified. That single benchmark, reflecting real-world software engineering task performance, is the most cited reason enterprise engineering teams chose Claude over GPT-5.4. If Spud closes that gap, the enterprise market becomes genuinely contested again.

1.4 What to watch

Mid-to-late April is a narrow window. The most likely announcement format: a standalone event rather than a blog post — OpenAI’s response to Anthropic’s enterprise dominance story will require a moment, not a memo. Watch for a Spud launch event in the third or fourth week of April 2026.


2. JPMorgan’s AI Surveillance — The Keystroke Tracking That Sparked a Global Debate

2.1 What JPMorgan is actually doing

On March 23, Fortune and the Financial Times reported that JPMorgan Chase — the world’s largest bank by total assets at $3.9 trillion — has launched a pilot programme monitoring the digital activity of its junior investment bankers.

The specific monitoring: every week, participating employees receive a report comparing:

  • Their self-reported hours (from time sheets)
  • Against their actual digital footprint, derived from: desktop keystrokes, video conference attendance, calendar bookings, and scheduled meetings
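Mechanically, the weekly report is a simple reconciliation exercise. The sketch below is purely illustrative — the field names, the 80-hour overwork threshold, and the rule for combining footprint signals are our assumptions, not JPMorgan’s actual methodology:

```python
from dataclasses import dataclass

@dataclass
class WeekActivity:
    """One banker's activity signals for a single week (all figures in hours)."""
    self_reported: float      # from time sheets
    keystroke_active: float   # hours with desktop keystroke activity
    video_calls: float        # video conference attendance
    calendar_booked: float    # calendar bookings and scheduled meetings

def weekly_report(activity: WeekActivity, overwork_threshold: float = 80.0) -> dict:
    """Compare self-reported hours with the digital footprint; flag possible overwork."""
    # Footprint categories overlap in practice (a booked meeting is also a video
    # call), so take the largest single estimate as a conservative floor rather
    # than summing everything. A real system would deduplicate more carefully.
    footprint = max(activity.keystroke_active,
                    activity.video_calls + activity.calendar_booked)
    return {
        "self_reported": activity.self_reported,
        "digital_footprint": footprint,
        "discrepancy": round(footprint - activity.self_reported, 1),
        "overwork_flag": footprint > overwork_threshold,
    }

report = weekly_report(WeekActivity(self_reported=75.0, keystroke_active=92.0,
                                    video_calls=18.0, calendar_booked=40.0))
print(report)  # flags 17 unreported hours and a 92-hour week
```

The interesting design question is the combination rule: keystrokes, calls, and calendar entries overlap, so any real system must deduplicate signals before comparing them to time sheets — which is exactly where methodological disputes about “actual hours worked” begin.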

The stated purpose: identifying overwork. JPMorgan’s junior banking culture has a documented problem with extreme hours — 100+ hour weeks, deteriorating mental health, and multiple high-profile incidents of junior banker hospitalisation and death attributed to overwork. The monitoring programme is framed as a welfare intervention, not a performance surveillance tool.

JPMorgan’s official statement: “Much like the weekly screen time summaries on a smartphone, this tool is about awareness — not enforcement. It’s designed to support transparency, wellbeing, and encourage open conversations about workload.”

2.2 The parallel programme — tracking AI tool usage

A separate but related programme, reported by AI News on March 30, goes further: JPMorgan managers are now actively monitoring which employees use AI tools in their daily tasks and how frequently. This programme is explicitly productivity-motivated — identifying the employees adopting AI effectively versus those not using it at all — and is tied to workforce planning rather than welfare.

This second programme is the more consequential of the two for the business world. It represents the first confirmed instance of a major financial institution using monitoring data to identify and act on AI adoption patterns at an individual employee level. The implications:

  • Employees who are not adopting AI tools may be identified as lower productivity contributors relative to AI-augmented colleagues
  • AI adoption may become an implicit performance metric — monitored even if not formally evaluated
  • The data generated will inform JPMorgan’s headcount decisions: which roles can be reduced as AI augmentation increases effective output per employee

2.3 The employee trust collapse — what the data shows

The broader context for the JPMorgan story is deeply important. A March 2026 analysis by NWChem examining enterprise AI monitoring trends found:

  • AI-powered employee monitoring has reached “near-universal” adoption among large enterprises globally
  • The evidence that it improves productivity is described as weak
  • The evidence that it damages trust, mental health, and innovation capacity is described as strong
  • BCG’s 2026 research introduced the concept of “AI brain fry” — cognitive and motivational burnout triggered by AI-powered surveillance environments
  • ActivTrak’s internal data shows what it calls the “productivity paradox”: organisations that monitor most intensively show lower team-level output and innovation rates than those that monitor lightly

The NWChem analysis conclusion is blunt: “Surveillance theatre does not build high-performance cultures.” The organisations that achieve the best AI-era performance are those using monitoring data to improve systems and workflows — not to track individuals.

2.4 The line between wellbeing and surveillance

JPMorgan’s framing — “this is about wellbeing, not enforcement” — may be entirely sincere. The programme’s stated purpose (identifying overworked junior bankers) is legitimate and the bank has a documented history of overwork problems that have caused genuine human harm.

But the architecture of the monitoring system is indistinguishable from a surveillance system. The same data that identifies an overworked banker also identifies an underperforming one, a distracted one, a disengaged one. The promise that data will not be used for evaluation purposes depends entirely on institutional self-restraint — self-restraint that is hard to verify and historically unreliable in financial services environments driven by performance culture.

For business leaders considering employee AI monitoring, JPMorgan’s programme is a case study in the gap between surveillance intent and surveillance effect. The wellbeing argument may be real. The trust cost is also real. The organisations navigating this tension most effectively are those that monitor processes and systems rather than individuals — measuring output quality and workflow efficiency rather than individual keystrokes and calendar density.


3. Meta’s 700 Layoffs and the “AI Builder Pod” Restructure

3.1 The layoffs

On March 25–26, Storyboard18 and multiple financial outlets confirmed that Meta Platforms has laid off approximately 700 employees across sales, recruiting, and Reality Labs divisions. The cuts were announced alongside a new executive stock option programme that could increase compensation for senior leaders by up to $921 million over five years — creating an immediate and visible tension between layoffs at the bottom of the organisation and rewards at the top.

Meta’s spokesperson framing: the restructuring is designed to “ensure teams are best positioned” as the company shifts focus toward artificial intelligence.

3.2 The AI builder pod restructure

The more structurally significant news, obtained by Business Insider from a leaked internal memo, is what Meta is building in place of the eliminated roles.

Within Meta’s Reality Labs division — the 20,000-person unit responsible for AR/VR headsets, AI wearables, and AI research — 1,000 employees are being reorganised into AI-native “pods” with their job titles rebranded to “AI builders.”

The pod model, as described in the memo:

  • Small, autonomous teams of 3–8 people structured around AI-first workflows
  • Each pod uses AI agents for the majority of execution work — research, code generation, content creation, data analysis
  • Human team members focus on strategic direction, quality judgment, and creative decisions
  • Traditional management layers are reduced — pods report directly to senior leadership with minimal middle-management structure

3.3 The Zuckerberg AI CEO experiment

Adding context to the restructure: Mark Zuckerberg has separately been testing an AI assistant for leadership functions — using AI to synthesise internal performance data, draft strategic briefings, analyse competitive intelligence, and prepare board communications. Reported by People Matters on March 22, this programme is described as Zuckerberg’s personal exploration of how much executive cognitive load AI can absorb.

The significance: if the CEO of the world’s fourth largest technology company is testing AI for leadership functions and publicly discussing it, the normalisation of AI in executive roles is progressing faster than most governance frameworks are designed to handle.

3.4 Meta’s AI investment paradox

Meta’s 2026 AI strategy presents a striking paradox: the company is simultaneously cutting human headcount, massively increasing AI infrastructure investment, paying record compensation to senior executives, and restructuring surviving roles around AI execution models. The net result is an organisation that employs fewer people, pays them more selectively, and extracts more output per employee through AI augmentation — a capital allocation shift from labour to technology that has profound implications for Meta’s workforce model and the broader knowledge economy.

The Reality Labs AI wearables division — which cut more than 1,000 roles earlier in 2026 and has now restructured survivors into AI builder pods — is the clearest signal of what the AI-native organisation looks like in practice: smaller, faster, more technically intensive, and structurally dependent on AI execution capacity rather than human execution headcount.


4. Claude in the Iran War Kill Chain — The Most Uncomfortable AI Story of 2026

4.1 What happened on February 28

On February 28, 2026 — the same day OpenAI confirmed its Pentagon deal and ChatGPT uninstalls surged 295% — US and Israeli forces launched strikes on Iran. The strikes, which resulted in the death of Supreme Leader Ayatollah Ali Khamenei among others, represented the most significant kinetic military action in the Middle East in years.

The human toll in the three weeks following the strikes: more than 2,000 dead, 10,000 injured, 4 million displaced, oil prices above $100 a barrel, and 56 cultural heritage sites damaged or destroyed.

4.2 Claude’s role confirmed

The Wall Street Journal first reported, and RTÉ’s analysis confirmed, that Anthropic’s Claude AI model was deployed by US combatant commands as part of the Iran operation — specifically through Palantir’s Maven Smart System, which integrates Claude for intelligence synthesis, target identification, and battle scenario simulation.

The nature of Claude’s role, as described by multiple sources:

  • Intelligence synthesis: Processing and summarising large volumes of satellite imagery data, electronic surveillance intercepts, and signals intelligence into actionable summaries for human commanders
  • Target identification assistance: Analysing intelligence inputs to generate targeting recommendations — humans retain final authority, but Claude’s recommendations shape the decision
  • Battle scenario simulation: Modelling potential outcomes of different tactical decisions to inform operational planning

Craig Jones, a lecturer at Newcastle University and author of The War Lawyers, described it precisely: “The AI machine is making recommendations for what to target, which is actually much quicker in some ways than the speed of thought.”

The conclusion from multiple analysts: “The distinction between ‘recommendation’ and ‘decision’ becomes increasingly theoretical at combat speed.”

4.3 Anthropic’s position — and the painful irony

The irony of Claude’s military deployment is acute. Anthropic was founded by former OpenAI researchers who left specifically over ethical concerns about AI’s direction. Its Constitutional AI framework, its marketing as the safety-first AI, and its public positioning as the responsible alternative to OpenAI are all predicated on being the company that takes AI ethics seriously.

Separately, Nature reported that just one day before the US-Israeli offensive began, the US government had sidelined one of its main AI suppliers over ethical concerns about AI’s use — reportedly OpenAI, in the context of the Pentagon controversy that triggered the ChatGPT uninstall surge.

The resulting situation: OpenAI was sidelined from the Iran operation partly over ethics concerns, and Claude was deployed in its place. A separate report circulating on YouTube claims Anthropic has filed legal challenges against aspects of the Pentagon’s Claude deployment — suggesting Anthropic itself is not entirely comfortable with how its model is being used.

4.4 The governance gap this exposes

The Iran conflict is exposing a governance gap that no major AI lab, government, or international body has adequately addressed: the absence of clear, enforceable international standards for AI use in military operations.

The question “is AI making kill decisions?” has a technically accurate but practically misleading answer: not officially, but functionally yes in fast-tempo drone intercept scenarios where human “final authority” operates at reaction times measured in milliseconds, not seconds.

For Australian businesses, this story is not just geopolitical background — it has direct implications for AI governance frameworks and the emerging regulatory environment. The NSW Digital Work Systems Act and equivalent frameworks in development elsewhere are directly responding to the demonstrated reality that AI systems can be used in high-stakes decision-making contexts far beyond what their developers intended or most users would sanction. Understanding that Claude — the tool recommended by most enterprise AI consultancies for business use — is simultaneously deployed in active military targeting operations is important context for any organisation’s AI risk assessment.


5. Boston Dynamics Atlas: Sold Out at $200K — The Physical AI Era Has Arrived

5.1 The sold-out humanoid

Boston Dynamics’ humanoid Atlas robot — the electric, AI-powered successor to the famous hydraulic Atlas that went viral for parkour and backflips — has entered mass production and its entire 2026 production run is already sold out.

The pricing context from analysts and market data:

  • Atlas estimated unit price: $130,000–$320,000 per unit (most cited figure: ~$200,000)
  • For comparison, competitors including Tesla Optimus, Figure AI, and Unitree are targeting price points of $20,000–$30,000
  • Boston Dynamics is selling into a market segment that values premium performance over price — the industrial and military customers in its backlog are willing to pay the premium

At $200,000 per unit with a sold-out 2026 run, the physical AI market is no longer speculative. It is generating committed purchase orders at scale.

5.2 The ROI case at $200K

The sold-out status despite premium pricing reflects a compelling ROI calculation that enterprise buyers have already run:

A $200,000 Atlas deployed in warehouse logistics at 16 hours/day, 350 days/year, across a 5-year operational life works out to roughly 28,000 lifetime hours — an effective hourly labour cost of approximately $7.14/hour. US warehouse labour costs $35–$45/hour. The payback period at that differential: around 15 months even at the low end of the wage range, comfortably under 2 years.
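That calculation can be run directly; every figure here restates the article’s own assumptions (purchase price, utilisation, service life, wage range):

```python
# Atlas ROI back-of-envelope, using the figures cited in this section.
unit_cost = 200_000          # Atlas purchase price (USD)
hours_per_day = 16
days_per_year = 350
service_years = 5

lifetime_hours = hours_per_day * days_per_year * service_years   # 28,000 hours
effective_hourly_cost = unit_cost / lifetime_hours               # ~$7.14/hour

human_wage_low, human_wage_high = 35, 45                         # US warehouse labour, $/hr
annual_hours = hours_per_day * days_per_year                     # 5,600 hours/year

# Worst case: compare against the cheapest human labour.
annual_savings_low = (human_wage_low - effective_hourly_cost) * annual_hours
payback_years = unit_cost / annual_savings_low

print(f"effective cost: ${effective_hourly_cost:.2f}/hour")
print(f"payback at ${human_wage_low}/hr labour: {payback_years:.2f} years")
```

Even under more pessimistic assumptions the case holds: halving utilisation to 8 hours/day roughly doubles the effective hourly cost, yet payback still lands within the 5-year service life at the low end of the wage range.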

The ROI equation is so favourable that at $200,000 — 10× the price of the cheapest competitors — Atlas remains a financially compelling investment for any operation with sufficient volume and consistent task requirements. The premium buys superior dexterity, reliability, and adaptability — qualities that matter in unstructured environments where cheaper robots fail.

5.3 The competitive landscape

The physical AI market is bifurcating:

Premium tier ($130K–$320K): Boston Dynamics Atlas, Apptronik Apollo, Agility Robotics Digit — targeting industrial, logistics, and government/military customers who prioritise performance over price. Production is capacity-constrained and mostly pre-sold.

Volume tier ($5K–$30K): Unitree H1/R1, Tesla Optimus, Figure AI robots, Chinese manufacturers including AgiBot, Kepler, and BYD — targeting mass-market deployment where price is the primary constraint. Chinese manufacturers have driven costs down 40% year-over-year and are the dominant force in this segment.

The competitive divergence matters for Australian businesses: premium humanoid robots are already commercially available and financially justifiable for large-scale industrial operations. Volume-tier robots — at $20,000–$30,000 and approaching consumer-grade pricing — will begin entering mid-market businesses within 24–36 months. The physical AI era is not coming. It has arrived at the premium tier and is rapidly approaching the mass market.

5.4 The skilled trades counter-narrative

One data point that complicates the simple “robots replace workers” narrative: a Seoul Economic Daily analysis published March 18 found that skilled trade worker wages are surging in the AI era. The report’s thesis — illustrated with the headline “The $200K robot that made unions tremble still can’t replace them” — is that physical AI systems’ most significant limitation is performance in unstructured, variable environments requiring human adaptability and judgment.

Plumbers, electricians, carpenters, and other skilled tradespeople whose work is inherently variable and site-specific are experiencing wage growth as demand for their skills increases against a supply that AI cannot yet replicate. The counter-intuitive result: the AI era may create a bifurcated labour market where highly structured physical work (warehouse picking, assembly line tasks) is rapidly automated, while unstructured skilled trades actually appreciate in value.


6. China Overtakes the US in AI Talent — The Most Underreported Story of the Week

6.1 The talent milestone

Stanford University’s DigiChina Project and multiple AI talent tracking databases confirmed in late March 2026 that China has for the first time produced more top-tier AI researchers than the United States in a trailing 12-month period — measured by publications at leading AI conferences (NeurIPS, ICML, ICLR) and by citation impact in frontier AI research.

This is a structural milestone, not a marginal data point. For most of the past decade, the United States — drawing on both domestic talent and international researchers attracted to US universities and companies — has led global AI talent production by a significant margin. China closing and crossing that gap represents a fundamental shift in the human capital infrastructure of global AI development.

6.2 The drivers

Three factors have driven China’s talent surge:

State investment at scale: Beijing’s AI investment strategy — funding AI research through universities, national laboratories, and a network of sponsored startups — has created a talent development pipeline with no equivalent in the West. Chinese AI researchers who previously emigrated to the US for graduate study and remained in American companies are increasingly returning to China, attracted by research funding, competitive compensation, and the scale of deployment opportunities in the Chinese market.

Open-source model ecosystem: China’s AI companies — Alibaba’s Qwen, Moonshot AI, MiniMax, Zhipu AI (Z.ai), and others — have embraced open-source model development aggressively. Meta’s Avocado model was reportedly trained using Alibaba’s Qwen as a base. The open-source strategy has accelerated Chinese AI capability development by making high-quality foundation models freely available for research iteration.

Competitive DeepSeek effect: DeepSeek’s January 2026 release — demonstrating near-frontier AI capability developed at a fraction of US model training costs — validated the Chinese research community’s technical approach and triggered a wave of investment in Chinese AI talent development. The “DeepSeek moment” did for Chinese AI what Sputnik did for US aerospace: demonstrated competitive parity and galvanised institutional response.

6.3 What this means for the global AI race

The US response to China’s talent milestone has been contradictory. US export controls on advanced AI chips — Nvidia H100s, H200s, and GB200s — are designed to slow Chinese AI development by limiting computing infrastructure. But the talent data suggests the strategy has a fundamental flaw: AI capability development is ultimately constrained by human ingenuity, not just compute. A country that produces more top-tier AI researchers than its competitor can develop algorithmic innovations that partially compensate for compute disadvantages — as DeepSeek demonstrated.

Chinese domestic semiconductor development has also accelerated in direct response to export controls. Huawei’s Ascend AI chips, while not yet at Nvidia’s performance level, are improving at a rate that suggests China will achieve meaningful AI compute self-sufficiency within 2–3 years. When that occurs, the export control strategy’s effectiveness will substantially diminish.

For Australian policymakers and business leaders, the China AI talent milestone has direct strategic implications. Australia’s AI strategy — currently heavily reliant on US technology partnerships and aligned with US export control frameworks — needs to account for a world in which AI capability is genuinely bifurcated between US and Chinese ecosystems, each with full-stack capability. The current assumption that US AI leadership is durable enough to build long-term strategic alignment around is becoming less secure with every quarterly talent publication count.


7. The Through-Line: What These Six Stories Are Really Telling Us

Six stories from one week, six different domains — a secret model, a surveillance programme, a restructure, a war, a robot, a talent race. The surface diversity obscures a single coherent narrative.

The AI race has entered its consequential phase. This is no longer the experimental phase, the hype phase, or the investment phase. This is the phase where AI is reshaping organisations in real time (Meta), informing military decisions with lethal consequences (Iran), monitoring workers at unprecedented granularity (JPMorgan), and generating committed purchase orders for physical embodiment at $200,000 per unit (Boston Dynamics). The stakes of getting AI governance right — and the costs of getting it wrong — are now visible and measurable rather than speculative.

OpenAI is in a race for its strategic narrative. Spud is not just a model release. It is OpenAI’s answer to an existential competitive threat from Anthropic’s enterprise dominance and a trust deficit created by the Pentagon controversy. If Spud delivers genuinely differentiated capability in April, OpenAI’s narrative resets. If it delivers incremental improvement, the enterprise share gap widens further and the $840 billion valuation becomes harder to sustain.

The workplace is the new AI governance battleground. JPMorgan’s keystroke monitoring and Meta’s AI builder pods are different expressions of the same organisational question: how do you manage human performance in an environment where AI augmentation is becoming the baseline expectation? The organisations answering this question well are those that redesign roles around AI’s capabilities rather than monitoring humans against pre-AI performance standards. The organisations answering it poorly are those generating surveillance infrastructure faster than trust infrastructure.

Geography matters again. China’s AI talent milestone and the Iran conflict’s AI deployment both signal that the AI era is not a globalised, cooperative story. It is a competitive, geopolitically structured story in which national capability, talent, and strategic alignment determine who has access to what — and who sets the rules. Australian businesses building AI strategies need to factor in a bifurcated AI ecosystem as a planning assumption, not a contingency.


8. What This Means for Australian Businesses

On OpenAI Spud: Watch the April launch closely before making any AI tool purchasing decisions. If Spud closes the gap with Claude Opus 4.6 on enterprise coding and document tasks, the competitive landscape between Claude and ChatGPT changes meaningfully. Our current recommendation — Claude as primary tool, ChatGPT as creative/Microsoft supplement — may need updating post-Spud. For businesses currently mid-procurement on enterprise AI contracts, build a 45-day review clause into any agreement signed before the Spud release.

On JPMorgan’s monitoring: Australian businesses contemplating employee AI monitoring should be aware of Privacy Act obligations that do not apply to US employers. Employee monitoring of the type JPMorgan has deployed — tracking keystrokes, video calls, and calendar data — requires disclosure under the Privacy Act 1988, is subject to Fair Work Act considerations regarding surveillance, and in NSW triggers specific obligations under the Workplace Surveillance Act 2005. The NSW Digital Work Systems Act adds additional requirements for AI systems that influence employment decisions. Before deploying any employee AI monitoring, take employment law advice specific to your jurisdiction.

On Meta’s AI builder pods: The pod model Meta is implementing in Reality Labs is the clearest articulation yet of the AI-native team structure that forward-looking organisations are moving toward — small, autonomous, AI-augmented teams with flat reporting structures and broad decision-making authority. Australian professional services firms, agencies, and technology companies should understand this model as the competitive benchmark for talent-intensive work. Teams that operate this way will produce more output per head than conventional team structures, at lower cost, with faster iteration cycles.

On Claude in the Iran war: For Australian businesses using Claude for compliance-sensitive work — particularly in financial services, healthcare, and legal sectors — the confirmation of Claude’s military deployment adds a dimension to your AI risk register that may require board-level acknowledgment. This does not mean Claude is a riskier tool for business use — its commercial data handling is entirely separate from its military deployment. But your risk register should accurately reflect that Claude is a dual-use technology, and your AI governance documentation should note this.

On Boston Dynamics at $200K: The physical AI era arriving at premium pricing is a signal that mid-market physical AI — at $20,000–$30,000 per unit — is 24–36 months away from commercial availability. Australian businesses in warehousing, logistics, manufacturing, and construction should begin scenario planning for physical AI deployment now: not to deploy immediately, but to understand which workflows would be affected, what governance frameworks would be required, and what workforce transition planning is needed before the price point becomes compelling.

On China’s AI talent: Australian businesses with supply chains, technology partnerships, or competitive exposures in industries where Chinese AI capability is relevant — manufacturing, energy, mining technology, defence supply chains — should update their AI competitive intelligence frameworks to account for Chinese AI capability as a genuine peer competitor rather than a fast follower. The talent data suggests the gap in foundational AI research capability between the US and China is closing and in some metrics has closed. Strategic planning frameworks built on the assumption of durable US AI dominance need revision.


This article was researched and written by the Kersai Research Team. Kersai is a global AI consultancy firm dedicated to helping enterprises confidently navigate the rapidly evolving artificial intelligence landscape — from cutting-edge strategic insights to practical, large-scale AI implementation. To learn more, visit kersai.com.