Nvidia’s $1 Trillion AI Order Book, Meta’s Quietly Failing Model, Australia’s First AI Workplace Law, and the Open-Source Agent War Going Global
Everything That Happened in AI This Week — March 17–19, 2026
Published: March 19, 2026 | By the Kersai Research Team | Reading Time: ~24 minutes
Last Updated: March 19, 2026
Quick Summary: Jensen Huang took the stage at Nvidia GTC 2026 and revealed a $1 trillion confirmed order backlog for the Vera Rubin NVL72 — the full-stack AI supercomputer he called “the engine of the agentic AI era.” Meta’s secret frontier model, internally codenamed “Avocado,” has quietly failed performance benchmarks against every major competitor — and the company is now considering licensing Google’s Gemini to keep its consumer AI products running while engineers try to fix it. The NSW Parliament passed Australia’s first AI workplace safety law, the Digital Work Systems Act 2026 — making AI risk a legal employer obligation and giving union inspectors the right to audit algorithmic work systems. And China’s open-source agent ecosystem is exploding: Baidu joined ByteDance and Alibaba on the OpenClaw agent framework — the same framework Nvidia just partnered with for NemoClaw — in a development that suggests US and Chinese AI agent infrastructure may be converging on compatible open standards even as chip export tensions escalate.
Table of Contents
- The Week That Proved Hardware Still Wins
- Story 1: Nvidia GTC 2026 — The $1 Trillion Agentic AI Stack
- Story 2: Meta’s Avocado Fails — And May License Google’s AI
- Story 3: NSW Passes Australia’s First AI Workplace Safety Law
- Story 4: Baidu Joins the OpenClaw Frenzy — The Agent War Goes Global
- The Four Stories Are One Story
- What This Means for Businesses
- The NSW AI Workplace Law: Employer Compliance Checklist
- This Week’s AI Landscape Comparison Table
- FAQ
1. The Week That Proved Hardware Still Wins
There is a tendency in AI commentary — including, at times, in this publication — to focus almost entirely on the software layer: the models, the benchmarks, the agents, the APIs. The hardware is assumed. The infrastructure is background. The chips are just chips.
This week reminded the entire industry that the chips are never just chips.
At Nvidia’s GTC 2026 conference in San Jose — the AI industry’s annual equivalent of the Super Bowl — Jensen Huang revealed a confirmed order backlog of $1 trillion for Nvidia’s next-generation AI hardware systems. One trillion dollars. From real customers. Already placed. The number is so large it is difficult to process: it represents more capital committed to AI infrastructure in advance purchase orders than the entire GDP of most countries.
That number — and the hardware ecosystem Huang revealed around it — sets the physical constraints of the AI era. Every model, every agent, every breakthrough, every deployment we cover in these pages runs on hardware. Nvidia makes that hardware. And the scale of commitment behind Nvidia’s order book tells you something unambiguous about the trajectory of AI investment: the people spending the most money on AI infrastructure are not slowing down. They are accelerating.
Against that backdrop, four stories from the week of March 17–19 collectively reveal where that investment is going, where it is not going as planned, what legal obligations it is creating, and how the open-source ecosystem is reshaping the competitive landscape in ways that chip export controls cannot easily contain.
2. Nvidia GTC 2026: Jensen Huang Reveals the Hardware Stack for the Agentic AI Era
2.1 The event
Nvidia’s GPU Technology Conference (GTC) 2026 ran March 16–19 in San Jose, California — the company’s flagship annual event, which in recent years has become the single most important gathering in the AI industry, consistently attracting more consequential announcements than any AI-specific research conference.
The keynote was delivered by Jensen Huang, Nvidia’s founder and CEO, on March 18. It ran for approximately three hours and covered five major announcements. Each one would have dominated an ordinary technology news cycle on its own. Together, they outlined Nvidia’s vision for the hardware infrastructure of the agentic AI era — and demonstrated that Nvidia intends to own every layer of that infrastructure, from the chip to the operating system.
2.2 Announcement 1: Vera Rubin NVL72 — The Full-Stack AI Supercomputer
The headline product of GTC 2026 is the Vera Rubin NVL72 — Nvidia’s next-generation AI computing system, succeeding the Blackwell B200 that dominated enterprise AI infrastructure in 2025.
Huang described it in terms that departed from the usual GPU product framing:
“This is not a GPU. This is one giant system — a 7-chip, 5 rack-scale AI supercomputer that is the engine supercharging the era of agentic AI.”
The technical specifications, as reported by Tom’s Hardware and CNBC:
| Specification | Vera Rubin NVL72 | Blackwell B200 (predecessor) | Improvement |
|---|---|---|---|
| Architecture | Vera Rubin (5nm TSMC) | Blackwell (4nm TSMC) | New generation |
| System scale | 72 GPUs / 5 rack units | 8 GPUs / 1 rack unit | 9× scale |
| AI training throughput | ~1,800 petaFLOPS (FP8) | ~700 petaFLOPS (FP8) | ~2.5× |
| AI inference throughput | ~3,600 petaFLOPS (FP4) | ~1,400 petaFLOPS (FP4) | ~2.5× |
| Memory bandwidth | 57.6 TB/s (NVLink 6) | 14.4 TB/s (NVLink 4) | 4× |
| HBM4 memory | 13,824 GB per system | 1,440 GB per system | ~9.6× |
| Power envelope | ~120 kW per system | ~14.3 kW per system | Managed via liquid cooling |
| Networking | InfiniBand 1.6Tb/s | InfiniBand 400Gb/s | 4× |
Source: Tom’s Hardware GTC 2026 live blog; Nvidia official specifications
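For readers who want to check the improvement column against the raw figures, the multiples follow from simple division — note that the throughput ratios actually come out just under 2.6×, which the table rounds to ~2.5×:

```python
# Sanity-check the improvement multiples in the spec table above,
# using the raw figures exactly as printed.

specs = {
    # metric: (Vera Rubin NVL72, Blackwell B200)
    "training petaFLOPS (FP8)": (1800, 700),
    "inference petaFLOPS (FP4)": (3600, 1400),
    "memory bandwidth (TB/s)": (57.6, 14.4),
    "HBM memory (GB)": (13824, 1440),
}

for metric, (rubin, blackwell) in specs.items():
    ratio = rubin / blackwell
    print(f"{metric}: {ratio:.1f}x")
```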
The Vera Rubin NVL72 is designed for agentic AI workloads — the inference-heavy, long-context, multi-step reasoning tasks that AI agents require, which have different computational profiles from the batch training workloads that previous GPU generations were optimised for. Where Blackwell was built to train models faster, Vera Rubin is built to run them at scale, continuously, in production.
Huang’s framing was explicit: the era of building AI models is giving way to the era of running AI agents — and Vera Rubin is the hardware for that transition.
2.3 Announcement 2: $1 Trillion in Confirmed Orders
The number that stopped every financial analyst in the room: Huang disclosed that Nvidia currently holds $1 trillion in confirmed purchase orders for Blackwell and Vera Rubin systems combined.
For context on what that number represents:
- Nvidia’s total revenue in fiscal year 2025 was approximately $130 billion — itself a record.
- The $1 trillion order backlog represents approximately 7.7 years of current revenue already committed by customers.
- The customers placing these orders include every major hyperscaler (Microsoft Azure, Google Cloud, Amazon AWS, Meta, Oracle), national AI infrastructure projects from Saudi Arabia, UAE, India, and Japan, and dozens of enterprise-scale AI deployments.
- The orders are non-cancellable with significant deposit requirements — meaning this is not a wish list. It is contracted revenue.
The $1 trillion figure is the clearest single data point available on the scale of committed global AI infrastructure investment. It is larger than the entire semiconductor industry’s annual revenue. It exceeds the individual market capitalisation of all but a handful of Fortune 500 companies. And it is all flowing to a single company.
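The “7.7 years of current revenue” figure quoted above is simple division of the backlog by Nvidia’s approximate FY2025 revenue:

```python
# The backlog-to-revenue multiple quoted above, computed directly.
backlog_bn = 1_000        # $1 trillion confirmed order backlog, in $bn
fy2025_revenue_bn = 130   # Nvidia FY2025 revenue, in $bn (approximate)

years_of_revenue = backlog_bn / fy2025_revenue_bn
print(f"{years_of_revenue:.1f} years")  # → 7.7 years
```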
The question it raises — which Huang addressed directly — is whether Nvidia can manufacture at the required pace. The answer: TSMC has committed to capacity expansion specifically to serve this demand, and Nvidia has restructured its supply chain to reduce the bottlenecks that limited Blackwell delivery in 2025.
2.4 Announcement 3: NemoClaw — The Open-Source Agentic AI Operating System
The third and most strategically surprising announcement: Nvidia unveiled NemoClaw — an open-source operating system for agentic AI, built in partnership with OpenClaw, the open-source agent framework.
Huang described NemoClaw as “the open-sourced operating system of agentic computers” — a software layer that manages AI agent deployment, orchestration, tool access, memory, and communication across Nvidia-powered infrastructure.
The implications of this announcement are threefold:
First, Nvidia is moving up the stack. GPUs were always Nvidia’s hardware business. NVLink and InfiniBand are its networking business. CUDA is its software development ecosystem. NemoClaw represents Nvidia entering the agent operating system layer — the software that sits between the hardware and the AI applications running on it. If NemoClaw achieves CUDA-level adoption, Nvidia’s lock on AI infrastructure extends from the chip to the OS.
Second, the OpenClaw partnership creates a direct connection between Nvidia’s hardware ecosystem and the open-source agent framework that Baidu, ByteDance, and Alibaba are all deploying in China (see Story 4). This is significant: it means Nvidia’s agent OS is architecturally compatible with the dominant Chinese open-source agent ecosystem — a fact that complicates the US government’s chip-based geopolitical strategy.
Third, open-sourcing NemoClaw is a direct challenge to Microsoft (Azure AI Foundry), Google (Vertex AI Agent Builder), and Amazon (Bedrock Agents) — all of whom offer proprietary agent orchestration platforms that generate significant cloud revenue. A free, open-source alternative from the company that makes the hardware all three of them depend on is a credible threat to those businesses.
2.5 Announcement 4: DLSS 5 — Neural Rendering Comes to Consumer Gaming
The most consumer-facing announcement of GTC 2026: DLSS 5 (Deep Learning Super Sampling version 5) — Nvidia’s AI-powered graphics upscaling technology, which has now reached the capability threshold for real-time photoreal 4K rendering on consumer-class GeForce hardware.
DLSS 5 uses a multi-frame neural network that generates intermediate frames from AI prediction rather than GPU rasterisation — effectively using AI inference to replace a significant portion of traditional graphics rendering. The result: games running at 4K and 120fps whose image quality would previously have required native 8K rendering to match.
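DLSS 5’s networks are proprietary, but the principle — synthesising an intermediate frame instead of rasterising it — can be illustrated with a toy stand-in, where a simple blend of two rendered frames plays the role of the trained predictor (a real system uses motion vectors and a multi-frame network, not a linear average):

```python
# Toy illustration of frame generation: synthesise an intermediate frame
# between two rendered frames. A linear blend stands in for the neural
# predictor purely to show the shape of the pipeline.

Frame = list[list[float]]  # grayscale pixel grid

def predict_midframe(prev: Frame, nxt: Frame) -> Frame:
    return [
        [(p + n) / 2 for p, n in zip(prow, nrow)]
        for prow, nrow in zip(prev, nxt)
    ]

f0 = [[0, 2], [4, 6]]   # rendered frame N
f1 = [[2, 4], [6, 8]]   # rendered frame N+1
print(predict_midframe(f0, f1))  # → [[1.0, 3.0], [5.0, 7.0]]
```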
For the AI industry, DLSS 5 matters less as a gaming feature and more as a demonstration of AI inference capability on consumer hardware — proof that the inference efficiency improvements driving enterprise AI agents are simultaneously transforming consumer devices.
2.6 Announcement 5: Nvidia Acquires Groq
The final announcement, confirmed in the hours following Huang’s keynote: Nvidia has acquired Groq — the AI inference chip startup known for its LPU (Language Processing Unit) architecture, which achieved record inference speeds for LLM token generation at lower power consumption than standard GPUs.
The acquisition is structured as an acqui-hire: Groq’s engineering team joins Nvidia’s hardware research division, and Groq’s LPU architecture patents and research are absorbed into Nvidia’s IP portfolio. Financial terms were not disclosed.
The strategic rationale is clear: Groq had demonstrated inference efficiency advantages over Nvidia’s standard GPU approach for specific workloads — particularly high-throughput, low-latency LLM inference. Rather than allow Groq to mature into a competitive alternative — potentially attractive to hyperscalers seeking to reduce Nvidia dependency — Nvidia acquired the technology and the team.
For enterprises that had been evaluating Groq’s cloud inference API as a cost-reduction alternative to Nvidia-powered inference, the acquisition raises questions about the product’s roadmap under Nvidia ownership.
3. Meta’s Avocado Model Quietly Fails — And May License Google’s AI
3.1 What happened
On March 12, The New York Times broke a story that sent Meta’s stock lower and reverberated across the AI industry for days: Meta’s next-generation frontier AI model — internally codenamed “Avocado” — has underperformed GPT-5.4, Claude Opus 4.6, and Gemini Ultra 3.0 in internal benchmark testing across reasoning, coding, and writing tasks.
Bloomberg, Seeking Alpha, Business Times Singapore, and Yahoo Finance all followed with confirming coverage. The picture that emerged from multiple sources with knowledge of Meta’s internal AI programme is both technically detailed and strategically embarrassing.
3.2 What Avocado is and what went wrong
Avocado is Meta’s planned successor to LLaMA 4 — a frontier-scale, closed-weights model intended to power Meta AI across Instagram, WhatsApp, Facebook, Messenger, and Meta’s AR/VR platforms. Unlike the LLaMA series (which Meta releases as open weights), Avocado was designed as a proprietary, commercial-grade model that would give Meta a frontier AI capability competitive with OpenAI and Google on consumer AI experience.
The model has been in development for over 18 months, with a reported training budget in excess of $2 billion across Nvidia H100 and Blackwell clusters. Meta’s chief AI scientist Yann LeCun — now departed to AMI Labs — had advocated for a different architectural approach. His departure appears to have created a strategic vacuum that contributed to the current performance shortfall.
According to sources cited by the NYT, Avocado’s specific failure points include:
- Multi-step reasoning: Avocado’s chain-of-thought reasoning underperforms GPT-5.4 by a significant margin on ARC-AGI-2 and GPQA Diamond benchmarks.
- Code generation: On HumanEval and SWE-Bench Pro, Avocado trails both GPT-5.4 and DeepSeek V4 — a particularly painful result given that coding benchmarks are among the most directly commercially relevant.
- Long-form writing quality: Human evaluator studies show Avocado’s outputs rated lower on coherence, accuracy, and tone across professional writing tasks.
- Instruction following: Avocado shows higher rates of instruction deviation on complex, multi-constraint prompts — a failure mode that makes it unreliable for the agentic use cases Meta was targeting.
The launch, originally planned for March 2026, has been delayed to at least May 2026 pending a remediation effort that Meta’s AI leadership describes as “substantial.”
3.3 The Google Gemini licensing discussion
The detail that has generated the most industry commentary: Meta is actively discussing licensing Google’s Gemini models to power its consumer AI products while Avocado is being remediated.
Yahoo Finance and Investing.com both confirmed that discussions between Meta and Google are underway, though no agreement has been finalised. The commercial logic is straightforward: Meta’s consumer AI products — particularly Meta AI on WhatsApp and Instagram, which has hundreds of millions of active users — need a capable, reliable foundation model. If Avocado is not ready and LLaMA 4 is no longer competitive at the frontier, licensing Gemini is the fastest path to maintaining consumer AI quality.
The irony is acute. Mark Zuckerberg spent much of late 2025 publicly positioning Meta as an AI frontier lab — spending billions on compute, recruiting aggressively, and promising that Meta would “push the boundaries of what’s possible.” Licensing a competitor’s model to keep your consumer products running is a direct contradiction of that positioning.
For Google, the potential licensing deal is a pure win: it expands Gemini’s distribution to Meta’s multi-billion user base, generates licensing revenue, and demonstrates that Google’s AI has become the default infrastructure choice for AI companies that cannot build their own. Apple chose Gemini for Siri. Now potentially Meta for its social platforms. Google is winning the distribution war without winning the consumer AI brand war — a more durable strategic position.
3.4 What this means for the open-source AI ecosystem
Meta’s primary strategic contribution to the AI ecosystem has been the LLaMA series of open-weight models — the releases that triggered the open-source AI movement and gave developers, researchers, and enterprises a free, self-hostable alternative to proprietary frontier models.
If Meta’s proprietary frontier model development is faltering, it raises a question for the open-source ecosystem: will Meta continue to prioritise and invest in LLaMA development? Or will the Avocado setback lead to a retrenchment that slows future LLaMA releases?
The open-source community is watching closely. LLaMA 5 — the next major open-weight release — was expected in Q3 2026. Whether the Avocado delay affects that timeline remains unclear.
4. NSW Passes Australia’s First AI Workplace Safety Law — What Every Employer Needs to Know
4.1 Why this story matters for Australian businesses
Of the four stories this week, the NSW Digital Work Systems Act 2026 is the one with the most immediate, direct impact on Australian businesses — and the one that has received the least coverage in mainstream AI media.
Every employer in New South Wales now has a legal obligation to manage AI as a workplace health and safety risk — and because amendments to NSW’s WHS legislation tend to flow into Australia’s harmonised model WHS laws, employers nationally should expect equivalent obligations to follow. Failure to comply is not just a regulatory matter. Under the Act’s provisions, individual managers can be held personally liable for harms caused by AI-driven work systems they failed to adequately assess and govern.
This is not a future regulation. It is not a proposed framework. It is passed law, effective now.
4.2 What the Act does
The Work Health and Safety Amendment (Digital Work Systems) Act 2026 amends the existing NSW Work Health and Safety Act 2011 to explicitly include digital work systems — defined broadly to cover AI, algorithms, automated systems, and online platforms used to allocate, monitor, or evaluate work — within the employer’s primary duty of care.
PwC Australia’s legal advisory team and CBP Lawyers — both of whom published compliance notes within days of the Act’s passage — summarise the core obligations as follows:
Obligation 1: Duty of Care Extends to Digital Work Systems
Employers (referred to as “PCBUs” — Persons Conducting a Business or Undertaking) must ensure that digital work systems used to allocate tasks, monitor performance, set pace, or evaluate workers do not create risks to physical or psychological health and safety.
This obligation is explicit and enforceable. It means that if your organisation uses AI to allocate work queues, set productivity targets, schedule shifts, monitor employee performance, or make HR decisions, you must assess and manage the health and safety risks those systems create.
Obligation 2: Risk Assessment for Algorithmic Systems
PCBUs must conduct formal risk assessments of their digital work systems — identifying the specific ways in which AI or algorithmic management could cause physical injury, psychological harm, or unreasonable work intensity.
The Act does not prescribe a specific risk assessment methodology, but compliance advisors are recommending frameworks that include: worker consultation, workload intensity analysis, psychosocial hazard assessment, and regular review cycles as systems are updated.
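Because the Act prescribes no methodology, the recommended elements — worker consultation, workload intensity analysis, psychosocial hazard assessment, and review cycles — can be captured in something as simple as a structured risk register. A minimal sketch, with all field names hypothetical rather than drawn from the Act or any advisory framework:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of a digital-work-system risk register entry,
# covering the elements compliance advisors recommend. Field names
# are illustrative only.

@dataclass
class DigitalWorkSystemRisk:
    system_name: str            # e.g. "AI shift-rostering tool"
    function: str               # allocate / monitor / pace / evaluate
    hazards: list[str]          # identified physical or psychosocial hazards
    workers_consulted: bool     # consultation obligation (Obligation 3)
    controls: list[str]         # mitigations in place
    next_review: date           # regular review cycle as systems are updated

register = [
    DigitalWorkSystemRisk(
        system_name="Automated ticket-routing queue",
        function="allocate",
        hazards=["unreasonable work intensity", "no recovery time between tasks"],
        workers_consulted=True,
        controls=["pace caps", "human review of productivity targets"],
        next_review=date(2026, 10, 1),
    ),
]

# Flag any system deployed without worker consultation — a gap
# against the Act's consultation obligations.
gaps = [r.system_name for r in register if not r.workers_consulted]
print(gaps)  # → []
```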
Obligation 3: Worker Consultation
The Act extends existing WHS consultation obligations to digital work systems — meaning employers must consult workers (and their representatives) when introducing, modifying, or removing algorithmic work management systems. This is a significant practical obligation for organisations that have been deploying AI tools without formal worker engagement processes.
Obligation 4: Union Inspector Access Rights
Health and safety representatives and union inspectors can now enter workplaces to audit digital work systems — including AI-powered scheduling, performance monitoring, and task allocation tools. This is a significant expansion of union inspection rights into the technology layer of workplace management.
Obligation 5: Management Accountability
The Act clarifies that delegating decision-making to an algorithm does not extinguish management accountability. NSW WHS Minister Sophie Cotsis stated explicitly during the Act’s passage:
“If PCBUs delegate tasks to an algorithm which creates harm, the Act clarifies that managers are responsible for the human impact of those decisions.”
Individual managers — not just the organisation as an entity — can be held personally liable under the Act’s officer liability provisions if they fail to exercise due diligence in managing digital work system risks.
4.3 What digital work systems are covered
The Act’s definition of “digital work system” is intentionally broad and technology-neutral:
- AI-driven task allocation: Systems that assign work items, customer tickets, delivery routes, or care tasks to workers algorithmically.
- Algorithmic performance monitoring: Systems that track keystroke rates, response times, idle periods, or productivity scores and use those metrics to manage, assess, or discipline workers.
- Automated scheduling: Systems that generate work rosters, shift patterns, or on-call requirements without individual human review of each decision.
- AI-assisted HR decisions: Systems that score job applications, flag employees for review, or generate performance ratings using AI or algorithmic methods.
- Pace-setting algorithms: Systems in logistics, manufacturing, or customer service that set work pace expectations or throughput requirements algorithmically.
- Online platform work systems: Gig economy platforms that use algorithms to match workers with tasks, set pay rates, or manage worker ratings.
Note that the Act covers all of these categories — not just the most sophisticated AI systems. An Excel macro that automatically assigns inbound service tickets is technically a digital work system within the Act’s scope. A scheduling tool that generates rosters based on availability rules is covered. A customer service platform that routes calls based on agent performance scores is covered.
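To make that breadth concrete: a task-allocation routine no more sophisticated than the following — a hypothetical round-robin ticket assigner, with nothing resembling AI in it — still allocates work algorithmically, and so a tool like it sits within the Act’s definition:

```python
from itertools import cycle

# A deliberately trivial allocation routine: round-robin assignment of
# inbound tickets to agents. Nothing here is "AI" in any meaningful
# sense — yet because it allocates work algorithmically, a tool like
# this falls within the Act's broad definition of a digital work system.

def assign_tickets(tickets: list[str], agents: list[str]) -> dict[str, list[str]]:
    queue: dict[str, list[str]] = {agent: [] for agent in agents}
    for ticket, agent in zip(tickets, cycle(agents)):
        queue[agent].append(ticket)
    return queue

print(assign_tickets(["T1", "T2", "T3", "T4", "T5"], ["alice", "bob"]))
# → {'alice': ['T1', 'T3', 'T5'], 'bob': ['T2', 'T4']}
```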
4.4 National implications: is NSW the template?
Australia operates a model WHS law system, under which NSW, Queensland, South Australia, Tasmania, Western Australia (with some local modifications), the ACT, and the NT all base their workplace safety legislation on a harmonised national model. Victoria operates its own parallel OHS framework.
NSW amendments to the WHS Act frequently influence updates to the model WHS laws — and Safe Work Australia (the national body that maintains the model laws) has already confirmed it is reviewing the Digital Work Systems Act provisions for potential national model law incorporation.
PwC Australia’s legal team advises that even employers in states and territories other than NSW should treat the Digital Work Systems Act as a leading indicator of the compliance obligations they will face within 12–24 months as other jurisdictions adopt equivalent provisions.
For multinational enterprises with Australian operations, the Act also intersects with the EU AI Act’s high-risk AI provisions (which cover employment and worker management AI systems under their strictest requirements) and emerging equivalent frameworks in the UK and Canada.
4.5 The NSW AI Workplace Law Compliance Checklist
(See Section 8 for the full compliance checklist — a structured action plan for Australian employers.)
5. Baidu Joins the OpenClaw Frenzy — China’s Open-Source Agent War Goes Global
5.1 What happened
Reuters reported on March 17 that Baidu — China’s largest search and AI company and the creator of the ERNIE Bot AI assistant — has unveiled a full suite of AI agent products built on OpenClaw, the open-source agent orchestration framework.
Baidu joins ByteDance (owner of TikTok and Douyin) and Alibaba (owner of Taobao, Tmall, and Aliyun cloud) in deploying agents on the OpenClaw framework — creating what Reuters described as a “frenzy” of domestic Chinese interest in a single open-source agent standard.
5.2 What OpenClaw is
OpenClaw is an open-source AI agent orchestration framework — a software system that enables developers to build, deploy, and coordinate AI agents that can use tools, access external data sources, communicate with other agents, and execute multi-step autonomous workflows.
Think of it as the agent layer equivalent of what LLaMA did for base models: a free, open, community-developed foundation that any developer can build on without proprietary constraints.
OpenClaw emerged from the Chinese open-source AI community in late 2025 and has grown rapidly — now underpinning agent products from three of China’s four largest technology companies. Its architecture is designed to be model-agnostic (working with any LLM), tool-agnostic (connecting to any MCP-compatible tool server), and cloud-agnostic (deployable on any infrastructure).
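OpenClaw’s actual API is not documented in this article, but the “model-agnostic, tool-agnostic” design it describes follows a now-standard pattern: an orchestration loop that treats the LLM and the tools as pluggable interfaces. A generic sketch of that pattern — every name here is hypothetical, and none of it is OpenClaw’s real API:

```python
from typing import Callable, Protocol

# Generic sketch of a model-agnostic, tool-agnostic agent loop — the
# pattern frameworks like OpenClaw implement. All names are hypothetical;
# nothing here is OpenClaw's actual API.

class Model(Protocol):
    def complete(self, prompt: str) -> str: ...

ToolFn = Callable[[str], str]

def run_agent(model: Model, tools: dict[str, ToolFn], goal: str, max_steps: int = 5) -> str:
    """Ask the model for the next action; dispatch to a tool or finish."""
    context = f"GOAL: {goal}"
    for _ in range(max_steps):
        action = model.complete(context)      # e.g. "lookup: gtc dates" or "FINAL: ..."
        if action.startswith("FINAL:"):
            return action.removeprefix("FINAL:").strip()
        name, _, arg = action.partition(":")
        result = tools.get(name.strip(), lambda a: f"unknown tool {name}")(arg.strip())
        context += f"\n{action}\n-> {result}"  # feed the tool output back to the model

    return "step limit reached"

# A stub model scripted to take one tool step, then finish:
class StubModel:
    def __init__(self) -> None:
        self.step = 0
    def complete(self, prompt: str) -> str:
        self.step += 1
        return "lookup: gtc dates" if self.step == 1 else "FINAL: done"

out = run_agent(StubModel(), {"lookup": lambda q: f"result for {q}"}, "demo")
print(out)  # → done
```

Because the model and the tool table are both injected, the same loop runs against any LLM and any tool backend — which is the property that makes a single open framework attractive to multiple competing companies.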
5.3 The Nvidia connection — and what it means
The most geopolitically significant aspect of the OpenClaw story is the Nvidia NemoClaw partnership announced at GTC 2026.
NemoClaw — Nvidia’s new open-source agentic AI operating system — is built in direct collaboration with the OpenClaw project. The two frameworks are architecturally compatible: agents built on OpenClaw can run on NemoClaw-powered Nvidia infrastructure, and vice versa.
This creates a striking situation: the US company that controls the chips driving both the US and Chinese AI ecosystems has chosen to build its agent OS in partnership with the open-source framework that China’s largest technology companies are simultaneously standardising on.
The practical implication for the US government’s chip export control strategy is significant. Export controls restrict Chinese access to Nvidia’s physical hardware. They do not — and cannot — restrict the adoption of open-source software standards. If OpenClaw and NemoClaw achieve joint dominance as the global open-source agent framework, the agent software ecosystem becomes compatible across the US-China hardware divide.
Developers in countries using US chips and developers using domestically produced Chinese chips can build agents on the same framework — a convergence that chip export controls cannot prevent.
5.4 What the Chinese AI agent ecosystem looks like now
The speed with which China’s major technology companies have converged on a single open-source agent framework stands in contrast to the fragmentation in the US agent ecosystem, where OpenAI’s Agent Protocol, Anthropic’s MCP, Microsoft’s AutoGen, and Google’s Agent Space all compete for developer adoption.
China’s approach — driven partly by the practical necessity of working around hardware constraints, and partly by a cultural preference for open, shared infrastructure standards — may prove more durable than the proprietary US alternatives. A developer ecosystem built on a single open standard is easier to grow, easier to build tooling for, and easier to train developers on than one fragmented across incompatible proprietary protocols.
For Western enterprises assessing their AI agent strategy, the rapid coalescence of the Chinese agent ecosystem around OpenClaw is worth watching as a signal of where the global open-source agent standard may settle.
6. The Four Stories Are One Story
6.1 Hardware is the constraint, and one company controls it
The $1 trillion Nvidia order book is not just a financial milestone. It is the physical expression of every other story this week.
Meta’s Avocado model failed partly because Meta’s AI training infrastructure — despite being enormous by any historical measure — is operating at the limits of what current hardware generations can deliver efficiently. The compute required to train a genuinely frontier model at GPT-5.4’s capability level is so large that only the companies with the deepest infrastructure partnerships — OpenAI with Microsoft Azure, Google with its own TPU fleet, Anthropic with Amazon’s AWS investment — can consistently build it. Meta’s training infrastructure, while substantial, has not been as efficiently scaled.
Nvidia’s GTC 2026 announcements — Vera Rubin’s 2.5× training throughput improvement, the 4× memory bandwidth increase — are the hardware upgrades that will define who can train the next generation of frontier models. The $1 trillion order backlog tells you that every major player has already decided the answer is: whoever has the most Vera Rubin systems.
6.2 The regulatory reckoning has arrived — starting in Australia
The NSW Digital Work Systems Act is the most advanced example of a trend that is accelerating globally: governments are moving from AI ethics frameworks (voluntary, advisory, non-binding) to AI safety law (mandatory, enforceable, personally liable).
The EU AI Act has been in staged implementation since 2024. The UK’s AI Safety Institute is moving toward binding obligations. The US is developing federal AI governance frameworks. Australia — through NSW — has now moved ahead of the US in one specific and consequential domain: workplace AI safety.
For businesses operating globally, this is the beginning of a compliance burden that will compound. The organisations that build AI governance infrastructure now — risk assessment processes, worker consultation frameworks, audit trails, management accountability frameworks — will navigate the regulatory wave more efficiently than those that wait for each jurisdiction to force action.
6.3 Open source is winning the infrastructure war
Both NemoClaw (Nvidia’s agent OS) and OpenClaw (the dominant Chinese agent framework) are open source. LLaMA 4 (Meta’s base model) is open source. DeepSeek V4 is open source. Karpathy’s research agent is open source.
The major AI labs — OpenAI, Anthropic, Google — maintain proprietary models for their frontier capabilities. But the infrastructure layer — the frameworks, the orchestration tools, the operating systems, the integration protocols — is trending strongly toward open source.
This matters for enterprise AI strategy because open-source infrastructure is, by definition, not subject to proprietary lock-in, vendor pricing power, or the contractual and licensing restrictions that proprietary US lab products carry. For enterprises building durable AI capabilities, the open-source infrastructure layer is increasingly the strategic foundation — and the proprietary frontier models are the interchangeable components deployed on top of it.
7. What This Means for Businesses
7.1 Nvidia’s order book is your AI infrastructure forecast
If you are making capital investment decisions about AI infrastructure — private cloud, colocation, cloud provider selection, on-premise GPU deployment — Nvidia’s $1 trillion order backlog tells you several things directly:
Vera Rubin availability will be constrained through 2026 and into 2027. The demand-to-supply ratio at the frontier of AI hardware means that enterprises without existing relationships or committed orders will face long wait times for the highest-performance systems. If your AI strategy depends on access to frontier hardware, the time to establish that access is now.
Cloud pricing for AI inference will remain elevated. All three major hyperscalers (AWS, Azure, Google Cloud) are competing for the same Vera Rubin allocation. Until manufacturing capacity scales, AI inference pricing via cloud APIs will not compress as quickly as it did for previous GPU generations. Self-hosting on available hardware — including DeepSeek V4 on current-generation GPUs — may be the most cost-effective inference path for many workloads in 2026.
The agentic AI hardware stack is built. Vera Rubin NVL72 + NemoClaw represents the complete hardware and OS foundation for enterprise AI agent deployment at scale. Organisations evaluating agentic AI architectures can now plan against a known, available hardware reference stack.
7.2 Meta’s Avocado failure is a signal, not just a scandal
The instinct is to read the Avocado story as a competitive intelligence update — Meta’s model is behind, OpenAI and Google are ahead, noted and moving on.
The more useful reading is structural: building and maintaining frontier AI models is now so technically complex and computationally expensive that even a company with Meta’s resources, talent, and infrastructure is struggling to do it reliably.
This has a direct implication for any enterprise that has been considering building proprietary AI models in-house rather than deploying existing frontier models. If Meta — with a $2+ billion training budget, thousands of AI researchers, and one of the world’s largest data-centre footprints — cannot reliably produce a frontier model competitive with OpenAI and Google, the ROI case for enterprise in-house model development is even harder to justify than it was six months ago.
The strategic calculus for most organisations remains: deploy existing frontier models on well-designed context engineering and MCP infrastructure rather than attempt to build or train models from scratch. The Avocado story is a data point that strengthens that position.
7.3 The NSW law requires action this quarter
Australian employers — particularly those in NSW, but also those monitoring the national implications — cannot treat the Digital Work Systems Act as a 2027 problem. The Act is in force now. The compliance obligations are active. The personal liability provisions for managers are live.
The organisations most exposed in the short term are those in:
- Logistics and delivery (algorithmic route assignment, driver performance monitoring, delivery pace-setting)
- Customer service and contact centres (AI-driven call routing, agent performance scoring, automated quality monitoring)
- Healthcare and aged care (AI-driven care task allocation, staff scheduling algorithms)
- Retail and warehousing (automated pick-rate setting, inventory management systems that set worker tasks)
- Professional services (AI performance evaluation tools, workload allocation systems)
- Gig economy platforms (task matching algorithms, rating systems, dynamic pricing that affects worker income)
(See Section 8 for the full compliance checklist.)
7.4 Consider OpenClaw for your agent architecture
For technology teams evaluating agent orchestration frameworks, the OpenClaw + NemoClaw combination deserves serious evaluation alongside the proprietary alternatives (Microsoft AutoGen, Google Agent Space, Amazon Bedrock Agents).
Key advantages:
- Open source under a permissive licence — no vendor lock-in, no proprietary pricing
- Model-agnostic — works with any MCP-compatible LLM
- Nvidia-endorsed via NemoClaw — hardware optimisation pathway available
- Rapidly growing ecosystem — ByteDance, Alibaba, Baidu all building on it
- Compatible with both US and Chinese AI infrastructure — reduces geopolitical supply chain risk
Key considerations:
- Newer and less enterprise-mature than proprietary alternatives
- Support and SLA guarantees require third-party commercial wrappers
- Documentation is strong for developers but lighter on enterprise governance tooling
For organisations building greenfield agent infrastructure in 2026, OpenClaw is worth a formal evaluation. For organisations already committed to Microsoft or Google cloud ecosystems, the proprietary alternatives remain more integrated choices — but monitoring OpenClaw’s maturity is warranted.
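In practice, “model-agnostic” means the orchestration layer codes against an abstract model interface rather than one vendor’s SDK, so swapping the underlying LLM is a configuration change rather than a rewrite. The sketch below is a generic illustration of that adapter pattern in Python — the class and method names are hypothetical and are not OpenClaw’s actual API, which teams should verify against the framework’s own documentation.

```python
from abc import ABC, abstractmethod


class ModelBackend(ABC):
    """Abstract model interface the agent layer codes against (hypothetical)."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class LocalBackend(ModelBackend):
    """Stand-in for a self-hosted open-weights model."""

    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"


class HostedBackend(ModelBackend):
    """Stand-in for a proprietary API-hosted model."""

    def complete(self, prompt: str) -> str:
        return f"[hosted] {prompt}"


class Agent:
    """Minimal agent: the backend can be swapped without touching agent logic."""

    def __init__(self, backend: ModelBackend):
        self.backend = backend

    def run(self, task: str) -> str:
        return self.backend.complete(task)


# Switching models is a one-line change — the point of model-agnosticism.
agent = Agent(LocalBackend())
```

The design benefit is the one the bullet list claims: because no agent code depends on a specific vendor SDK, an organisation can move between open-weights and proprietary models as pricing, availability, or geopolitics shift.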
8. The NSW AI Workplace Law: Employer Compliance Checklist
For Australian employers, particularly those in NSW; also recommended for employers in all other Australian states and territories as a forward-looking compliance framework.
Phase 1: Immediate Actions (Complete Within 30 Days)
- Identify all digital work systems in use across your organisation — including AI, algorithms, automated scheduling, performance monitoring tools, task allocation systems, and gig platform integrations.
- Map which systems fall within the Act’s scope — any system that allocates work, monitors performance, sets pace, schedules shifts, or influences employment decisions.
- Assign a named compliance owner — a senior manager with explicit accountability for digital work system safety, supported by HR and legal.
- Brief your executive team and board — the Act’s personal liability provisions mean individual officers can be held liable. This is a board-level governance issue, not just an HR or IT issue.
- Engage your legal advisors — PwC, CBP, and other major firms have published specific guidance on the Act. Obtain a formal legal opinion on your current compliance position.
Phase 2: Risk Assessment (Complete Within 60 Days)
- Conduct a psychosocial hazard assessment for each digital work system — identifying risks including excessive workload, unpredictable scheduling, surveillance anxiety, deskilling, and lack of autonomy.
- Assess physical health risks — particularly relevant for logistics, manufacturing, and healthcare settings where algorithmic pace-setting may create injury risks.
- Document algorithmic decision logic — for each system, document how it makes decisions, what data it uses, and what human oversight exists for its outputs.
- Identify high-risk systems — prioritise systems with the most direct impact on worker wellbeing for immediate remediation or enhanced governance.
- Review vendor documentation — for commercially provided AI or algorithmic management tools, obtain the vendor’s safety documentation and assess whether it meets your duty of care obligations.
Phase 3: Worker Consultation (Complete Within 60 Days)
- Establish consultation process — engage worker representatives (including health and safety representatives and, where applicable, union delegates) in a formal consultation about digital work systems.
- Document consultation outcomes — record worker concerns, proposed modifications, and agreed actions. This documentation is your evidence of compliance with the consultation obligation.
- Communicate changes — inform workers about what digital work systems are in use, how they function, and what safeguards are in place.
Phase 4: Governance Framework (Complete Within 90 Days)
- Implement audit logging — ensure all AI and algorithmic decisions that affect workers are logged with sufficient detail to support safety investigations and regulatory audits.
- Establish a review cycle — digital work systems must be re-assessed when materially updated, when new evidence of risk emerges, or at minimum annually.
- Create an escalation pathway — workers must have a clear, accessible way to raise concerns about digital work systems without fear of retaliation.
- Prepare for inspector access — health and safety inspectors and union representatives can now audit your digital work systems. Ensure documentation, access rights, and staff briefing are in place.
- Update your WHS management system — formally incorporate digital work system risk into your existing WHS framework documentation, policies, and training.
Phase 5: Ongoing Compliance
- Monitor regulatory guidance from SafeWork NSW and Safe Work Australia for enforcement priorities and compliance resources.
- Track national model law updates — as other Australian jurisdictions adopt equivalent provisions, maintain a nationally consistent compliance approach.
- Review AI vendor contracts — ensure vendor agreements include obligations for safety documentation, update notifications, and compliance support.
- Train managers — all managers who make decisions involving digital work systems should receive training on the Act’s obligations and their personal liability exposure.
9. This Week’s AI Landscape: March 19, 2026 Comparison Table
| Development | Organisation | Impact Level | Who It Affects Most | Action Required |
|---|---|---|---|---|
| Vera Rubin NVL72 unveiled | Nvidia | 🔴 Critical | Cloud providers, AI labs, enterprise infrastructure | Assess hardware roadmap; plan for constrained availability |
| $1T confirmed order backlog | Nvidia | 🔴 Critical | All AI-dependent enterprises | Lock in cloud AI contracts; explore self-hosting alternatives |
| NemoClaw OS launched | Nvidia + OpenClaw | 🟠 High | AI platform developers, enterprise architects | Evaluate for agent infrastructure; compare to AutoGen/Bedrock |
| Groq acquisition | Nvidia | 🟡 Medium | Groq API customers; inference cost strategists | Monitor roadmap impact; reassess inference cost strategy |
| Avocado model delay | Meta | 🟠 High | Meta AI users; enterprise Meta API customers | Assess dependency on Meta AI products; maintain vendor diversification |
| Meta considering Gemini licence | Meta + Google | 🟠 High | Google investors; OpenAI distribution strategy | Monitor — if confirmed, strengthens Gemini’s enterprise position |
| LLaMA 5 timeline uncertainty | Meta | 🟡 Medium | Open-source AI developers | Monitor; maintain DeepSeek V4 as primary open-source alternative |
| NSW Digital Work Systems Act 2026 | NSW Government | 🔴 Critical (AU) | All NSW employers using AI/algorithms | Immediate compliance action — see checklist above |
| National WHS model law review | Safe Work Australia | 🟠 High (AU) | All Australian employers | Treat NSW obligations as national baseline now |
| Baidu joins OpenClaw ecosystem | Baidu + OpenClaw | 🟡 Medium | Agent platform developers; China market operators | Evaluate OpenClaw for cross-border agent compatibility |
Impact levels: 🔴 Critical = immediate action required; 🟠 High = strategic review required; 🟡 Medium = monitor and assess
10. FAQ
What did Nvidia announce at GTC 2026?
At GTC 2026, held March 16–19 in San Jose, Nvidia CEO Jensen Huang announced five major developments: the Vera Rubin NVL72 full-stack AI supercomputer (successor to Blackwell, with approximately 2.5× training throughput improvement); a $1 trillion confirmed purchase order backlog for Blackwell and Vera Rubin systems; NemoClaw, an open-source agentic AI operating system built with OpenClaw; DLSS 5 neural rendering for consumer gaming; and the acquisition of Groq, the AI inference chip startup.
What is the Nvidia Vera Rubin NVL72?
The Vera Rubin NVL72 is Nvidia’s next-generation AI computing system, announced at GTC 2026. It is a 72-GPU, rack-scale, full-stack AI supercomputer designed specifically for agentic AI workloads. It delivers approximately 2.5× the AI training throughput of its Blackwell predecessor, 4× the memory bandwidth via NVLink 6, and 9.6× more HBM4 memory per system. Jensen Huang described it as “the engine supercharging the era of agentic AI.” It is built for inference-heavy, long-context, continuous production AI agent workloads rather than traditional batch training.
What happened to Meta’s Avocado AI model?
Meta’s next-generation AI model, internally codenamed “Avocado,” has underperformed GPT-5.4, Claude Opus 4.6, and Gemini Ultra 3.0 in internal benchmark testing across reasoning, coding, and writing tasks. According to sources cited by The New York Times and confirmed by Bloomberg, the model’s launch has been delayed from March 2026 to at least May 2026 pending significant remediation work. Meta is reportedly in discussions to license Google’s Gemini models to power its consumer AI products — including Meta AI on WhatsApp and Instagram — while Avocado is being fixed.
What is the NSW Digital Work Systems Act 2026?
The Work Health and Safety Amendment (Digital Work Systems) Act 2026 is a NSW law that amends the existing Work Health and Safety Act 2011 to explicitly include AI, algorithms, automated scheduling, and algorithmic performance monitoring within the employer’s primary duty of care. Under the Act, employers must assess and manage the health and safety risks of digital work systems used to allocate, monitor, or evaluate work. Union inspectors can audit algorithmic work systems. Individual managers can be held personally liable for harms caused by digital work systems they failed to adequately govern. The Act is in force now.
Does Australia have AI workplace safety laws?
Yes. As of early 2026, NSW — Australia’s most populous state — has passed Australia’s first AI-specific workplace safety legislation: the Work Health and Safety Amendment (Digital Work Systems) Act 2026. The Act makes AI and algorithmic work management systems an explicit employer duty of care obligation, gives union inspectors the right to audit such systems, and holds individual managers personally liable for harms caused by inadequately governed AI systems. Safe Work Australia is reviewing the Act’s provisions for potential incorporation into the national model WHS laws, which would extend equivalent obligations to all Australian states and territories.
What is OpenClaw AI?
OpenClaw is an open-source AI agent orchestration framework — software that enables developers to build, deploy, and coordinate autonomous AI agents that can use tools, access data, communicate with other agents, and execute multi-step workflows. It has become the dominant AI agent standard in China, with ByteDance, Alibaba, and Baidu all deploying agent products built on the framework. Nvidia announced a direct partnership with OpenClaw at GTC 2026, integrating OpenClaw compatibility into NemoClaw — Nvidia’s new open-source agentic AI operating system — creating architectural compatibility between US and Chinese AI agent infrastructure.
Is Meta using Google’s Gemini AI?
As of March 19, 2026, Meta and Google are in active discussions about Meta licensing Google’s Gemini models to power Meta AI products on WhatsApp, Instagram, and Facebook while Meta’s proprietary Avocado model undergoes performance remediation. No agreement has been finalised. If completed, the deal would give Gemini distribution access to Meta’s multi-billion user base — the largest potential AI distribution win since Google’s agreement to power Apple’s Siri with Gemini Ultra.
What is NemoClaw?
NemoClaw is Nvidia’s open-source agentic AI operating system, announced at GTC 2026. It is a software layer built in partnership with the OpenClaw open-source agent framework that manages AI agent deployment, orchestration, tool access, memory, and inter-agent communication on Nvidia-powered infrastructure. Jensen Huang described it as “the open-sourced operating system of agentic computers.” It is architecturally compatible with OpenClaw — the dominant Chinese open-source agent framework — creating a potential convergence of US and Chinese AI agent software ecosystems on a shared open standard.
How should Australian employers comply with the NSW AI workplace law?
Australian employers should take immediate action across five phases: (1) identify all digital work systems in use, including AI scheduling, performance monitoring, and task allocation tools; (2) conduct formal risk assessments for psychosocial and physical health hazards; (3) establish worker consultation processes about digital work systems; (4) implement governance frameworks including audit logging, escalation pathways, and management training; and (5) establish ongoing review cycles and monitor Safe Work Australia for national model law updates. Individual managers should be briefed on their personal liability exposure under the Act’s officer liability provisions. See the full Kersai compliance checklist above for a structured action plan.
This article was researched and written by the Kersai Research Team. Kersai is a global AI consultancy firm dedicated to helping enterprises confidently navigate the rapidly evolving artificial intelligence landscape — from cutting-edge strategic insights to practical, large-scale AI implementation. To learn more, visit kersai.com.
