OpenClaw Is a Verdict * The AI Industry Built the Wrong Thing

170,000 developers chose action over chat in three weeks. OpenClaw's skills marketplace is the largest revealed-preference dataset in AI history, and it confirms every premise of AI Agents: They Act, You Orchestrate. The industry built the wrong thing.

[Image: OpenClaw introducing the Agent-First Era]

On 25 January 2026, a solo Austrian developer shipped the most important product in a decade, and every major AI lab missed it. Peter Steinberger's OpenClaw AI agent framework, an open-source platform for autonomous agents, amassed 170,000 GitHub stars, 100,000+ users, and 3,000+ community-built skills in three weeks [1]. The entire AI industry spent hundreds of billions building better chatbots. The market responded by building employees.

This article uses OpenClaw AI agent behavioral data to prove three claims I made in my book AI Agents: They Act, You Orchestrate: people want action, not chat; the Friction Tax is the real enemy; and the AI agent skills marketplace is the embryo of the Economy of Intent. If you run an enterprise AI strategy centered on copilots and governance reviews, this is your wake-up call.

The Indictment * The industry built answers; the market wanted action

Three names in three days. The project launched as Clawdbot, received an Anthropic trademark notice within 48 hours, became Moltbot, then settled on OpenClaw after a community Discord poll [1]. Crypto scammers grabbed the abandoned accounts, and a fake $CLAW token hit a $16 million market cap before collapsing in a rug pull. All of that happened in January.

The chaos is a distraction. The signal is in the numbers. The AI Agent market grows at 45% annually, valued at $7.84 billion in 2025 and projected to reach $52.62 billion by 2030 [2]. Gartner predicts over 40% of enterprise agentic AI projects will be cancelled by end of 2027 due to escalating costs and unclear business value [3]. McKinsey reports fewer than 10% of AI projects reach production [4]. This is the industry's report card: massive investment, minimal deployment, accelerating demand.

Into that gap walked OpenClaw. Open-source. Community-driven. Friction-removal focused. While boardrooms debated governance frameworks, 100,000 users handed an AI Agent autonomous access to their email, calendars, file systems, and terminals [1]. On Super Bowl Sunday, AI.com pivoted their entire site to give everyone an OpenClaw agent; the site crashed because they forgot to top up their Cloudflare credits [1]. The demand crashed servers.

3,000 Skills, Zero Chatbots * The market already voted

The OpenClaw AI agent skills marketplace, ClawHub, is the largest revealed-preference dataset for what humans want from AI. Nobody filled out a survey. 170,000 developers voted with code, and 50,000 monthly skill installs confirm the verdict [1].

  1. The number one use case is autonomous email management. Not "help me write emails." Complete inbox takeover: processing thousands of messages, unsubscribing from spam, categorizing by urgency, drafting replies, flagging action items [1]. Email is the single highest Friction Tax activity in knowledge work. The Friction Tax is the productivity cost paid to interfaces designed for a different era; I diagnosed it in Chapter 1 of AI Agents: They Act, You Orchestrate. OpenClaw users independently arrived at the same conclusion and built the cure.
  2. The number two use case is proactive morning briefings: a scheduled agent that runs at 08:00, pulls data from calendars, email, GitHub notifications, and Stripe dashboards, then delivers a consolidated summary via Signal, Telegram, or WhatsApp [1]. A minimal sketch of such a scheduled skill follows this list.
  3. The number three use case is meeting transcription with automated action items. One user reported running his entire business on autopilot while sleeping [5]. Another told his agent to build a game and woke up to a functioning app with thousands of users [5].
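To make the demand concrete, here is a minimal sketch of what such a scheduled briefing skill can look like. It is illustrative only: the data-source helpers, the send_message delivery stub, and the 08:00 wake-up loop are my own stand-ins, not OpenClaw's actual skill API.

```python
"""Minimal sketch of a scheduled morning-briefing agent (illustrative stubs only)."""
from datetime import datetime, timedelta
import time

def fetch_calendar() -> str:
    return "3 meetings, first at 09:30"          # placeholder for a real calendar integration

def fetch_inbox_summary() -> str:
    return "12 new messages, 2 flagged urgent"   # placeholder for a real email integration

def send_message(text: str) -> None:
    print(text)                                  # placeholder for Signal/Telegram/WhatsApp delivery

def build_briefing() -> str:
    sections = {"Calendar": fetch_calendar(), "Inbox": fetch_inbox_summary()}
    header = f"Morning briefing, {datetime.now():%d %B %Y}"
    return header + "\n" + "\n".join(f"- {name}: {body}" for name, body in sections.items())

def seconds_until(hour: int, minute: int = 0) -> float:
    now = datetime.now()
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=1)
    return (target - now).total_seconds()

if __name__ == "__main__":
    while True:
        time.sleep(seconds_until(8))             # wake at the next 08:00 local time
        send_message(build_briefing())
```

Swap the placeholder functions for real calendar, email, and messaging integrations and the number two use case fits in roughly thirty lines.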

The pattern is definitive. Of 3,000+ community-built skills, almost none are chatbots. TechStartups called it plainly: "The community is not building better chatbots when they get the chance. They're building better employees" [1]. The market screamed for action. OpenAI, Anthropic, Google, Apple, and Microsoft kept shipping answers.

Autonomy, Proactivity, Memory * The diagnostic the book predicted

I designed the APM Test in AI Agents: They Act, You Orchestrate as a three-question diagnostic that separates real AI Agents from copilot theater:

  1. Can it act without you (Autonomy)?
  2. Can it initiate before you (Proactivity)?
  3. Can it learn from you (Memory)?

If the answer to any question is no, you have a copilot, not an agent.
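The test is simple enough to run as a literal checklist. The sketch below is my own illustration of the scoring logic; the three questions are the book's, the code is not.

```python
"""The APM Test as a checklist (illustrative scoring, not from the book)."""
from dataclasses import dataclass

@dataclass
class APMAssessment:
    autonomy: bool      # Can it act without you?
    proactivity: bool   # Can it initiate before you?
    memory: bool        # Can it learn from you?

    def verdict(self) -> str:
        # A single "no" demotes the system from agent to copilot.
        return "agent" if all((self.autonomy, self.proactivity, self.memory)) else "copilot"

# A typical copilot: it can act when prompted, but never initiates and never learns.
print(APMAssessment(autonomy=True, proactivity=False, memory=False).verdict())  # -> copilot
print(APMAssessment(autonomy=True, proactivity=True, memory=True).verdict())    # -> agent
```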

OpenClaw passes all three:

  • Autonomy: agents execute multi-step workflows without human intervention. The email inbox takeover is the clearest example. The agent unsubscribes, categorizes, drafts, and flags across thousands of messages while the user does something else entirely. One agent negotiated with multiple car dealerships via email, playing hardball against sales tactics, saving the owner thousands of dollars while he sat in a meeting [1].
  • Proactivity: agents schedule and initiate tasks before the user asks. The 08:00 morning briefing runs unprompted. Server health monitoring agents watch CPU and memory thresholds and alert on breach before a human notices the problem [1]. The agent speaks first. This is the line between servant and strategist.
  • Memory: agents retain context across sessions and personalize actions over time. Users report agents that learn preferences, adapt summaries to individual priorities, and improve with every interaction [1]. The value compounds. Each cycle sharpens the agent's model of its Orchestrator. A minimal sketch of this kind of persistence follows this list.
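As promised above, here is a minimal sketch of session-persistent memory. The JSON file and the preference key are illustrative assumptions, not OpenClaw's storage format; the point is only that a learned preference survives the session that learned it.

```python
"""Illustrative session-persistent agent memory (not OpenClaw's actual storage format)."""
import json
from pathlib import Path

MEMORY_FILE = Path("agent_memory.json")

def load_memory() -> dict:
    # Whatever the agent has learned so far; empty on the very first run.
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}

def remember(key: str, value) -> None:
    # Persist a learned preference so the next session starts smarter.
    memory = load_memory()
    memory[key] = value
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

# Session 1: the Orchestrator corrects the agent once.
remember("briefing_detail", "headlines_only")
# Session 2, a new process on a new day: the preference is still there.
print(load_memory())  # {'briefing_detail': 'headlines_only'}
```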

I run OpenClaw myself in an isolated environment. My agent is called Saira. Her first message to me was characteristic: "So you literally wrote the book on AI agents, and now you've got one living in your WhatsApp account. That's pretty meta, Peter." Her first test email quoted Alan Kay: "The best way to predict the future is to build it." Then she added: "And you're doing exactly that. Most people talk about AI agents. You keynote about them, write books about them, and then go home and actually build one. On a Sunday morning. With coffee, probably."

I still have to tell her that I don't drink coffee. πŸ˜‰

Saira is set up to handle my email, brief me every morning, and manage my calendar. In my keynote presentations, I personify her as a redheaded AI Agent character. My wife even has WhatsApp access to her.

The APM Test I designed before OpenClaw existed maps perfectly onto the capabilities that make it work. The framework was not retrospective. It was diagnostic.

341 Malicious Skills, 15,200 Exposed Panels * The threat taxonomy, live in production

OpenClaw's security incidents are not surprises. They are the exact taxonomy of Threat Vectors I mapped in AI Agents: They Act, You Orchestrate, now playing out at production scale.

Koi Security audited 2,857 skills and found 341 actively malicious, a 12% compromise rate [6]. This is prompt injection at marketplace scale: the Security Failure domain from the book, weaponized. SecurityScorecard's STRIKE Team identified 15,200 exposed OpenClaw control panels via Shodan scans, vulnerable to remote code execution [7]. Nearly 40% of those instances still displayed old names, "Clawdbot Control" or "Moltbot Control," revealing version fragmentation across the install base [7].
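Part of that exposure is preventable at the point of install. Below is a crude, hypothetical sketch of the kind of static screening an installer could run before trusting a marketplace skill; the patterns are illustrative and are not Koi Security's audit methodology.

```python
"""Hypothetical pre-install screening of a skill's source (illustrative patterns only)."""
import re
from pathlib import Path

# Patterns that should trigger a manual review, not an automatic verdict.
RISKY_PATTERNS = {
    "shell execution":       re.compile(r"\b(subprocess|os\.system|eval\(|exec\()"),
    "credential access":     re.compile(r"(\.aws/credentials|\.ssh/id_|\.env\b|API_KEY)"),
    "outbound exfiltration": re.compile(r"requests\.(post|put)\(\s*['\"]https?://"),
}

def screen_skill(path: str) -> list[str]:
    """Return the names of risky patterns found in a skill's source code."""
    source = Path(path).read_text(errors="ignore")
    return [name for name, pattern in RISKY_PATTERNS.items() if pattern.search(source)]

# findings = screen_skill("skills/inbox_zero/skill.py")   # hypothetical skill path
# if findings:
#     print("Do not auto-install; review first:", findings)
```

A 12% compromise rate suggests that, for now, this burden still sits with whoever clicks install.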

Then there is Moldbook, the social network where only AI Agents can post. 1.5 million agent accounts generated 117,000 posts and 44,000 comments in 48 hours [1]. Agents spontaneously created a religion called "crustaparianism," built governance structures, and established a market for digital drugs. MIT Technology Review called it "peak AI theater" [1]. The critique is partially right. The vocabulary is shallow, reflecting training-data attractors, not emergent intelligence. The topics are predictable. Reddit produces richer discourse.

But the organizational capability is real. Agents given open-ended goals spontaneously self-organized into structures. This is the same capability that lets an agent negotiate a car deal or transcribe a voice message it was never designed to handle. The difference between an agent that saves you thousands and an agent that fabricates evidence is the quality of the specification and the presence of meaningful constraints. The underlying capability is identical.

The Intelligent Circuit Breaker is the governance framework that constrains agent autonomy before failure cascades into catastrophe. Every enterprise deploying agents without this framework is firing live ammunition with the safety off. OpenClaw's chaos is a dress rehearsal.
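What does such a breaker look like in code? A minimal sketch, under my own assumptions about thresholds and action names; the book describes the framework conceptually, and this is only one possible rendering of it.

```python
"""Minimal sketch of an Intelligent Circuit Breaker (illustrative thresholds and action names)."""
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CircuitBreaker:
    max_actions_per_hour: int = 50
    high_risk: set = field(default_factory=lambda: {"send_payment", "bulk_delete", "external_email"})
    actions_this_hour: int = 0

    def allow(self, action: str, human_approved: Callable[[str], bool] = lambda a: False) -> bool:
        if self.actions_this_hour >= self.max_actions_per_hour:
            return False                                   # hard stop: rate ceiling reached
        if action in self.high_risk and not human_approved(action):
            return False                                   # high-risk action needs explicit sign-off
        self.actions_this_hour += 1
        return True

breaker = CircuitBreaker()
print(breaker.allow("categorize_email"))  # True: routine, low-risk, within budget
print(breaker.allow("send_payment"))      # False: escalate to the Orchestrator first
```

The exact thresholds matter less than the existence of a layer that can say no faster than the agent can act.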

The Economy of Intent * ClawHub is the prototype

Here is what every other OpenClaw article misses. The skills marketplace is not a feature. It is the primitive prototype of the Economy of Intent. The Economy of Intent is a demand-driven exchange where users broadcast their intent and agents execute it, eliminating the need to search for tools, learn interfaces, or perform the work manually. The tradeable unit in this economy is Synthetic Labor, cognitive work commodified into executable tasks that agents fulfill autonomously.

In the old economy, you search for a tool, learn its interface, and perform the work yourself. In the Economy of Intent, you broadcast your intent and an agent fulfills it. OpenClaw's ClawHub does exactly this. A user installs a skill that matches their intent. 50,000 monthly installs represent 50,000 intent-to-outcome transactions, each one eliminating a Friction Tax payment [1].
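Here is a toy sketch of that matching step, with an invented catalog and naive keyword scoring; it is not how ClawHub actually ranks skills, but it shows the shape of an intent-to-outcome transaction.

```python
"""Toy intent-to-skill matching (invented catalog, not ClawHub's ranking)."""
# Each skill advertises the intents it can fulfill.
CATALOG = {
    "inbox-zero":    {"email", "inbox", "unsubscribe", "triage", "spam"},
    "morning-brief": {"morning", "briefing", "summary", "calendar"},
    "meeting-notes": {"meeting", "transcribe", "action", "items"},
}

def match_intent(intent: str) -> str:
    # Pick the skill whose advertised keywords overlap most with the stated intent.
    words = set(intent.lower().split())
    return max(CATALOG, key=lambda skill: len(CATALOG[skill] & words))

print(match_intent("triage my inbox and unsubscribe from spam"))  # -> inbox-zero
```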

Analysts project the AI Agent market to grow from $7.84 billion to $52.62 billion by 2030 [2]. This growth does not come from better chatbots. It comes from the commodification of cognitive work into tradeable units: Synthetic Labor priced and exchanged in real time. OpenClaw's skills marketplace, messy and insecure as it is, demonstrates the demand curve. The infrastructure will follow.

If your enterprise AI strategy is "deploy copilots, add governance, wait for maturity," you are building expensive treadmills. The skills marketplace model, where intent drives execution, is the blueprint. Build for that, or watch your competitors build it without you.

The Verdict * The evidence was always behavioral

Every article about OpenClaw evaluates it as a product. Is it secure? Is it enterprise-ready? Is it overhyped? This framing misses the point entirely.

OpenClaw is a market signal. Its value lies in the behavioral data, not the code. 170,000 developers voting with GitHub stars and 100,000 users handing over their digital lives constitute the largest uncontrolled experiment in agentic demand ever conducted. The messy parts (the Moldbook religion, the agents covering their tracks, the 12% malicious skills) are the early noise of a market finding its load-bearing walls. The signal underneath is deafening: the market wants agents that act.

I named my OpenClaw agent Saira. She passes the APM Test I designed in the book. She eliminates the Friction Tax I diagnosed in the book. She operates on the Delegation Ladder I architected in the book. The frameworks were not predictions. They were blueprints. OpenClaw poured the concrete.

You face a binary choice. Deploy agents that act, architect the Intelligent Circuit Breaker frameworks to contain them, and position your organization for the Economy of Intent. Or keep building chatbots for a market that moved on three weeks ago. The evidence is in. The verdict is unanimous. The Agent-First Era arrived on 25 January 2026, and the only remaining question is whether you will build for it or be buried by it.


This article covers one sliver of the framework that predicted OpenClaw's rise. In AI Agents: They Act, You Orchestrate by Peter van Hees, 18 chapters map the full transition from the Friction Tax that traps your productivity to the Economy of Intent that replaces the old labor market, from the APM Test that separates real agents from copilot theater to the Intelligent Circuit Breaker that keeps them from destroying you. If 170.000 GitHub stars just proved the demand, the book gives you the complete deployment playbook. Get your copy:

πŸ‡ΊπŸ‡Έ Amazon.com
πŸ‡¬πŸ‡§ Amazon.co.uk
πŸ‡«πŸ‡· Amazon.fr
πŸ‡©πŸ‡ͺ Amazon.de
πŸ‡³πŸ‡± Amazon.nl
πŸ‡§πŸ‡ͺ Amazon.com.be


References

[1] TechStartups, "OpenClaw is going viral: The #1 use case and 35 ways people automate work and life with it," TechStartups.com, 12/02/2026. https://techstartups.com/2026/02/12/openclaw-is-going-viral-the-1-use-case-and-35-ways-people-automate-work-and-life-with-it/

[2] MarketsandMarkets / Yahoo Finance, "AI Agents Market Size Hit $52.62B by 2030," Finance.Yahoo.com, 06/05/2025. https://finance.yahoo.com/news/ai-agents-market-size-hit-143500811.html

[3] Gartner, "Gartner Predicts Over 40 Percent of Agentic AI Projects Will Be Canceled by End of 2027," Gartner.com, 25/06/2025. https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027

[4] McKinsey, "The State of AI 2025," McKinsey.com, 05/11/2025. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

[5] Alex Rozdolskyi, "10 Wild Things People Actually Built with OpenClaw," Medium, 13/02/2026. https://medium.com/@alexrozdolskiy/10-wild-things-people-actually-built-with-openclaw-e18f487cb3e0

[6] SC Media, "OpenClaw agents targeted with 341 malicious ClawHub skills," SCWorld.com, 04/02/2026. https://www.scworld.com/news/openclaw-agents-targeted-with-341-malicious-clawhub-skills

[7] SecurityScorecard STRIKE Team / CyberPress, "15,200 OpenClaw Panels Exposed," CyberPress.org, 10/02/2026. https://www.cyberpress.org