Your Prompts Evaporate. Skills Compound. * The Knowledge Layer Agents Were Missing
Your prompt library resets to zero on every Agent run. Skills encode organizational knowledge once and compound across every execution. Over 40% of agentic AI projects face cancellation because Agents lack persistent context. Skills are the missing knowledge layer.
Your prompt library is a graveyard of evaporated intent. Every agent run starts from zero, and you are subsidizing that amnesia with human hours you stopped counting months ago.
This article defines AI Agent "Skills": the open standard turning Agents from amnesiac interns into institutional operators. You will understand why prompts fail at organizational scale, how skills close the architectural gap, and what separates a skill from a saved prompt.
The framework at the center is the Delegation Ladder, a four-level model I developed in my book (AI Agents: They Act, You Orchestrate) for structuring how humans hand off work to Agents with precision, and skills are how you build that ladder into your infrastructure.
The Amnesia Tax * Every run starts from zero
Gartner predicts that over 40% of agentic AI projects will be cancelled by end of 2027, due to escalating costs, unclear business value, or inadequate risk controls [1]. Read that number again. Almost half the enterprise AI deployments now in progress will fail.
The standard explanation blames the models. Too many hallucinations. Too expensive to run. Too unpredictable. That explanation is wrong. The model is not the failure point. The architecture is.
Here is what happens in most organizations today. A team deploys Copilot, ChatGPT, or Claude. They invest weeks in prompt engineering. They assemble a repository of their best prompts. The CEO asks why the AI investment has not scaled beyond individual productivity. Nobody has a good answer, because the answer is embarrassing: every Agent conversation starts from zero. The prompts live in a wiki. The Agents never see them. Each run, the Agent arrives as a stranger at the door, forever meeting your organization for the first time.
I call this the Amnesia Tax: the accumulated cost of organizational knowledge that Agents cannot access, paid in repeated instructions, inconsistent outputs, and human hours spent (re)teaching Agents what the company already knows. The Amnesia Tax is invisible on any quarterly report, but it compounds into exactly the kind of unclear business value that kills projects. If your Agents do not know how your company works, they will invent how it works. That invention will be wrong.
What an AI Agent Skill Actually Is * A Markdown file that carries standing orders
A skill is a Markdown file with a specific structure. It contains two layers: a catalog entry (a short description the Agent reads to decide whether to load the skill) and the full instructions (loaded into the context window only when the Agent determines the skill is relevant). That two-layer design is the architectural insight that makes AI Agent skills different from prompts.
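Concretely, a minimal skill file looks something like the sketch below. The frontmatter field names (`name`, `description`) follow the pattern Anthropic has published for SKILL.md files; the skill name and body content here are invented for illustration:

```markdown
---
name: quarterly-report
description: Formats quarterly business reports using the company template and tone guidelines.
---

# Quarterly Report Skill

## When to use
Apply this skill whenever the user asks for a quarterly report or board update.

## Instructions
1. Pull figures only from the approved data sources.
2. Follow the structure: summary, metrics, risks, outlook.
3. Use the company voice: direct, numbers first, no superlatives.
```

The `description` line is the catalog entry the Agent always sees; everything below the frontmatter is loaded only when the skill is judged relevant.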
In my book, I describe the context window as the most valuable real estate in the AI Operating System (AIOS). The context window is finite. You cannot load everything an Agent needs to know upfront. Skills solve this through progressive disclosure: the catalog entry consumes a handful of tokens, and the full instructions load on demand. The Agent carries a table of contents, not the entire library.
The simplicity is the point. A skill is a Markdown file the way a contract is a piece of paper. The value is not in the format. The value is in what you encode, how you govern it, and how it compounds across every Agent run. Simon Willison, one of the first to recognize the standard's significance, predicted "a Cambrian explosion in Skills which will make this year's MCP rush look pedestrian by comparison" [2]. The open standard at agentskills.io, launched on 18/12/2025, has validated that prediction. Anthropic's skills GitHub repository crossed 20,000 stars within weeks of release, with tens of thousands of community-created skills [3]. Microsoft adopted the standard in VS Code and GitHub Copilot. OpenAI adopted a structurally identical architecture in ChatGPT and Codex CLI.
Barry Zhang, an Anthropic researcher, captured the strategic shift: "We used to think Agents in different domains will look very different. The Agent underneath is actually more universal than we thought" [3]. One general-purpose agent equipped with the right skills outperforms a dozen specialized Agents built from scratch.
The Knowledge Layer * MCP gives hands; skills give judgment
The enterprise AI stack has three layers. The Model Context Protocol (MCP) handles connectivity: can the Agent reach Salesforce? Google's Agent2Agent Protocol (A2A) handles collaboration: can Agents negotiate with each other? Skills handle knowledge: does the Agent know how your company uses Salesforce?
If you deployed MCP and stopped there, you built a nervous system without a brain. The Dragonscale team articulated the distinction precisely: "While MCP gives Agents access to the outside world, Skills give them judgment shaped by company context, preserving hard-won tribal knowledge long after individual engineers move on" [4].
This maps directly onto the Functional Dissolution Principle, as explored in Chapter 8 of my book: any function that can be defined by rules and verified by outcome is a candidate for automation. Skills encode the rules. Without them, you have Agents with hands and no playbook, connected to every system in your enterprise yet unable to follow a single procedure correctly.
Consider the practical implication. Your sales Agent has MCP access to the CRM. Without a skill, it queries the CRM like a first-day intern, guessing at your qualification criteria, your deal stages, your escalation thresholds. With a skill, it executes your exact sales methodology on every run. That is the difference between connectivity and capability.
AI Agent Skills Compound like Software * Prompts leave when people leave
Here is the economic argument for skills in one sentence: when an employee leaves, their prompts leave with them; when a skill is written, it stays.
Skills are versionable. You track changes in Git, review them in pull requests, and audit them like policy documents. They escape individual heads and become organizational assets. This is the Acceptance Criteria Contract made permanent: a precise specification of objective, constraints, and validation method that defines exactly what success looks like before an Agent begins work, encoded once in a skill and executed on every run.
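In practice, this can be as simple as a Git repository with one folder per procedure. The layout below is a hypothetical sketch (skill names invented for illustration); each change lands through a pull request, and the Git history becomes the audit trail:

```
skills/
├── sales-qualification/
│   └── SKILL.md   # qualification criteria, deal stages, escalation rules
├── research-brief/
│   └── SKILL.md   # sourcing standards, citation format, summary structure
└── compliance-review/
    └── SKILL.md   # regulatory checklist, sign-off thresholds
```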
Natali is one of my book's central fictional characters, a founder-engineer who channels grief into rigor and tries to build systems that do not depend on fragile, one-off effort. Her instinct points to a broader truth about agentic work: the real leverage is not in producing one brilliant result, but in encoding repeatable procedures. Each skill captures a specific way of working, from research to writing to compliance. Improve one skill once, and every future output gets better. The benefit does not vanish after a single exchange. It compounds.
The lock-in objection is real but fading. The open standard at agentskills.io is platform-agnostic. You can point Codex CLI or Gemini CLI at the same skill folders. The real lock-in risk is not the format. The real lock-in risk is the organizational knowledge encoded in the skills, which is proprietary by design and should be. Version your skills like software. Review them like policy. They are standing orders, not prompts.
The Governance Imperative * Ungoverned skills are a threat vector
The supply chain risk is documented. Security researchers at Koi Security found 341 malicious skills on ClawHub out of 2,857 audited, a 12% infection rate, within weeks of the open standard's release [5]. The blast radius of a compromised skill dwarfs a compromised npm package. An Agent with a poisoned skill has access to email, file systems, and credentials. One malicious skill, one click, full compromise.
The answer is governance, not avoidance. Public marketplace skills are the equivalent of running unvetted npm packages in production. Enterprise skills belong in governed repositories. You audit the source. You pin the version. You review changes the same way you review code that touches production systems.
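What "pin the version" can mean in practice: record a digest of each audited skill file and refuse to load anything that has drifted since review. The sketch below is one possible approach, not a standard mechanism; the helper names and the dictionary-of-pins format are assumptions for illustration.

```python
# Illustrative sketch: pin each audited skill to a SHA-256 digest and
# refuse to load any skill whose contents changed since the audit.

import hashlib
from pathlib import Path

def digest(skill_file: Path) -> str:
    """Content hash of a SKILL.md file, recorded at audit time."""
    return hashlib.sha256(skill_file.read_bytes()).hexdigest()

def load_pinned(skill_file: Path, pinned: dict[str, str]) -> str:
    """Return the skill text only if it still matches its audited digest."""
    actual = digest(skill_file)
    expected = pinned.get(skill_file.parent.name)
    if actual != expected:
        raise ValueError(
            f"Skill '{skill_file.parent.name}' changed since audit: "
            f"expected {expected}, got {actual}"
        )
    return skill_file.read_text()
```

The same effect falls out of pinning a Git commit for the whole skills repository; the point is that an unreviewed change never reaches a production Agent silently.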
Anthropic's own internal deployment demonstrates what governance looks like at scale. Their engineers used Claude in 60% of their work, achieving a 50% self-reported productivity boost [3]. A separate finding: 27% of Claude-assisted work consisted of tasks that would not have been done otherwise, including internal tools, documentation, and quality-of-life improvements [3]. That 27% number is the compound interest of skills. When Agents carry persistent organizational knowledge, they do not just execute assigned work faster. They surface work nobody had time to start.
The New Form Factor of Organizational Knowledge * The Wiki Agents can read
Skills have become the new form factor of organizational knowledge. That is the reframe this article is built on, and it changes what you should be building.
The wiki was documentation humans read. The standard operating procedure was a workflow humans followed. The skill is a procedure Agents execute. Every organization already has institutional knowledge. The question is whether that knowledge is trapped in people's heads and prompt libraries, where it evaporates, or encoded in skills, where it compounds.
The Delegation Ladder has four levels: Describe, Specify, Validate, Autonomize.
- Prompts live on Level 1, the vague description.
- Skills encode Level 2 and above: the constraints, the validation criteria, the standing orders that make an Agent reliable at scale.
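To make the mapping concrete, here is a hypothetical fragment of a skill body that encodes Level 2 (Specify) constraints and Level 3 (Validate) criteria; the content is invented for illustration:

```markdown
## Constraints (Level 2: Specify)
- Never quote prices outside the approved discount band.
- Escalate any deal above €50,000 to a human owner.

## Validation (Level 3: Validate)
- [ ] Every claim links to a CRM record.
- [ ] Output matches the proposal template section order.
```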
You will either encode your organizational intelligence into persistent, versionable, executable skills, or you will rebuild it from scratch every morning. The companies that encode are building the Company as Computer, the organizational model where every function runs on Agents executing skills rather than humans following procedures. The companies that prompt are hiring a new intern every session.
The Delegation Ladder has a concrete first rung. Step on it.
This article introduces one layer of the architecture I map across 18 chapters in AI Agents: They Act, You Orchestrate. The book builds the complete framework, from the AIOS Architecture that explains why context is finite, through the Delegation Ladder and Acceptance Criteria Contract that make delegation precise, to the Functional Dissolution Principle that encodes organizational knowledge into systems. If the gap between prompting and real orchestration resonated, the book gives you the full operating manual. Get your copy:
πΊπΈ Amazon.com
π¬π§ Amazon.co.uk
π«π· Amazon.fr
π©πͺ Amazon.de
π³π± Amazon.nl
π§πͺ Amazon.com.be
References
- [1] Gartner, "Gartner Predicts Over 40 Percent of Agentic AI Projects will be Canceled by End of 2027," Press Release, June 2025. https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027
- [2] Simon Willison, "Claude Skills are awesome, maybe a bigger deal than MCP," simonwillison.net, 16/10/2025. https://simonwillison.net/2025/Oct/16/claude-skills/
- [3] VentureBeat / Michael Nunez, "Anthropic launches enterprise 'Agent Skills' and opens the standard," VentureBeat, 18/12/2025. https://venturebeat.com/technology/anthropic-launches-enterprise-agent-skills-and-opens-the-standard
- [4] Dragonscale / Rustic AI, "MCP vs. Skills: A Practical Decision Framework," 28/01/2026. https://blog.dragonscale.ai/mcp-vs-skills-two-ways-to-give-your-ai-superpowers/
- [5] PurpleBox Security, "AI Agent Skills: The Hidden Supply Chain Risk in 2026," 07/02/2026. https://www.prplbx.com/blog/agent-skills-supply-chain