Your Agents Cannot Talk to Each Other * Agent2Agent Protocol

50% of enterprise AI agents operate in total isolation. MCP gave them hands; the Agent2Agent Protocol gives them a voice. Peter van Hees breaks down the three capabilities of this social contract for machines and the AgentOps governance layer you need before you deploy it.


Half your AI agents operate in total isolation, and every new one you deploy makes the problem worse [1]. You have spent the last two years connecting agents to tools. You have not spent a single architecture cycle on how those agents talk to each other. That silence is costing you more than you realize.

This article breaks down the Agent2Agent (A2A) Protocol, an open standard that enables AI agents to discover, negotiate with, and collaborate with each other at machine speed. I call it the social contract for machines. You will walk away with a clear architecture for agent discovery, negotiation, and collaboration, and you will understand why the Model Context Protocol (MCP) alone leaves your organization deaf.

πŸ’‘
A2A is a standard protocol that lets AI agents collaborate with other AI agents, regardless of who built them or what framework they run on. Each agent publishes an "Agent Card" describing what it can do; other agents use that card to discover it, send it tasks, and exchange results, treating each other as autonomous peers rather than as tools to be called.

The Anarchy of Brilliant Mutes * Your agents are powerful, isolated, and multiplying

The average enterprise deploys 12 AI agents across sales, finance, and operations [1]. That number is projected to climb 67% within two years [1]. And 83% of organizations report that most teams have adopted agents [1]. The Agent-First Era is here. The infrastructure is live. But you never architected how these agents coordinate.

Salesforce's 2026 Connectivity Benchmark delivers the verdict: 50% of deployed agents operate in isolated silos [1]. Not connected to each other. Not aware of each other's existence. Fifty percent. That means half of your agent investment is producing redundant automations, disconnected workflows, and a growing pile of human-mediated handoffs between systems that should be negotiating at machine speed.

The result is predictable. You degrade into a manual switchboard. You spend your Monday mornings triaging which agent broke which workflow over the weekend. You become the frantic translator between powerful but mute systems. I call this the anarchy of brilliant mutes: a room full of competing monologues where no one can hear anyone else.

McKinsey's research confirms the downstream damage. Nearly 80% of companies deploy generative AI, yet the same 80% report no material bottom-line impact [2]. Fewer than 10% of vertical use cases scale past pilot [2]. The gen AI paradox is a coordination failure. Your agents have all the intelligence they need. They are deaf.

Why MCP Is Half the Architecture * Hands without a voice

The Model Context Protocol (MCP) solved a critical problem. It gave agents the ability to discover and wield tools across your enterprise: your CRM, your calendar, your inventory system. MCP creates a nervous system. Every agent can find and operate the levers of your business. That is genuine progress.


But MCP solves agent-to-tool connectivity. It does nothing for agent-to-agent connectivity. These are different architectural problems entirely. As I wrote in my book AI Agents: They Act, You Orchestrate, "The Model Context Protocol gives your agents hands. It does not give them a voice."

If you deployed MCP and declared victory, you solved half the problem. Your sales agent can access the CRM. It still cannot ask your logistics agent to arrange shipping. Your marketing agent can pull campaign data. It still cannot request a budget allocation from your finance agent. You have built a company of hyper-competent specialists who cannot speak to one another.

Confusing these two protocols is what traps organizations in the silo crisis. MCP is the nervous system. A2A is the social contract. You need both.

Discovery, Negotiation, Collaboration * The three capabilities of the social contract

The Agent2Agent Protocol delivers three capabilities that turn isolated agents into a coordinated workforce: Discovery, Negotiation, and Collaboration. Google launched A2A in April 2025 with more than 50 technology partners, including SAP, Salesforce, and ServiceNow [3]; Microsoft added its backing the following month [5]. In June 2025, Google donated the protocol to the Linux Foundation, ensuring no single company controls the standard [4]. This is an open, vendor-neutral social contract for machines.

  • The first capability is Discovery. Under A2A, every agent publishes an 'Agent Card', a standardized, machine-readable identity document in JSON format [3]. The Agent Card advertises the agent's name, its owner, the functions it performs, and the data formats it accepts. When your sales agent needs to arrange a product delivery, it does not need to know the name of the logistics agent. It queries the network for any agent advertising an 'arrange shipping' capability and finds the right partner in real time (a minimal lookup sketch follows this list).
  • The second capability is Negotiation. Once discovered, agents agree on terms at machine speed. The requesting agent sends task parameters: "Ship product SKU #74B to this address by Friday." The logistics agent, operating under its own constraints (warehouse capacity, shipping costs, carrier availability), can accept, reject, or propose a counteroffer: "Standard delivery by Friday is not feasible. Expedited shipping meets the deadline for an additional $42. Confirm." This is economic negotiation between two autonomous actors, executed in milliseconds without a human mediating the exchange [3].
  • The third capability is Collaboration. With terms agreed, the A2A Protocol ensures the handoff is secure and stateful [3]. Agents pass complex objectives with long-running context, not single commands. Human-in-the-loop approval gates remain available for high-stakes decisions. The protocol builds on existing standards (HTTP, JSON-RPC, server-sent events), and V0.2 added standardized authentication [5]; the second sketch after this list shows what a task exchange over these primitives could look like. The infrastructure is production-ready.
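
To make Discovery concrete, here is a minimal sketch of the lookup described above. The well-known card location, the field names, and the 'arrange-shipping' skill identifier are illustrative assumptions rather than the normative A2A schema; check the protocol specification for the exact shapes.

```python
# Discovery sketch: fetch peers' Agent Cards and pick one that advertises the skill we need.
# Card location and field names are illustrative; consult the A2A spec for the real schema.
import requests


def fetch_agent_card(base_url: str) -> dict:
    """Fetch an agent's machine-readable identity document (its Agent Card)."""
    resp = requests.get(f"{base_url}/.well-known/agent.json", timeout=10)
    resp.raise_for_status()
    return resp.json()


def find_agent_with_skill(candidate_urls: list[str], wanted_skill: str):
    """Return the first (url, card) pair whose card advertises the wanted skill, else None."""
    for url in candidate_urls:
        card = fetch_agent_card(url)
        skill_ids = {skill.get("id") for skill in card.get("skills", [])}
        if wanted_skill in skill_ids:
            return url, card
    return None


# Hypothetical registry of peer agents the sales agent already knows about.
peers = ["https://logistics.example.com", "https://finance.example.com"]
match = find_agent_with_skill(peers, "arrange-shipping")
```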

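The Negotiation and Collaboration steps ride on the same primitives the third bullet names: HTTP plus JSON-RPC. The sketch below sends the shipping task as a JSON-RPC request and leaves room for the counteroffer path; the method name, message shape, task states, and endpoint are assumptions for illustration, not the normative A2A payloads.

```python
# Negotiation/collaboration sketch: send the shipping task to the discovered agent over
# HTTP + JSON-RPC and branch on the outcome. Method name, message shape, and task states
# here are illustrative assumptions, not the normative A2A payloads.
import uuid
import requests

A2A_ENDPOINT = "https://logistics.example.com/a2a"  # hypothetical peer endpoint


def send_task(text: str) -> dict:
    """POST a JSON-RPC 2.0 request carrying the task as a plain-text message."""
    payload = {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "message/send",  # assumed method name; confirm against the current spec
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}],
            }
        },
    }
    resp = requests.post(A2A_ENDPOINT, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json().get("result", {})


result = send_task("Ship product SKU #74B to the customer's address by Friday.")

# The logistics agent may complete the task, reject it, or pause it while it waits for
# more input, e.g. a counteroffer: "Expedited shipping meets the deadline for an
# additional $42. Confirm." That pause is where a human approval gate can sit.
state = result.get("status", {}).get("state")
if state == "input-required":
    pass  # escalate the counteroffer to a human Orchestrator before confirming
```
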
Audit your own agent fleet against these three checkpoints. Can your agents discover each other? Can they negotiate constraints without your intervention? Can they hand off complex tasks with context preserved? If the answer to any of these is no, you are the bottleneck.

AgentOps: Govern or Get Buried * The social contract needs a constitution

A2A without governance is a new attack surface. The same protocol that enables agents to discover and collaborate also enables cascading failures. CIO.com documents the risks: hallucination chains where one agent's error corrupts an entire workflow (the "turtles all the way down" problem), unauthorized data access across agent boundaries, and cost spirals from unmonitored agent-to-agent interactions [6]. Only 54% of organizations that have deployed AI agents have a centralized governance framework for those agents [1]. That means nearly half are deploying autonomous systems with no constitution.

This is where AgentOps becomes non-negotiable. AgentOps is the operational layer that governs the lifecycle of your agent workforce through three pillars: Provision, Monitor, and Govern.

  • Provision means onboarding every agent with scoped credentials, a structured mission brief, and hard constraints (budget ceilings, approved vendor lists, ethical red lines) before it takes a single action.
  • Monitor means tracking every agent's performance, cost, and behavior in real time, creating an immutable audit trail for debugging failures and measuring ROI.
  • Govern means enforcing policy in real time through Intelligent Circuit Breakers that interrupt dangerous behavior and escalate to a human Orchestrator before errors cascade (a minimal sketch of these pillars follows this list).
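
As a thought experiment, the sketch below shows how the Provision and Govern pillars could reduce to code: scoped skills and a budget ceiling granted at onboarding, and a circuit breaker that trips before a non-compliant agent-to-agent dispatch goes out. Every name here is hypothetical; AgentOps is an operating model, not a specific library.

```python
# Governance sketch: hard constraints granted at onboarding (Provision), an audit trail
# (Monitor), and a circuit breaker enforced before dispatch (Govern). Illustrative only.
from dataclasses import dataclass, field


@dataclass
class AgentProvision:
    """Scoped permissions and hard constraints granted when the agent is onboarded."""
    agent_id: str
    allowed_skills: set[str]
    budget_ceiling_usd: float
    spend_usd: float = 0.0
    audit_log: list[str] = field(default_factory=list)  # append-only trail in a real system


class CircuitBreakerTripped(Exception):
    """Raised when a task would violate a hard constraint; escalate to the human Orchestrator."""


def govern_dispatch(p: AgentProvision, skill: str, est_cost_usd: float) -> None:
    """Check a proposed agent-to-agent task against the agent's constitution before it goes out."""
    if skill not in p.allowed_skills:
        raise CircuitBreakerTripped(f"{p.agent_id} is not provisioned for '{skill}'")
    if p.spend_usd + est_cost_usd > p.budget_ceiling_usd:
        raise CircuitBreakerTripped(f"budget ceiling exceeded for {p.agent_id}")
    p.spend_usd += est_cost_usd
    p.audit_log.append(f"dispatched {skill} at ${est_cost_usd:.2f}")


sales = AgentProvision("sales-agent", {"arrange-shipping"}, budget_ceiling_usd=500.0)
govern_dispatch(sales, "arrange-shipping", est_cost_usd=42.0)  # passes; logged for audit
```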

The AgentOps framework requires you to deploy governance before you deploy A2A. Every agent that publishes an Agent Card must be provisioned, monitored, and governed. The social contract requires a constitution. Without one, you are giving a room full of autonomous actors the ability to discover each other and negotiate without supervision. That is systemic risk.

From Protocol to Operating Model * The real question you face

Most leaders classify A2A as a technical integration project. They are wrong. The A2A Protocol is the final architectural layer that completes the transition from company as hierarchy to company as computer, a framework I detail in Chapter 8 of my book AI Agents: They Act, You Orchestrate.

MCP dissolved the barrier between agents and tools. A2A dissolves the barrier between agents and each other. AgentOps provides the constitution. Together, these three layers transform your enterprise from a collection of human-managed silos into a self-organizing collective of carbon and silicon intelligence.

The question you face is bigger than protocol adoption: "Am I ready to architect a society, or am I still trying to manage a department?" Because every agent you deploy without a social contract adds friction. Every handoff you mediate manually is a tax on your organization's velocity. And every competitor who architects this layer before you do gains an advantage that compounds with every agent they add.

Your agents have hands. They have intelligence. They do not have a voice. The Agent2Agent Protocol gives them one. Deploy it, govern it with AgentOps, and stop being the manual switchboard between systems that should be negotiating at machine speed. The company as computer starts with a social contract.


The A2A Protocol is one layer of a multi-layer organizational transformation that AI Agents: They Act, You Orchestrate by Peter van Hees maps across 18 chapters. The book goes deeper into the Functional Dissolution Principle, the AgentOps Trinity, the Glass Box Protocol for building trust in hybrid workforces, and the Silicon Salary Model for budgeting synthetic labor. If the gap between your agent deployments and your organizational architecture resonated, the book gives you the complete blueprint. Get your copy:

πŸ‡ΊπŸ‡Έ Amazon.com
πŸ‡¬πŸ‡§ Amazon.co.uk
πŸ‡«πŸ‡· Amazon.fr
πŸ‡©πŸ‡ͺ Amazon.de
πŸ‡³πŸ‡± Amazon.nl
πŸ‡§πŸ‡ͺ Amazon.com.be


References

[1] Salesforce, "Multi-Agent Adoption to Surge 67% by 2027 as Enterprises," Salesforce News, 2026. https://www.salesforce.com/news/stories/connectivity-report-announcement-2026/

[2] McKinsey, "Seizing the agentic AI advantage," McKinsey & Company, 2025. https://www.mckinsey.com/capabilities/quantumblack/our-insights/seizing-the-agentic-ai-advantage

[3] Google, "Announcing the Agent2Agent Protocol (A2A)," Google for Developers Blog, April 2025. https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/

[4] No Jitter, "The Agent2Agent Protocol: What It Does for AI Orchestration," No Jitter, June 2025. https://www.nojitter.com/ai-automation/the-agent2agent-protocol-what-it-does-for-ai-orchestration

[5] Cloud Wars, "Google Advances Agent2Agent (A2A) Protocol, Gains Microsoft and SAP Backing," Cloud Wars, May 2025. https://cloudwars.com/ai/google-advances-agent2agent-a2a-protocol-gains-microsoft-and-sap-backing/

[6] CIO.com, "AI interoperability challenges persist, even after new protocols," CIO, 2025. https://www.cio.com/article/4036859/ai-interoperability-challenges-persist-even-after-new-protocols.html