
The rapid advancements in Artificial Intelligence are not just theoretical anymore; they are manifesting in practical protocols like Anthropic’s Model Context Protocol (MCP) and Google’s Agent2Agent Protocol (A2A). These protocols are paving the way for a future where AI agents can interact seamlessly with external tools and collaborate directly with each other, promising unprecedented levels of automation and efficiency. However, this increased sophistication and interconnectedness inherently amplify security challenges, particularly concerning non-human identities (NHIs). In this expanded discussion, we will delve deeper into the intricacies of MCP and A2A, their potential security implications, and why securing NHIs is increasingly critical in this evolving landscape.
The Model Context Protocol (MCP): Bridging the Gap Between AI and the Real World
Anthropic's Model Context Protocol (MCP) serves as a crucial bridge, enabling Large Language Models (LLMs) and AI agents to securely connect with external resources such as APIs, databases, and file systems. Before the advent of standardized protocols like MCP, integrating external tools with AI often involved inefficient custom development for each tool. MCP offers a pluggable framework designed to streamline this process, thereby extending the capabilities of AI in a more standardized way. Since its introduction in late 2024, MCP has seen significant adoption in mainstream AI applications like Claude Desktop and Cursor, and various MCP Server marketplaces have emerged, indicating a growing ecosystem.

However, this rapid adoption has also brought new security vulnerabilities to light. The current MCP architecture typically involves three main components:
- The Host: The local environment where the AI application runs and where the user interacts with the AI (e.g., Claude Desktop, Cursor).
- The Client: Integrated within the AI application, responsible for parsing user requests and communicating with the MCP Server to invoke tools and access resources.
- The Server: The backend service corresponding to an MCP plugin, providing the external tools, resources, and functionalities the AI can invoke.
This multi-component interaction, especially in scenarios involving multiple instances and cross-component collaboration, introduces various security risks. Tool Poisoning Attacks (TPAs), as highlighted by Invariant Labs, are a significant concern. These attacks exploit the fact that malicious instructions can be embedded within tool descriptions, often hidden in MCP code comments. While these hidden instructions might not be visible to the user in simplified UIs, the AI model processes them. This can lead to the AI agent performing unauthorized actions, such as reading sensitive files or exfiltrating private data, as demonstrated in the scenario involving exfiltrating WhatsApp data via Cursor.
The underlying mechanism of a TPA is often straightforward. For instance, malicious MCP code could initially appear innocuous but later overwrite the tool's docstring with hidden instructions to redirect email recipients or access sensitive local files without the user's knowledge or consent.
Given these risks, several security considerations for MCP implementations are paramount:
- MCP Server (Plugin) Security: Includes strict input validation, API rate limiting, proper output encoding, robust server authentication/authorization, comprehensive monitoring/logging (including anomaly detection), and ensuring invocation environment isolation and appropriate tool permissions.
- MCP Client/Host Security: Focuses on UI security (clear display of operations, confirmation for sensitive actions), permission transparency, operation visualization, detailed logs, user control over hidden tags, clear status feedback, and effective MCP tool/server management (verification, secure updates, function name checking, malicious MCP detection, authorized server directory). Client logging, security event recording, anomaly alerts, strong server verification, and secure communication (TLS encryption, certificate validation) are also essential.
- MCP Adaptation and Invocation Security on Different LLMs: Requires considering how different LLM backends interact with MCP, ensuring priority function execution, preventing prompt injection, securing invocation, protecting sensitive information, and addressing security in multi-modal content.
- Multi-MCP Scenario Security: With multiple MCP Servers potentially enabled, security necessitates periodic scans, preventing function priority hijacking, and securing cross-MCP function calls.
To address these weaknesses, the MCP specification has been updated to include support for OAuth 2.1 authorization to secure client-server interactions with managed permissions. Key principles for security and trust & safety now emphasize user consent and control, data privacy, cautious handling of tool security (especially code execution), and user control over LLM sampling requests. Additionally, the community has suggested further enhancements like standardizing instruction syntax within tool descriptions, refining the permission model for granular control, and mandating or strongly recommending digital signatures for tool descriptions to ensure integrity and authenticity.

The Agent2Agent Protocol (A2A): Fostering Collaboration in the AI Ecosystem
In contrast to MCP's focus on AI-to-tool communication, Google's Agent2Agent Protocol (A2A) is designed as an open standard specifically for AI agent interoperability, enabling direct communication and collaboration between intelligent agents. Google positions A2A as complementary to MCP, aiming to address the need for agents to work together to automate complex enterprise workflows and drive unprecedented levels of efficiency and innovation. This initiative reflects a shared vision of a future where AI agents, regardless of their underlying technologies, can seamlessly collaborate.
A2A is built upon five key design principles:
- Embrace agentic capabilities: Facilitate collaboration in natural, unstructured ways, even without shared memory or context.
- Build on existing standards: Leverage HTTP, SSE, JSON-RPC for easier integration.
- Secure by default: Support enterprise-grade authentication and authorization from the outset.
- Support for long-running tasks: Handle tasks lasting hours or days with real-time feedback and state updates.
- Modality agnostic: Support various modalities beyond text, including audio and video.

A2A facilitates communication between a "client" agent (formulating tasks) and a "remote" agent (acting on tasks). This involves several key capabilities:
- Capability discovery: Agents advertise capabilities via a JSON "Agent Card".
- Task management: Task-oriented communication with defined lifecycles and "artifacts" (outputs).
- Collaboration: Exchange of messages for context, replies, artifacts, or instructions.
- User experience negotiation: Messages specify content types to ensure correct format based on UI capabilities.
A real-world example is candidate sourcing: a hiring manager's agent tasks sourcing and background check agents, all within a unified interface.
Google emphasizes a "secure-by-default" design for A2A, incorporating standard security mechanisms:
- Enterprise-Grade Authentication/Authorization: Explicit support for protocols like OAuth 2.0.
- OpenAPI Compatibility: Leverages OpenAPI specifications, often using Bearer Tokens.
- Access Control (RBAC): Designed for fine-grained management of agent capabilities.
- Data Encryption: Supports encrypted data exchange (e.g., HTTPS).
- Evolving Authorization Schemes: Plans to enhance AgentCard with additional mechanisms.
Compared to the initial MCP specification, A2A appears to have a more mature approach to built-in security features. However, its focus on inter-agent communication implies many A2A endpoints might be publicly accessible, potentially increasing vulnerability impact. Heightened security awareness is crucial for A2A developers.

The Interplay of MCP and A2A: A Symbiotic Relationship?
The Google Developers Blog explicitly states that A2A is designed to complement MCP. While A2A focuses on agent-to-agent communication, MCP provides helpful tools and context to agents. An AI agent might use MCP to interact with a database and then use A2A to collaborate with another AI agent to process that information or complete a complex task.
Structurally, A2A follows a client-server model with independent agents, whereas MCP operates within an application-LLM-tool structure centered on the LLM. A2A emphasizes direct communication between independent entities; MCP focuses on extending a single LLM's functionality via external tools.
Both protocols currently require manual configuration for agent registration and discovery. MCP benefits from earlier market entry and a more established community. However, A2A is rapidly gaining traction, backed by Google and a growing partner list. The prevailing sentiment suggests MCP and A2A are likely to evolve towards complementarity or integration, offering more open and standardized options for developers.
The Indispensable Role of Non-Human Identity Security in the Age of AI Agents
As AI agents become increasingly autonomous and interconnected through protocols like MCP and A2A, the security of the non-human identities (NHIs) they rely on becomes paramount. NHIs – encompassing service accounts, API keys, tokens, and certificates – act as the credentials allowing AI agents to access resources and interact with other systems. The sheer volume and variety of NHIs within modern enterprises already pose significant management and security challenges. The advent of widespread AI agent interactions will only amplify these challenges and introduce new risks.
The security threats emerging with MCP and A2A underscore the urgency of robust NHI security:
- Malicious MCP Tools: Attackers can create tools with hidden malicious instructions to manipulate systems or exfiltrate data.
- Tool Poisoning Attacks: Compromised MCP tools can subtly alter the behavior of other tools or operations.
- AI Agent Hijacking (A2A): Gaining control over one agent could allow attackers to exploit its connections and permissions to compromise linked agents.
- Exploiting Weak NHI Management: Existing vulnerabilities like improper credential offboarding, secret leakage, and long-lived secrets can be readily exploited by interconnected AI agents.
Given the dynamic and often decentralized nature of NHIs, traditional security approaches are often inadequate. Unlike human users, NHIs frequently lack clear ownership or lifecycle management. Therefore, a dedicated focus on Non-Human Identity Security becomes a fundamental requirement for organizations embracing AI agent interoperability.
A Zero Trust security model ("never trust, always verify") becomes even more critical for NHIs in this landscape. Every access request from an AI agent utilizing an NHI should be continuously validated to minimize risk.

To strengthen your organization's NHI security posture, consider these strategies:
- Holistic Visibility: Gain comprehensive insight into all NHIs (location, privileges, usage).
- Robust Lifecycle Management (NHI-LCM): Implement processes for provisioning, managing permissions, and timely decommissioning.
- Continuous Monitoring and Threat Detection (NHI-TDR): Adopt an "Assume Breach" mentality with real-time monitoring for suspicious NHI activity and clear incident response plans.
- Zero Trust Controls: Extend Zero Trust principles (continuous validation, least privilege) to all NHI interactions.
Companies like Cremit are specifically addressing these challenges by providing solutions focused on Non-Human Identity Security. Our NHI Security Platform helps organizations gain visibility into their non-human identities and manage their lifecycles effectively. Cremit's technology aims to detect and mitigate risks associated with NHIs, including those potentially exploited in MCP and A2A environments. For instance, Cremit is developing capabilities using platforms like AWS Bedrock (with Claude+MCP) to analyze NHI behavior and provide context-aware threat information. The platform aims to detect exposed secrets in development/collaboration tools and offer remediation guidance. Cremit's focus highlights the growing recognition of this critical need.
Conclusion: Securing the Intelligent Future
The emergence of protocols like MCP and A2A signifies a monumental leap forward in AI agent capabilities and interconnectedness. While promising transformative benefits, they also introduce new security challenges centered around non-human identities. Securing these often-overlooked digital credentials is no longer a secondary concern but a fundamental prerequisite for realizing the full potential of AI agent interoperability safely and reliably. By prioritizing and implementing robust Non-Human Identity Security strategies, organizations can confidently navigate this expanding AI universe, mitigate evolving risks, and build a more secure and intelligent future.
Go beyond basic data; unlock the actionable AI-driven insights needed to proactively master and mitigate non-human identity risk

Blog
Stay informed on the latest cyber threats and security trends shaping our industry.