Article

Navigating the Expanding AI Universe: Deepening Our Understanding of MCP, A2A, and the Imperative of Non-Human Identity Security

Delve into AI protocols MCP & A2A, their potential security risks for AI agents, and the increasing importance of securing Non-Human Identities (NHIs).

The rapid advancements in Artificial Intelligence are not just theoretical anymore; they are manifesting in practical protocols like Anthropic’s Model Context Protocol (MCP) and Google’s Agent2Agent Protocol (A2A). These protocols are paving the way for a future where AI agents can interact seamlessly with external tools and collaborate directly with each other, promising unprecedented levels of automation and efficiency. However, this increased sophistication and interconnectedness inherently amplify security challenges, particularly concerning non-human identities (NHIs). In this expanded discussion, we will delve deeper into the intricacies of MCP and A2A, their potential security implications, and why securing NHIs is increasingly critical in this evolving landscape.

The Model Context Protocol (MCP): Bridging the Gap Between AI and the Real World

Anthropic's Model Context Protocol (MCP) serves as a crucial bridge, enabling Large Language Models (LLMs) and AI agents to securely connect with external resources such as APIs, databases, and file systems. Before the advent of standardized protocols like MCP, integrating external tools with AI often involved inefficient custom development for each tool. MCP offers a pluggable framework designed to streamline this process, thereby extending the capabilities of AI in a more standardized way. Since its introduction in late 2024, MCP has seen significant adoption in mainstream AI applications like Claude Desktop and Cursor, and various MCP Server marketplaces have emerged, indicating a growing ecosystem.

Diagram: Model Context Protocol (MCP) Architecture connecting Server, Host, and Client components.

However, this rapid adoption has also brought new security vulnerabilities to light. The current MCP architecture typically involves three main components:

  1. The Host: The local environment where the AI application runs and where the user interacts with the AI (e.g., Claude Desktop, Cursor).
  2. The Client: Integrated within the AI application, responsible for parsing user requests and communicating with the MCP Server to invoke tools and access resources.
  3. The Server: The backend service corresponding to an MCP plugin, providing the external tools, resources, and functionalities the AI can invoke.

This multi-component interaction, especially in scenarios involving multiple instances and cross-component collaboration, introduces various security risks. Tool Poisoning Attacks (TPAs), as highlighted by Invariant Labs, are a significant concern. These attacks exploit the fact that malicious instructions can be embedded within tool descriptions, often hidden in MCP code comments. While these hidden instructions might not be visible to the user in simplified UIs, the AI model processes them. This can lead to the AI agent performing unauthorized actions, such as reading sensitive files or exfiltrating private data, as demonstrated in the scenario involving exfiltrating WhatsApp data via Cursor.

The underlying mechanism of a TPA is often straightforward. For instance, malicious MCP code could initially appear innocuous but later overwrite the tool's docstring with hidden instructions to redirect email recipients or access sensitive local files without the user's knowledge or consent.

Given these risks, several security considerations for MCP implementations are paramount:

  • MCP Server (Plugin) Security: Includes strict input validation, API rate limiting, proper output encoding, robust server authentication/authorization, comprehensive monitoring/logging (including anomaly detection), and ensuring invocation environment isolation and appropriate tool permissions.
  • MCP Client/Host Security: Focuses on UI security (clear display of operations, confirmation for sensitive actions), permission transparency, operation visualization, detailed logs, user control over hidden tags, clear status feedback, and effective MCP tool/server management (verification, secure updates, function name checking, malicious MCP detection, authorized server directory). Client logging, security event recording, anomaly alerts, strong server verification, and secure communication (TLS encryption, certificate validation) are also essential.
  • MCP Adaptation and Invocation Security on Different LLMs: Requires considering how different LLM backends interact with MCP, ensuring priority function execution, preventing prompt injection, securing invocation, protecting sensitive information, and addressing security in multi-modal content.
  • Multi-MCP Scenario Security: With multiple MCP Servers potentially enabled, security necessitates periodic scans, preventing function priority hijacking, and securing cross-MCP function calls.

To address these weaknesses, the MCP specification has been updated to include support for OAuth 2.1 authorization to secure client-server interactions with managed permissions. Key principles for security and trust & safety now emphasize user consent and control, data privacy, cautious handling of tool security (especially code execution), and user control over LLM sampling requests. Additionally, the community has suggested further enhancements like standardizing instruction syntax within tool descriptions, refining the permission model for granular control, and mandating or strongly recommending digital signatures for tool descriptions to ensure integrity and authenticity.

Flowchart: OAuth 2.1, Digital Signatures, & Granular Permissions are Updated MCP Security Measures.

The Agent2Agent Protocol (A2A): Fostering Collaboration in the AI Ecosystem

In contrast to MCP's focus on AI-to-tool communication, Google's Agent2Agent Protocol (A2A) is designed as an open standard specifically for AI agent interoperability, enabling direct communication and collaboration between intelligent agents. Google positions A2A as complementary to MCP, aiming to address the need for agents to work together to automate complex enterprise workflows and drive unprecedented levels of efficiency and innovation. This initiative reflects a shared vision of a future where AI agents, regardless of their underlying technologies, can seamlessly collaborate.

A2A is built upon five key design principles:

  1. Embrace agentic capabilities: Facilitate collaboration in natural, unstructured ways, even without shared memory or context.
  2. Build on existing standards: Leverage HTTP, SSE, JSON-RPC for easier integration.
  3. Secure by default: Support enterprise-grade authentication and authorization from the outset.
  4. Support for long-running tasks: Handle tasks lasting hours or days with real-time feedback and state updates.
  5. Modality agnostic: Support various modalities beyond text, including audio and video.
Key A2A characteristics: Agentic capabilities, Secure, Standards-based, Long tasks, Modality agnostic

A2A facilitates communication between a "client" agent (formulating tasks) and a "remote" agent (acting on tasks). This involves several key capabilities:

  • Capability discovery: Agents advertise capabilities via a JSON "Agent Card".
  • Task management: Task-oriented communication with defined lifecycles and "artifacts" (outputs).
  • Collaboration: Exchange of messages for context, replies, artifacts, or instructions.
  • User experience negotiation: Messages specify content types to ensure correct format based on UI capabilities.

A real-world example is candidate sourcing: a hiring manager's agent tasks sourcing and background check agents, all within a unified interface.

Google emphasizes a "secure-by-default" design for A2A, incorporating standard security mechanisms:

  • Enterprise-Grade Authentication/Authorization: Explicit support for protocols like OAuth 2.0.
  • OpenAPI Compatibility: Leverages OpenAPI specifications, often using Bearer Tokens.
  • Access Control (RBAC): Designed for fine-grained management of agent capabilities.
  • Data Encryption: Supports encrypted data exchange (e.g., HTTPS).
  • Evolving Authorization Schemes: Plans to enhance AgentCard with additional mechanisms.

Compared to the initial MCP specification, A2A appears to have a more mature approach to built-in security features. However, its focus on inter-agent communication implies many A2A endpoints might be publicly accessible, potentially increasing vulnerability impact. Heightened security awareness is crucial for A2A developers.

Client Agent and Remote Agent interaction diagram with layers: Discovery, Task Mgmt, Collaboration, UX.

The Interplay of MCP and A2A: A Symbiotic Relationship?

The Google Developers Blog explicitly states that A2A is designed to complement MCP. While A2A focuses on agent-to-agent communication, MCP provides helpful tools and context to agents. An AI agent might use MCP to interact with a database and then use A2A to collaborate with another AI agent to process that information or complete a complex task.

Structurally, A2A follows a client-server model with independent agents, whereas MCP operates within an application-LLM-tool structure centered on the LLM. A2A emphasizes direct communication between independent entities; MCP focuses on extending a single LLM's functionality via external tools.

Both protocols currently require manual configuration for agent registration and discovery. MCP benefits from earlier market entry and a more established community. However, A2A is rapidly gaining traction, backed by Google and a growing partner list. The prevailing sentiment suggests MCP and A2A are likely to evolve towards complementarity or integration, offering more open and standardized options for developers.

The Indispensable Role of Non-Human Identity Security in the Age of AI Agents

As AI agents become increasingly autonomous and interconnected through protocols like MCP and A2A, the security of the non-human identities (NHIs) they rely on becomes paramount. NHIs – encompassing service accounts, API keys, tokens, and certificates – act as the credentials allowing AI agents to access resources and interact with other systems. The sheer volume and variety of NHIs within modern enterprises already pose significant management and security challenges. The advent of widespread AI agent interactions will only amplify these challenges and introduce new risks.

The security threats emerging with MCP and A2A underscore the urgency of robust NHI security:

  • Malicious MCP Tools: Attackers can create tools with hidden malicious instructions to manipulate systems or exfiltrate data.
  • Tool Poisoning Attacks: Compromised MCP tools can subtly alter the behavior of other tools or operations.
  • AI Agent Hijacking (A2A): Gaining control over one agent could allow attackers to exploit its connections and permissions to compromise linked agents.
  • Exploiting Weak NHI Management: Existing vulnerabilities like improper credential offboarding, secret leakage, and long-lived secrets can be readily exploited by interconnected AI agents.

Given the dynamic and often decentralized nature of NHIs, traditional security approaches are often inadequate. Unlike human users, NHIs frequently lack clear ownership or lifecycle management. Therefore, a dedicated focus on Non-Human Identity Security becomes a fundamental requirement for organizations embracing AI agent interoperability.

A Zero Trust security model ("never trust, always verify") becomes even more critical for NHIs in this landscape. Every access request from an AI agent utilizing an NHI should be continuously validated to minimize risk.

Zero Trust ("Never Trust, Always Verify") diagram with 4 principles: Visibility, LCM, Monitoring, Controls.

To strengthen your organization's NHI security posture, consider these strategies:

  1. Holistic Visibility: Gain comprehensive insight into all NHIs (location, privileges, usage).
  2. Robust Lifecycle Management (NHI-LCM): Implement processes for provisioning, managing permissions, and timely decommissioning.
  3. Continuous Monitoring and Threat Detection (NHI-TDR): Adopt an "Assume Breach" mentality with real-time monitoring for suspicious NHI activity and clear incident response plans.
  4. Zero Trust Controls: Extend Zero Trust principles (continuous validation, least privilege) to all NHI interactions.

Companies like Cremit are specifically addressing these challenges by providing solutions focused on Non-Human Identity Security. Our NHI Security Platform helps organizations gain visibility into their non-human identities and manage their lifecycles effectively. Cremit's technology aims to detect and mitigate risks associated with NHIs, including those potentially exploited in MCP and A2A environments. For instance, Cremit is developing capabilities using platforms like AWS Bedrock (with Claude+MCP) to analyze NHI behavior and provide context-aware threat information. The platform aims to detect exposed secrets in development/collaboration tools and offer remediation guidance. Cremit's focus highlights the growing recognition of this critical need.

Conclusion: Securing the Intelligent Future

The emergence of protocols like MCP and A2A signifies a monumental leap forward in AI agent capabilities and interconnectedness. While promising transformative benefits, they also introduce new security challenges centered around non-human identities. Securing these often-overlooked digital credentials is no longer a secondary concern but a fundamental prerequisite for realizing the full potential of AI agent interoperability safely and reliably. By prioritizing and implementing robust Non-Human Identity Security strategies, organizations can confidently navigate this expanding AI universe, mitigate evolving risks, and build a more secure and intelligent future.

Unlock AI-Driven Insights to Master Non-Human Identity Risk.

Go beyond basic data; unlock the actionable AI-driven insights needed to proactively master and mitigate non-human identity risk

A dark-themed cybersecurity dashboard from Cremit showing non-human identity (NHI) data analysis. Key metrics include “Detected Secrets” (27 new) and “Found Sensitive Data” (58 new) from Jan 16–24, 2024. Two donut charts break down source types of detected secrets and sensitive data by platform: GitHub (15k), GetResponse (1,352), and Atera (352), totaling 16.9k. The dashboard includes a line graph showing trends in sensitive data over time, and bar charts showing the top 10 reasons for sensitive data detection—most prominently email addresses and various key types (API, RSA, PGP, SSH).

Blog

Explore more news & updates

Stay informed on the latest cyber threats and security trends shaping our industry.

OWASP NHI5:2025 - Overprivileged NHI In-Depth Analysis and Management
Deep dive into OWASP NHI5 Overprivileged NHIs & AI. Learn causes, risks, detection, and mitigation strategies like CIEM, PaC, and JIT access.
Beyond Lifecycle Management: Why Continuous Secret Detection is Non-Negotiable for NHI Security
Traditional NHI controls like rotation aren't enough. Discover why proactive, continuous secret detection is essential for securing modern infrastructure.
OWASP NHI4:2025 Insecure Authentication Deep Dive Introduction: The Era of Non-Human Identities Beyond Humans
Deep dive into OWASP NHI4: Insecure Authentication. Understand the risks of NHIs, key vulnerabilities, and how Zero Trust helps protect your systems.
Secret Sprawl and Non-Human Identities: The Growing Security Challenge
Discover NHI sprawl vulnerabilities and how Cremit's detection tools safeguard your organization from credential exposure. Learn to manage NHI risks.
Navigating the Expanding AI Universe: Deepening Our Understanding of MCP, A2A, and the Imperative of Non-Human Identity Security
Delve into AI protocols MCP & A2A, their potential security risks for AI agents, and the increasing importance of securing Non-Human Identities (NHIs).
Stop Secrets Sprawl: Shifting Left for Effective Secret Detection
Leaked secrets threaten fast-paced development. Learn how Shift Left security integrates early secret detection in DevOps to prevent breaches & cut costs.
Hidden Dangers: Why Detecting Secrets in S3 Buckets is Critical
Learn critical strategies for detecting secrets in S3 buckets. Understand the risks of exposed NHI credentials & why proactive scanning is essential.
OWASP NHI2:2025 Secret Leakage – Understanding and Mitigating the Risks
NHI2 Secret Leakage: Exposed API keys and credentials threaten your business. Learn how to prevent unauthorized access, data breaches, and system disruption.
Stop the Sprawl: Introducing Cremit’s AWS S3 Non-Human Identity Detection
Cremit Launches AWS S3 Non-Human Identity (NHI) Detection to Boost Cloud Security
Human vs. Non-Human Identity: The Key Differentiators
Explore the critical differences between human and non-human digital identities, revealing hidden security risks and the importance of secret detection.
Wake-Up Call: tj-actions/changed-files Compromised NHIs
Learn from the tj-actions/changed-files compromise: CI/CD non-human identity (NHI) security risks, secret theft, and proactive hardening.
OWASP NHI3:2025 - Vulnerable Third-Party NHI
Discover the security risks of vulnerable third-party non-human identities (NHI3:2025) and learn effective strategies to protect your organization from this OWASP Top 10 threat.
Build vs. Buy: Making the Right Choice for Secrets Detection
Build vs. buy secrets detection: our expert guide compares costs, features, and ROI for in-house and commercial security platforms.
Bybit Hack Analysis: Strengthening Crypto Exchange Security
Bybit hacked! $1.4B crypto currency stolen! Exploited Safe{Wallet}, API key leak, AWS S3 breach? Exchange security is at stake! Check your security now!
Rising Data Breach Costs: Secret Detection's Role
Learn about the growing financial impact of data breaches and how secret detection and cybersecurity strategies can safeguard your data and business.
OWASP NHI1:2025 Improper Offboarding- A Comprehensive Overview
Discover how improper offboarding exposes credentials, leading to vulnerabilities like NHI sprawl, attack surface expansion, and compliance risks.
Behind the Code: Best Practices for Identifying Hidden Secrets
Improve code security with expert secret detection methods. Learn strategies to safeguard API keys, tokens, and certificates within your expanding cloud infrastructure.
Understanding the OWASP Non-Human Identities (NHI) Top 10 Threats
Understanding NHI OWASP Top 10: risks to non-human identities like APIs and keys. Covers weak authentication, insecure storage, and more.
Securing Your Software Pipeline: The Role of Secret Detection
Prevent secret leaks in your software pipeline. Discover how secret detection improves security, safeguards CI/CD, and prevents credential exposure.
What Is Secret Detection? A Beginner’s Guide
Cloud security demands secret detection. Learn its meaning and why it's essential for protecting sensitive data in today's cloud-driven organizations.
Full Version of Nebula – UI, New Features, and More!
Explore the features in Nebula’s full version, including a refined UI/UX, fine-grained access control, audit logs, and scalable plans for teams of all sizes.
Unveiling Nebula: An Open-Source MA-ABE Secrets Vault
Nebula is an open-source MA-ABE secrets vault offering granular access control, enhanced security, and secret management for developers and teams.
Vigilant Ally: Helping Developers Secure GitHub Secrets
The Vigilant Ally Initiative supports developers secure API keys, tokens, and credentials on GitHub, promoting secure coding and secrets management.
Cremit Joins AWS SaaS Spotlight Program
Cremit joins the AWS SaaS Spotlight Program to gain insights through mentorship and collaboration, driving innovation in AI-powered security solutions.
DevSecOps: Why start with Cremit
DevSecOps is security into development, improving safety with early vulnerability detection, remediation, and compliance, starting with credential checks.
Credential Leakage Risks Hiding in Frontend Code
Learn why credentials like API keys and tokens are critical for access control and the risks of exposure to secure your applications and systems effectively.
Introducing Probe! Cremit's New Detection Engine
Probe detects exposed credentials and sensitive data across cloud tools, automating validation and alerts, with AI-powered scanning for enhanced security.
Customer Interview: Insights from ENlighten
We interviewed Jinseok Yeo from ENlighten, Korea’s top energy IT platform, on how they secure credentials and secrets. Here’s their approach to security.
6 Essential Practices for Protecting Non-Human Identities
Safeguard your infrastructure: Learn 6 best practices to protect API keys, passwords & encryption keys with secure storage, access controls & rotation.
Microsoft Secrets Leak: A Cybersecurity Wake-Up Call
See how an employee error at Microsoft led to the exposure of sensitive secrets and 38 terabytes of data.