
NHI Kill Chain: Ghost Key — The Departed Developer Whose AWS Key Still Clocks In Every Morning

A departed developer's AWS key stayed active for 92 days. When an infostealer hit their personal laptop, the key was sold on the dark web. Inside the Ghost Key kill chain and how to defend against orphaned credentials.

Written by Ben Kim · 12 min read

Key Takeaways

  • A "Ghost Key" is an NHI credential whose human creator has departed the organization, but the credential itself remains active — unmonitored, unrotated, and unrevoked
  • Only 19% of organizations have formal API key offboarding processes, meaning the vast majority leave orphaned service accounts and stale credentials behind after every departure
  • The average employee touches 31 SaaS applications, each potentially backed by multiple service accounts and API tokens — 100 departures per year can leave thousands of ghost credentials scattered across an organization's infrastructure
  • Infostealer malware on a former employee's personal device can harvest cached credentials months after departure, putting them on dark web marketplaces within hours
  • OWASP's NHI Top 10 ranks Improper Offboarding as the number-one non-human identity risk, ahead of secret exposure, excessive permissions, and every other category
  • Ghost Keys have extremely high validity rates on dark web credential markets because, unlike active credentials, no one is rotating them
  • Detecting Ghost Keys requires mapping every NHI credential to a human owner and automatically triggering revocation when that owner departs — manual offboarding checklists are not sufficient

3:17 AM — A Key That Should Have Been Dead

Early 2025. A mid-size SaaS company in Seoul — roughly 100 engineers, growing fast, Series C closed the previous quarter. The kind of company where the DevOps team runs lean and the infrastructure runs deep.

K was a senior DevOps engineer who had been with the company for three years. He knew the infrastructure better than anyone. He had built the Terraform deployment pipeline from scratch, configured the IAM roles that governed production access, set up the Slack notification bot that alerted the on-call rotation, integrated the Datadog monitoring stack, and maintained the CI/CD service accounts that pushed code to production six times a day. K was, in the way that matters most to infrastructure security, the human behind dozens of non-human identities.

In November 2024, K resigned. Good terms. Two weeks' notice. A farewell dinner with the team.

HR ran the standard offboarding checklist. Laptop returned and wiped. Google Workspace account deactivated. Office badge disabled. Slack account set to deactivated. The IT team confirmed the checklist was complete within 48 hours of K's last day. By every measure HR tracked, K's departure was clean.

But here's what the checklist didn't include: the AWS IAM access key K had generated for Terraform deployments. The Slack bot token K had created under his personal Slack developer account. The Datadog API key K had provisioned and embedded in three separate configuration files. The four CI/CD service account credentials K had set up in GitHub Actions workflows. The .env file on K's personal laptop — the one he'd used on weekends when working from home — containing AWS credentials, a database connection string, and two internal API tokens.

None of these appeared on the HR offboarding checklist, because the HR offboarding checklist only covers human identities. Active Directory. Email. Badge. Laptop. These are the things HR systems know about. NHI credentials — service accounts, API keys, bot tokens, IAM access keys — exist in a different universe entirely. No field in Workday or BambooHR tracks them. No SCIM integration deprovisions them.

K's human identity was fully deactivated within 48 hours. K's non-human identities — every one of them — remained active.

Two months passed. The Terraform pipeline kept running. The Slack bot kept posting alerts. The Datadog integration kept collecting metrics. The CI/CD pipelines kept deploying. Everything worked. Nobody noticed that the human behind these credentials was gone, because NHI credentials don't take sick days, don't miss standups, and don't send farewell emails. They just keep authenticating.

In late January 2025, K installed a cracked version of a commercial video editing application on his personal laptop. This is a story that information security researchers have documented thousands of times: pirated software bundled with malware. In this case, the payload was Lumma Stealer — the most prevalent infostealer malware of 2024, with roughly double the detection volume of the previous year according to ESET's H2 2024 Threat Report. Lumma operates with mechanical efficiency. Within minutes of execution, it had harvested K's browser-stored passwords, session cookies, and — critically — the contents of K's ~/.aws/credentials file and every .env file on the machine. The entire package was uploaded to a credential marketplace — the kind that operates on the same commercial model as any SaaS platform, complete with subscription tiers and customer support.

A buyer on the marketplace purchased K's credential dump as part of a bulk lot. The buyer wasn't targeting K's company specifically. This is the economics of credential theft: buy in volume, validate in bulk, exploit whatever works.

At 3:17 AM on a Saturday in early February 2025, CloudTrail recorded an API call from K's AWS access key. The call came from an IP address in a European hosting provider's range and targeted an AWS region the company had never operated in: us-west-2. K's key had only ever been used from ap-northeast-2 — Seoul. The call was sts:GetCallerIdentity. The attacker was checking whether the key still worked.

It did. Of course it did. Nobody had touched it in three months.

Over the next 36 hours — the remainder of a quiet weekend — the attacker methodically explored the blast radius. S3 bucket listings. RDS snapshot access. Lambda function configurations. Every API call authenticated with K's credentials, which still carried the broad permissions of a senior DevOps engineer who needed production-level access to do his job.

Then the attacker found the Slack bot token. Internal Slack channels suddenly received messages that looked like routine deployment notifications but contained links to credential-harvesting pages. Three engineers clicked before anyone noticed something was off.

Monday morning. The security team was running a routine CloudTrail review — a weekly process, not real-time. An analyst flagged the us-west-2 API calls. A key that should have been dead three months ago was making calls from a continent away. The investigation took the rest of the week.

In the Public Key post in this series, we showed how an exposed credential gets found by attacker bots in four minutes. The timeline is terrifying because it's fast. The Ghost Key is terrifying for the opposite reason: it's slow. Three months of silence. No alerts. No anomalies. Just a credential, quietly waiting for someone to use it — and when someone finally did, no one was watching.

[Figure: Ghost Key Timeline — Day 0 departure to Day 92 attack initiation to Day 95 detection]

Why This Key Is Dangerous

A Ghost Key is an NHI credential whose human owner has departed the organization, but the credential itself remains active, unmonitored, and unrevoked. The name captures the essential problem: the person is gone, but their digital authority persists. The key keeps authenticating. The service account keeps running. The bot token keeps granting access. There is no human on the other end anymore — just a ghost in the machine, clocking in every morning.

Ghost Keys accumulate for a simple structural reason: HR offboarding processes were designed for human identities, and NHI credentials were never added to the scope. When an employee departs, HR deactivates their Active Directory account, revokes email access, disables their badge, and collects their laptop. These are the identities HR systems were built to manage. But the API keys that employee generated, the service accounts they provisioned, the bot tokens they created, the IAM access keys they configured — none of these show up in HR's systems. There is no field in Workday for "Terraform deployment IAM key." There is no SCIM integration that deprovisions a Slack bot token when its creator's employment status changes to "terminated."

The numbers confirm this is not an edge case. CSA's 2026 State of NHI Security report found that only 19% of organizations have formal API key offboarding processes. That means 81% of organizations — the overwhelming majority — have no systematic way to identify and revoke the NHI credentials associated with a departing employee. The credentials simply persist.

The scale compounds the problem. Research from Productiv and others consistently shows that the average employee touches approximately 31 SaaS applications. Behind each of those applications may sit one or more service accounts, API tokens, or integration credentials that the employee created or manages. A company with 100 employee departures per year isn't leaving behind 100 orphaned credentials. It's leaving behind hundreds, potentially thousands, scattered across cloud providers, SaaS platforms, CI/CD systems, and internal tools. Each one is an active credential with no human owner. Each one is a Ghost Key.

The infostealer dimension makes this exponentially worse. Even if an organization runs a perfect internal offboarding — revoking every key from every system the employee accessed — it cannot control what's cached on personal devices. K used his personal laptop for weekend work. His .aws/credentials file, his .env files, his browser-stored tokens — all of these existed outside the company's perimeter. When Lumma Stealer harvested those files two months after K's departure, the company had no way to know, no way to prevent it, and no way to detect it until the credentials were already in an attacker's hands.

This is the distinction between a Public Key and a Ghost Key, and it matters. A Public Key is an exposed credential — pushed to a public repository, pasted into a public channel, visible to anyone who looks. The danger is in the exposure. A Ghost Key is a forgotten credential — still active, still powerful, but invisible because no one knows it exists anymore. A Public Key gets found because someone is scanning for it. A Ghost Key gets found because someone stumbles across it — on a dark web marketplace, in a compromised laptop's file system, in a credential dump. The Public Key is dangerous because it's visible. The Ghost Key is dangerous because it's not.

Kill Chain — How a Ghost Key Becomes an Active Breach

The Ghost Key attack chain differs from a Public Key scenario in one critical respect: the credential isn't found through scanning public sources. It's harvested from a private environment — typically a former employee's personal device — and sold on dark web marketplaces. From the attacker's perspective, the kill chain follows five stages.

Stage 1: Credential Harvesting. The attack begins on the former employee's personal device. Infostealer malware — Lumma Stealer, Raccoon, RedLine, Vidar — executes and systematically collects everything of value: browser-stored passwords, session cookies, ~/.aws/credentials, every .env file on the filesystem, SSH keys, Kubernetes configs. The harvest is comprehensive and automated. Within minutes, the credential payload is uploaded to a dark web marketplace — Russian Market, Genesis Market's successors, or Telegram-based credential shops — packaged, priced, and listed alongside millions of other stolen credential sets.

Stage 2: Validation. A buyer purchases the credential dump. The first step is always validation: which of these credentials are still alive? For AWS keys, the test is sts:GetCallerIdentity — a call that requires zero permissions and simply confirms whether the key is active. For GitHub tokens, it's the /user endpoint. For Slack tokens, auth.test. Ghost Keys have an extraordinarily high validation rate compared to other stolen credentials. Active employees' credentials get rotated, expire, or trigger anomaly detection. Ghost Keys do none of these things. Nobody is rotating a key that nobody knows about. The key K generated for Terraform deployments hadn't been touched since the day he created it. Three months later, it was exactly as valid as the day it was minted.
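The liveness checks described above work because each platform exposes a call that requires no granted permissions at all. A minimal sketch of the AWS case follows — sts:GetCallerIdentity is the real zero-permission STS endpoint, but the `aws_key_is_alive` function name and the injected-client structure are illustrative (in practice the client would be built with `boto3.client("sts", aws_access_key_id=..., aws_secret_access_key=...)` using the stolen key pair):

```python
def aws_key_is_alive(sts_client) -> bool:
    """Return True if the key behind sts_client still authenticates.

    sts:GetCallerIdentity requires no IAM permissions, so it succeeds
    for any active key regardless of attached policy -- which is exactly
    why it is the attacker's first call against a purchased credential.
    """
    try:
        identity = sts_client.get_caller_identity()
        # A live key returns the account and ARN it belongs to.
        return "Arn" in identity
    except Exception:
        # A revoked or deleted key raises an authentication error
        # (botocore surfaces this as a ClientError).
        return False
```

Taking the client as a parameter keeps the sketch runnable without AWS access; the same shape works for the GitHub `/user` and Slack `auth.test` probes.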

Stage 3: Initial Access. The attacker authenticates using the departed employee's credentials — and inherits the departed employee's permissions. This is where the specific role of the departed employee matters enormously. A departed marketing analyst's SaaS API token might grant access to a single platform. A departed DevOps engineer's credentials are a different story entirely. K's IAM key had the permissions of a senior DevOps engineer who needed to deploy infrastructure to production: S3 access, RDS access, Lambda management, EC2 provisioning, and the ability to read secrets from AWS Systems Manager Parameter Store. The attacker didn't need to escalate privileges. K's legitimate role had already provided them.

Stage 4: Persistence and Lateral Movement. Ghost Keys exist in a monitoring blind spot. When K's key made API calls at 3:17 AM on a Saturday, there was no alert, because nobody had configured alerts for K's credentials. K was gone. His credentials weren't part of any active monitoring scope. The attacker exploited this blind spot to create persistence: a new IAM user, a new access key, a new avenue of access that would survive even if K's original key was eventually discovered and revoked. The Slack bot token opened an entirely separate lateral movement path — internal channels, internal trust, internal phishing. CI/CD service account tokens opened yet another: access to deployment pipelines, build configurations, and potentially the ability to inject code into production artifacts.

Stage 5: Impact. The consequences cascade across multiple dimensions simultaneously. Data exfiltration through S3 and RDS access. Internal social engineering through the compromised Slack bot token — phishing messages that appeared to come from a trusted internal system, not an external attacker. Supply chain risk through CI/CD token compromise — the potential to inject malicious code into builds that ship to customers. And persistent backdoor access through newly created credentials that the attacker controls directly. The blast radius of a single Ghost Key, in the hands of a motivated attacker with a full weekend of unmonitored access, is organizational.

[Figure: Kill Chain Diagram — Ghost Key attack, 5 stages from credential orphaning to infrastructure breach]

Why Traditional Security Tools Miss It

The Ghost Key problem falls into a gap between three categories of tools that were never designed to work together: HR systems, identity management platforms, and security monitoring tools. Each one covers part of the picture. None of them cover the whole thing.

HR systems don't track NHI credentials. Workday, BambooHR, Rippling — these platforms manage employee lifecycle data. They are the system of record for human identities. But they have no concept of the non-human identities an employee creates during their tenure. There is no "API keys provisioned" field on a Workday profile. When HR triggers an offboarding workflow, it covers everything HR knows about — which excludes everything it doesn't.

IAM tools don't link service accounts to their human creators. AWS IAM, GCP IAM, Azure AD — these platforms manage non-human identities, but they typically don't maintain a reliable "owner" field mapping each service account back to the human who created it. Which employee provisioned a given IAM user? The answer is buried in CloudTrail logs that nobody queries, or in the institutional memory of the team. When that human departs, there is no automated way to enumerate and revoke their NHI credentials.
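Those CloudTrail logs do contain the answer — every `CreateAccessKey` event records, in its documented JSON shape, both the new key ID and the identity that minted it. A sketch of recovering that mapping (the `access_key_creators` function is illustrative; the event field names are CloudTrail's real ones):

```python
def access_key_creators(cloudtrail_events):
    """Map each access key ID to the identity that created it.

    Expects CloudTrail events in their documented JSON shape; only
    eventName == "CreateAccessKey" records contribute to the mapping.
    """
    owners = {}
    for event in cloudtrail_events:
        if event.get("eventName") != "CreateAccessKey":
            continue
        # CloudTrail echoes the new key in responseElements.accessKey.
        key_id = (event.get("responseElements", {})
                       .get("accessKey", {})
                       .get("accessKeyId"))
        # userIdentity identifies the human (or role) behind the call.
        creator = event.get("userIdentity", {}).get("userName", "unknown")
        if key_id:
            owners[key_id] = creator
    return owners
```

The point is not that this query is hard — it's that nobody runs it, so the owner mapping lives only in logs and institutional memory.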

SCIM/SAML deprovisioning only covers direct user accounts. Deactivating K's Slack user account does not deactivate the Slack bot token K created under his developer account. Deactivating K's AWS SSO access does not deactivate the IAM access key K generated manually. SCIM was designed for human identity lifecycle management. NHI credentials are out of scope.

Periodic audits leave months of accumulation between reviews. Quarterly access reviews may eventually catch orphaned credentials — but "eventually" can mean three to twelve months of exposure. K departed in November. If the next quarterly review happened in March, that's four months of an active Ghost Key. And quarterly reviews typically focus on human access, not NHI credentials.

No automated correlation between "employee departed" and "revoke all their NHI credentials." This is the fundamental gap. The HR system knows K left. The IAM system knows K's access key exists. No system connects these two facts. Without that link, Ghost Keys are an inevitability — not a risk, but a certainty.

The infostealer dimension defeats even perfect internal offboarding. Suppose an organization revokes every NHI credential K created within 24 hours of departure. K's personal laptop still has cached credentials. The company has no visibility into K's personal device. When Lumma Stealer harvests those credentials two months later, the company's perfect offboarding is irrelevant — the credentials are already on the dark web.

[Figure: Offboarding Gap — HR-deactivated items vs. still-active NHI credentials]

Real-World Breaches and Industry Data

Ghost Keys are not a theoretical risk category. They are a documented, recurring cause of some of the most significant security incidents of the past several years.

OWASP's NHI Top 10 ranks Improper Offboarding as the number-one risk in non-human identity security. Not second. Not tied for first. Number one. The rationale is straightforward: orphaned service accounts and stale credentials from departed employees represent the single largest unmanaged attack surface in most organizations. The credentials are active, the permissions are real, and nobody is watching.

CSA's 2026 State of NHI Security report quantifies the gap. Only 19% of organizations have formal API key offboarding processes. Only 12% report being highly confident in their ability to prevent NHI-based attacks. These numbers describe an industry that knows the risk exists and has not yet built the processes to address it.

The Verizon 2025 Data Breach Investigations Report found that credential exploitation — encompassing stolen, leaked, and orphaned credentials — was a factor in approximately 20% of all breaches analyzed. Credential-based attacks remain one of the most consistent and effective intrusion vectors year after year.

The CircleCI breach in January 2023 is the closest public analog to this scenario. A CircleCI engineer's laptop was infected with infostealer malware that stole a session token, giving the attacker access to CircleCI's production environment — and the customer secrets stored there. The attack didn't exploit a code vulnerability. It exploited a credential on a device. The same vector that turns a departed employee's personal laptop into a pipeline to the dark web.

The SolarWinds breach demonstrated how inadequately monitored credentials can persist in infrastructure for months. Credential-based access to SolarWinds' build environment went unnoticed from at least October 2019 to December 2020 — over a year. The credentials existed in the spaces between what security teams were actively watching.

The Uber breach of September 2022 showed credential chaining at organizational scale. An attacker used credentials found in internal systems — PowerShell scripts, network shares — to reach Uber's AWS environment, Google Workspace, Slack, and HackerOne. Orphaned service accounts and stale credentials in internal systems were part of the chain.

The infostealer market that enables Ghost Key exploitation is itself growing rapidly. Lumma Stealer was the most prevalent infostealer malware in 2024, with detection volumes roughly doubling year over year. The business model is mature: malware-as-a-service subscriptions, automated credential harvesting, bulk upload to marketplaces, and structured pricing. The barrier to entry for credential theft has never been lower.

And once a Ghost Key reaches one of these marketplaces, the clock is ticking. GitGuardian's 2024 State of Secrets Sprawl report found that over 90% of exposed secrets in GitHub repositories were still valid five days after detection. Ghost Keys, by their nature, have even longer validity windows — because nobody knows they need to be rotated. A credential that no one is watching, owned by a person who no longer works at the company, can remain valid indefinitely.

Detection and Response Guide

Detecting and remediating Ghost Keys requires a fundamentally different approach from detecting exposed credentials in public repositories. A Public Key is found by scanning public spaces. A Ghost Key is found by understanding ownership — specifically, by knowing which human created or manages each NHI credential, and acting when that human departs.

Build a complete NHI credential inventory mapped to human owners. Every API key, service account, bot token, and IAM access key needs to be cataloged and linked to the human who created or manages it. If a credential has no identifiable owner, treat it as a Ghost Key by default — because if no one owns it, no one will revoke it. The inventory must span every platform: AWS, GCP, Azure, GitHub, Slack, Datadog, CI/CD systems, and every SaaS integration.
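The ownership rule above — no identifiable owner means Ghost Key by default — can be sketched as a classification pass over the inventory. The record shape and `classify` function are illustrative, not any particular tool's schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NHICredential:
    credential_id: str    # e.g. an AWS access key ID or a bot token name
    platform: str         # "aws", "slack", "github", "datadog", ...
    owner: Optional[str]  # employee ID of the human creator, if known

def classify(inventory, active_employees):
    """Split an NHI inventory into owned credentials and Ghost Keys.

    A credential whose owner is unknown, or whose owner is no longer
    an active employee, is treated as a Ghost Key by default --
    because if no one owns it, no one will revoke it.
    """
    owned, ghosts = [], []
    for cred in inventory:
        if cred.owner is None or cred.owner not in active_employees:
            ghosts.append(cred)
        else:
            owned.append(cred)
    return owned, ghosts
```

In practice the inventory would be fed from each platform's admin API; the classification logic stays this simple.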

Integrate NHI credential revocation into the employee offboarding workflow. When an employee's status changes to "terminated," the process must automatically enumerate every NHI credential associated with that employee and initiate revocation. This cannot be a manual step. Manual steps get skipped — especially during busy periods, or when the departing employee was the only person who knew which credentials they created. HR event triggers enumeration, enumeration triggers revocation, revocation triggers confirmation. No gaps.
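The trigger chain — HR event, enumeration, revocation, confirmation — can be sketched as a single handler. Everything here is illustrative structure: `inventory` stands in for the owner-mapped catalog, and `revoke` would wrap the real per-platform calls (e.g. IAM key deletion, a Slack token revocation endpoint):

```python
def on_employee_terminated(employee_id, inventory, revoke):
    """Handle an HR termination event for NHI credentials.

    inventory maps employee IDs to their NHI credentials; revoke is a
    callable wrapping the platform-specific revocation API, and must
    raise on failure so no credential is silently skipped.
    Returns the list of credentials confirmed revoked.
    """
    revoked = []
    for cred in inventory.get(employee_id, []):
        revoke(cred)          # raises on failure -- no gaps
        revoked.append(cred)  # confirmation step
    return revoked
```

Wiring this to the HR system's webhook (rather than a human checklist) is what removes the manual step that gets skipped.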

Implement dormant credential detection. Flag any NHI credential unused for 30, 60, or 90 days. Dormancy is not proof of orphaning, but it's a strong signal — especially when correlated with employment data. A credential inactive since the day its creator departed is, with near-certainty, a Ghost Key. For a deeper dive on building detection capabilities, see Secret Detection: Complete Guide for 2026.

Enforce mandatory rotation intervals. Ghost Keys survive because they're never rotated. Mandatory rotation — 90 days is a common standard, shorter for high-privilege credentials — ensures that even if a Ghost Key is missed during offboarding, it expires before it can be exploited.
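An age-based rotation check is the complement of the dormancy check: it looks at creation date rather than last use. The 90-day limit is the standard cited above; the 30-day high-privilege limit is an illustrative assumption, as is the function name:

```python
from datetime import datetime, timedelta

MAX_KEY_AGE = timedelta(days=90)              # common rotation standard
MAX_PRIVILEGED_KEY_AGE = timedelta(days=30)   # illustrative tighter bound

def needs_rotation(created: datetime, now: datetime,
                   privileged: bool = False) -> bool:
    """True if a key has outlived its mandated rotation interval."""
    limit = MAX_PRIVILEGED_KEY_AGE if privileged else MAX_KEY_AGE
    return (now - created) > limit
```

Under this policy a Ghost Key missed at offboarding still dies at the next rotation boundary instead of surviving indefinitely.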

When a Ghost Key is found, respond with the assumption it has already been compromised. Revoke the key immediately, then audit access logs for the entire period since the credential's owner departed. In K's case, that means reviewing three months of CloudTrail logs. Assess the blast radius — every service and data store the credential could reach. Check for credential chaining: did the attacker create additional credentials, access other keys in configuration files, or move laterally? Finally, check whether the credential was shared across systems — if K's AWS key also appeared in CI/CD pipelines, every referencing system needs updating. For implementation details, see Git Secret Scanning: Complete Implementation Guide.
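The log audit in that response can be sketched as a filter over CloudTrail-style events: everything the key did after its owner departed, with calls from never-before-seen regions flagged. The field names (`userIdentity.accessKeyId`, `awsRegion`, `eventName`, `eventTime`) are CloudTrail's documented ones; the function itself is illustrative:

```python
def suspicious_events(events, access_key_id, departed_on, expected_regions):
    """Surface a Ghost Key's post-departure activity from CloudTrail events.

    departed_on and eventTime are ISO-8601 UTC strings, so they
    compare correctly as plain strings. Returns one summary dict per
    post-departure call made with the key, flagging unusual regions.
    """
    hits = []
    for e in events:
        if e.get("userIdentity", {}).get("accessKeyId") != access_key_id:
            continue
        if e["eventTime"] <= departed_on:
            continue  # legitimate use while the owner was still employed
        hits.append({
            "time": e["eventTime"],
            "event": e["eventName"],
            "region": e["awsRegion"],
            "unexpected_region": e["awsRegion"] not in expected_regions,
        })
    return hits
```

In K's case this is the query that turns three months of raw CloudTrail logs into a blast-radius worklist.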

[Figure: Response Flow — Ghost Key response in 4 steps: identify orphaned keys, revoke and rotate, audit access logs, implement lifecycle policy]

How Cremit Argus Detects Ghost Keys

The gaps that allow Ghost Keys to persist — no owner mapping, no offboarding integration, no dormant detection — are exactly what Cremit Argus was built to close.

Argus maintains a live map of NHI credentials linked to their human owners. Every API key, service account, bot token, and access key in the organization is cataloged and associated with the person who created or manages it. When an employee departs, Argus can immediately surface every NHI credential tied to that individual — not through a manual audit that takes days, but through a query that takes seconds. The result is a precise list of credentials that need to be revoked, rotated, or reassigned.

Argus monitors for dormant credential reactivation in real time. This is the exact detection that would have caught K's scenario: a credential that had been inactive for months suddenly making API calls at 3:17 AM from an unexpected geographic region. The pattern — long dormancy followed by sudden reactivation, often from an unusual IP or region — is a high-confidence indicator of Ghost Key exploitation. Argus flags these events immediately, rather than waiting for a weekly CloudTrail review to surface them days later.

Argus provides cross-platform visibility across the full surface area where Ghost Keys hide. GitHub repositories. Slack workspaces. CI/CD pipelines. Cloud provider consoles. Confluence documentation. Datadog configurations. Ghost Keys don't confine themselves to a single platform, and neither does Argus. The same credential that exists as an IAM access key in AWS may also appear in a GitHub Actions workflow, a Slack bot configuration, and a .env file documented in Confluence. Argus tracks credentials across all of these surfaces, so when one Ghost Key is found, every instance of it is found.

See how Argus identifies and eliminates Ghost Keys at cremit.io.

NHI Kill Chain Series Overview

This post is part of the NHI Kill Chain series. Across eight posts, we analyze the eight most dangerous types of NHI credentials that hide inside organizations — each mapped to Cremit's CRE classification system.

  1. Ghost Key — Active credentials from departed team members (current post)
  2. Shadow Key — Credentials in non-code sources (Slack, Jira, Confluence) (coming soon)
  3. Aged Key — Credentials unrotated for over 90 days (coming soon)
  4. Over-shared Key — Secrets found in 3+ scan sources (coming soon)
  5. Zombie Key — Deleted files with still-valid credentials (coming soon)
  6. Drifted Key — Credentials spread across 2+ platform types (coming soon)
  7. Public Key — Secrets in public repositories, accessible to anyone
  8. Unattributed Key — Secrets with no identifiable owner (coming soon)

Related: NHI Kill Chain: Public Key — Secrets in public repositories

Next post: NHI Kill Chain: Shadow Key — Credentials in non-code sources

Cremit is an NHI security company. Learn more at cremit.io

Tags: NHI Security · API Keys · Dark Web · DevSecOps · Cloud Security
