NHI Kill Chain: Public Key — What Happens 4 Minutes After a .env Hits GitHub
A .env file pushed to a public GitHub repo is found by attacker bots in 4 minutes. We map the full kill chain, from credential exposure to infrastructure compromise, and show how to detect and respond before the damage is done.

Key Takeaways
- AWS credentials pushed to a public GitHub repository are detected by automated attacker bots in an average of 4 minutes
- In 2023 alone, 12.8 million secrets were discovered in public GitHub repositories, and more than 90% were still valid five days after detection
- Running git rm to delete a file does not erase the credential from git history — simple deletion does not resolve the exposure
- Non-human identity (NHI) credentials outnumber human accounts at a ratio of approximately 100:1 in enterprise environments according to OWASP, and only 12% of organizations are highly confident in preventing NHI-based attacks
- Credential-based breaches take an average of 258 days to identify and contain, the longest lifecycle of any breach category and among the most expensive
- Immediate rotation of exposed keys, access log auditing, and blast radius assessment are the pillars of effective incident response
- Prevention requires pre-commit hooks, CI pipeline scanning, and secrets manager adoption as organizational standards, not individual choices
Friday, 5:47 PM — The Push
Fall 2024. A startup in Seoul, three months past its Series B funding round, was scaling fast. The engineering team of roughly twenty developers was building out a SaaS platform at the kind of velocity that investors love and security teams dread. They had recently migrated from a monolithic architecture to microservices, tripling the number of API integrations — and, by extension, the number of credentials floating through their development workflow. On a Friday afternoon, J — a junior developer two weeks into the job — was setting up a local development environment. A senior engineer had handed over a .env file containing AWS Access Keys, RDS database credentials, and a Stripe API key. It arrived via Slack DM, copied from a shared internal document. Standard onboarding. Nothing unusual.
J created a feature branch, wrote some code, committed, and pushed. A routine Friday afternoon. Except for one detail: the repository was public. And .gitignore did not include .env.
At 5:47 PM, the git push completed.
At 5:51 PM, an automated bot monitoring the GitHub Events API in real time flagged the commit. It had matched the telltale pattern of an AWS Access Key — a 20-character string beginning with AKIA. Four minutes. Faster than it takes to brew a pour-over. The attacker's infrastructure had already ingested the credentials.
At 6:12 PM, the attacker called sts:GetCallerIdentity using the stolen AWS key. This API requires zero permissions to invoke; it simply confirms whether a key is valid and returns the associated account information. The key was alive. The IAM user it belonged to had broader permissions than anyone had intended. It could spin up EC2 instances. It could access S3 buckets. It could read RDS snapshots.
At 7:30 PM, eight p3.2xlarge instances spun up in us-east-1. GPU-optimized machines, a favorite for cryptocurrency mining. Simultaneously, the attacker began copying customer data backups from S3 to their own infrastructure.
J had already left the office. No one knew.
Monday morning, the DevOps engineer opened an AWS cost alert email. Weekend cloud spending had exceeded the normal baseline by a factor of 40. But the cost was not the real problem. Customer data had already been exfiltrated.
One .env file. Four minutes of exposure. And it took a single weekend for an organization's data and trust to unravel. The Series B funds that were supposed to fuel growth would now be redirected toward incident response, legal counsel, and customer notification. The startup had learned the hardest possible lesson about how exposed credentials in GitHub repositories can turn into a catastrophe.

Why This Key Is Dangerous
First, a clarification: "Public Key" in this series has nothing to do with public key cryptography. It means active, live credentials that have ended up somewhere publicly accessible — public repositories, public channels, public documentation. A .env file pushed to a public GitHub repo, an API key pasted into a Stack Overflow answer, a database password sitting in an open Confluence page. All of these are Public Keys in the sense that matters here.
How does this happen? More ways than you'd expect. The most common is a missing .gitignore entry — a new project spun up without one, or an existing .gitignore that simply omits .env. Code migrated from a private repository to a public one is another frequent culprit: credentials that were fine in a private context suddenly become visible to the entire world. When teams split a monorepo and carry the full commit history along, secrets buried in old commits travel with it. And when new developers get onboarded without any training on secret hygiene, the Friday afternoon scenario writes itself.
Here's the thing about git: it's designed to preserve everything. When you realize the mistake and delete the .env file in your next commit, the previous commit still contains the file in full. git rm removes a file from the working tree and the index — it does not erase history. Truly purging a credential requires a history-rewriting tool such as git filter-repo to strip it from every commit, a complex and risky operation that most developers never attempt.
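You can see this for yourself in a throwaway repo. The sketch below uses AWS's documented example key value, not a live credential:

```shell
# Demonstration: `git rm` removes the file from the working tree,
# but the previous commit still serves it in full.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev

echo 'AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE' > .env
git add .env && git commit -qm 'add local config'

git rm -q .env && git commit -qm 'remove leaked file'

test ! -f .env        # gone from the working tree...
git show HEAD~1:.env  # ...but still recoverable from history
# -> AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE

# Actually purging it means rewriting every commit, e.g.:
#   git filter-repo --invert-paths --path .env
```

Anyone with a clone of the repository can run that same `git show` — which is exactly what attacker bots automate.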
A lot of developers assume nobody is watching their repositories. The reality is the opposite. The GitHub Events API provides a real-time stream of every public event on the platform. Attackers subscribe to this feed around the clock, scanning every newly pushed commit for credential patterns. Automated bots don't distinguish between a small side project and a high-profile open source repo. Every public commit is an equal target.
Scale makes this problem structurally different from most security risks. According to OWASP's NHI Top 10, non-human identity (NHI) credentials outnumber human accounts at a ratio of approximately 100:1 in enterprise environments. API keys, service account tokens, OAuth secrets, database connection strings, CI/CD pipeline tokens — the number of non-human credentials required to run a single service is far larger than most people realize. Yet CSA's 2026 State of NHI Security report found that only 12% of organizations are highly confident in their ability to prevent attacks via NHIs, and 24% take more than 24 hours to rotate or revoke a credential after a potential exposure. If even one of those credentials lands in a public space, it becomes a skeleton key to the organization's infrastructure.
Kill Chain — What Attackers Do With Your Exposed Credentials
From the attacker's perspective, a credential in a public repository isn't just a lucky find. It's the entry point of a methodical, automated attack chain where each stage sets up the next.

Stage 1: Discovery. Attackers subscribe to the GitHub Events API's real-time feed, ingesting thousands of push events every second. They apply pattern matching techniques — regular expressions, entropy analysis, and format heuristics — to identify known credential formats: AWS Access Keys (20-character strings beginning with AKIA), GCP service account JSON blobs, Stripe API keys, and hundreds of other patterns. Palo Alto Networks Unit 42's analysis of the EleKtra-Leak campaign found that the average gap between a valid AWS key being pushed to a public repository and a bot detecting it was four minutes. This isn't manual hacking — it's industrialized credential harvesting.
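At its core, the discovery stage needs nothing more exotic than a regular expression. A toy sketch (the key is AWS's documented example value; the other lines are made up):

```shell
# Toy discovery-stage matcher: scan incoming text for the AWS access
# key ID format (AKIA + 16 uppercase alphanumerics). Real harvesters
# run hundreds of such patterns, plus entropy checks, against the
# GitHub Events API firehose.
printf '%s\n' \
  'db_host  = "10.0.0.12"' \
  'aws_key  = "AKIAIOSFODNN7EXAMPLE"' \
  'password = "hunter2"' \
| grep -Eo 'AKIA[0-9A-Z]{16}'
# -> AKIAIOSFODNN7EXAMPLE
```

One grep across a stream of push events is all a harvesting bot fundamentally is; the engineering effort goes into scale and pattern coverage, not cleverness.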
Stage 2: Validation. Now the attacker checks whether the credential actually works. For AWS keys, the call is sts:GetCallerIdentity — an API requiring zero permissions that simply confirms whether the key is valid and returns account metadata. For GitHub tokens, it's the /user endpoint. For Stripe keys, /v1/charges. Invalid keys get discarded instantly. Only confirmed-live credentials advance. Most service providers don't flag these validation calls as anomalous, so the attacker moves through this stage undetected.
Stage 3: Initial access. What a validated credential unlocks depends on its type. An AWS IAM key opens the door to the entire cloud console. Database credentials lead straight to customer data. A SaaS API key grants access to the full functionality of that service. The attacker maps every available API to understand the scope of permissions attached to the key. But at this stage, data isn't the only objective — the bigger question is: can this key lead to more keys?
Stage 4: Lateral movement. This is where a single compromised credential can cascade across an entire infrastructure. In an AWS environment, one IAM key can lead to service credentials stored in Systems Manager Parameter Store, database passwords embedded in S3 bucket configuration files, and external API keys sitting in Lambda function environment variables. The less disciplined the organization's credential management, the faster this spreads.
Stage 5: Impact. The endgame depends on motivation. Financially motivated attackers spin up GPU instances to mine cryptocurrency at the victim's expense — still the most common outcome. Data-motivated attackers exfiltrate customer information, intellectual property, and internal documents. More sophisticated attackers go for persistence: new IAM users created, backdoor code injected into Lambda functions, S3 bucket policies modified to maintain access after the original key is rotated. In the worst case, the attack becomes a supply chain compromise — if the stolen credentials reach a CI/CD pipeline, malicious code can be injected into the build process, extending the damage to customers downstream.
What starts with a single .env file grows more complex at every stage, and the blast radius expands exponentially. The entire kill chain — discovery to impact — can complete in hours. For organizations without real-time detection, the first sign is usually a billing alert or a customer complaint, arriving days or weeks after the fact.
Why Traditional Security Tools Miss It
Most organizations think they're covered. GitHub's built-in secret scanning, pre-commit hooks, .gitignore templates — the standard toolkit. But when you look at real-world breaches, the gaps become hard to ignore.
GitHub's secret scanning is genuinely useful, but its scope is limited by design. GitHub partners with over 200 service providers to detect their specific credential patterns. API keys from services outside that program, internally generated tokens, credentials for custom authentication systems — none of these are covered. Pattern matching also struggles with credentials that don't follow a clean format: a password embedded inside a database connection string, or a token nested deep in a YAML config structure. Context-dependent credentials are hard to catch without understanding what the surrounding code is doing.
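One technique scanners use to catch credentials that lack a clean format is Shannon entropy: random secrets tend toward a flatter character distribution than ordinary identifiers. A rough sketch — the function and any threshold you'd apply are illustrative, not any particular scanner's tuning:

```shell
# Character-level Shannon entropy in bits per character. Higher values
# suggest random, credential-like strings; real scanners combine this
# with format and context checks rather than using it alone.
entropy() {
  printf '%s' "$1" | fold -w1 | sort | uniq -c |
  awk -v n="${#1}" '{ p = $1 / n; H -= p * log(p) / log(2) }
                    END { printf "%.2f\n", H }'
}

entropy 'AKIAIOSFODNN7EXAMPLE'   # example key: ~3.7 bits/char
entropy 'aws_secret_access_key'  # ordinary identifier: ~3.1 bits/char
```

The gap between the two scores is small on short strings, which is why entropy works best as one signal among several, not a verdict on its own.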
Pre-commit hooks sound like a perfect first line of defense. In theory, they are. Secret scanning tools, when configured as pre-commit hooks, can block credentials before they ever enter a commit. The problem is that hook installation depends entirely on individual developers. New machines get set up without hooks. Urgent builds get pushed with --no-verify to bypass the check. Enough false positives, and developers start ignoring the warnings altogether. Without an organizational mechanism to enforce hooks across every developer's environment, they stay a recommendation, not a requirement.
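To illustrate the mechanism (not as a substitute for a dedicated scanner like gitleaks or trufflehog), a pre-commit hook can be as small as a grep over the staged diff:

```shell
# Throwaway repo with a minimal pre-commit hook that rejects staged
# changes containing an AWS-style access key ID. Illustrative only —
# and note that `git commit --no-verify` bypasses it entirely, which
# is exactly the enforcement gap described above.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev

cat > .git/hooks/pre-commit <<'HOOK'
#!/bin/sh
if git diff --cached -U0 | grep -qE '^\+.*AKIA[0-9A-Z]{16}'; then
  echo 'pre-commit: possible AWS access key staged, aborting' >&2
  exit 1
fi
HOOK
chmod +x .git/hooks/pre-commit

echo 'AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE' > .env
git add .env
git commit -qm 'add config' || echo 'commit blocked'
# -> commit blocked
```

Because hooks live in .git/hooks on each machine, nothing in this setup travels with the repository — which is why server-side and CI-side scanning have to back it up.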
Then there's the git rm misconception. A lot of developers believe that once they delete the file and push a new commit, the problem is solved. It isn't. git clone --mirror replicates the full history, and pulling a credential from a pre-deletion commit takes nothing more than basic git commands. Attacker bots capture the credential the instant the original commit lands — by the time the developer pushes a deletion, the attacker already has what they need.
Here's the core problem with most traditional tooling: it doesn't operate in real time. Some organizations run secret scans over their repositories on a schedule, but the intervals are typically daily or weekly. Against a four-minute attacker window, that gap is effectively no defense at all. Traditional tools help you eventually find out about an exposure — not find out before the attacker does.

Real-World Breaches and Industry Data
This isn't theoretical. Credential exposure through public repositories is a recurring, documented, and escalating threat. The numbers back it up.
The GitGuardian 2024 State of Secrets Sprawl report found 12.8 million secrets in public GitHub repositories in 2023 — a 28% jump over the prior year. What's even more striking: over 90% of those secrets were still valid five days after detection. That means the vast majority of organizations either never knew about the exposure or simply didn't rotate the keys. The report also notes that around 12.8% of developers who commit to GitHub have exposed at least one secret in their commit history.
Palo Alto Networks Unit 42's analysis of the EleKtra-Leak campaign put hard numbers on what many already suspected. The average time between a valid AWS key being pushed to a public repository and an attacker bot detecting it was four minutes. A separate Comparitech honeypot study recorded exploitation in as little as one minute. The bot infrastructure monitoring the GitHub Events API isn't some script running on a hobbyist's server — it's industrial-scale credential harvesting.
The 2022 Uber breach shows exactly how credential chaining plays out at scale. Attackers used credentials found in internal systems to reach Uber's AWS accounts, Google Workspace, Slack, and even the HackerOne bug bounty platform. One credential, systematically followed through the infrastructure, gave the attacker control over the entire organization.
IBM's 2024 Cost of a Data Breach report puts the financial picture in focus. Breaches from stolen credentials take an average of 258 days to identify and contain — the longest lifecycle of any breach category. That timeline translates directly to cost: the longer detection is delayed, the higher the bill.
OWASP recognized non-human identity security as a distinct threat category in its 2024 NHI Top 10, with Secret Exposure and Improper Offboarding among the top items. The significance is in the framing: credential management failures are no longer treated as individual developer mistakes. They're structural, organizational risks.
The pattern across all of this data is consistent: exposures are common, detection is slow, costs are high, and the trend is getting worse. The window between exposure and exploitation has shrunk to the point where human-speed response isn't viable without automation.
Detection and Response Guide
What to scan — and where. A lot of organizations only scan the HEAD commit of active branches. That's not enough. The entire git history needs to be in scope, because deleted files in past commits can still contain valid credentials. Forked repositories are another blind spot — when a private repo is forked and the fork goes public, every secret in the original repository's history is exposed. CI/CD build logs are easy to overlook too. A pipeline running in debug mode that prints environment variables to a log accessible to the whole team is effectively the same as publishing those credentials publicly.
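Here is what full-history scanning means in practice, in a throwaway repo with a fabricated Stripe-style key (sk_live_ is Stripe's live-key prefix; the value is made up). Dedicated scanners do this far faster, but the principle is the same:

```shell
# The secret lives only in a deleted file in an old commit: a
# HEAD-only scan misses it; walking every commit finds it.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev

echo 'STRIPE_KEY=sk_live_hypothetical000' > .env
git add .env && git commit -qm 'add env'
git rm -q .env && git commit -qm 'cleanup'

# Scan only the current tree: nothing.
git grep -E 'sk_live_[A-Za-z0-9]+' HEAD || echo 'HEAD: clean'

# Scan every commit ever made: the old commit still matches.
git rev-list --all | while read -r c; do
  git grep -E 'sk_live_[A-Za-z0-9]+' "$c" || true
done
# prints the old commit's match: <sha>:.env:STRIPE_KEY=sk_live_hypothetical000
```

The same walk over `git rev-list --all` is what an attacker runs against a mirror clone, which is why forked and migrated repositories inherit every secret in the original history.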
The .env file is the obvious place to look, but credentials hide in a lot of other spots. terraform.tfstate files record the actual state of cloud infrastructure and frequently contain database passwords and API keys in plaintext. Hardcoded credentials show up in the environment sections of docker-compose.yml files, in GitHub Actions workflow files, and in Jupyter notebook cell outputs. The surface area is larger than most teams expect. For a full breakdown of scan targets and methodologies, see Secret Detection: Complete Guide for 2026.

When exposure is confirmed, you're in a race. The first move is to rotate the exposed key immediately — not delete the file from the repository. Deleting the file doesn't matter if the attacker already captured the key, which is likely given the four-minute detection window. Once rotated, audit the access logs. Check AWS CloudTrail, GCP Audit Logs, or Azure Activity Logs for anomalous calls made after the moment of exposure. Then assess the blast radius: every service, data store, and infrastructure component the exposed key could reach needs to be checked. Finally, notify the owners of every affected service right away to confirm no further credential chaining has occurred.
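As a sketch of the log-audit step, suppose CloudTrail events have been exported as JSON lines (the file name, timestamps, and events below are fabricated; a real investigation would query `aws cloudtrail lookup-events` or a SIEM). Because ISO 8601 timestamps compare correctly as plain strings, even awk can isolate the calls made after the moment of exposure:

```shell
# Fabricated sample: one CloudTrail event per line, in a fixed layout
# where field 4 (split on double quotes) is the timestamp and field 8
# is the API name.
cat > trail.jsonl <<'EOF'
{"eventTime":"2024-10-18T15:02:41Z","eventName":"PutObject"}
{"eventTime":"2024-10-18T18:12:09Z","eventName":"GetCallerIdentity"}
{"eventTime":"2024-10-18T19:30:02Z","eventName":"RunInstances"}
EOF

# List every API call made after the exposure timestamp.
awk -F'"' -v t='2024-10-18T17:47:00Z' '$4 > t { print $4, $8 }' trail.jsonl
# -> 2024-10-18T18:12:09Z GetCallerIdentity
# -> 2024-10-18T19:30:02Z RunInstances
```

A GetCallerIdentity call shortly after exposure is the classic validation-stage fingerprint from Stage 2; anything after it maps out the attacker's actual activity.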
Prevention is always cheaper than response. Configure pre-commit hooks with secret scanning tools so credentials are blocked before they leave a developer's machine — and treat it as an organizational standard, not a personal choice. Include .gitignore in repository templates so it's automatically applied to every new repo. Add a secret scanning step to CI pipelines that fails the build when credentials are detected. The most fundamental fix is adopting a secrets manager so credentials never get embedded in code in the first place. And underlying all of this is developer onboarding. If J had gotten training on secret hygiene on day one, the Friday afternoon incident would never have happened.
How Cremit Argus Detects Exposed Credentials
The gaps we've described — detection limited to known patterns, no real-time monitoring, blind spots around context-dependent credentials — are exactly what Cremit Argus was built to address.
Argus monitors public GitHub repositories in real time. It continuously analyzes the GitHub Events API feed, detecting credential exposure the moment it occurs and triggering immediate alerts. The logic here is simple: if attacker bots find exposed credentials within four minutes, the defender's detection has to operate on the same timeline or faster. Argus is built to meet that bar.
GitHub's native secret scanning covers patterns from its partner program. Argus goes further, extending detection to custom patterns and third-party tokens — internally generated API keys, authentication tokens from partner organizations, non-standard database connection strings. These are the gaps GitHub's scanning leaves open. And rather than relying on pattern matching alone, Argus combines entropy analysis with contextual analysis to understand not just whether a string looks like a credential, but what role it plays in the surrounding code.
Argus also covers more than GitHub. Credentials shared in Slack channels, connection strings documented in Confluence pages, environment variables printed in CI/CD build logs — all of these are in scope. Secrets don't only leak through code repositories. Developers share API keys via Slack DMs, document connection details in Confluence onboarding guides, and attach debugging credentials to Jira tickets as part of everyday work. Argus brings all of those exposure surfaces into one platform.
See how Argus detects exposed credentials in real time at cremit.io.
NHI Kill Chain Series Overview
This post is the first installment in the NHI Kill Chain series. Over seven posts, we will analyze the seven most dangerous types of NHI credentials that hide inside organizations, each representing a distinct — and interconnected — risk.
A key exposed in a public repository, if left unrotated, becomes an Aged Key. A departed employee's key, if never deactivated, becomes a Ghost Key. Understanding how one credential management failure cascades into another risk category is the central purpose of this series.
- Public Key — What Happens 4 Minutes After a .env Hits GitHub (current post)
- Ghost Key — The Departed Developer Whose AWS Key Still Clocks In Every Morning (coming soon)
- Aged Key — The Skeleton Key That Held Production Together for 3 Years (coming soon)
- Zombie Key — Deleting It from Code Doesn't Mean It's Dead (coming soon)
- Over-shared Key — What Happens When 10 People Share a Single Slack Bot Token (coming soon)
- Shadow Key — Quietly Hardcoded Right Next to the Secrets Manager (coming soon)
- Drifted Key — When the CI/CD Bot Auto-Attaches a DB Password to Jira (coming soon)
Next post: NHI Kill Chain: Ghost Key — The Departed Developer Whose AWS Key Still Clocks In Every Morning
