The "Out of Scope" Loophole: Why Bug Bounties Look Away From Credential Exposure
An organization's core credentials sat in public repositories for years. The security industry's answer: "Out of scope."
Two Keys, Two Programs, Zero Accountability
A security research team discovered two API keys (Admin-level, or functionally equivalent in blast radius) sitting in public GitHub repositories. Not buried in obscure corners of the internet: plain text, accessible to anyone with a browser. The problem is what happened after the disclosures.
First key: Slack Bot Token (3 years of exposure)
A Slack Bot Token had been sitting in a public GitHub repository for three years. Slack long since stopped being just a messenger. Strategy discussions, HR conversations, customer data, and technical infrastructure details flow in real time across hundreds of channels.
Based on the granted scopes, the token allowed broad access: channel message reading, file downloads, user directory enumeration, and more. The more serious concern is lateral movement. Channels routinely carry credentials for other services. Messages along the lines of "staging server access info" or "sharing the AWS key" survive in pinned messages, DMs, and private channels at more organizations than anyone would admit. Bots and webhook integrations wire Slack into dozens of services, including CI/CD pipelines, Jira, GitHub, and monitoring systems. Slack Connect channels can even expose partner organization data. A single Bot Token effectively provides a map of an organization's entire technology stack.
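To make that blast radius concrete, here is a minimal reconnaissance sketch of what a live bot token permits, using only the Python standard library against documented Slack Web API methods (`auth.test`, `conversations.list`, `users.list`). The helper names are ours, and the token value shown is a placeholder.

```python
import json
import urllib.parse
import urllib.request

SLACK_API = "https://slack.com/api"

def api_url(method: str, **params: str) -> str:
    """Build a Slack Web API URL for the given method."""
    query = urllib.parse.urlencode(params)
    return f"{SLACK_API}/{method}" + (f"?{query}" if query else "")

def slack_call(token: str, method: str, **params: str) -> dict:
    """GET a Slack Web API method, authenticating with a bearer token."""
    req = urllib.request.Request(
        api_url(method, **params),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def enumerate_workspace(token: str) -> None:
    """First-stage reconnaissance with a leaked bot token."""
    ident = slack_call(token, "auth.test")       # is the token live? which workspace?
    print("workspace:", ident.get("team"))
    for ch in slack_call(token, "conversations.list").get("channels", []):
        print("channel:", ch.get("name"))        # map every visible channel
    for user in slack_call(token, "users.list").get("members", []):
        print("member:", user.get("name"))       # enumerate the user directory

# Usage (with a live token; "xoxb-" is the standard bot-token prefix, value is a placeholder):
#   enumerate_workspace("xoxb-REDACTED")
```

Three unauthenticated-looking GET requests are the whole "exploit": no chain, no PoC engineering, just the key.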
The research team reported the finding through the organization's official bug bounty program. The classification that came back: "Out of scope."
The core contradiction is unavoidable. An API key is a company asset. The entire purpose of a bug bounty program is to protect those assets. Vulnerabilities matter because they enable unauthorized access to corporate assets. Classifying a publicly exposed key that grants direct access to those same assets as "out of scope" contradicts the program's reason for existing.
Second key: Asana Admin API Key (2 years of exposure)
This key originated from a previously independent organization that had since been consolidated. The important detail is that the parent organization was actively using the Asana workspace tied to this key after consolidation. It was not a neglected relic. It was active infrastructure carrying project timelines, task assignments, and strategic documents. The key was exposed in a public GitHub repository for two years and carried full read/write permissions across every project in the workspace.
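Before reporting a finding like this, researchers typically verify that the discovered key is still live. A hedged sketch against Asana's documented `GET /users/me` endpoint (the endpoint is real; the helper names and the placeholder token are ours):

```python
import json
import urllib.error
import urllib.request

ASANA_ME = "https://app.asana.com/api/1.0/users/me"

def auth_header(token: str) -> dict:
    """Asana uses standard OAuth-style bearer authentication."""
    return {"Authorization": f"Bearer {token}"}

def asana_token_is_live(token: str) -> bool:
    """Return True if the key still authenticates against /users/me."""
    req = urllib.request.Request(ASANA_ME, headers=auth_header(token))
    try:
        with urllib.request.urlopen(req) as resp:
            owner = json.load(resp)["data"].get("name")
            print("token is live; belongs to:", owner)
            return True
    except urllib.error.HTTPError as err:
        print("token rejected, HTTP", err.code)  # 401 = revoked or invalid
        return False

# Usage (placeholder value; a real check requires the discovered key):
#   asana_token_is_live("REDACTED")
```

A single request answers the only triage question that matters: is this an active incident or a historical one.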
This was also reported through the official bug bounty program. The classification, again: "Out of scope." The rationale: the asset originated from a separate entity, so it falls outside program boundaries. Actively used and managed, yet the security responsibility is declared out of scope.
The blind spot "Out of scope" creates
What both cases share is that the organizations do not treat their own credential exposures as managed risk, and "Out of scope" is the classification that formalizes that blind spot.
The irony: both organizations classified these findings as "Out of scope" yet took action anyway. They revoked the keys and rotated them. One organization explicitly stated in its response that, based on the disclosure, it had conducted a broader review and remediated additional cases. The risk was acknowledged, the value of the disclosure was used, and yet the official classification remained "Out of scope." Out-of-scope action for what is allegedly an out-of-scope finding. This contradiction is the clearest window into what is broken.
These two cases are representative of dozens of similar NHI exposure findings. The pattern repeats every time: reports are dismissed, and critical access is classified "Out of scope." Case after case, one conclusion became inescapable: the way bug bounty programs handle credential exposure is itself broken.
Why This Keeps Happening
The repetition is structural. The design assumptions behind bug bounty programs and the nature of credential exposure as a threat are fundamentally misaligned.
Bug bounty programs were designed to find "program vulnerabilities," meaning bugs. RCE, SQL injection, XSS, and other code-level flaws get reported, receive severity ratings based on potential impact, and are rewarded accordingly. CVSS underpins this model, and the entire pipeline, from triage to payout, assumes that "an exploitable flaw was reported; we evaluate it by impact."
Credential exposure is a different category of threat. A leaked credential is not a program flaw waiting to be exploited; it is direct access to company assets. The program does not have a hole in it; the vault key is sitting in the street. The moment a valid API key lands in a public repository, the risk is immediate, not theoretical. There is no exploit to write, no chain to construct, no PoC to demonstrate. The key itself is the access.
Because this does not fit cleanly into the vulnerability-exploit-impact frame, it gets sorted into the lowest tier or declared "Out of scope." Credential exposure is undervalued not because the risk is low, but because the evaluation apparatus was never built to measure this kind of risk.
A necessary distinction: for companies that provide API services, it is impractical to take responsibility for every customer's carelessness with their own API keys, and the accountability structure is different. What this article is addressing is the organization's own credentials. Keys that grant access to the organization's core assets, such as Slack Bot Tokens, Asana Admin Keys, and similar credentials used for internal infrastructure, sitting in public repositories. This is not a customer mistake. It is a failure of the organization's own asset management.
So is the bug bounty program the right channel for this problem? Honestly, not in its current form. Bug bounty is optimized for program vulnerabilities, and identifying an organization's own asset exposures is a different class of activity. Two directions are viable: extend existing bug bounty scopes to explicitly include organizational credential exposure, or stand up a separate program dedicated to credential exposure identification. Either is better than the current "Out of scope" default.
The second case illustrates how "Out of scope" drifts from boundary tool to defense mechanism. An asset the organization was actively using and managing was classified out of scope because it originated elsewhere. That is administrative avoidance, disconnected from actual risk. The moment scope becomes a justification for blind spots rather than a focus mechanism, it gets harder to explain what the program is actually for.
"It's Just a Misconfiguration"
A common counterargument: "Credential exposure is a configuration problem, not a vulnerability. It belongs to IT hygiene, not bug bounty."
Superficially, yes. A secret was committed to a repository that should have been private, or a credential was not rotated. It is a process failure, not a code defect. Take one more step, though. RCE is also the result of bad code: input validation failures, memory mishandling, missing boundary checks. Nobody evaluates RCE by its cause ("bad code"). We evaluate it by impact, by what the attacker can do. Cause doesn't enter the severity calculation.
Evaluating only credential exposure by its cause ("misconfiguration") while evaluating everything else by impact is not a principled framework. It is a double standard. The unauthorized access granted by an Admin-level API key in a public repository is the same whether the cause was a careless commit, a CI/CD misconfiguration, or a script left behind by a departing employee. The blast radius is identical.
Severity should be determined by what can happen, not by how it happened.
The Missing Dimension: Time
Time is the most consequential blind spot in credential exposure evaluation. A code vulnerability is discovered and patched; the window closes. A leaked credential, in contrast, provides continuous access from the moment of exposure until the key is revoked. For as long as the key is alive, anyone who finds it can use it without restriction.
The Slack Bot Token in the first case was exposed for three years. The Asana Admin API Key for two. Almost no program factors exposure duration into its severity assessment. There is no multiplier for duration, no escalation for the compounding risk of years-long exposure. The finding gets evaluated as if it had been reported on the day it was committed, a snapshot of what is in fact an ongoing breach.
GitHub already operates Secret Scanning, detecting exposed secrets and notifying vendors. Even so, the cases above sat exposed for three and two years. A detection tool does not solve the problem by itself. What remains is the response system: who owns remediation, how the severity is assessed, how quickly the key is revoked. The gap between detection and response is the attacker's opportunity.
The security industry has spent decades refining vulnerability severity models. It is time those models caught up with reality. Credential exposure is a fundamentally different threat than exploit-chain vulnerabilities, and in many cases more dangerous. The first step is acknowledging that existing tools cannot measure it properly.
The NHI Exposure Severity Index: Filling the Gap
CVSS has been a valid tool for code-level vulnerability evaluation for over twenty years. But credential exposure is a different class of threat. An exploit chain is not required for a leaked API key; copy-paste is enough. Attack complexity and exploit maturity become meaningless.
This is not an argument to discard CVSS. The NHI Exposure Severity Index proposed here is a complementary framework, not a replacement.
Established standards already cover credential management comprehensively. OWASP API Security Top 10 addresses it under API2:2023 Broken Authentication. CWE-798 catalogs hard-coded credentials as a distinct weakness. NIST SP 800-53 Rev. 5 (IA-5) and NIST SP 800-63B specify authenticator lifecycle management in detail. NIST CSF 2.0 groups the controls under PR.AA. Every one of these is a prevention or control framework, guidance for what organizations should do before a credential leaks. None of them provide a structured way to evaluate the severity of a credential that has already been discovered in the wild, which is the post-discovery response question. That is the gap this framework aims to fill.
This framework is not a finalized standard. It is a draft for industry discussion. It targets credentials that are active at the time of discovery, and it is intended to evolve through real-world application and feedback.
The Six-Axis Evaluation Model
The framework evaluates each credential exposure across six independent axes, each scored from 1 (lowest) to 5 (highest).
The evaluation basis is real-world business risk to the organization. The goal is not to score the technical risk of an individual exposed asset but to measure the business risk that exposure creates for the organization as a whole. The axes were chosen to capture dimensions that matter most for NHI exposure and that traditional vulnerability scoring ignores or addresses only indirectly.
| Score | Privilege Scope | Cumulative Risk Duration | Blast Radius | Exposure Accessibility | Data Sensitivity | Lateral Movement Potential |
|---|---|---|---|---|---|---|
| 1 | Read-only (limited) | Less than 24 hours | Single resource | Private repo (auth required) | Public info / logs | Isolated within single service |
| 2 | Read-only (broad) | 1–7 days | Team-level | Private repo (org-internal) | Internal operational data | Other resources in same service |
| 3 | Write access | 1 week – 1 month | Department / BU | Public repo (hard to search) | Internal communications | 1–2 connected services exposed |
| 4 | Admin | 1–6 months | Multiple departments | Public repo (easily searchable) | PII / customer data | Credentials for multiple integrated systems harvestable |
| 5 | Super Admin / Owner | 6+ months | Organization-wide | Indexed by search engines | Credentials / financial / medical | Full infrastructure pivot possible |
Note on Cumulative Risk Duration: The moment a credential is exposed, it is already an incident. A short duration does not mean the exposure is not severe. What this axis measures is the organizational risk that accumulates over time. The longer exposure persists, the more potential finders there are; the more sensitive data produced and moved through the organization during the exposure window; and the higher the probability that the credential has already been exploited. A 24-hour exposure and a three-year exposure are both incidents, but three years of accumulated risk is qualitatively different.
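The Cumulative Risk Duration axis can be sketched as a small mapping. The band boundaries come from the rubric above; how ties at the band edges are resolved (closed upper bounds) is our assumption, since the rubric bands touch.

```python
def duration_score(days: float) -> int:
    """Cumulative Risk Duration axis: exposure time in days -> score 1-5.

    Boundary handling (closed upper bounds) is an assumption;
    the rubric bands touch at their edges."""
    if days < 1:
        return 1   # less than 24 hours
    if days <= 7:
        return 2   # 1-7 days
    if days <= 30:
        return 3   # 1 week - 1 month
    if days <= 180:
        return 4   # 1-6 months
    return 5       # 6+ months

# The two cases from this article both saturate the axis:
print(duration_score(3 * 365))  # Slack Bot Token, 3 years -> 5
print(duration_score(2 * 365))  # Asana Admin API Key, 2 years -> 5
```

Note that the axis saturates at six months: past that point, the model treats additional years as already maximal accumulated risk.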
Each axis evaluates the following:
| Axis | What It Measures |
|---|---|
| Privilege Scope | What the credential can do |
| Cumulative Risk Duration | How long the exposure has persisted |
| Blast Radius | How far the impact reaches within the organization |
| Exposure Accessibility | How easily an attacker could discover it (higher = more risk) |
| Data Sensitivity | Risk level of accessible data |
| Lateral Movement Potential | Whether the compromise can extend to other systems |
The six axes sum to a maximum of 30 points, which maps to severity tiers. The current design uses equal weighting across the six axes for simplicity of adoption. Depending on the organization's environment or industry, weighting specific axes differently is a natural extension. A financial institution might weight Data Sensitivity more heavily; a SaaS-heavy organization might weight Lateral Movement Potential more heavily.
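A minimal sketch of the weighted extension described above. Renormalizing non-equal weights so the maximum stays 30 is our design assumption (it keeps the severity tiers comparable across weighting schemes); the axis names are ours.

```python
AXES = (
    "privilege_scope",
    "cumulative_risk_duration",
    "blast_radius",
    "exposure_accessibility",
    "data_sensitivity",
    "lateral_movement_potential",
)

def weighted_total(scores, weights=None):
    """Total across the six axes; equal weighting by default.

    Non-equal weights are renormalized so the maximum stays 30,
    keeping the severity tiers comparable (our design assumption)."""
    weights = weights or {axis: 1.0 for axis in AXES}
    norm = len(AXES) / sum(weights[axis] for axis in AXES)
    return sum(scores[axis] * weights[axis] * norm for axis in AXES)

# Case A from this article under equal weighting:
case_a = {
    "privilege_scope": 4,
    "cumulative_risk_duration": 5,
    "blast_radius": 5,
    "exposure_accessibility": 4,
    "data_sensitivity": 4,
    "lateral_movement_potential": 4,
}
print(weighted_total(case_a))  # -> 26.0

# A financial institution weighting Data Sensitivity twice as heavily:
financial = {axis: 1.0 for axis in AXES}
financial["data_sensitivity"] = 2.0
print(round(weighted_total(case_a, financial), 1))  # -> 25.7
```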
Severity Tiers
| Total Score | Severity |
|---|---|
| 6–10 | Low |
| 11–15 | Medium |
| 16–21 | High |
| 22–26 | Critical |
| 27–30 | Critical+ |
Organizational context fills in the detail within a tier. A credential exposing a development sandbox and one exposing a production payment system warrant different responses even at the same score.
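The tier table is a straightforward lookup. As a sketch, applying it to the two case totals scored in this article (26 and 24):

```python
def severity_tier(total: int) -> str:
    """Map a six-axis total (6-30) to a severity tier per the table above."""
    if total <= 10:
        return "Low"
    if total <= 15:
        return "Medium"
    if total <= 21:
        return "High"
    if total <= 26:
        return "Critical"
    return "Critical+"

print(severity_tier(26))  # Case A: Slack Bot Token -> Critical
print(severity_tier(24))  # Case B: Asana Admin API Key -> Critical
```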
Scoring the Cases
Applying the NHI Exposure Severity Index to the two cases makes the distance between the current "Out of scope" classification and reality visible.
Radar chart — NHI Exposure Severity Index scoring for both cases
Case A: Slack Bot Token (26/30, Critical)
| Axis | Score | Rationale |
|---|---|---|
| Privilege Scope | 4 | Broad channel access, file downloads, user directory enumeration based on granted scopes |
| Cumulative Risk Duration | 5 | 3 years of exposure (6+ months) |
| Blast Radius | 5 | Organization-wide communication platform |
| Exposure Accessibility | 4 | Public GitHub, discoverable with basic searches |
| Data Sensitivity | 4 | Private channels, DMs, files may include PII |
| Lateral Movement Potential | 4 | Other services' credentials in channels; bot and webhook integrations enable access to CI/CD, Jira, GitHub, etc. |
Slack is not simply a messaging platform. It is the hub of modern organizations. A single AWS key shared in a channel, or a Jira integration webhook, becomes the launch point for the next stage of an attack. Having Bot Token access to all of that content means reconnaissance across the entire organization's infrastructure was possible, not just a communications compromise. A clear Critical by the framework. The organizational judgment: "Out of scope."
Case B: Asana Admin API Key (24/30, Critical)
| Axis | Score | Rationale |
|---|---|---|
| Privilege Scope | 4 | Full read/write across every project in the workspace |
| Cumulative Risk Duration | 5 | 2 years of exposure (6+ months) |
| Blast Radius | 5 | Every project in the workspace, organization-wide |
| Exposure Accessibility | 4 | Public GitHub, discoverable with basic searches |
| Data Sensitivity | 3 | Project timelines, strategic planning, task assignments (competitive intelligence rather than PII) |
| Lateral Movement Potential | 3 | Internal infrastructure references in project data (staging URLs, architecture docs) enable additional reconnaissance |
Asana project data reveals strategic direction, resource allocation, and operational priorities. Not direct PII, but highly useful for competitive intelligence and social engineering. A clear Critical under the framework, met again with the same organizational judgment: "Out of scope."
The two cases score 26 and 24, solidly Critical under a structured evaluation. Neither organization treated them as managed risk. This is a systematic failure to categorize an entire class of threat using any coherent standard.
The Vicious Cycle: What the Credential Blind Spot Creates
The consequences of classifying credential exposure as "Out of scope" do not stay within individual reports. They compound into a cycle that degrades the broader security ecosystem.
The most immediate consequence is researcher attrition. Security researchers allocate their time rationally. Hunting for an Admin-level API key, documenting the blast radius, establishing scope and exposure duration, and preparing a responsible disclosure report takes hours. If the outcome is "Out of scope," there is no reason to repeat the exercise. Researchers stop reporting credential exposures. Not because they stop finding them, but because the program has removed any reason to report.
Researcher departure does not reduce the number of exposed credentials in the world. The keys remain in public repositories; what changes is who finds them and what they do next. Public GitHub is open to everyone. The same search queries researchers use are equally available to threat actors, nation-state reconnaissance teams, and financially motivated cybercriminals. When researchers stop reporting, credentials do not become invisible. Only malicious actors discover and act on them.
A program designed to find threats before attackers has engineered itself to ensure only attackers find them. That is the inverse of a bug bounty program's purpose.
The problem of aged credentials that persist far past their intended lifespan has been studied as a distinct failure pattern in NHI security. So have keys with unclear ownership. When researchers who surface these problems repeatedly receive "Out of scope" responses, the responsible disclosure ecosystem itself contracts. Every credential researcher who walks away is a pair of eyes the organization loses for scrutinizing its own public repositories.
These consequences feed each other. Researcher attrition → credentials found only by adversaries → enterprise risk increasing under "Out of scope" policy → trust declining as risk grows despite security investment → more researchers walking away. The cycle tightens, and organizations inside it become progressively less able to see the problem, because the people who would have shown it to them are gone.
The AI Code Generation Era: Accelerating Credential Exposure
Credential exposure is going to get worse before it gets better. The spread of AI coding tools is already pushing it in that direction.
The central shift is that the population of people who produce code has fundamentally changed. With GitHub Copilot, ChatGPT, Claude, and similar tools now mainstream, non-developers produce and deploy code. Marketers build automation scripts. Data analysts write API integration code. Product managers ship their own prototypes. If developers, the specialists, still miss hardcoded credentials, it is unsurprising that non-developers, whose security awareness is typically lower, miss them too.
The routes by which credentials end up in AI-generated code are varied. A `.env` file in the context window during generation. An AI leaving example keys unchanged in configuration files or infrastructure code (Terraform and the like). A prompt such as "write the code to connect to this API" producing output that contains values inferred from actual context. When that output is committed without sufficient review, credentials land in public repositories.
Where traditional credential leaks stemmed from individual developer oversight, AI-era leaks add another force to that: the rate of code generation now outpaces the rate of code review. CI/CD pipelines automate the path from commit to deployment, and the route from credential to public exposure gets shorter and faster.
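The same shortness of that path also helps defenders: known token formats are detectable with simple patterns before a commit leaves the laptop. A minimal sketch; the prefixes (`xoxb-` for Slack bot tokens, `ghp_` for GitHub personal access tokens, `AKIA` for AWS access key IDs) are widely documented, but these patterns are illustrative, not production detection rules.

```python
import re

# A few well-known credential formats (illustrative, not exhaustive)
SECRET_PATTERNS = {
    "slack_bot_token": re.compile(r"xoxb-[0-9A-Za-z-]{10,}"),
    "github_pat": re.compile(r"ghp_[0-9A-Za-z]{36}"),
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan_text(text: str):
    """Yield (kind, match) pairs for every candidate secret in text."""
    for kind, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            yield kind, match

# The token value below is deliberately fake:
sample = 'client = SlackClient(token="xoxb-0000000000-FAKEFAKEFAKE")'
for kind, match in scan_text(sample):
    print(kind, match)
```

Wiring a check like this into a pre-commit hook or CI step closes part of the gap between generation speed and review speed described above.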
AI-driven credential exposure will become more frequent. Without a system for detecting and reporting it, sticking with "Out of scope" classification only hardens the structure where attackers are the sole beneficiaries.
What Needs to Change
For Bug Bounty Operators
Audit your scope definitions. Do they cover SaaS credentials? Do they cover keys and tokens inherited through organizational consolidations? If the answer is no, the blind spot is large enough for a breach to pass through. Scope definitions written in 2018 or 2020 do not reflect 2026 reality, where an average enterprise runs dozens of SaaS platforms, each producing its own pool of potentially exposable NHIs.
More fundamentally, decide whether credential exposure belongs inside the existing bug bounty program or in a separate asset-exposure identification program. Program vulnerabilities (bugs) and asset exposures (credentials) have different characteristics and deserve different evaluation criteria. Programs that already treat credential exposure as a valid bounty class exist. The Starbucks bug bounty disclosure of a leaked JumpCloud API key (HackerOne #716292, 2019) is a public record of exactly that: a single API key found in a public GitHub repository, classified under CWE-798 (Use of Hard-coded Credentials), scored CVSS 9.7 (Critical), triaged, remediated, paid out, and publicly disclosed. One credential finding, run cleanly through an existing bug bounty pipeline. This is not a matter of technical impossibility. It is a matter of will and classification policy. Either direction is better than leaving the current "Out of scope" default in place.
In terms of methodology, blast-radius-over-time should be a core severity factor. A credential exposed for years is fundamentally different from a code vulnerability found and patched within days. "Out of scope" cannot be the reflexive answer to inconvenient findings. If the organization uses, manages, and stores sensitive data in the asset, the credential granting access to that asset is the organization's problem, regardless of which legal entity originally provisioned it.
Classification inversion — how credential exposure gets misclassified
For Security Researchers
Do not stop reporting credential exposures. The exit is understandable, but the space you leave behind will be filled only by attackers. The research community has the most leverage here. Programs respond to precedent, and every well-structured credential exposure report is a data point that shifts the norm.
Submit a framework-based severity assessment with every finding. The NHI Exposure Severity Index scores the six dimensions that matter for credential exposure; providing the score inside the report turns "how severe is this really?" from an open question into a defensible starting point. CVSS can run alongside it. The Starbucks/JumpCloud case was scored CVSS 9.7 under CWE-798, which is the precedent researchers should point at when a program says "we don't have a scoring framework for this."
When a finding is classified "Out of scope," formally request re-evaluation. Ask, in writing, why the access granted by the credential falls outside the program's managed scope, given that the organization uses and controls the asset. Forcing an explicit justification frequently reveals how unfounded the dismissal is, and the written record becomes useful whether the resolution is eventual payout, program policy change, or public disclosure after coordinated non-response.
Push for disclosure after resolution. Public records of credential exposures treated as legitimate bounty findings are what make it harder for the next program to refuse the same classification. Precedent compounds, but only when it is visible.
Adopting the NHI Exposure Severity Index
Bounty platforms should seriously consider placing a credential-specific evaluation model alongside CVSS. The six axes provide a shared language for discussing real risk, rather than scope boundaries or inherited inertia. When researchers submit a report and operators evaluate it, they need to be reading from the same ruler.
Disclosure: These cases were discovered by the Cremit research team during ongoing NHI exposure research. Both were reported through official bug bounty channels, and the affected credentials have since been revoked and rotated by the respective organizations.
Argus by Cremit continuously scans your public and private repositories for exposed credentials, maps ownership across your teams, and automates rotation workflows. Start a 14-day free trial at argus.cremit.io.