
In 2025, mainstream AI adoption forever changed software engineering.
[Chart: scanned public GitHub commits and leaked secrets in AI-assisted commits, 2021–2025]
[Chart: leaked secrets growth vs. public active developers since 2021]
Since 2021, secrets have leaked 1.6× faster than the developer population has grown.
Public GitHub is growing fast, but it’s also rapidly renewing: 54% of active developers made their first commit in 2025, increasing the volume of newly created code and integrations, and with it, the risk of exposed credentials.
[Chart: public GitHub developer activity by year of first commit, 2019–2025]
By analyzing the Shai-Hulud 2 supply chain attack, we can answer a long-standing question: how many secrets live on a typical developer workstation?
of compromised machines held more than 10 secrets, and 5% carried over 100.
[Histogram: secrets per compromised machine, binned from 0–10 up to 100+]
59% of compromised machines were CI/CD runners rather than personal workstations; the exposure extends well beyond individual developers into shared build infrastructure.
[Chart: AI-service secrets exposed; YoY growth of secrets for AI-related services]
Credentials for AI services are accelerating faster than any other category. As teams adopt new AI tools, they also create more tokens, keys, and service identities, often without equivalent governance. These leaks are also more likely to slip through controls designed around traditional developer workflows.
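Part of what makes AI-service credentials tractable to catch is that many follow recognizable public prefix conventions. A minimal detection sketch in Python, assuming the well-known prefixes for Anthropic (`sk-ant-`), OpenAI (`sk-`), and Hugging Face (`hf_`) tokens; production scanners combine hundreds of detectors with entropy and context checks, so this is illustrative only:

```python
import re

# Illustrative patterns based on publicly documented key prefixes.
# Real scanners use far richer pattern sets plus entropy analysis.
AI_KEY_PATTERNS = {
    "anthropic": re.compile(r"\bsk-ant-[A-Za-z0-9_-]{20,}\b"),
    "openai": re.compile(r"\bsk-(?!ant-)[A-Za-z0-9_-]{20,}\b"),
    "huggingface": re.compile(r"\bhf_[A-Za-z0-9]{20,}\b"),
}

def find_ai_secrets(text: str) -> list[tuple[str, str]]:
    """Return (service, matched_token) pairs found in the text."""
    hits = []
    for service, pattern in AI_KEY_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((service, match.group(0)))
    return hits
```

The negative lookahead in the OpenAI pattern keeps the two `sk-` families from double-matching the same token.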
AI-assisted development moved from experiment to default. Code production accelerated, and credential exposure rose with it.
Commits co-authored by Claude Code leak secrets at roughly 2× the baseline rate across all public GitHub commits, but the human factor remains critical.
[Chart: AI-assisted commits over time, Oct 2024–Oct 2025]
2025 showed a clear acceleration starting early in the year, followed by a steep ramp in the second half of the year as multiple assistants gained adoption.
By year-end, AI-assisted commits reached their highest levels, indicating that AI tools are becoming a standard part of how developers ship code.
Exposed credentials remain a major, repeatable path to compromise. AI-assisted development has moved from experiment to default, and credentials are now leaking at every layer of the stack.
In early 2025, the Model Context Protocol (MCP) emerged as the new standard for connecting LLMs to external tools and data sources such as APIs, search providers, and collaboration platforms. Our research found 24,008 unique secrets exposed in MCP configuration files.
of all secrets leaked in MCP configuration files are PostgreSQL DB connection strings.
Top secret types map directly to common API platforms and web-search tooling, data access layers, and developer productivity services.
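As an illustration of what such a leak looks like and how it can be caught, here is a sketch in Python. It assumes the common MCP client config layout (an `mcpServers` map whose entries pass credentials through an `env` block); the server entry and credentials are invented for the example:

```python
import json
import re

# A connection string with inline credentials -- PostgreSQL connection
# strings are the most common secret type found in MCP config files.
CONN_STRING = re.compile(r"postgres(?:ql)?://[^:\s]+:[^@\s]+@[^\s\"']+")

# Hypothetical config following the usual mcpServers/env layout.
MCP_CONFIG = """
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": {
        "DATABASE_URL": "postgresql://admin:s3cret@db.internal:5432/prod"
      }
    }
  }
}
"""

def scan_mcp_config(raw: str) -> list[str]:
    """Return any inline DB connection strings found in a config file."""
    json.loads(raw)  # confirm it parses as a config file, not noise
    return CONN_STRING.findall(raw)

print(scan_mcp_config(MCP_CONFIG))
# -> ['postgresql://admin:s3cret@db.internal:5432/prod']
```

The safer pattern is to reference an environment variable in the config and inject the value at launch, so the file itself never contains the credential.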
Internal repos are 6× more likely than public ones to contain hardcoded secrets.
These exposed secrets are also now at risk of accidental public exposure by AI coding assistants.
Public repositories that contain at least one secret
Internal repositories that contain at least one secret
Secrets sprawl extends beyond code: ~28% of incidents originate from leaks in collaboration and productivity tools rather than repositories, where credentials can be exposed to broader audiences, automations, and AI agents.
of secrets sprawl happens exclusively outside of code repositories. Only 4% appear in both.
Incidents in collaboration tools are also more severe, with more than half rated critical.
Secrets exposed in code alone are very different from those exposed through collaboration tools, meaning that scanning only the code will miss a meaningful portion of leaks.
of critical secrets leaked lack validation checkers
Generic secrets (unstructured credentials such as passwords, private keys, or custom tokens) drive most high-risk incidents but can't be validated. Prioritization based only on validation therefore creates blind spots (46% of critical leaks missed) and wasted effort (many validated secrets are low-impact). You can't fix what you can't see: without comprehensive context, teams still fail to remediate these secrets.
Teams can't rotate secrets without risking production outages—so they don't.
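Safe rotation is possible when it is staged so the old and new credential are both valid during the cutover. A sketch of that dual-credential pattern in Python, where `issue_credential`, `deploy`, and `revoke_credential` are hypothetical interfaces standing in for whatever the provider and deployment tooling actually expose:

```python
import time

def rotate_secret(provider, consumers, grace_seconds=300):
    """Zero-downtime rotation: issue, roll out, then revoke.

    `provider` and `consumers` are hypothetical interfaces; the
    pattern, not the API, is the point.
    """
    new_cred = provider.issue_credential()        # old cred still valid
    for consumer in consumers:
        consumer.deploy(new_cred)                 # roll out to every consumer
    time.sleep(grace_seconds)                     # let in-flight requests drain
    provider.revoke_credential(exclude=new_cred)  # only now cut off the old one
    return new_cred
```

Because revocation happens last, a failed rollout can be aborted at any earlier step without an outage.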
We tracked valid secrets detected in 2022. Four years later,
are still active and exploitable, sitting in public repositories.
The problem isn't detection. It's remediation.
[Chart: share of secrets detected in 2022 still valid, tracked 2022–2025]
The industry is still in the early stages of addressing the massive debt of secrets sprawl accumulated over the years, underscoring the importance of AI-led remediation, prevention, and deception.
The dominant issue: long-lived secrets.
Duplication & internal leakage together make up nearly a third of issues (33%).
Security risk is not binary. A credential that validates successfully is not necessarily dangerous, and a secret with no validation checker is not necessarily safe.
In 2026, effective secrets security requires four capabilities working together:
"The difference between success and failure isn't finding more secrets, it's knowing which ones to fix first."

700,000 developers already use GitGuardian to prevent committing secrets and to detect compromise with honeytokens, making it the #1 app on the GitHub marketplace.