

GitHub shipped four secret scanning updates in March that collectively represent the most significant expansion of the platform’s credential detection capabilities in months. The numbers: 37 new secret detectors across 22 providers; 39 token types now push-protected by default; new validity checks for AI and developer infrastructure tokens; and — most notably — secret scanning that now works inside AI coding agents through the GitHub MCP Server.
For DevOps teams managing repositories where AI agents are increasingly generating code and opening pull requests, this last addition changes the security equation.
What Shipped in March
March 10: The big batch. 28 new secret detectors from 15 providers, including Vercel (six token types alone), Snowflake, Supabase, Lark, and Shopify. Push protection expanded to 39 detectors enabled by default — meaning pushes containing matching secrets are blocked before the commits reach the repository. Validity checks added for Airtable, DeepSeek, npm, Pinecone, and Sentry tokens, automatically verifying whether detected secrets are still active so teams can prioritize remediation.
March 17: Secret scanning in AI coding agents. The GitHub MCP Server can now scan code changes for exposed secrets before commits or pull requests are made. In MCP-enabled environments, AI coding agents invoke the secret-scanning engine in response to prompts and instructions. Results include structured data with locations and details for any secrets found. This is in public preview for repositories with GitHub Secret Protection enabled.
March 23: Push protection exemptions at the repo level. Organizations can now designate specific roles, teams, and apps as exempt from push protection enforcement directly from repository settings. Previously, exemptions could only be managed at the organization and enterprise level. Exemption status is evaluated at the time of each push.
March 31: Nine more detectors. New secret types from seven providers, including LangChain, Salesforce, and Figma. Secrets from Figma, Google, OpenVSX, and PostHog are now push-protected by default. Validity checks added for npm access tokens.
Why the AI Agent Integration Matters Most
The March 17 MCP Server integration is the update that changes the operational model. When AI coding agents generate code — whether through GitHub Copilot, Claude Code, or any MCP-compatible tool — that code can now be scanned for secrets before it’s committed. The agent doesn’t need to know what a secret looks like. It sends the code to GitHub’s scanning engine and receives structured results.
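As an illustration of that flow, here is a minimal sketch of how an agent-side wrapper might act on scan results before committing. The result schema (field names like `secret_type`, `path`, and `line`) is hypothetical — GitHub's actual MCP tool output format may differ:

```python
# Illustrative sketch only: the result schema below is hypothetical and does
# not reproduce GitHub's documented MCP tool output.

def block_on_findings(scan_results: list[dict]) -> tuple[bool, list[str]]:
    """Decide whether an agent should halt a commit based on scan results.

    Each result is assumed to carry a secret type, file path, and line number.
    """
    messages = []
    for finding in scan_results:
        messages.append(
            f"{finding['secret_type']} found in "
            f"{finding['path']}:{finding['line']}"
        )
    return (len(messages) > 0, messages)

# Example: a scan that surfaced one exposed token.
results = [{"secret_type": "openai_api_key", "path": "app/config.py", "line": 12}]
should_block, details = block_on_findings(results)
```

The point of the pattern is that the agent itself stays dumb about detection: it forwards code, receives structured findings, and simply refuses to proceed when any come back.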
This addresses a specific risk that’s growing as the volume of AI-generated code increases. Anthropic reported that code output per engineer grew 200% last year. GitHub Copilot’s coding agent now handles tasks autonomously through Jira and GitHub Issues. Cursor’s cloud agents run in isolated VMs, producing merge-ready PRs. The more code agents generate, the more opportunities for secrets to leak — whether from training data patterns, hallucinated credentials, or copy-paste from context that includes real tokens.
Having secret scanning available inside the agent’s workflow — not just as a post-commit gate — means credentials can be caught before they enter the repository at all. That’s earlier in the pipeline than push protection, which catches secrets at push time, and earlier than alert-based scanning, which catches them after the fact.
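A rough analogue of that earliest layer is a local pre-commit check over staged changes. The sketch below uses two illustrative regex patterns; GitHub's real detectors are far more precise and cover hundreds of token types, so treat this as a teaching example, not a substitute:

```python
import re

# Illustrative patterns only; GitHub's detectors are far more precise.
PATTERNS = {
    "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "slack_token": re.compile(r"xox[baprs]-[A-Za-z0-9-]+"),
}

def scan_diff(diff_text: str) -> list[str]:
    """Return the names of any token patterns found in staged changes."""
    hits = []
    for name, pattern in PATTERNS.items():
        if pattern.search(diff_text):
            hits.append(name)
    return hits

# A staged line that hardcodes a token-shaped string.
staged = "+API_TOKEN = 'ghp_" + "a" * 36 + "'"
findings = scan_diff(staged)  # a pre-commit hook would exit non-zero on hits
```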
“GitHub embedding secret scanning into its MCP Server positions credential detection where AI-generated code originates, inside the agent workflow. As coding agents produce pull requests autonomously, security enforcement that operates at commit time or post-commit faces a growing gap between code velocity and detection speed,” according to Mitch Ashley, VP and practice lead for software lifecycle engineering at The Futurum Group.
“Organizations deploying AI coding agents at scale need layered detection that starts inside the agent’s execution context. Teams applying the same credential hygiene assumptions to agent-generated code as human-written code will underestimate the exposure.”
The Provider Coverage Story
The specific providers added in March tell their own story about where secrets are leaking. Vercel got six new token types in a single update. Snowflake, Supabase, and Pinecone — core infrastructure for AI applications — all received new detectors. LangChain tokens are now detected, reflecting how quickly the AI agent framework ecosystem has become part of production infrastructure. DeepSeek tokens got validity checks, acknowledging the model provider’s growing developer footprint.
The push protection defaults are equally telling. When GitHub enables push protection by default for a token type, it means the pattern is reliable enough to block commits without generating excessive false positives. The expansion to 39 token types — including Airtable, Databricks, Heroku, PostHog, and Shopify — reflects confidence in the detection accuracy and a bet that developers would rather be blocked at commit time than deal with a leaked credential after the fact.
For free public repositories, push protection defaults apply automatically. That’s a significant security baseline for open-source projects where contributors may inadvertently commit credentials.
Validity Checks and Remediation Priority
The expanded validity checks for npm, Airtable, DeepSeek, Pinecone, and Sentry tokens address a practical problem in secret scanning: alert fatigue. Not every detected secret is an active credential. Some have been rotated. Some have expired. Some were test values that never worked.
Validity checks automatically verify whether a detected secret is still active. This lets security teams prioritize remediation — an active DeepSeek API key in a public repository is an immediate problem, while an expired Sentry token from a test environment can wait. GitHub previously improved validity checks for AWS Access Key IDs, where most alerts labeled “unknown” switched to “valid” or “invalid” after the update. That same pattern is now extending to more provider tokens.
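That prioritization is straightforward to encode. The sketch below orders alerts so active credentials surface first; the alert shape loosely follows the secret scanning alerts REST payload, but treat the field names as assumptions rather than the documented schema:

```python
# Triage ordering by validity. The alert dicts loosely mirror the secret
# scanning alerts REST payload; field names here are assumptions.

VALIDITY_PRIORITY = {"active": 0, "unknown": 1, "inactive": 2}

def triage(alerts: list[dict]) -> list[dict]:
    """Order alerts so active credentials are remediated first."""
    return sorted(alerts, key=lambda a: VALIDITY_PRIORITY.get(a.get("validity"), 1))

alerts = [
    {"number": 7, "secret_type": "sentry_token", "validity": "inactive"},
    {"number": 9, "secret_type": "deepseek_api_key", "validity": "active"},
    {"number": 4, "secret_type": "npm_access_token", "validity": "unknown"},
]
ordered = triage(alerts)
```

Unverifiable alerts deliberately sort between active and inactive: an "unknown" credential could still be live, so it shouldn't drop to the bottom of the queue.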
What This Means for DevOps
March’s updates reflect a security infrastructure that’s adapting to how code is actually being produced in 2026. AI agents generate code at scale. That code flows through automated pipelines. Secrets can leak at any point — from the agent’s context, from patterns in training data, from copy-paste in prompts, from environment variables that get hardcoded by mistake.
GitHub’s response is to push detection earlier and make it available in more contexts: Inside AI coding agents via MCP, at push time via push protection, and post-commit via alert scanning with validity checks. Each layer catches what the previous one missed.
The push protection exemptions at the repository level are the governance complement. Not every push needs to be blocked — CI/CD service accounts, security testing tools, and specific team roles may need exemptions. Managing those exemptions at the repo level rather than only at the org level gives teams more granular control without requiring administrator intervention for every exception.
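Conceptually, the push-time evaluation the March 23 update describes looks like the check below. The actor and exemption shapes are hypothetical — GitHub's actual rules engine and role/team/app identifiers are internal, not a public API:

```python
# Hypothetical model of push-time exemption evaluation. The identifiers and
# data shapes are invented for illustration, not GitHub's internal format.

def is_exempt(actor: dict, exemptions: dict) -> bool:
    """Evaluate exemption status at the time of each push."""
    return (
        actor.get("role") in exemptions.get("roles", set())
        or actor.get("team") in exemptions.get("teams", set())
        or actor.get("app") in exemptions.get("apps", set())
    )

# Repo-level settings: one exempt role, one exempt app.
repo_exemptions = {"roles": {"release-manager"}, "teams": set(), "apps": {"ci-deploy-bot"}}
ci_push = is_exempt({"app": "ci-deploy-bot"}, repo_exemptions)
dev_push = is_exempt({"role": "developer"}, repo_exemptions)
```

Because the check runs per push rather than being cached, revoking an exemption in repo settings takes effect on the very next push.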
For teams running AI coding agents in production, the recommended sequence is clear: Enable the GitHub MCP Server with secret scanning, enable push protection with appropriate exemptions, and use validity checks to prioritize remediation of active credentials. That gives you three layers of detection before a secret reaches production.
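For the scanning and push protection layers, repository settings can also be toggled programmatically through the REST API's update-repository endpoint (`PATCH /repos/{owner}/{repo}`). The sketch below only builds the request body; verify the `security_and_analysis` field names against current GitHub documentation before relying on them:

```python
import json

# Builds the JSON body for enabling secret scanning and push protection via
# PATCH /repos/{owner}/{repo}. Field names are my reading of the REST API's
# security_and_analysis object -- confirm against current GitHub docs.

def enable_secret_protection_payload() -> str:
    payload = {
        "security_and_analysis": {
            "secret_scanning": {"status": "enabled"},
            "secret_scanning_push_protection": {"status": "enabled"},
        }
    }
    return json.dumps(payload)

# Would be sent with an auth token holding repository administration
# permissions; shown here only as payload construction, no request is made.
body = enable_secret_protection_payload()
```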