I Got a CVE. Here's How I Didn't Completely Panic. 🔐🔥
The email that changed my Sunday afternoon:
"Hey, I think I found a security issue in your library. It might be serious. Can we talk privately?"
I stared at this message for a solid 45 seconds.
My brain cycled through: Is this spam? Is this a prank? Is this the moment where a stranger tells me my code — code people are using in production — has a real, actual, exploitable security vulnerability?
It was that last one.
As a full-time developer who contributes to open source, I knew security disclosure was A Thing That Happens. I'd seen CVEs. I'd filed bug reports. I'd read postmortems. What I hadn't done was be on the receiving end as the maintainer who now had to coordinate a response, patch a vulnerability, and publish a security advisory without making my users feel like their data had been casually handed to attackers.
This is that story. And more importantly — here's the process so you're not googling "what is a GitHub Security Advisory" at 11pm on a Sunday.
What Even Is a CVE? 🤔
Before I dive in, quick vocabulary check:
CVE = Common Vulnerabilities and Exposures
= The global registry of "yes this security bug is real and documented"
= A unique ID like CVE-2025-XXXXX assigned to your specific vulnerability
= Something that will show up when people google your package
CVEs exist because the security community needed a standardized way to track vulnerabilities across the entire software ecosystem. When a dependency scanner (like Dependabot, Snyk, or npm audit) warns you about a vulnerable package version, it's looking up CVEs in a database.
The terrifying part: A CVE on your package means every developer using it will start getting scary red warnings in their CI/CD pipelines until they update.
The less terrifying part: CVEs are proof that your project is mature enough that security researchers are paying attention to it. Obscure projects don't get CVEs. Used projects do.
GitHub Private Security Advisories: The Right Way to Handle This 🔒
Here's what I didn't know before this happened: GitHub has a built-in workflow for exactly this situation called Private Security Advisories.
Before GitHub added this, the workflow was chaos:
Researcher → finds vuln → sends you an email
You → panic → reply with your personal Gmail
You → develop fix in public repo → WAIT the fix reveals the vulnerability!
You → release fix → hope nobody noticed the 3-day window
The modern workflow looks like this instead:
Step 1 — Researcher Reports Privately
GitHub shows a "Report a vulnerability" button (under the Security tab) on any repo with private vulnerability reporting enabled. When a researcher clicks it, it opens a draft security advisory visible only to you and the reporter, and from there you can create a temporary private fork: a private copy of your repo for developing the fix. No public announcement. No exposure. Just a safe place to collaborate.
Step 2 — You Collaborate in Private
Inside the private advisory, you can:
- Discuss the vulnerability details
- Write and test the patch together
- Add collaborators (like co-maintainers or trusted security reviewers)
- Draft the advisory text
Step 3 — Request a CVE (if needed)
From inside GitHub's Security Advisory interface, you can request a CVE directly. GitHub is an official CVE Numbering Authority (CNA). You fill in:
- Affected versions
- Severity (using CVSS scoring)
- Description of the vulnerability
- Credit to the researcher (very important — never skip this)
GitHub will assign the CVE ID, typically within a few days.
Step 4 — Publish
You release the patched version AND publish the advisory simultaneously. GitHub automatically notifies users who depend on your package. Dependabot starts filing PRs on their repos. The world is warned. Everyone can upgrade.
The Actual Vulnerability I Had To Fix 🐛
In my case, the researcher found an input sanitization issue in a PHP authentication helper I'd published. The library let you pass a custom callback for username validation. The problem: under certain conditions, the callback return value wasn't being checked correctly, creating a tiny window where crafted input could bypass validation entirely.
It was subtle. It required specific conditions to exploit. But it was real.
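The library itself is PHP, but the bug class is language-agnostic. Here's a minimal Python sketch (all names hypothetical, not the actual library code) of how a loosely checked callback return value becomes a validation bypass:

```python
# A validator that accepts a user-supplied callback, but checks its
# return value too loosely: anything that isn't *exactly* False falls
# through and the input is accepted.

def is_valid_username_buggy(username, callback):
    result = callback(username)
    # BUG: only an explicit False counts as rejection;
    # None, 0, or "" slip through and the username is accepted.
    if result is False:
        return False
    return True

def is_valid_username_fixed(username, callback):
    # FIX: accept only an explicit True from the callback.
    return callback(username) is True

def reject_all(username):
    return None  # a callback that signals rejection "loosely"

print(is_valid_username_buggy("attacker", reject_all))  # True -> bypass
print(is_valid_username_fixed("attacker", reject_all))  # False
```

The fix is boring on purpose: treat anything other than an explicit "yes" as a "no". Fail closed, not open.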
CVSS score: 7.3 (High)
That number stings.
In the security community, CVSS (Common Vulnerability Scoring System) scores run from 0.0 to 10.0:
0.0 None
0.1 - 3.9 Low
4.0 - 6.9 Medium
7.0 - 8.9 High
9.0 - 10.0 Critical
7.3 means: this is serious, people need to know, it needs to be fixed now.
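The bands above map mechanically from score to rating; here's the lookup as a tiny Python helper (bands per the CVSS v3.x qualitative rating scale):

```python
def cvss_rating(score: float) -> str:
    """Map a CVSS v3.x base score to its qualitative severity rating."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_rating(7.3))  # "High"
```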
The fix itself took about 90 minutes. The coordination, testing, writing, and disclosure process took about 2 days. This is normal. The code is the easy part.
Writing the Security Advisory (Don't Skip This) 📝
The advisory you publish is what developers will read when Dependabot files a PR on their repo. It needs to be:
- Clear about what's affected
- Honest about the severity
- Specific about the fix
- Fair to the researcher
Here's the structure I used:
## Summary
[One sentence: what the vulnerability is and what it allows]
## Details
[2-3 paragraphs: explain the technical issue without being a tutorial
on how to exploit it. Enough for a developer to understand the risk.
Not enough to be a step-by-step exploit guide.]
## Impact
Who is affected? What conditions are required?
"Users of v1.0.0 through v2.3.1 who use custom validation callbacks
are affected if input includes..."
## Patches
Version X.Y.Z fixes this issue. Update immediately.
## Workarounds
[If there's a configuration change or code pattern that mitigates
the issue without upgrading, document it here.]
## Credits
Thanks to [researcher name/handle] for responsibly disclosing this
vulnerability through GitHub's private security advisory process.
That credits section matters more than you think. Researchers choose responsible disclosure partly because it's the right thing to do, and partly because they know the community will see that they did the right thing. Skipping credit is a quick way to make sure the next researcher goes straight to full public disclosure instead of contacting you first.
Coordinated Disclosure: The 90-Day Clock ⏰
Here's something the security community debates constantly: how long does a maintainer get to fix a vulnerability before a researcher can go public?
The generally accepted standard is 90 days.
Day 1: Researcher reports privately
Day 1-7: You acknowledge, begin investigating
Day 7-60: You develop and test the fix
Day 60-90: Release the fix + publish advisory
Day 90+: If no fix, researcher may publish details anyway
This is called "coordinated disclosure" (or "responsible disclosure").
Balancing work and open source taught me that 90 days sounds like a lot until you're maintaining a library in your spare time while holding down a full-time job. Be transparent about your timeline. If you're going to need 45 days to properly fix this, say so early. Researchers are generally reasonable — what frustrates them is silence.
My researcher gave me 3 weeks. We had it patched in 11 days. That's what open source looks like when you actually feel the urgency.
What Happens After You Publish 📢
Publishing the advisory triggers a cascade:
You click "Publish Advisory"
→ GitHub notifies all dependents (repos that use your package)
→ Dependabot starts filing automated PRs on those repos
→ npm/Packagist/crates.io show the vulnerability warning
→ Security databases pick it up within 24-48 hours
→ CVE appears in Snyk, NIST NVD, and every other scanner
This is when you'll get the most GitHub notifications you've ever received in your life.
Most will be automated. Some will be real users thanking you for the fix, or asking questions. A few will be angry. Someone will probably file an issue asking why you didn't fix it faster.
Respond professionally. Even to the angry ones. Especially to the angry ones. The security community is small and people remember how maintainers handle pressure.
In the security community, researchers talk. If you handle a disclosure well — acknowledge fast, communicate clearly, fix thoroughly, give credit — that reputation follows you. If you handle it badly — deny, delay, release a half-fix, skip credits — that also follows you.
Setting Up Your Project for Future Disclosures 🛡️
Don't wait for your first CVE to set up the infrastructure. Do this now:
1. Enable Private Security Advisories
Go to your repo → Security tab → Enable private vulnerability reporting.
This adds the "Report a vulnerability" button that researchers expect. Without it, they'll email you and you'll miss it.
2. Add a SECURITY.md
GitHub links this file from your repo's Security tab and surfaces it when someone goes to report a vulnerability or open a new issue. It should explain:
# Security Policy
## Supported Versions
| Version | Supported |
|---------|-----------|
| 2.x.x | ✅ |
| 1.x.x | ❌ (EOL) |
## Reporting a Vulnerability
Please DO NOT open a public GitHub issue for security vulnerabilities.
Use GitHub's private vulnerability reporting:
[Instructions link here]
Or email: [your security contact email]
We will acknowledge within 48 hours and aim to release a patch
within 30 days for high-severity issues.
## Disclosure Policy
We follow coordinated disclosure with a 90-day default window.
We'll keep you updated throughout the process.
3. Consider Dependabot alerts
Turn on Dependabot for your OWN project's dependencies. A vulnerability in one of your dependencies can make YOUR project vulnerable. This is called a transitive vulnerability, and it's embarrassing to ship one through your dependency tree when Dependabot would have caught it.
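One detail worth knowing: Dependabot *alerts* are toggled in your repo's Settings under security, while automated update PRs are configured with a `.github/dependabot.yml` file. A minimal sketch (the ecosystem and schedule here are examples; adjust for your stack):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "composer"  # this project is PHP; use npm, pip, cargo, etc.
    directory: "/"                 # where the manifest lives
    schedule:
      interval: "weekly"
```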
The Part Nobody Tells You About: The Emotional Bit 😅
Let me be honest about something.
Receiving a vulnerability report for your open source project feels bad, even when you do everything right. There's a specific flavor of dread that comes from knowing people trusted your code and there was a flaw in it.
I felt it. Even after the fix was out and the advisory was published and the researcher publicly thanked me for a smooth process — I felt it.
That feeling is good. It means you care. Maintainers who don't care are the dangerous ones.
What helped me: remembering that no code is perfect, that the researcher did you a favor by not going straight to full public disclosure, and that a published CVE with a proper advisory is evidence you handled it correctly. An ignored vulnerability with no disclosure is infinitely worse than a patched one with a proper CVE.
Here's the part they don't put in the GitHub docs: security incidents are emotionally hard. Take 30 minutes after you publish the advisory to do something completely unrelated to code. You'll need it.
TL;DR: Your Security Disclosure Checklist 📋
Preparation (do this now, before anything happens):
[ ] Enable GitHub Private Security Advisory reporting
[ ] Add SECURITY.md with contact info and policy
[ ] Enable Dependabot alerts on your own dependencies
[ ] Know your CVSS scoring basics
When a report arrives:
[ ] Acknowledge within 24-48 hours (even if you can't fix it yet)
[ ] Create a private security advisory in GitHub
[ ] Reproduce the vulnerability yourself
[ ] Give the researcher a realistic timeline
[ ] Keep them updated if the timeline changes
Developing the fix:
[ ] Develop the fix in the advisory's temporary private fork
[ ] Test the fix thoroughly (not just the happy path!)
[ ] Bump the version (a security fix always warrants at least a patch bump)
[ ] Document the change in your CHANGELOG.md
Publishing:
[ ] Draft the advisory text (clear, honest, not a how-to-exploit guide)
[ ] Request a CVE if warranted (CVSS 4.0+ generally yes)
[ ] Give proper credit to the researcher
[ ] Publish advisory and release simultaneously
[ ] Respond to user questions professionally
After:
[ ] Take a break. Seriously. 🧘
[ ] Update your SECURITY.md if anything in your process needs refinement
[ ] Consider writing about the experience (anonymized if needed)
The Bottom Line 💡
Getting a security vulnerability reported to your open source project is not a failure.
It's a sign that your project matters enough for someone to spend time finding and responsibly reporting a flaw. Most truly abandoned projects don't get CVEs — they just get exploited quietly.
The right response is: acknowledge fast, communicate clearly, fix carefully, credit generously, and publish transparently.
GitHub has built genuinely good tooling for this. Use it.
Maintaining an open source project? Drop me a message on LinkedIn — I'm happy to review your SECURITY.md or talk through your disclosure policy before you need it.
Have you filed a security advisory before? Check my GitHub — the SECURITY.md files there are the real ones I use. Steal them freely.
The best security disclosure is the one you never have to panic through. Set up the infrastructure now. 🔐
P.S. The researcher who found my vulnerability? He's now a contributor to the project. Handled right, a CVE disclosure can be the beginning of a great open source relationship. 🤝
P.P.S. Yes, I have a templated "thank you for responsible disclosure" response ready in my notes app now. Work smarter, not harder.