quodeq / blog
By Victor Purcallas Marchesi

Before it gets a number

Somewhere in Citrix NetScaler's SAML parser, a length field went unchecked. For months, every authentication request walked past it. Then someone noticed.

Seven days later, defenders were applying patches at 2 AM. The bug had a name now: CVE-2026-3055. On March 30, 2026, CISA added it to its Known Exploited Vulnerabilities catalog. A researcher named Aliz Hammond, at the security firm watchTowr, broke the disclosure down in two consecutive writeups. The first was titled "The Sequels Are Never As Good, But We're Still In Pain." The second was called "Please, We Beg, Just One Weekend Free Of Appliances."

Read those titles twice.

The weakness was always there. The CVE just gave it a number.

The same weakness, three times

CVE-2026-3055 is an out-of-bounds memory read, CWE-125 in the Common Weakness Enumeration. The mechanics are simple. When NetScaler processes a login from another system, the code reaches into memory to grab what it needs. It never checks how far to reach, so whatever sits next to the login data comes along. In this case, that memory held the session keys of administrators who had already logged in. An attacker who never logged in walks away as one.

This is the third time. The security community is calling this round CitrixBleed 3, because there was a CitrixBleed 2 in 2025 and an original CitrixBleed in 2023. Three memory overreads in three years, all in the same product, all CWE-125, each one getting its own CVE number when it finally surfaced. Aliz looked closer at this latest disclosure and found that the patch was actually covering more than one bug. The same pattern had landed in another part of the code.

I spend a lot of time looking at static-analysis output, and after a while the same CWE classes start to feel familiar. CWE-79 (cross-site scripting). CWE-89 (SQL injection). CWE-125 like this one. They show up in every codebase I have ever run a scanner over, regardless of the product or the team behind it.

A CVE is the specific vulnerability that gets disclosed, the thing with a number and a CVSS score. A CWE is the underlying weakness pattern behind it, and by the time the CVE shows up on Mastodon, the CWE has usually been in someone's code for years.

Prevention is uncountable

There is a real tradeoff here. Reacting to CVEs has a clean loop. A vulnerability is disclosed, a patch is published, you deploy the patch, you close the ticket. There is an artifact at every step, which auditors and compliance frameworks have built decades of process around.

Reducing the underlying CWE surface in your own code does not work like that.

You scan your codebase and the scanner returns three thousand findings. You triage, fix the obvious ones, and file backlog tickets for the rest. Six months later you scan again and the number has not gone to zero. It may have gone up, because you wrote new code in the meantime. There is no green checkmark at the end of the work, and there is no CVE that did not happen, because the thing you prevented does not generate paperwork.

The team that signs a contract for a new security tool shows up at the next budget review. The engineer who quietly refactors two hundred lines of input validation in NetScaler's SAML parser does not. Both reduce risk.

There is also a market angle. Reacting to CVEs has companies behind it. Threat intel feeds, vulnerability scanners, patch management, managed detection. The entire reactive workflow has financial momentum because there are products to sell at every step. Nobody is selling you a subscription to having fewer bugs.

Scanning is also not enough on its own. CWE detection catches patterns, not logic. A clean scan can still ship a privilege escalation bug or a business-logic flaw nobody saw coming. If anyone tells you scanning makes your code safe, they are oversimplifying. The honest version is that scanning is one of several practices, and skipping it makes the other ones harder.

Read the code first

In April 2026, Anthropic released Claude Mythos, the model behind Project Glasswing, an AI system designed to autonomously discover zero-day vulnerabilities. In seven weeks of internal testing, Mythos found over two thousand previously unknown vulnerabilities, 271 of them in Firefox alone, and built working exploits for 181.

If a model can find that many CWEs sitting in shipped code in seven weeks, those CWEs were already there. They are still in your code somewhere. Sooner or later someone else reads it.

You might as well read it first.

Some tools help. Static analyzers like Semgrep and CodeQL flag known patterns. Container scanners like Trivy check what you ship, provided Trivy itself has not been compromised, which has happened. Most secure-coding practices have existed for decades and most teams still skip them. quodeq, the project I work on, scans codebases for CWEs locally and maps the findings against ISO 25010. None of these are complete answers on their own.

The specific tool matters less than the bet underneath. Run something, read what it says, ship a smaller weakness surface this quarter than last. That practice will not catch every bug, but it will catch some that would otherwise have shipped.

The CWE in Citrix's SAML parser was readable in March 2026, and the month before, and going back to whenever it was first committed. It became CVE-2026-3055 only when Aliz Hammond and the watchTowr team finally took a closer look.

There will be more memory overreads in NetScaler, and more deserialization bugs in middleware nobody has audited since 2019. Mythos and whatever comes after it will find them faster than any human team could. The only question is who finds yours first.