The open source project cURL used to be flooded with worthless, AI-generated security reports. Over the past few months, those have vanished — replaced by genuinely useful ones. So many, in fact, that the maintainers are struggling to keep up, says Daniel Stenberg, who leads the project.
cURL is not alone.
“I hear similar witness reports from fellow maintainers in many other Open Source projects,” Stenberg writes on LinkedIn.
Several of those colleagues back him up in the discussion thread — among them the maintainers of glibc, Vim, and Node.js.
“I'd say it is primarily because the tooling has improved. HackerOne did basically nothing new that could explain this (plus, this is mirrored in countless other projects, many of them not on hackerone). This is a notable change in the incoming reports,” Stenberg writes.
HackerOne is the platform cURL uses to receive bug reports.
There is an unexpected downside to being flooded with good bug reports, though — there are simply too many to handle in time.
The challenge used to be filtering out noise. Now it is keeping pace with reports that actually matter. That is how Steve M. Hernandez, a code security specialist, puts it in the same thread on LinkedIn.
“High quality reports at higher frequency still require the triage capacity and decision consistency to keep up. The bar is moving from filtering noise to keeping pace with real signal.”
There is also something very unsettling about how easy finding vulnerabilities has apparently become. The exact same flaw can be reported several days running. Willy Tarreau, who maintains the load balancing project HAProxy, saw it coming.
“We're all progressively killing embargoes as well, they're pointless for vulnerabilities found by widely available tools, it's just trying to hide something that can be published again the next day,” he writes.
Source: You can easily find the thread on LinkedIn. It's an exciting thread featuring a who's who of the open source world.
Ok, I will bite and ask the naive question: why not use AI to fix the bugs?
They absolutely try to do that. Open source projects often get to use the new AI tools for free (though not yet Anthropic Glasswing).
But as Daniel says somewhere: "The AI tools are better at finding problems than they are at fixing them or writing code..."
There is also a consensus that humans need to be involved in evaluating reports and code – to filter out AI slop. There are also discussions on a more philosophical level, along the lines of "Sure, this is a vulnerability, but it would more properly be on the user to guard against it."
Feels like better tooling is lowering the barrier, but also increasing noise. Curious how teams are filtering signal vs noise at scale.
This is the huge problem now. Maintainers need new processes, and they are still working out what those should look like. Filtering out the noise takes time, while the AI vulnerability tools find bugs quickly – the window to react before a vulnerability is exploited has disappeared. You have to assume that any AI-reported vulnerability is already being exploited by someone else who found the same vulnerability using the same AI tool.