Policy Roast: AI Companies Pay $12.5M to Clean Up the Mess AI Created
Anthropic, OpenAI, Google, Microsoft, and three of their peers just funded open source security. Specifically, defense against the AI-generated vulnerability spam their own tools created.
Anthropic, AWS, GitHub, Google, Google DeepMind, Microsoft, and OpenAI just announced $12.5 million in grants to the Linux Foundation's Alpha-Omega project and OpenSSF. The funding will help open source maintainers handle the flood of security vulnerability reports.
Let me rewrite that with the irony visible: AI companies are paying to fix the problem their AI tools created.
The Problem AI Solved (Badly)
AI-powered security tools can now scan code faster than humans ever could. They find vulnerabilities at machine speed. They generate reports automatically. They promise to make software more secure.
What they actually do: drown volunteer maintainers in an avalanche of low-quality, AI-generated bug reports that may or may not be real vulnerabilities.
From the AWS announcement: "Maintainers are now facing an unprecedented influx of security findings, many of which are generated by automated systems, without the resources or tooling needed to triage and remediate them effectively."
Translation: Your AI tools are spamming open source developers into burnout, and now you're paying to make the spam stop.
What It Looks Like on the Ground
Open source maintainers - most of them volunteers - used to get security reports from humans who actually understood the code. The reports had context. They explained the exploit path. They sometimes included patches.
Now they get reports from AI tools that pattern-match on "looks kinda vulnerable maybe" and spit out tickets by the hundreds. The maintainer opens the report. It's a wall of generated text that sounds authoritative but might be complete nonsense. Was this a real finding? A hallucination? A duplicate of something already fixed?
Who knows. The AI tool moved on to generate 47 more reports before the maintainer finished reading the first one.
One security researcher put it plainly: "I have seen cases where it was almost impossible to determine whether a report was a hallucination or a real finding. Even my instincts and a decade of experience failed me."
If security experts with a decade of experience can't tell the difference, what chance does a volunteer maintainer have? Especially when they're maintaining the library in their spare time, for free, while the AI companies monetize tools built on that library.
The Math of Unfunded Labor
Open source software underpins the entire tech stack. Linux. OpenSSL. Node.js. Rust. The code that AI companies use to build their models. The infrastructure AWS runs on. The libraries Microsoft packages into Azure.
Most of it is maintained by volunteers. People who write code at night after their day jobs. People who fix critical security bugs on weekends. People who get zero compensation but infinite responsibility when something breaks.
Now add AI-generated vulnerability spam. A 2023 survey found 73% of developers experienced burnout at some point. Guess what makes burnout worse? An inbox full of AI-generated security reports that you have to triage manually because the AI can't tell the difference between a real exploit and a false positive.
The National Vulnerability Database - NIST's industry-standard catalog of security flaws - is creaking under the volume. High-profile maintainers are publicly struggling. And the companies selling the AI tools that create this flood? They kept selling.
Until now. Sort of.
$12.5 Million Sounds Like a Lot (It's Not)
The funding will go to Alpha-Omega and OpenSSF. These are legitimate security initiatives that do real work. Alpha-Omega works directly with maintainers to find and fix vulnerabilities. OpenSSF builds tools and processes to secure the open source supply chain.
Both are good programs. But let's do the math.
Alpha-Omega's Omega initiative targets 10,000 open source projects. Even if every dollar of the grant went to Omega, that's $1,250 per project. For comparison, a single security audit from a professional firm costs $15,000 to $50,000. One audit. For one project.
$12.5 million split across seven companies - Anthropic, AWS, GitHub, Google, Google DeepMind, Microsoft, OpenAI - averages about $1.8 million each. Microsoft's quarterly revenue is roughly $62 billion. Google's is roughly $86 billion. For them, this is a rounding error. A press release with a price tag.
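If you want to run the napkin math yourself, here it is. The figures come from the announcement; the even seven-way split is an assumption, since the companies didn't break down who paid what:

```python
# Napkin math on the grant. Totals are from the announcement; the even
# split across companies is an assumption for illustration only.
total_grant = 12_500_000   # $12.5M, one-time
omega_projects = 10_000    # projects targeted by Alpha-Omega's Omega initiative
companies = 7              # Anthropic, AWS, GitHub, Google, Google DeepMind, Microsoft, OpenAI

print(f"Per project: ${total_grant / omega_projects:,.0f}")   # $1,250
print(f"Per company: ${total_grant / companies:,.0f}")        # ~$1,785,714

# For scale: spent entirely on professional audits at $15k-$50k each,
# the whole grant buys 250 to 833 audits. Total. For the entire ecosystem.
print(f"Audits funded: {total_grant // 50_000} to {total_grant // 15_000}")
```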
And it's a one-time grant. Open source security isn't a one-time problem. It's ongoing, relentless work that requires sustained funding. This grant funds maybe two years of work if spent carefully. Then what?
Who Actually Benefits
The press releases are careful to frame this as altruism. "Investing in sustainable security solutions." "Empowering maintainers." "Shared responsibility."
Let's be clear about who benefits when open source security improves:
- AI companies whose models train on open source code
- Cloud providers whose infrastructure runs on open source software
- Enterprise customers who use open source in production
- Everyone except the maintainers doing the actual work
The maintainers get: some training, some tools, maybe stipends if the funding stretches that far. They don't get salaries. They don't get health insurance. They don't get the equity compensation that every engineer at Anthropic, Google, Microsoft, and OpenAI receives.
They get tools to handle the flood of AI-generated reports more efficiently. Which means they can process more spam, faster. Productivity!
The Incentive Problem
Here's the perverse part: AI security tools are a product. Companies sell them. Enterprises buy them. The tools generate revenue.
More vulnerability reports = more value delivered to customers. "Look how many issues we found!" Even if 60% are false positives, the sales pitch works. The tool looks productive.
But the cost of that productivity falls on someone else. The volunteer maintainer who has to triage the reports. The open source project that has to build infrastructure to handle the volume. The ecosystem that has to collectively absorb the externality.
AI companies privatize the gains (tool revenue, improved security for their own stacks) and socialize the costs (maintainer burnout, NVD overload, ecosystem strain).
Now they're paying $12.5 million to offset some of those costs. Which sounds noble until you realize they're still selling the tools that create the problem. The funding doesn't stop the spam. It just makes the spam slightly easier to process.
What Actual Responsibility Looks Like
If AI companies were serious about not breaking open source security, here's what they'd do:
- Require human review before auto-filing reports. If your AI tool finds a vulnerability, a human reviews it for accuracy before it lands in a maintainer's inbox. No drive-by spam.
- Fund maintainers directly, not just cleanup programs. Pay the people doing the work. Salaries. Not grants. Not one-time funding. Ongoing compensation for the labor that makes your billion-dollar infrastructure possible.
- Build AI tools that help triage, not just report. If AI can generate vulnerability reports, it can also help classify them by severity and confidence. Route the low-confidence findings to internal review, not to a volunteer's inbox (see the sketch after this list).
- Publish false positive rates. Every AI security tool should disclose: "This tool has a 40% false positive rate." Let maintainers decide if that's worth their time.
- Stop framing this as charity. This isn't altruism. This is paying for infrastructure you depend on. Call it what it is: a business expense.
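To make the triage point concrete, here's a minimal sketch of what that gate could look like. Everything in it is hypothetical - the Finding fields, the threshold, the queue names - it illustrates the routing logic, not any vendor's actual API:

```python
# A minimal sketch of confidence-based routing for AI-generated findings.
# All names and thresholds are hypothetical; the point is that nothing
# reaches a maintainer's inbox without a human reviewing it first.
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    severity: str       # e.g. "low", "medium", "high", "critical"
    confidence: float   # the model's self-reported confidence, 0.0 to 1.0

def route(finding: Finding, confidence_floor: float = 0.8) -> str:
    """Return the next stop for a finding. Never a volunteer's inbox."""
    if finding.confidence < confidence_floor:
        # Low-confidence pattern matches are the vendor's problem:
        # send them to the vendor's own analysts, not upstream.
        return "internal-review"
    # Even high-confidence findings get a human sign-off before filing.
    return "human-signoff-then-file"

# A "looks kinda vulnerable maybe" finding stays inside the vendor's walls.
maybe_vuln = Finding("possible SQL injection in query builder", "medium", 0.35)
assert route(maybe_vuln) == "internal-review"
```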
None of that is happening. Instead, we get a grant, a press release, and a polite request that maintainers please keep fixing the bugs our tools generate.
The Unspoken Deal
Open source has always run on an unspoken deal: developers contribute code for free, and everyone benefits. It works because most contributors enjoy the work, learn from it, build reputations, or find it meaningful.
AI tools didn't just change the economics. They changed the nature of the work. Maintainers used to get thoughtful security reports from humans who cared about the project. Now they get bulk-generated reports from tools optimized for throughput.
The work went from rewarding (helping users, improving software) to tedious (triaging spam, filtering hallucinations). And it happened without anyone asking the maintainers if this was a trade they wanted to make.
The $12.5 million grant says: we know this is a problem, we know our tools contributed to it, and we're willing to spend a rounding error to make it slightly less painful.
It doesn't say: we'll stop generating the problem. It doesn't say: we'll compensate you fairly for the labor. It doesn't say: we'll change how our tools work.
It says: here's some money to help you cope.
What Should Happen
Open source security funding should be ongoing, proportional, and tied to usage. If your company's revenue depends on open source software, you fund the security of that software. Not as charity. As infrastructure cost.
The Linux Foundation's Core Infrastructure Initiative tried this model after Heartbleed, before its work folded into OpenSSF. The mechanics work: companies pay annual dues based on size. The funding goes directly to critical projects. Maintainers get paid. Security improves.
Scale that. Make it mandatory. If you're AWS, Google, Microsoft - if your cloud runs on Linux, OpenSSL, Rust, Node.js - you fund those projects at a level commensurate with the value you extract.
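For a sense of scale, here's a purely hypothetical dues formula using the quarterly revenue figure cited above. The rate is invented for illustration; no foundation charges this way:

```python
# Hypothetical dues formula: one basis point (0.01%) of annual revenue.
# The rate is invented for illustration; no foundation charges this way.
def annual_dues(annual_revenue_usd: float, rate: float = 0.0001) -> float:
    return annual_revenue_usd * rate

microsoft_annual = 4 * 62e9   # ~$248B/year, from the quarterly figure above
print(f"${annual_dues(microsoft_annual):,.0f}")   # $24,800,000
```

At one basis point of revenue, a single company's annual dues would be roughly double this entire one-time grant. Every year. From one company.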
And if your AI tools flood those projects with low-quality reports, you either fix the tools or pay for the triage labor. Fully. Not as a one-time grant. As an operational cost.
That's what responsibility looks like. Not a press release. Not a pilot program. Sustained funding that matches the value extracted and the costs imposed.
The Real Test
Two years from now, when this $12.5 million is spent, will there be more funding? Will AI companies keep paying for the cleanup? Or will they move on to the next press-worthy initiative while maintainers drown in the backlog?
The answer will tell you whether this was genuine investment or PR.
In the meantime, if you're an open source maintainer dealing with AI-generated vulnerability spam, at least you'll get some tools to process it faster. Which means you can handle more reports. Which means the AI tools can generate even more.
Productivity!