As a contributor, I haven't run into anything like that, though I also haven't contributed to high-visibility OSS repos in the last few months.
From the other side (as an OSS maintainer), I've had some issues, PRs, and emails that are clearly LLM-generated.
While they make a terrible first impression on me, I try to keep an empathetic attitude and give the authors the benefit of the doubt: some people might be able to read and understand code, and thus make requests and suggestions, but not be fluent in English, so they lean on such tools to assist them.
That being said, if it's clearly a bad, long, complicated, or unintelligible request, suggestion, or contribution, I'll reject it. It's my project, and many people trust me not to make terrible decisions, so if I don't understand it, I won't accept it. You're always free to fork the project and move on with your changes.
Finally, I'll just say that my OSS projects on GH have fewer than 2k stars in total (the largest at 1.1k), so I might be too small to be targeted "to death" like I've read some projects have been. If that happened, I'd probably close access to the code unless people paid a one-time fee.
> I'm asking this because I recently opened a PR to fix a vulnerability in an OSS project (RCE via pickle deserialization in Python). A day later, I got a fully LLM-generated comment claiming my approach was wrong and that I should rewrite it differently and telling the maintainers he could contribute "if the project is open to a more surgical refactoring."
>
> It's astonishing how often these encounters have been happening lately.
>
> I'd love to hear from contributors or maintainers whether this happens to them and how they deal with it.
Well, from the other side of the table, as somebody who helps maintain open source projects complicated by bounties: I've had automated PRs and replies from LLMs claiming to be people. I refuse to work with people, or people with AIs, who are unwilling to take the time to understand the challenges from a human perspective, expressed in person-to-person discourse. People need to develop interpersonal relationships. I think what you're seeing is a response to what other maintainers are experiencing, or, more likely, the same problem as stated above, just from a different point of view.

A human-first approach doesn't exclude AI-augmented solutions to technical problems. The reason code exists is to close a gap in human experience with software.
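As an aside on the vulnerability class the quoted PR was fixing: unpickling untrusted bytes is code execution by design, because a pickle can name any importable callable and `pickle.loads()` will invoke it. A minimal sketch of both the attack shape (with a harmless `print` standing in for something like `os.system`) and the allow-list mitigation from the Python docs' "Restricting Globals" section; the specific allow-list here is just an example:

```python
import io
import pickle

# Why unpickling untrusted bytes is RCE: __reduce__ lets a pickle
# name any importable callable, and pickle.loads() will call it.
# Here the callable is a harmless print(); it could be os.system.
class Evil:
    def __reduce__(self):
        return (print, ("arbitrary code ran during unpickling",))

payload = pickle.dumps(Evil())
result = pickle.loads(payload)  # executes print(...) before any type check can run

# Documented mitigation: an Unpickler subclass whose find_class()
# permits only an explicit allow-list of globals.
class RestrictedUnpickler(pickle.Unpickler):
    ALLOWED = {("builtins", "dict"), ("builtins", "list")}  # example allow-list

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

try:
    RestrictedUnpickler(io.BytesIO(payload)).load()
except pickle.UnpicklingError as exc:
    print("rejected:", exc)
```

For genuinely untrusted input, swapping the format entirely (JSON, for instance) is usually the safer fix than restricting the unpickler.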
I'm a bit obsessed with this topic lately, so I'm going to keep refreshing this thread to see if folks have good answers.
One thing I've been working on is this little util that tries to do a quick sniff test on contributors: https://github.com/2ndSetAI/good-egg (Longer explanation on Substack: https://neotenyai.substack.com/p/scoring-open-source-contrib... )
From what I've seen in the data, acceptance rates across all major OSS projects are down since the advent of coding agents.
And when I talk to maintainers, most describe some version of a fast, easy pocket veto (leaving PRs to rot) or even banning on the first offense.
It's been building for a while, but I think the crisis point is solidly here. And things like OpenClaw turn up the dials. I'm sure more tools and changes to practice are coming.
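The actual scoring good-egg uses is in the repo linked above; purely as a hypothetical illustration of what a contributor sniff test can look like (the signals, names, and thresholds below are mine, not the tool's), one crude shape is a saturating weighted sum over account metadata:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Contributor:
    created_at: datetime  # account creation date (UTC)
    merged_prs: int       # PRs previously merged anywhere
    followers: int

def sniff_score(c: Contributor, now: datetime) -> float:
    """Crude 0..1 trust signal: older accounts with a real merge
    history score higher. Weights/thresholds purely illustrative."""
    age_years = (now - c.created_at).days / 365.25
    age_part = min(age_years / 3.0, 1.0) * 0.5        # saturates at 3 years
    merge_part = min(c.merged_prs / 10.0, 1.0) * 0.4  # saturates at 10 merged PRs
    social_part = min(c.followers / 50.0, 1.0) * 0.1
    return age_part + merge_part + social_part

now = datetime(2026, 2, 1, tzinfo=timezone.utc)
fresh = Contributor(datetime(2026, 1, 20, tzinfo=timezone.utc), merged_prs=0, followers=0)
veteran = Contributor(datetime(2015, 6, 1, tzinfo=timezone.utc), merged_prs=40, followers=120)
print(f"fresh account: {sniff_score(fresh, now):.2f}")
print(f"veteran:       {sniff_score(veteran, now):.2f}")
```

The point of the saturating terms is that a score like this is only a triage signal for a human reviewer, not a gate: a brand-new account can still carry a good patch.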
The "egg" system seems really good!
Maybe it will be solved soon if we train yet another neural network to scan GitHub activity; but we'd also have to add other forges like Codeberg, GitLab, self-hosted Forgejo, etc., so as not to lock non-GitHub users out.
Still a really good idea!
Quick footnote to call out this really good summary from the team at :probabl (the scikit-learn/skore company): https://blog.probabl.ai/maintaining-open-source-age-of-gen-a...
We have just stopped accepting PRs entirely for now. It's been utterly exhausting. We never got many PRs before anyway, so the uptick in entirely LLM-written PRs was very noticeable.
We do still engage with LLM-submitted security disclosures. If one isn't a real issue, I just instant-close it, because debating with an LLM about what is and isn't an issue is fucking painful.