It's surprisingly difficult, and the "obvious" techniques (just do embeddings) don't really work. I wrote about it and did benchmarks here: https://joecooper.me/blog/redundancy/
Thank you for actually testing and measuring an implementation & hypothesis. I appreciate the leads for evaluating my own similarity problems and efficacy.
It's not clear to me: is he asking us to build this, or is he using Twitter to ask his clawd bot to do it?
Or, more meta: is this message from the bot itself, controlling his Twitter, because it got fed up with also merging the MRs?
And then start writing "hit pieces" to all the bot PR authors? /s
No one else has done it and code is easier than ever to create. This tool needs to be built by the person closest to the problem.
Ask your agent for ways to do this using code, not more AI.
It might propose - and build! - an embeddings based system and scraper for your issues & PRs. Using that will burn zero tokens and you can iterate on it as you think of improvements.
People aren't even good at this task if Stack Overflow is any indication.
True; way too many duplicates have gone unrecognized.
> How's no startup working on this?
Because there's no money in trying to filter out noise that costs next to nothing to generate. It's like asking why no startup is trying to bring forum moderation to the masses.
> Because there's no money in trying to filter out noise that costs next to nothing to generate.
Not yet, but when there's so much more noise than signal, it'll become valuable.
I think anti-spam providers might disagree with that take.
> Worked all day yesterday and got like 600 commits in. It was 2700; now it's over 3100.
Why? There's no reason you need to actually handle that many in a day, right? Pace yourself.
The author of this Tweet has been making waves by talking about using AI to write code without reviewing it. He wasn’t actually reading and reviewing all of that code himself.
As for the pace: His project has become extremely popular and got him a very nice position at OpenAI just today.
The reviews must be heavily AI assisted in order to get that sort of volume in.
Either way, it doesn't surprise me that this number is so high. Productivity chasing is the name of the game for AI, regardless of how sustainable or helpful all this extra work actually is.
Even with heavy AI assistance, how well-reviewed can this code be? 600 commits in a day is one commit every 2 minutes for 18 hours.
Hire a staff
He already has dozens of Codex and Claude code accounts
For 3k issues, that's on the order of 3000x3000 pairwise checks to find duplicates. Can you cache similarity?
Nearest neighbor embedding search.
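To make the "cache similarity" idea concrete, here's a minimal sketch of nearest-neighbor search over cached embedding vectors. The random vectors stand in for whatever embedding model you'd actually use; the point is that once the vectors are cached, the whole n x n comparison collapses into one vectorized matrix multiply instead of millions of separate calls.

```python
# Sketch: nearest-neighbor duplicate candidates over cached embeddings.
# Random vectors stand in for real issue embeddings, purely to show the lookup.
import numpy as np

def top_k_neighbors(embeddings: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k most similar rows for each row."""
    # Normalize once so cosine similarity becomes a single matrix multiply.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T                 # n x n cosine similarities
    np.fill_diagonal(sims, -np.inf)          # never match an item to itself
    return np.argsort(-sims, axis=1)[:, :k]  # top-k candidates per row

rng = np.random.default_rng(0)
vecs = rng.normal(size=(3000, 384)).astype(np.float32)  # 3k fake issue vectors
neighbors = top_k_neighbors(vecs, k=3)                  # shape (3000, 3)
```

This is still O(n^2) arithmetic, but it's one cheap matrix product over cached vectors; for much larger corpora an approximate-nearest-neighbor index (FAISS and the like) cuts it further.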
I mean, you can just do this with Claude Code or opencode. I suggest opencode with Gemini Pro, since it has a nice big context window. If you're trying to do something like this in the web version of the models, just forget it and stop using those; they're toys compared to the CLI tools.
Step 1: have it summarize every issue and PR in about 100 words. You can have it use subagents working on subsets of the tickets so it doesn't take forever.
Step 1a: concatenate all the summary files into one big file.
Step 2: have it check pairs from the summary file that look like duplicates. You may have to force it to read the entire file; for whatever reason, models are trained to avoid reading things straight into their context and will reach for grep, one-off scripts, and whatever else instead.
Step 3: repeat the above until it stops finding dupes.
I think this will probably take about 4 hours: 2 hours to get the process working and 2 hours of looping it.
If you don't think the above will work well please just move along, don't bother arguing with me because I've done tasks like this over and over and it works great.
Ways to get better results in general:
- Start by having it write a script to dump all the relevant information you'll need up front. It's much faster at reading files than making MCP calls, and it's less likely to pretend to read something and then assume it found nothing. (This happens more than you'd think.)
- Break the problem down into clear steps for the model; don't just hand it a vague project. Just paste the steps above and it should work fine.
- Check what it's doing. Don't assume that because it says it read a file, it actually read it: it will very often read the first 1000 bytes, skip the rest, and then assume it read everything. ChatGPT will even complain that the input is truncated when it's the one that chose to read only the first part.
I asked Copilot (work) to do this with a sheet and the summary it gave each time was so generic I couldn't tell one ticket from another. Feeding it tickets individually was fine, but in a spreadsheet it just seemed to forget.
Would be interested to learn how we can get true foreach loops.