I don't think the side projects or the AI are the problem. It's our perception of quality and our filtering that need adapting. AI has increased the amount of content being generated by a huge margin, and it is generally difficult to tell how much work went into something. Less experienced people do an even worse job at that.
We are likely going to get better at judging this new communication and media, but we need much more experience with it before we can do that properly.
It will be annoying for quite a while, as it was with social media, until we find the places that are still worth our time and attention. But I am hopeful that we will be able to do that.
Until then, I am going to work on my AI side project every evening until I deem it ready and bug-free. It already works well enough for my own purposes (which I made it for), and my requirements were heavily influenced by my work process. Without AI, I would never have been able to finish such a project, even working on it full time for a year.
I think the main effect of the current influx of low-effort AI side projects is that it's going to significantly raise the bar for what's worth showing people.
Six months ago no-one would post a "Show HN" for a static personal website they had built for themselves - it wouldn't be novel or interesting enough to earn any attention.
That bar just went through the ROOF. I expect it will take a few more months for the new norms to settle in, at which point hopefully we'll start seeing absurdly cool side projects that would not have been feasible beforehand.
I’ve felt the same. I’ve had two things on the pre-LLM front page after about a month of work each. Those exist in the framework of my personal site which I’d been building out for a year.
I suspect a month of AI accelerated work is still enough to make the front page. I don’t see the competition as steeper. I bet it’s about the same per unit time.
Yeah, that's probably a good way of thinking about this. If your side project took a couple of hours it's not worthy of a Show HN. If it took at least a few days or a few weeks or longer and it's novel (not something many other people have built already) then it's a much better bet.
The problem is that the effort required to judge the quality of a project has also gone through the roof (and not just because of the number of them). Good-looking READMEs and docs, large test suites, well-constructed code - LLMs can generate credible versions of all of these, which take time to digest and whose limitations take time to understand. AI is fantastic at faking the outward signals of a good project and hyping it up. I've lost count of the projects that appear here and on Reddit that initially look good but fall apart once a domain expert takes the time to dig in.
LLM READMEs and websites immediately stick out an absolute mile.
I agree that’s kind of what should happen. What seems to have happened is that people have figured out it’s easier to game the system than produce more complicated or technical projects.
This is exposing a problem that already existed, AI is just throwing gas on the fire. Most engineers, including the readership of this site, have terrible taste in software.
What I mean by that is: after reading through a brief description of a project, or a conceptual overview, they are no better than noise at predicting whether it will be worthwhile to try out, or rewarding to learn about, or have a discussion about, or start using day-to-day.
Things on the front page used to be high quality software, research papers, etc. And now it is entirely driven by marketing operations, and social circles. There is no differential amplification of quality.
I don't know what the solution is, but I imagine it involves some kind of weighted voting. That would be a step towards a complicated engagement algorithm, instead of the elegant and predictable exponential decay that HN is known for.
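A minimal sketch of what that might look like, assuming the commonly cited approximation of HN's ranking (points divided by a power of age) plus a hypothetical per-voter reputation weight. This is an illustration, not HN's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Vote:
    reputation: float  # hypothetical 0..1 weight a voter earns over time

def rank(votes: list[Vote], age_hours: float, gravity: float = 1.8) -> float:
    """Decay-ranked score where each vote counts in proportion to
    the voter's reputation instead of a flat +1."""
    weighted_points = sum(v.reputation for v in votes)
    # Same shape as the commonly cited HN approximation:
    #   points / (age + 2) ** gravity
    return weighted_points / (age_hours + 2) ** gravity

# Example: 100 drive-by votes (20 weighted points) rank below
# 30 trusted votes (27 weighted points) at the same age.
print(rank([Vote(0.2)] * 100, age_hours=4))
print(rank([Vote(0.9)] * 30, age_hours=4))
```

The decay stays simple and predictable; only the per-vote weight changes, which is where the complexity (and gaming risk) would creep in.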
> What I mean by that is: after reading through a brief description of a project, or a conceptual overview, they are no better than noise at predicting whether it will be worthwhile to try out, or rewarding to learn about, or have a discussion about, or start using day-to-day.
You could criticize a Michelin inspector the same way. The poor bastards have to actually taste the dish and can't decide merit based on menu descriptions alone.
Blog posts by authors claiming AI has transformed them into godlike engines of productivity: 103728369129
Interviews by celebrities predicting AI will revolutionize the economy: 2837191001747
Software and online things I've used that seem to be better than they were before ChatGPT was introduced: 0
- learning: easier
- searching: better
- photo edit/enhance/filter: easier and more accessible
- text summarization: better
- quick scripts/tools: faster
- brainstorming/iterating ideas: faster
- generating list of names: faster
- rephrasing text: better
- researching topics: faster
- stackoverflow: i'm finally free. won't be missed by me
- coding: debatable, but for me LLMs made possible projects that weren't feasible before due to scope or lack of expertise
Doing things "faster" and "easier" is an interesting way to put it. It places all of the value on one's personal experience of using the AI, and completely ignores the quality of the thing produced. Which explains why most stuff produced by LLMs is throwaway garbage! It only reinforces the parent comment - there is virtually no emphasis on making things "better".
There won't even be a quality conversation if a thing isn't built in the first place, which is the tendency when the going is slow and hard. AI makes the highly improbable very probable.
I feel like the things I've built have gotten better since ChatGPT, but I don't use LLMs, just iterating and realizing how naive/terrible my old code is. Maybe I should write a blog post on How I (Don't) Use LLMs for Coding.
I don't think it's quite at trillions to zero, but I do think the numbers are way, way out of balance so far. I'm still at the point where, if AI disappeared tomorrow, I would be, at worst, mildly inconvenienced.
To be fair, there is this persistent paradox about programming methodologies: no matter how much they seem to speed you up, or how effective they seem to be at reducing bugs, they don't seem to give you any material competitive advantage over companies going with the most conservative language and methodology choices.
It's almost like writing the code isn't the hard part.
Google Search has gotten better, unless you think AI mode is a downgrade. The alternative of having a Wikipedia article, Reddit post, or random website as the first result is not technically better; maybe morally, for you, but not as a matter of fact. The average user does less manual filtering.
I definitely think the AI mode is a downgrade. It has me seriously considering abandoning Google for a different search engine. With a Reddit post or a Wikipedia article, it's much easier to assess the credibility of the content.
The AI mode does at least attempt to list its sources, but that's extra hoops to jump through.
The AI Overview, though, is crap. It almost always gives wrong answers and contradicts things that are in the results.
Factually and environmentally as well.
Notion AI is pretty great for both search and writing.
I agree with you in general, but come on - learning is easier (unless you need to dive into highly specialized stuff), writing shorter chunks of code is faster, and simple photo editing ("remove this and that from the background") doesn't need any skills now. Image generation isn't terrible either if you put in some effort and don't stick with the same 3-4 drawing styles that all the cheapskate companies use.
> Software and online things I've used that seem to be better than they were before ChatGPT was introduced: 0
I don't think you can really get any sort of a signal on this?
Nobody is all that sensitive to the amount of features that get shipped in any project, and nobody really perceives how many people or how much time was needed to ship anything. As a user, unless that means a 5x difference in price of some service, you don't really see or care about any of that - and even if there were savings on the part of any developer/company, they'd probably just pocket the difference. Similarly, if there's a product or service that exists thanks to vibe coding and wouldn't have existed otherwise, you probably don't know that particular detail.
Even when fuckups and bugs do happen, there's also no signal whether it's explicitly due to AI (or whether people are scapegoating it), or just management pushing features nobody wants and enshittifying products and entire industries for their own gain.
Well, maybe StackOverflow is a bit easier to host now: https://blog.pragmaticengineer.com/stack-overflow-is-almost-...
> Software and online things I've used that seem to be better than they were before
I would not know if they have gotten better or worse cause I don’t use them anymore.
The influx of these sorts of posts has largely pushed me out of all my previous "programmer" online spots.
I have zero interest in seeing something that Claude emitted that the author could never in a million years have written themselves.
It's baffling to me that these people think anyone cares about their Claude prompt output.
Do you choose your food/car/housing/etc based on virtue signalling rather than utility as well?
Well, I generally look for signals that a place I might eat at knows how to cook. If I walk in and I see rats on the floor and there's no kitchen, I probably won't be interested.
I apply the same logic to software.
For me at least, the advent of coding AI has just forced me to finally accept a truth that I probably always knew: that I'm an average (at best) software developer, and that I don't have anything truly unique or impressive to contribute to the field. My side projects were always just for myself.
I love computing, and programming. If anything I'm better able to appreciate that now that I no longer care if my work has any impact.
How much more sympathy must we grant to people who are upset others are having fun with computers?
Output is growing decoupled from what we used to consider tightly linked personal characteristics.
There is no guarantee that this will reform under rules that make sense in the old order.
It is embarrassing to see grown engineers unable to cope with the obvious.
It’s funny because I hate these “I hate stuff” articles.
You should write an article about them.
Then I can write a "I hate people who hate 'I hate stuff' articles" article!
Recommended title: "Let people enjoy stuff"
"Let people enjoy hating stuff"
With and without AI slop.
Also write another javascript framework in case it seems easier to create one than take the time to learn one.
“I hate stuff” articles considered harmful.
The internet is only for training data and praising any and all technological progress. If you don't like it, keep it to yourself. Self-expression has no place here!
Yeah we’re drowning in complaints about slop.
So many people are complaining about police brutality, but there are like some really good people on our force! Can't we just focus on them?
Naw.
Ya-haw!
Edit: for me the most practical insight to come out of these threads, so far, is that Show HNs for generated repos/sites/projects would be more interesting if submitters were required to share the prompts, and not just the generated output. For such projects, the prompts are the real source, while the GH repo or generated artifact is actually the object code, and if that's all that's shared, it's less interesting and there's less to discuss.
I think we're going to implement this unless we hear strong reasons not to. The idea has already come up a lot, so there's demand for it, and it seems clear why.
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
https://news.ycombinator.com/item?id=47077840
https://news.ycombinator.com/item?id=47050590
https://news.ycombinator.com/item?id=47077555
https://news.ycombinator.com/item?id=47061571
https://news.ycombinator.com/item?id=47058187
https://news.ycombinator.com/item?id=47052452
--- original comment ---
Recent and related:
Is Show HN dead? No, but it's drowning - https://news.ycombinator.com/item?id=47045804 - Feb 2026 (423 comments; subthread https://news.ycombinator.com/item?id=47050421 is about what to do about it)
AI makes you boring - https://news.ycombinator.com/item?id=47076966 - Feb 2026 (367 comments)
It's not a terrible idea, but I'm not sure how that would work logistically.
When I use Codex to do vibe coding stuff, I don't usually have one big prompt, I usually have it do small things piecemeal and I iterate with it later. Maybe I'm using it wrong but it tends to be more "conversational" and I think that would be harder to share, especially considering I'll do things over dozens of sessions.
I suppose I could keep an archive of every session I've ever opened with Codex and share that, but thus far I haven't really done that.
Granted, I don't really share my stuff with "Show HN".
That's how I do it too. I haven't checked (and can't right now, as I'm not at work), but does Codex not have a feature that lets you download your chat logs? I would certainly hope so...
“Share the prompts”
How would that be feasible for a project of any complexity whatsoever?
I'm reminded of something I read recently about disclosure of AI use in scientific papers [1]:
> Authors should be asked to indicate categories of AI use (e.g., literature discovery, data analysis, code generation, language editing), not narrate workflows or share prompts. This standardization reduces ambiguity, minimizes burden, and creates consistent signals for editors without inviting overinterpretation. Crucially, such declarations should be routine and neutral, not framed as exceptional or suspicious.
I think that sharing at least some of the prompts is a reasonable thing to do or require. I log every prompt to an LLM that I make. Still, I think this is a discussion worth having.
[1] https://scholarlykitchen.sspnet.org/2026/02/03/why-authors-a...
This is totally infeasible.
If I have a vibe-coded project with 175k lines of Python, there would be genuinely thousands and thousands of prompts to hundreds of agents, some fed into one another.
What's the worth of digging through that? What do you learn? How would you know that I shared all of them?
> I log every prompt to an LLM that I make.
How many do you have in the log total?
I have a daily journal where I put every online post I make. I include anything I send to an LLM on my own time in there. (I have a separate work log on their computer, though I don't log my work prompts.) Likely I miss a few posts/prompts, but this should have the vast majority.
A few caveats: I'm not a heavy LLM user (this is probably what you're getting at) and the following is a low estimate. Often, I'll save the URL only for the first prompt and just put all subsequent prompts under that one URL.
Anyhow, running a simple grep command suggests that I have at least 82 prompts saved.
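For illustration, a rough Python equivalent of that kind of count, assuming a hypothetical journal layout (one file per day, each prompt tagged with a leading "prompt:" marker); the real format and marker will differ:

```python
from pathlib import Path

# Hypothetical layout: ~/journal/YYYY-MM-DD.md, one file per day,
# with each prompt recorded as a line starting with "prompt:".
journal = Path.home() / "journal"
count = sum(
    line.lstrip().startswith("prompt:")
    for day in journal.glob("*.md")
    for line in day.read_text().splitlines()
)
print(f"at least {count} prompts saved")
```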
In my view, it would be better to organize saved prompts by project. This system was not set up with prompt disclosure in mind, so getting prompts for any particular project would be annoying. The point is more to keep track of what I'm thinking of at a point in time.
Right now, I don't think there are tools to properly "share the prompts" at the scale you mentioned in your other comment, but I think we will have those tools in the future. This is a real and tractable problem.
> What's the worth of digging through that? What do you learn? How would you know that I shared all of them?
The same questions could be asked for the source code of any large scale project. The answers to the first two are going to depend on the project. I've learned quite a bit from looking at source code, personally, and I'm sure I could learn a lot from looking at prompts. As for the third question, there's no guarantee.
I can't reply to the other comment, but here goes:
This is one (1) conversation: https://chatgpt.com/share/69991d7e-87fc-8002-8c0e-2b38ed6673...
It has 9 "prompts" On just the issue of path re-writing, that's probably one of a dozen conversations, NOT INCLUDING prompts fed into an LLM that existed to strip spaces and newlines caused by copying things out of a TUI.
It's ok for things to be different than they used to be. It's ok for "prompts" to have been a meaningful unit of analysis 2 years ago but pointless today.
No, the same question CANNOT be asked of source code, because source code can execute.
You might as well ask for a record of the conversations between two engineers while code was being written. That's what the chat is. I have a pre-pre-alpha project which already has potentially hundreds of "prompts" - really, turns in continuing conversations. Some of them with one kind of embedded agent, some with another. Some with agents on the web with no project access.
Sometimes I would have conversations about plans that I drop. Do I include those, if no code came out of them but my perspective changed, or the agent's context changed so that later work was possible?
I don't mean to be dismissive, but maybe you don't have the necessary perspective to understand what you're asking for.
> maybe you don't have the necessary perspective to understand what you're asking for.
I disagree. Thinking about this more, I can give an example from my time working as a patent examiner at the USPTO. We were required to include detailed search logs, which were primarily autogenerated using the USPTO's internal search tools. Basically, every query I made was listed. Often this was hundreds of queries for a particular application. You could also add manual entries. Looking at other examiners' search logs was absolutely useful to learn good queries, and I believe primary examiners checked the search logs to evaluate the quality of the search before posting office actions (primary examiners had to review the work of junior examiners like myself). With the right tools, this is useful and not burdensome, I think. Like prompts, this doesn't include the full story (the search results are obviously important too but excluded from the logs), but that doesn't stop the search logs from being useful.
> You might as well ask for a record of the conversations between two engineers while code was being written.
No, that's not typically logged, so it would be very burdensome. LLM prompts and responses, if not automatically logged, can easily be automatically logged.
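As a minimal sketch of what automatic logging could look like, with a hypothetical `call_llm` standing in for whatever client is actually used (nothing here is a real library's API):

```python
import json
import time
from pathlib import Path

LOG = Path("prompt_log.jsonl")

def call_llm(prompt: str) -> str:
    # Stand-in for a real client call (hosted API, local model, etc.).
    raise NotImplementedError

def logged_call(prompt: str) -> str:
    """Call the model, then append the prompt and response to a JSONL log."""
    response = call_llm(prompt)
    with LOG.open("a") as f:
        f.write(json.dumps({
            "ts": time.time(),
            "prompt": prompt,
            "response": response,
        }) + "\n")
    return response
```

Routing every call through one wrapper like this is the point: the log exists as a side effect of normal use, with no extra effort per prompt.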
I love AI side projects; they allow for quick iteration and for finishing ideas that weren't feasible before due to time constraints. I just made one yesterday/today. I used Magit before, and while making this I found out GitUI exists too. But I think it's pretty powerful that if a tool doesn't do exactly what you want and you're opinionated, you can tailor your tools the way you want.
https://github.com/riicchhaarrd/tuide
I think this is less about the projects themselves and more about distribution channels like HN and ProductHunt being dead. When the zone is flooded by vibecoded apps of all kinds, the "build it and they will come" era of getting your thing on a popular website's homepage is over.
But other distribution strategies exist. You just have to be smarter about finding and getting in front of your core audience.
What kinds of other distribution strategies are you thinking of?
The author doesn't pinpoint why exactly he hates AI side projects. Is it because they are low effort? But then he also says his past hobby projects could be built today in a few hours with Claude Code.
AI exacerbates the slop, but it's an old trend. It boils down to "form over substance", which has been an issue in this industry for as long as I can remember; it has just gotten worse and worse, especially over the past decade.
Solid solutions are being overshadowed by AI slop alternatives that were assembled in a few months with no long-term vision; the results look great superficially, but under the bonnet it's inefficient, expensive, closed, and inflexible, and the experience degrades over time. All the essential stuff that people don't think about initially is what's missing.
It feels like the logical conclusion of peak attention economy; the media fully saturated with junk where junk beats quality due to the volume advantage.
Makes me think of how fancy framework and library websites have gotten. Surely, if the website has sumptuous animations, the code must be good.
I've been seeing a lot of posts like this and I don't tend to agree with the "anti-AI sentiment", but I think some of the problems identified might be like this:
People may lack ideas for interesting projects to work on in the first place, so we need to think about how to help people come up with useful and interesting projects to work on.
Related to that idea, people may need to develop skills for building more "complex" ideas, and we may need to think about how to make that possible in the era of AI usage. Even if an AI agent can take care of the technical side of things, it still takes a kind of "complexity of thought" to dream up more complicated, useful, or interesting projects. I get the impression there may be a need for some kind of training of the mind necessary for asking for an "automobile" rather than a "faster horse", by analogy - and that conception was often found through manually tinkering with intermediate devices like the bicycle. An AI could "one shot" something interesting, but what that thing is is limited by the imagination of the user, which may itself be limited by technical inability. In other words, the user may need to develop more technical ability in order to dream up more interesting things to create, even if that "technical ability" is a more "skilled usage of AI tools".
There needs to be some way to filter through the "noise". That's not a new issue, and a lot of these questions or complaints often feel very "meta" - you could just ask AI how to make side projects more interesting or useful, or how to create good filters in the age of AI. In sports there are hierarchical levels of competition; likewise, here you might have forums that are more closed off to newcomers if you want to "gatekeep" things, where projects compete in "local pools" of attention, and if they "win" there, a "qualified authority / leader" submits the project to a higher level, and so on. AI suggests using qualified curators to create newsfeeds or to act as filters of the "slop".
Upvoting not because I necessarily agree, but because I think it's a conversation worth having. I personally think it's great that "everyone can build" now. I don't have to deal with "product people" telling me about their "billion dollar idea" anymore. Just go build it, bro.
Productizing anything is hard, and writing code with AI reliably, securely, and at scale is basically impossible unless you're already an expert in what you're trying to do. For example, I'm working on a project now, and it's kind of endearing watching my AI buddy run into every single pothole I ran into when I first started working with Tauri or Rust.
Unless you know what you're doing (and why you're doing it), AI suggestions are in the best case benign, and in the worst case architectural disasters. Very rarely, I'm like "hm that might be a good idea."
I think AI-aided development will raise the bar for products and make expert engineers something like 10x more valuable. Personally, I'm elated that I don't have to write my 4000th React boilerplate or Rust logging helper anymore.
And the real, actual hard work (as in: coming up with new algorithms for novel problems, breaking problems down so others can understand them, splitting up code/modules in a way that makes sense for another person, etc.) will likely never be doable by AI.
If you're anti (AI) side projects, you're pro corporate feudalism.
I love to resist corporate feudalism by making my entire development process fully dependent on SaaS offerings by corporations like Anthropic and “Open”AI.
Ah yes, because becoming totally reliant on a technology only made possible by weaponising the entire global economy looks nothing like corporate feudalism.
Local models exist. Decouple your dislike of the companies from the technology they're trying to leverage.
Who trained those local models?
AI just inconveniently reveals the situation in current society: don't share your stuff, sell your stuff, because it ends up sold somewhere anyway - the only difference is who sold it.
Posts like this really irk me. There were shitty side projects before AI (trust me, I made a couple), but those people knew how to code, so somehow you were able to separate the noise and the signal easily. Now coding is no longer some crazy skill, and anyone can make side projects.
Just because you can't separate the noise from signal with that easy check doesn't mean these people can't get the joy of side projects. It's especially lazy when the project is open source and you can literally ask CC: "hey, dig into this code, did they build anything interesting or different?" Peter's side projects like Vibetunnel and Openclaw have so many hidden gems in the code (a REST API for controlling local terminals, and Heartbeat, respectively) that you can integrate into your own project. Dismissing these projects as "AI slop" stops you from learning what amazing things people are building, especially when those people have different expertise. Lest we forget, AlphaFold borrowed the transformer model from language research, and sometimes the best discoveries come from completely unrelated fields.
> Just because you can't separate the noise from signal with that easy check doesn't mean these people can't get the joy of side projects
I would love to meet these people that are getting joy out of seeing other people's random fucking vibe coded apps that have zero rigor or skill applied to them lol.
I just gave you two: Vibetunnel and Openclaw. If you think those are worthless, I don't know what else to tell you.
Yes both of those are beyond worthless and largely uninteresting.
That being said, THOSE projects at this point have enough "activity" around them to make them at least somewhat worthy of a post. Which none of the vibe code posts have going for them.
How about the Pi terminal agent? It's been my favorite terminal agent after trying CC, Droid, Opencode, Codex, Gemini, and Amp. Just because you don't find new side projects made using AI valuable doesn't mean they don't exist.
There are more projects I suspect are made predominantly using AI, but I don't want to speculate.
Why was this flagged? It's the best thing I've read here in a while.
I think this problem will start to fix itself as people start to set a higher bar for what they share (or set policies restricting projects that don't show effort).
> Most of my past side projects would take me a few minutes or hours to build with Claude Code. Today, they’re not worth talking about.
What, were they vibe coded in COBOL?
That is: I don't understand why the use of Claude Code itself renders them unworthy of discussion.
"If you don’t have something truly special.."
I'm not sure this article deserves all that much attention if the standard is a subjective interpretation of what is truly special... human made, human directed, or not.
Is there some sort of spectrum of not special, kind of special, pretty special, and truly special?
Does it have to be special for everyone or just some people?
Is it trying to say that people by default build and share things for external validation?
The argument about how people are using AI to solve a problem is akin to how people might feel about someone using a spreadsheet to solve a problem.
Sometimes projects are for learning. Sometimes projects are for solving a problem that's small to others but worthwhile for you to solve.
Insecurity about other people learning to build things for the first time, and then continuing to learn to build them better, might be what this is about, period.
There have always been a great number of problems that never could quite get the attention of software development.
I've genuinely met non-software folks who are interested in first solving a problem and then solving it better and better. And I think that type of self-directed learning in software is invaluable.
AI makes slop, but humans sure seem to like creating the same frameworks over and over in every language and thinking it's progress in some way. But every so often you get a positive shift forward, maybe a Ruby on Rails or something else.
This is some Andy Rooney whine-fest. Who cares? Don’t do it. Don’t read it. You don’t have to announce your displeasure to the world.
> You don’t have to announce your displeasure to the world.
If your spaces that were previously full of interesting things suddenly become deluged with uninteresting things then that is something to complain about.
I'd find someplace else to spend my time if I no longer found a place interesting.
Where do you think you are, right now?
Oh, the irony.
"The worst thing about AI is that EVERYONE can build now."
Come on. This site keeps promoting negative content.
It wasn't like I couldn't build before; it just makes it easier and a hell of a lot more fun now. I just did an AI side project and it was a blast. https://oj-hn.com
AI isn't going to take your job. People who know AI are.
The whole idea of side projects is learning something while building something unique, authentic, and cool. Since the AI hype, most side projects - the ones that get published - are now built solely to grab your money one way or another. Whoever built it thinks that three hours of vibe-coded slop will make them a millionaire by the end of the year, so you end up with so much garbage.
It's why I now only follow actual hardware "hacking" projects: they're more fun to read and follow, and I know they weren't vibe coded either.
What I hate is people posting links to their blog posts which, upon viewing, are really just extended Twitter-sized rants.
Why? What's the cutoff point of length that you would accept? If they'd used AI to artificially increase the length of their content, would you be happier?
My delivery could have been better. It was half satire and half a jab, since it's a self-promoted post pointing out AI slop without much thought behind it. Even if human-written, it probably carries even less weight than an AI slop side project.
That they wrote such a quick note about AI slop a few days after they had a Show HN with something that looks a bit like AI slop amused me.