> This exact thing is what software developers have been begging for since the beginning of the profession: Receiving a detailed outline of the problem and what the end result should look like.
> This is often the part that slows down software development. Trying to figure out what a vague, title only, feature request actually means.
But that is exactly what Software Engineering is! It's 2026 and the notion that you can get detailed enough requirements and specifications that you can one-shot a perfect solution needs to die.
In my experience AI has made us able to iterate on features or ideas much faster. Now most of the friction comes from alignment and coordination with other teams. My take is that to accelerate processes we should reduce coordination overhead and empower individuals and teams to make decisions and execute on them.
> It's 2026 and the notion that you can get detailed enough requirements and specifications that you can one-shot a perfect solution needs to die.
It's 2026 and the idea that even with detailed-enough requirements you can one-shot even a workable (let alone perfect) solution also needs to die. Anthropic failed to build even something as simple as a workable C compiler, not only with a perfect spec (and reference implementations, both of which the model trained on) but even with thousands of tests painstakingly written over many person-years. Today's models are not yet capable enough to build non-trivial production software without close and careful human supervision, even with perfect specs and perfect tests. Without a perfect spec and a perfect human-written test suite the task is even harder. Maybe in 2027.
Sorry, where are we seeing that it failed? It compiled multiple projects successfully, albeit less optimized.
"
It lacks the 16-bit x86 compiler that is necessary to boot Linux out of real mode. For this, it calls out to GCC (the x86_32 and x86_64 compilers are its own).
It does not have its own assembler and linker; these are the very last bits that Claude started automating and are still somewhat buggy. The demo video was produced with a GCC assembler and linker.
The compiler successfully builds many projects, but not all. It's not yet a drop-in replacement for a real compiler.
The generated code is not very efficient. Even with all optimizations enabled, it outputs less efficient code than GCC with all optimizations disabled.
The Rust code quality is reasonable, but is nowhere near the quality of what an expert Rust programmer might produce.
"
For faffing about with a multi-agent system, that seems like a pretty successful experiment to me.
Anthropic said the experiment failed to produce a workable C compiler:
- I tried (hard!) to fix several of the above limitations but wasn’t fully successful. New features and bugfixes frequently broke existing functionality.
- The compiler successfully builds many projects, but not all. It's not yet a drop-in replacement for a real compiler.
> Like I think people don't realize not even 7 months ago it wasn't writing this at all.
There's no doubt that producing a C compiler that isn't workable and is effectively bricked as it cannot be evolved, but still compiles some programs, is great progress, but it's still a long way off autonomously building production software. Can today's LLMs do amazing things and offer tremendous help in software development? Absolutely. Can they write production software without careful and close human supervision? Not yet. That's not disparagement, just an observation of where we are today.
> Can they write production software without careful and close human supervision? Not yet. That's not disparagement, just an observation of where we are today.
I never claimed they could! I just view this as a successful experiment. I don't think Anthropic was making that claim with their experiment either.
It feels reflexive to the moment to argue against that claim, but I tend to operate with a bit more nuance than "all good" or "all bad".
The experiment failed to produce a workable C compiler despite (1) the job not being particularly hard and (2) the available specs and tests being of a far higher class of quality than almost any software's, not to mention the availability of other implementations that the model trained on.
You can call that a success (as it did something impressive even though it failed to produce a workable C compiler), but my point in bringing this up was to show that today's models are not yet able to produce production software without close supervision, even when uncharacteristically good specs and hand-written tests exist.
Saying the model failed to write a competitive C compiler makes more sense.
I don't think they tried to do that though.
> today's models are not yet able to produce production software without close supervision, even when uncharacteristically good specs and hand-written tests exist.
That's great and all, but that's not the point I was making, and you're engaging rather uncharitably with it. When you view it from the perspective of capability increase, it's rather impressive. Note the slope of progress, which is what this experiment was meant to show.
Edit: Maybe uncharitably is too strong, but we're talking past each other.
Yeah I think people are really underestimating what LLMs can do even without specs.
As an example, I did an exploratory attempt to add custom software on top of some genuinely awful Windows software for a scientific imaging station with a proprietary industrial camera. Five days later Claude and I had figured out how to USB-pcap sample images, and it's operationalized and has been running smoothly for months now. 100% of the code was written by Claude, and it's all clean (I reviewed it myself); pretty much all I did was unstick it in a few places ("hey, based on the file sizes it looks like the images are being sent as a 16-bit format").
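To give a flavor of the kind of code involved (a hypothetical sketch, not the actual project: the resolution, endianness, and payload framing here are all assumptions), once the USB bulk payloads are extracted from the pcap, decoding a frame is roughly:

```python
# Hypothetical sketch: turn a raw USB bulk-transfer payload (already
# extracted from the pcap) into a 16-bit grayscale frame. The sensor
# resolution and little-endian byte order are assumptions.
import numpy as np

WIDTH, HEIGHT = 2048, 1536           # assumed sensor resolution
FRAME_BYTES = WIDTH * HEIGHT * 2     # 16 bits per pixel

def decode_frame(payload: bytes) -> np.ndarray:
    if len(payload) != FRAME_BYTES:
        raise ValueError(f"expected {FRAME_BYTES} bytes, got {len(payload)}")
    # The bytes-per-pixel arithmetic (file size / pixel count == 2) is
    # exactly the clue that pointed at a 16-bit format.
    return np.frombuffer(payload, dtype="<u2").reshape(HEIGHT, WIDTH)
```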
For day-to-day work, I'll often identify a bug: "hey, when I shift-click on this graphical component, it's not doing the right thing". I then tell Claude to write a RED (failing) integration test, then make it pass.
Zero lines of code manually written. Only occasionally do I have to intervene and rearchitect. Usually this involves me writing about ten lines of scaffold code, explaining the architectural concept, and telling it to just go.
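The loop is simple enough to show in a sketch. Everything here is hypothetical (GraphPanel and its shift-click API are made-up stand-ins for whatever component the bug report names), but this is the shape of the RED test I'd ask for:

```python
# Hypothetical RED test: GraphPanel and its click() API are made-up
# stand-ins for the real component under test.
from myapp.ui import GraphPanel  # hypothetical module

def test_shift_click_extends_selection():
    panel = GraphPanel(points=[(0, 0), (1, 1), (2, 4)])
    panel.click(index=0)
    panel.click(index=2, shift=True)
    # Fails (RED) until the shift-click handler is fixed; Claude then
    # iterates on the implementation until it passes (GREEN).
    assert panel.selected_indices == [0, 1, 2]
```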
People both underestimate and overestimate what LLMs can do. LLMs have shown very different results when autonomously writing a small program for personal use and autonomously writing production software that needs to be evolved for years.
I wonder how knowledgeable in compilation the engineer who attempted this was. I'm pretty confident that I could produce a decent C compiler in a few weeks (or less) if given Opus 4.7 + unlimited tokens + a good test suite. (And this is not blind, unsubstantiated belief in AI: I've recently rewritten a somewhat sophisticated interpreter in a week with AI, and I've worked on several C++ compilers in the past, including a GCC port to a custom DSP, so I have a bit of an idea of what this would take.)
But yeah, this is not a "one shot" project, none of it is. One shot doesn't work even with humans - after all, this is exactly what killed waterfall as a methodology.
> I'm pretty confident that I could produce a decent C compiler in a few weeks (or less), if given Opus 4.7 + unlimited tokens + a good test suite.
Of course. The point is that a full, detailed spec isn't enough (even in the rare situations it does exist, like for a C compiler). At least for the moment, you need expert humans to supervise and direct the agents.
Vibe coders usually also let the agents write the tests, which means that the only independent human validation of the software is some cursory manual inspection. That also obviously isn't enough to validate software.
> One shot doesn't work even with humans - after all, this is exactly what killed waterfall as a methodology.
You can one-shot a C compiler with humans. LLMs' software development ability is impressive and helpful, but it is not human-level yet, even if at some tasks the agents are better than most human programmers. And while many waterfall projects failed, many succeeded (although perhaps not as efficiently as they could have). So far I don't believe agents have been able to produce any non-trivial production software autonomously.
yeah, the key part is that there be a human in the loop, directing and course-correcting the ai while it produces code in reasonably small and well defined stages.
A workable C compiler is a ~10-50KLOC program, and a fairly simple one at that (batch, with no concurrency or interaction). That Anthropic's swarm of agents wrote 100KLOC before failing is a symptom of the problem. It's certainly possible that many programs are in the sub-5KLOC range, but that's definitely not "most software". Plus, almost no software has this level of detailed spec, ready-made tests, and a selection of existing implementations of the same spec.
My first thought when reading Anthropic's description of the experiment was that it is unrealistically easy. It's hard to come up with realistic jobs in the 10-50KLOC range that would be this easy for an LLM. That it failed only shows how much further we still have to go.
A bit off topic, but see how Anthropic's publicity stunts went from the "Claude C Compiler" with 100K LOC to the recent Bun Rust rewrite with 1M LOC (10x!) in just 3 months.
I get that it's "novel" creation vs porting, but given that they reported that the C compiler cost them $20k in API costs, the Bun rewrite must be at least $200k, maybe even closer to a million. Pure madness.
Asking an LLM to change the programming language of an implementation is completely different from asking it to code from spec. It's orders of magnitude simpler in practice. I converted some 60 KLOC of Java to C++ and it works. There were some issues where the Java implementation used runtime reflection, because that needs creative workarounds, and not all of the C++ translations worked on the first try. And that was my first serious attempt at a task with an LLM; I could likely do better now. An important simplification here is that a well-designed codebase can be converted in small pieces and then joined back together, so the total amount of code converted becomes an irrelevant metric.
I don't know how it could fail - Bun loses popularity among devs? Is it an objective metric? From what I understand, Node.js remains dominant across the industry as a whole, with Deno and Bun mostly used by startups.
Anthropic can always fire the Opus/Mythos token machine gun on any problem (bugs, features, security) to ensure PR success, and there would be plenty of AI-sphere startups already drinking the kool-aid that would consider the whole vibe-coding thing to Bun's benefit.
> Anthropic can always fire the Opus/Mythos token machine gun on any problem (bugs, features, security) to ensure PR success,
Can they, though? They tried and failed to do it in their C compiler experiment. The experimenter wrote: "I tried (hard!) to fix several of the above limitations but wasn’t fully successful. New features and bugfixes frequently broke existing functionality."
Does Firefox not have tests? Then how were over 200 CVEs found?
Are we going to be comfortable running a piece of software that has 1M lines and who knows how many zero-days in it?
Yes, sure, they are going to use LLMs to find the CVEs, and so will the hackers. You need a day or two to fix a security issue; a hacker just needs to put it to use.
People who independently tried to use it reported that it is very much not workable:
- "CCC compiled every single C source file in the Linux 6.9 kernel without a single compiler error (0 errors, 96 warnings). This is genuinely impressive for a compiler built entirely by an AI. However, the build failed at the linker stage with ~40,784 undefined reference errors."(https://github.com/harshavmb/compare-claude-compiler)
- Overall it’s an interesting experiment, and shows the current bleeding edge of Claude’s Opus 4.6 model. However the resulting product is also a clear example of the throwaway nature of projects generated almost entirely by AI code agents with little human oversight. The prototype is really impressive, but there is no real path forward for it to be further developed. It can build the Linux kernel [for RISC-V], which is impressive. It can also build other things… if you are lucky, but you really cannot rely on it to work. (https://voxelmanip.se/2026/02/06/trying-out-claudes-c-compil...)
Anthropic themselves said that the codebase was effectively bricked and that their agents could not salvage it.
Well then, as you say, a workable C compiler is 10-50KLOC. Could you show me a C compiler of that size that does manage to compile a modern Linux kernel?
I can make a C compiler in a couple weeks just by looking up open source libraries and copying them.
I can't make any software that people will pay me money to use without months/years of development, research, experimentation and iteration.
Just because the original people who invented compilers had to be geniuses doesn't mean anyone has to spend much time or thought copying that work now.
I built a compiler for a simpler language as part of my compilers course in a CS degree. It was a non-trivial exercise well beyond the majority of software applications. What open source libraries did you have in mind and what are you copying?
If you can truly write a C compiler in weeks then kudos to you. How many compilers have you written so far for how many languages?
I work for big tech and I would say a large % of developers are incapable of producing a working C compiler on any reasonable time scale, certainly not weeks, even with looking at open source. I'm sure they can download one and run it. Most developers today don't even know C or assembler. They don't know how to approach the C language spec. The top 5-10% of developers/engineers can do it but even for them it's non-trivial.
> It was a non-trivial exercise well beyond the majority of software applications
That depends on how you count. By number of programs that may well be right, but that's not what matters in terms of impact on the industry, as software value roughly corresponds to the number of people working on a particular piece of software (or lines of code, if you wish). By number of people/LOC most software is not in the "simpler than a C compiler" category.
I regularly get pieces of work some product guy has thought up in an afternoon. They only care about the happy path, and sometimes only part of the happy path. I work for a global company that has to abide by rules and regulations in each country we operate in. The product guy thinks up some feature, we implement the feature, then we're told "actually, we legally aren't allowed to do this in 90% of the markets we operate in". Cool, so we add an ability to disable it in those markets. Then they come back: "We can do this in some of those markets if it's implemented with [regulatory bureaucracy], so can you do that please".
Then we have to hack away at the solution because the deadline is right around the corner.
This is not software engineering! None of this is related to the software. The job of a software engineer is to take a list of requirements and figure out the way we accomplish those requirements. Requirements gathering is NOT a software engineering problem. Software is implementation, product is behaviour. That's the split. The behaviour of the thing we're building needs to be known before we even try to seriously build it.
If someone had just held back for a week and done their due diligence, we would have been able to architect a solution that is scalable, extensible, easy to maintain, and makes the future easier.
> Requirements gathering is NOT a software engineering problem. Software is implementation, product is behaviour. That's the split.
That's a theory but I've never seen this work in practice. A piece of software is unique. If it weren't, we'd just use the cp command.
What usually happens is you get a set of requirements that looks simple. Then you start thinking about a design and see 10 different possibilities, each corresponding to a slightly different interpretation of the requirements set. You iterate a few times, reviewing the designs with whoever set the requirements and a few peers, and see more possible variations to the requirements. You need to double-check its parent requirements up to the master requirements. Then you need to make time/feature/quality tradeoffs, affecting the fulfillment of requirements.
Once you start to implement, you see dependencies on other software (framework, SDK, drivers, language features, ...) and understand that the other software is not what you thought, or has bugs. Or you see an issue with performance, or see that one particular feature becomes unfeasible.
That's where all the complexity goes. AI doesn't change that, but can make prototyping iterations and bug hunting faster, as long as someone holds it on a leash and understands its decisions.
I completely agree. It's more than 40 years since I wrote my first program, and I've never seen software that was first specified and then written and all was good.
The most difficult part of any non-trivial engineering is understanding the problem, and the first versions of a piece of software are how you reach that understanding.
That's why I do not think that AI-powered "software factories" will ever work. It's waterfall development all over again. An architect writing UML diagrams and handing them off to the team of programmers to do the essentially mundane task of implementing... the wrong thing.
AI is, however, very good at helping you go fast from the wrong first version to the less wrong second one. But you need to remember that your main task is to understand the problem that you are trying to solve.
If they can't at least imagine the golden path themselves and write it down, they shouldn't be in charge of the product, because they will be unlikely to understand any other in-depth conversations about it. And I have no idea how they'd have coherent conversations with anyone above them either. They're also unlikely to use AI well or to identify bad-out-of-the-gate solutions. It is of course different if they're just gathering opinions or want a PoC or exploratory work done, but those aren't requirements to me.
Developers are unlikely to be doing only development these days. There's ops and support to do as well, so more back and forth means less time for those things and for development.
We need to meet in the middle about requirements otherwise developers will end up doing someone else's job for them.
I'm seeing decision-makers / people who write requirements starting to use AI as well in my day to day. As before, my job is to read, understand and test those requirements against the real world as I understand it. But same with code. Software engineering for the past (at least) 20 years has had a core focus of "don't trust anyone", this hasn't changed and this takes a lot of time and effort still.
The problem is that instead of trying to figure out what they really want/need, now we're trying to figure out what they really wanted or needed before it got obfuscated by the babble-machine.
Yeah I agree, such a fundamental aspect of software engineering is translating ambiguous “asks” into specific requirements. We now have a tool to convert those requirements directly into code.
And yes, architecture and how to actually implement the designs are also part of the requirements.
The code is just the implementation, the actual problem that needs solving is one abstraction level higher.
It's UML and outsourcing all over again: If only we can write the perfect UML diagrams representing the ideal class hierarchy, we can just put that in an email, send it to India, then we'll get back exactly the program we wanted, no mistakes!
> Trying to figure out what a vague, title only, feature request actually means.
> My take is that to accelerate processes we should reduce coordination overhead and empower individuals and teams to make decisions and execute on them.
This is funny because it's exactly what the agile/scrum training taught me 20 years ago.
I think when LLMs first came out people thought they could just say something like, "Make a Facebook clone". But now we're realizing we need to be more exact with our requirements and define things better. That has always been the bottleneck in software.
When I was working we used to get requirements that literally said things like, "Get data and give it to the user". No definition of what the data is, where it's stored, or in what format to return it. We would then spend a significant amount of time with the product person trying to figure out what they really wanted.
In order to get good results with LLMs we need to do something similar. Vague requirements get vague results.
In what I've seen, tickets are much richer in detail now because PMs are using AI (connected to the codebase itself, like Claude Code or Codex) to fill out a template covering what the problem is and why (e.g. X field exists in the backend but not the frontend), how and where to get any data (query the backend), and what acceptance criteria are needed (the frontend should expose the field, and "submit" should push the field's data to the backend, where it should show up in the database). That's something they would not have done before, due I guess to laziness and to thinking the devs can figure it out. Then devs can copy-paste this Jira ticket content into the LLM agent of choice (or even use the Atlassian MCP to have the LLM read it automatically).
This has significantly helped devs and made sure that requirements are very clear.
Honestly, with that first step the PMs are already halfway to implementing the feature, so I wonder if in the future they'll just do everything themselves and a few devs will be around as SDETs rather than full-blown implementers.
I can't imagine SWEs will be reduced to SDETs any more than attorneys will be reduced to spell-checkers on AI-powered case briefs.
I am a very AI-forward person, but hallucinations are becoming more pernicious than ever even as they get less frequent, especially if the code actually works. A human absolutely has to guide these processes at a macro level for sustainability for SaaS as it evolves with business needs.
Maybe for one and done systems with no maintenance/no updates/no security patches you can reduce humans to SDETs, but systems like that are more the exception than the norm.
Even more than the "hallucinations", I've noticed the code is just generally quite bad.
At least with concurrent and distributed systems stuff (which is really all I know nowadays), it is great at getting a prototype, but the code is generally mediocre-at-best and pretty sub-optimal. I don't know if it's because it is trained on a lot of mediocre and/or buggy code but for concurrency-heavy stuff I've been having to rewrite a lot of it myself.
I think that AI is great for getting a rough POC, and admittedly often a rough POC is good enough for a project (and a lot of projects never get beyond a rough POC), but I think software engineers will be needed for stuff that needs to be more polished.
By SDET I mean one who reviews rather than writes code; maybe we have different definitions of that term, because you also mention humans being needed to guide the processes.
Even still, other professions interact with the real social world which is not necessarily the case with programming. A lawyer will always be needed because judgments are and must be made by humans only. Software on the other hand can be built and tested in its own loop, especially now with human readable specifications. For example, I wanted to build an app and told Claude and it planned out the features, which I reviewed and accepted, then it built, wrote tests, used MCPs including the browser for interacting with the UI and taking screenshots of it, finding any bugs and regressions, and so on until an hour later it came back with the full app. Such a loop is not possible in other professions.
This afternoon I was speaking with a friend and mentioned that I need to find a lawyer for contracts. His immediate response was, "you don't need a lawyer, just use AI". Not an avenue I'm interested in going down.
IMO the code-generation for boilerplate and the improvement of copypasta quality are much bigger improvements than that.
PMs turning their brain off and letting the LLMs extrapolate from quick and dirty bashing of text into a template (or, PMs throwing customer feedback at a slackbot to generate a jira ticket form it) can be better than PMs doing nothing but passing ill-defined reqs directly into the ticket, but that's a low bar. And it doesn't by itself solve the problems of the details that got generated for this ticket subtly conflicting with the details that got generated for (and implemented) in a different ticket 8 months ago.
> Honestly, with the first step, it seems the PMs are already halfway there to implementation of the feature so I wonder if in the future they'll just do everything themselves
I'm guessing they've tried (or been induced to try by upper management), but given up because they don't know how to debug any problems that arise due to the LLM working itself into a corner.
Coding-agent LLMs act a lot like junior devs. And junior devs are: eager to write code before gathering requirements; often reaching for dumb brute-force solutions that require more work from them and are more error-prone, rather than embracing laziness/automation; getting confused and then "spinning their wheels" trying things that clearly won't work instead of asking for help; not recognizing when they've created an X-Y problem, and have then solved for their Y but not actually solved for the original problem X; etc.
The way you compensate for those inexperience-driven flaws in junior devs' approach, is to have them paired with, or fast-iteration-code-reviewed by, senior devs.
Insofar as a PM has development experience, it's usually only to the level of being a "junior dev" themselves. But to compensate for LLMs-as-junior-devs, they really need senior-dev levels of experience.
The good PMs know all of this, and so they're generally wary to take responsibility for driving the actual coding-agent development process on all but the most trivial change requests. A large part of a PM's job is understanding task assignment / delegation based on comparative advantage; and from their perspective, it's obvious that wielding LLMs in solution-space (as opposed to problem-space, as they do) is something still best left to the engineers trained to navigate solution-space.
Considering that that’s been a running complaint for like 50 years, it doesn’t seem like project management is going to get better on its own at this point. So, yes, an LLM does represent a productivity boost in that area.
The problem is that organizations are inefficient in such a way that extra output from white collar workers doesn't translate to improved org-wide performance in a positively correlated, linear fashion.
When the org is misaligned, mismanaged, has poor customer feedback loops, bad product market fit, too much bureaucracy, etc etc no amount of AI slop is going to make a meaningful impact on its bottom line. In fact, it will likely do the opposite through combination of exponentially increasing complexity, combined with worker force deskilling, layoffs, and rising token prices. Real bottleneck is and always has been communication & alignment.
It might make the employees _happier_ in the interim though, which, I believe, is what we're predominantly seeing during this AI mania. People fed up with the bullshit jobs of rewriting the same service for the 5th time in 2 years or creating TPS reports weekly just for their manager to throw them directly in the trash are absolutely giddy that they no longer have to do this manually. I think we need to question the economic value of these jobs in the first place, though.
I've worked at big tech prior to LLMs becoming a thing, and consistently saw projects of 20-50 people carried by 2-3 individuals that actually understood what needed to be done. I don't think this ratio will be any better with genAI, and I also don't think that tokenmaxxing has any meaningful correlation with impact. Bullshit jobs (and questionable personal projects) just get done faster now. Yay, I guess.
In the long run these highly inefficient firms are going to get destroyed by people who have a vision and can do what 100+ firms are doing and package it together as one solution that is far superior on dimensions that matter to firms.
The idea that PM tickets are now much improved because they paste their unbaked wrong "idea of what the ticket is" into ChatGPT to expand into a 500 word behemoth is hilarious.
At least when the PM still wrote it you could outright tell it was bullshit and made no sense. Now that is just obfuscated.
Not sure what your point is, LLMs don't have to be all that great to still show a productivity boost and especially if the organization is inefficient, then even more so.
If you do that, someone still needs to make sure the details make sense, which, from experience, sometimes they will and sometimes they won't. When I open tickets using automation I often back into the ticket from a running implementation that passes tests, so the description is at least internally consistent, but there are often still issues that need correcting.
That's what a good PM and developer pair should be doing, it's just that it's a lot faster for both of them now to review and work in tandem to get the feature done, because the bottleneck is the code generation.
The PMs validate it, why do you think they don't read over it to make sure it fits what they want? You might say "well they're lazy, look why they didn't write enough detail to start off with" but for lots of people, reviewing something to make sure it's close to what they want and then tweaking it is much easier than writing it from scratch.
It's the equivalent of writer's block and is why a common advice given to writers is to put anything they can onto the page then edit it later.
> The PMs validate it, why do you think they don't read over it to make sure it fits what they want?
The PM has historically often not had a detailed enough mental model of the implementation to spot the hard parts in advance or a detailed enough mental model of the customer desires to know if it's gonna be the right thing or not.
Those are the things that killed waterfall.
You can use LLM tools to help you improve both those areas, synthesizing large amounts of text and looking for inconsistencies.
But the 80th-percentile-or-lower person who was already not working hard to try to get ahead of those things still isn't going to work any harder than the next person and so won't gain much of a real edge.
I'm glad you mentioned it, and TFA briefly mentioned waterfall. The second graph shown in the article, with documentation overlapping the dev cycle, is like the worst of both agile and waterfall. It's supposedly real-time waterfall.
Normally waterfall works where the scope is extremely well defined and articulated in design plans, which shortens dev time because, prior to AI, code was mostly deterministic. Here we have to do a waterfall level of documentation while iterating on a non-deterministic solution (code gen) to non-deterministic requirements (per usual).
It's bonkers.
I still think the technology is cool though.
And to answer the questioner: have you worked with a PM? Most of the ones I've worked with try to be simultaneously in charge yet not responsible for anything. Validating something implies skill and responsibility.
Then they're just bad PMs and don't deserve to have the job. That can be said in any profession, devs or lawyers or doctors who blindly accept LLM output without review are bad employees.
> Then they're just bad PMs and don't deserve to have the job.
Nobody "deserves" anything. They do have the jobs though. Thinking that the world isn't full of people doing what they need to do to get by who don't give a shit about fitting a fantasy ideal is wild.
Deserving and having are two different things, that doesn't mean they can't be criticized either way. By the same logic bad devs and bad dev practices can also be criticized.
I think validating a fully generated novel of a ticket is much harder than thinking through the problem in the first place and creating your own ticket.
We see it with code too right? It’s harder to review code than to write it.
On top of that the LLM can work so fast that the amount of things that need validating grows!
This is where humans get lazy and the problems come in, IMO. Whether it's a PM not validating their ticket, or a dev doing a bad code review.
Add on to that that the incentives currently are to move fast and trust the AI.
It becomes clear to me that a lot of that review work either won’t be done at all, or won’t be nearly thorough enough.
The tickets are not "novel"-length; they are about a few bulleted lists covering the sections I mentioned above. In that case it is indeed way easier to review than a ticket only saying "do X with Y data."
Reviewing code is harder than reviewing text because code does something and has interdependencies and therefore must be correct in its function, do not mix the two. This is like saying an editor reviewing an article or novel is harder than actually writing the novel which is blatantly incorrect.
Most? That's doubtful especially when a lot of tickets are simply CRUD which are fine being generated by an LLM. Those that are more complex require more review and interdependency management, sure, but to say that that is most tickets is simply not correct.
I agree. I hate getting tickets like this because they’ve often gone down the wrong path and I have to work backwards to understand the actual problem and the right way to solve it
Just this week I pushed back on some requirements in a very detailed product spec I was implementing, to speed up time to ship. The PM had no idea what I was talking about because the requirements were invented by an LLM. This is not a bad PM; discipline doesn't scale.
Maybe you both just have bad PMs, because just like good devs they should also be reviewing their work. My point was that it is more likely for PMs to review and edit a generated ticket than to have to write it all themselves which they often won't do.
I feel compelled to point out to you that this is a completely unsustainable, unsupportable, unsubstantiable claim. You have met ~0% of PMs, and of the ones you've met maybe you've experienced a non-zero percentage of their work, but statistically that's also very unlikely.
If you think you can say what most PMs do or what PMs are likely to do, then, I'm sorry, but you are not even thinking like an engineer. You're thinking, actually, a lot more like a PM to many of us.
> just like good devs
I'm so sorry, my sides just can't handle the starry-eyed nature of these takes. This is just too much for me.
To many of us this reads like you've never met people before. But who knows, maybe you live in Lake Wobegon, where all the women are strong, all the men are good-looking, and all the children are above average! If so then we're jealous, but you still should be more careful about how unrigorous your mental model is because it will make you a worse engineer.
Experience with different PMs and developers aside, the older you get in the profession the more you will hopefully realize that none of your quality effort fantasy matters. Sales happen and money rolls in independently of whether you think the PMs or the people who call themselves engineers do a "good job". Businesses thrive on sales and marketing, not engineering.
This failure is human laziness, not an issue with the technology. People who use AI because they are trying to avoid doing work fall into a completely different category than people using AI as a force multiplier and for skills/capabilities enhancements / quality improvement.
This is very much a "you're holding it wrong" response.
If your technology relies on humans using it in ways that go against the ways they are inclined to use them, then that is an issue with the technology.
I don't think that works as a critique of LLMs because it's far too broadly applicable to well-accepted tools.
Are advanced calculators bad because a student could use the CAS to ace calculus homework, exams or the SAT without actually learning the material?
Is copy/paste bad because a person could use it to copy/paste code from one place to another without noticing some of the areas they need to update in the new location, adding bugs and missing a chance to learn some more subtleties of the system?
Is Git bad because a manager could use it to just measure performance by number of lines of code committed instead of doing more work to actually understand everyone's performance?
Many tools can be used lazily in ways that will directly work against a long term goal of improving knowledge and productivity.
but in this case that's exactly what AI is doing, and no more. it's filling in the gaps with some plausible sounding goo so that the person doesn't have to worry about the details.
ok, so for some of the jobs we're doing, plausible sounding goo is just fine. and that's kinda sad. but the 'just playing around' case is fine for PSG; this isn't a serious effort, just seeing how things might work out without much effort.
taking the remainder, where understanding and intent are important, the role of the ai is to produce PSG, but the intentional person now goes through everything and plucks out all the nonsense. this may take more or less time than simply writing it, but we should understand this results in less real engagement by the ultimate author. where this is actually interesting is as a parallel to Burroughs' cut-up method - where source text and audio were randomly scrambled and sometimes really clever and novel stuff pops out.
but to say the current model of vibe coding has much to offer in the second case is really quite unclear. to the extent that coding is the production of boilerplate, that is really a problem with APIs and abstraction design. if we can get LLMs to mitigate some of that in the short term without causing too much distraction, that's fine, but we should really be using that to inform the solution to the fundamental problem.
so for me what's missing in your model is how LLMs are supposed to be used 'properly'. I don't think laziness is really the right cut here, make-work is make-work, and there's plenty of real work to be done. but in what sense does LLM usage for code actually improve our understanding of these systems and get us more agency?
I don't disagree with your take on most jobs or vibe coding as shown in countless proof-of-concept/0-to-1 demos. But the comment I was replying to was dismissing this statement from another commenter:
> People who use AI because they are trying to avoid doing work fall into a completely different category than people using AI as a force multiplier and for skills/capabilities enhancements / quality improvement.
This statement is absolutely true. There are ways to use LLM tools to significantly improve the quality of your work instead of to avoid doing hard work. (And the result can easily become something that requires more hard thought, not less.)
Some that I frequently enjoy that are usable even if you don't want the machine to generate your actual code at all:
* consistency-check passes asking it to look for issues or edge cases
* evaluation of test coverage to suggest any missed tests or proposed new ones
* evaluation of the feasibility of different refactoring approaches (chasing down dependencies and call trees much faster than I could by hand, etc.)
> to the extent that coding is the production of boilerplate, that is really a problem with APIs and abstraction design. if we can get LLMs to mitigate some of that in the short term without causing too much distraction, that's fine, but we should really be using that to inform the solution to the fundamental problem.
I generally would disagree with this, though. I don't think it's solely a problem of abstraction design; I think the inherent complexity of many systems in the business world is very high (though obviously different implementations make it different levels of painful). If that's a problem, it's a people/social one, not a technology problem.
In my future we lean into the fact that people want features, they want complexity, for many things - everybody's ideal just-for-them workflow/tooling would look slightly different from the next person's - and use these tools to build things that do more, not less. Like the evolution of spellcheck from something you manually ran, to something that constantly ran, to something that can autocorrect generally usefully when typing on a touchscreen.
Let's get back to finding more features/customization to delight users with.
> This is very much a "you're holding it wrong" response
This isn’t actually an argument for or against anything, I don’t know why people say this. It is entirely possible that people are using this brand new, historically unprecedented tool wrong.
Cars have been a huge success in spite of requiring people to learn a bunch of new things to use them.
It's not about having to learn things; it's about the required methods of using the tool going directly against the grain of the way people in general operate.
The classic "you're holding it wrong" was about the iPhone 4: sure, people could learn to hold the iPhone in such a way that they didn't block the particular parts of the antenna that were (supposedly) the problem. But "holding an iPhone" is a fairly natural thing to do, and if the way that people are going to do it naturally doesn't allow its antenna to connect properly, then that's a technology problem, not a human problem.
If the selling point for AI is "you can just talk to it, and it will do stuff for you!" (which may or may not be yours, personally, but it is for a lot of people), then you have to be able to acknowledge that "describing a problem or desire using natural language" is something that humans already do naturally. Thus, if they have to learn to describe their problem in very specific ways in order to get the AI to do what they want, and most people are not doing that, then that's a failure of the technology.
For the specific case at hand, what's being described is similar to the problem of self-driving cars: you're selling the benefit as being the AI taking a lot of the work off your shoulders; all you have to do is constantly check its work just in case it makes a mistake. Which is something that we already know, empirically and with lots and lots of data, that humans are bad at.
Once again, it's a technology issue. Not a human issue.
> selling the benefit as being the AI taking a lot of the work off your shoulders; all you have to do is constantly check its work just in case it makes a mistake.
Cars can take you from place to place much faster than a horse can, all you have to do is learn to drive and constantly keep your hand on the wheel.
Part of using a technology is, well, learning how to use it. It's not the technology's fault that humans are lazy or not able to pay attention and crash.
Maybe they are holding it wrong then. Like someone else said, people had to be taught how to drive a car and that cannot be in any sense said to be the car's fault.
Some people are lazy, plain and simple. If they want to blindly accept what the LLM tells them without critical analysis and review then that's on them.
Maybe for some subset of software (like CRM panels or something) PMs will do everything. But if you're projecting the way one sort of software (i.e. user-facing, business-use-oriented software) is developed and put to use onto software writ large, then no, I don't think so.
Sure, I'm just talking about 90% of software which is basic CRUD, not complex systems or microcontroller programming. In that case it's likely that just a PM could build something with LLMs.
> Honestly, with the first step, it seems the PMs are already halfway there to implementation of the feature so I wonder if in the future they'll just do everything themselves
Yes please, I've seen the vibecoded slop PMs put out every day because software engineering is simply not a skill they have, and I'd love to make a LOT of money fixing their crap once it dies in production <3
I’m a former PM who’s now a founder and all the engineers I worked with loved me.
I can tell you right now most PMs are absolutely useless glorified project managers who don't know how to think, get in the way, and don't know how to enable engineers to be more productive.
> I think when LLMs first came out people thought they could just say something like, "Make a Facebook clone". But now we're realizing we need to be more exact with our requirements and define things better. That has always been the bottle neck in software.
This was substantially predicted by Fred Brooks in 1986 in the classic No Silver Bullet [1] essay, under the sections "Expert Systems" and "Automatic Programming".
In it, he lays out the core features of vibe coding and exactly the experience we are having now with it: initial success in a few carefully chosen domains and then a reasonable but not groundbreaking increase in productivity as it expands outside of those domains.
The LLMs turn out fully formed clones of stuff for which there exists copious amounts of code openly searchable on the web doing the exact same thing.
LLMs require developer-like specification, task/subtask breakdown and detail where such example code already exists.
As a professional prior to LLMs, how many problems that you work on have many existing free solutions but you neglected to use that code and decided to spend days doing it yourself?
Well put, and same challenge to a lot of these demos & LoC numbers: if you were a pro prior to LLMs, how many of these demos could you fully recreate if you ignored copyright?
I’ve often reimplemented things at work that exist elsewhere. If I could just copy & paste whole solutions from GitHub and change the branding/naming slightly, I could make curl in an afternoon.
> how many problems that you work on have many existing free solutions but you neglected to use that code and decided to spend days doing it yourself?
Only when the existing free solutions are licensed with something like GPL. Now I can just say, write me a C webserver library similar to mongoose and I get the functionality without the license burden.
You might as well have ignored or removed the GPL notice. Running it through the LLM laundering gets you a "fork" of unknown origin, questionable quality. You're still potentially open to supply chain issues but the chain is obfuscated.
And you now own full responsibility for maintenance.
I just vibe coded a SOCKS proxy because the existing ones were too thick. And let me tell you, you are absolutely right: Go libraries I've never heard of, new implementations that have not been tested... I think the word for this is YOLO.
We now have product owners trying to farm out their work to an LLM. The process didn’t work before because the person writing the requirements either put out vague requirements or bad requirements because they didn’t understand the business intent (or were careless).
LLMs just take the same vague or poor requirements and make them look believable until you dig in to them.
> The process didn’t work before because the person writing the requirements either put out vague requirements or bad requirements because they didn’t understand the business intent (or were careless).
You make it sound like writing good requirements is easy.
If it were easy we wouldn't need all these concepts around PMF, product pivots and the like. And even before that was Peter Naur's paper "Programming as Theory Building" [1].
If you truly understand the problem you're solving with software then requirements can be easy. But usually we don't, not right away, and so we have to build up our understanding of the problem first in order to solve it.
Even then, the problem we solve may not have been the problem paying users will have, so you can have "good requirements" and still have a bad business, or even the opposite where you somehow build a working business despite bad requirements, because you hit upon a customer's need quite by mistake.
Nothing about any of this precludes LLMs being helpful, though nothing guarantees LLMs will be helpful either.
You're completely right, and I thought this would be obvious. I never prompted anything remotely close to "make a facebook clone". Instead, I write an explanation of how it should work. To give you an example:
I need a python script that
1) reads /etc/hosts
2) finds the values of specific configured hosts (read from a .conf file), e.g. server1, localhost, etc
3) it'll assign a name to those configs eg if the .conf has
[Env1]
192.168.0.1 production-read
192.168.0.2 production-write
192.168.0.27 amqp
[Env2]
192.168.0.101 production-read
192.168.0.201 production-write
192.168.1.127 amqp
Basically format:
[CONFIG_NAME]
<ip> <hostname>
Like an usual hosts file
4) And each of those will be stored in memory
5) if /etc/hosts matches one of those, it sets the "current env" to that config name
6) It'll create an icon on the top-right of Ubuntu 22's default GNOME
7) that icon could be the text of the current config name, or if nothing matches, "custom" text would show
8) When the user clicks the "tray"/appindicator (or whatever GNOME is calling them) it'll list the config names in a simple GTK/GNOME menu
9) When the user clicks one config, we create a backup of /etc/hosts in ~/.config/backups/ named hosts-%UNIX_TIMESTAMP%
10) we then apply it to the hosts file (find only the lines with the hostnames to change and modify only those)
And that one-shotted a simple GNOME app-indicator env switcher. I had to fix a few lines here and there, but it mostly just worked. If you give the proper spec to the LLM, it'll do it right. You can even fake a DSL to describe what you want and it'll figure it out.
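For reference, the core of what steps 1-5 call for looks something like this (a from-memory sketch, not the generated code verbatim; the GTK indicator part is omitted, and everything beyond the paths and section format given in the prompt is an assumption):

```python
# Sketch of the parsing/env-detection half of the spec above. Paths and
# the [Env] section format come from the prompt; the rest is assumption.
from pathlib import Path

def parse_envs(conf_path: str) -> dict[str, dict[str, str]]:
    """Parse [Name] sections of '<ip> <hostname>' lines into a dict."""
    envs: dict[str, dict[str, str]] = {}
    current = None
    for line in Path(conf_path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("[") and line.endswith("]"):
            current = line[1:-1]        # new section, e.g. "Env1"
            envs[current] = {}
        elif current:
            ip, hostname = line.split(None, 1)
            envs[current][hostname.strip()] = ip
    return envs

def current_env(envs: dict[str, dict[str, str]],
                hosts_path: str = "/etc/hosts") -> str:
    """Name of the env whose <ip> <hostname> pairs all appear in hosts."""
    pairs = {tuple(l.split()[:2])
             for l in Path(hosts_path).read_text().splitlines() if l.strip()}
    for name, mapping in envs.items():
        if all((ip, host) in pairs for host, ip in mapping.items()):
            return name
    return "custom"
```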
Juxt's Allium https://juxt.github.io/allium/ is an interesting entry in this 'pseudo DSL' space to define and store system specifications and requirements. I think it's likely that this sort of 'persistent specifications to help bots work correctly' will be a good approach when things finally cool down a bit.
That's the kind of stuff where you would write a few lines of shell script or Perl and not bother with the whole GTK stuff, because GTK would be accidental complexity for the task (unless you used something like zenity).
This is one of the reasons I like the OpenBSD and suckless projects. There are solutions that are technically correct, but are overengineered.
Well, I would never write shell because I loathe its grammar/syntax. I enjoy GUIs and am a heavy mouse user, so the GTK part isn't really "accidental complexity" but a must-have for me. If an LLM can one-shot all the GTK boilerplate, it's a win.
That's (as shown in my sample prompt) one great thing I've been using LLMs for: making GUIs for arcane Linux OS/userland settings that I have no interest in doing "sudo gedit yadda yadda" for or learning man pages for. It's been 30+ years; we deserve a better desktop experience.
I've used suckless packages in the past, but they feel to me too close to the GNOME/Apple way of giving zero settings and having opinionated defaults whose opinions don't sit well with me. I have zero desire to change my shortcuts/hotkeys to something random devs chose based on their past, mostly Unix-based, computer experience. Muscle memory > *.
I was pointing out that a simpler solution exists. I prefer simple solutions because I want to test whatever idea I have in a real-world situation first, before I go for a more complete one. Kinda like doodling before committing to a sketch (or spending weeks doing a painting).
> It's been 30+ years, we deserve a better desktop experience
That desktop experience would need to be like Smalltalk (where it's trivial to modify the GUI). The nice power of Unix is having the userland actually be a userland, meaning you can design a system for your workflow and let the computer take care of it. Current desktop environments don't allow for that kind of flexibility.
Also, it's the nature of Unix that makes such basic utilities possible (and building them with raw Xlib or Tcl is easier than GTK). Imagine doing the same on macOS or Windows, where everything is behind an opaque database that some other process fancies itself the owner of.
There's also a pattern based on the simple solution that used to be more common: One command-line program for updating and querying the current state, and a second GUI one that just acts as a dumb interface for the first one. Even aside from separation-of-concerns purity, there are two more practical benefits: this gives you scriptability (say, automatically choosing an environment on startup) as well as easier support for multiple desktop environments (two different dumb GUI frontends for the actual complexity in the command-line backend, or updating the GUI because of a change in the APIs without worrying about breaking the important logic).
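A minimal sketch of that split, with a hypothetical `hostenv` CLI standing in for the backend (using the env switcher above as the example; the CLI name and subcommands are made up):

```python
# Hypothetical frontend half of the split: the GUI never touches
# /etc/hosts itself; it just shells out to a `hostenv` CLI that owns
# all the state changes.
import subprocess

def list_envs() -> list[str]:
    result = subprocess.run(["hostenv", "list"],
                            capture_output=True, text=True, check=True)
    return result.stdout.splitlines()

def switch_env(name: str) -> None:
    subprocess.run(["hostenv", "switch", name], check=True)

# Any tray/GTK frontend only calls list_envs()/switch_env(), so the
# backup-and-rewrite logic lives in one place and the same backend stays
# scriptable, e.g. `hostenv switch Env1` in a startup script.
```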
What's even worse is that when dealing with human software teams, a vague requirement will (at least in a well-run org) receive demands for further specification. "What do you mean by 'get data'?", etc.
An LLM will just say, "Sure! Here's the fully implemented code that gets the data and gives it to the user." and be done with it.
You're both right. The parent was a toy example, and if asked literally to an LLM, it will definitely ask for more information. Yes, it's important to be accurate but I don't think that applies here.
But the point still stands: in most contexts, the LLM will fill in the blanks with what it deems appropriate, like an overconfident intern at best and a bull in a china shop at worst.
When the cycles are short enough, though, that is to some degree the right thing. That is, it's the right thing for things the users can then immediately see and give feedback on, because it lets them give feedback on something tangible.
It's the wrong thing for important things under the hood (like durability and security requirements) that are not tangible to them.
IME you give it very precise specifications and it still fucks it up.
When we talk about "the" bottleneck being specs, it just isn't the case that it's the only thing LLMs do poorly. They're really bad at a lot of stuff in the SDLC.
They're also good at providing results which are bad but look OK if you either don't look too closely or don't know what you're looking for.
It's worse. Vague requirements still only power vague interpretations of the problem. Even if you provide good requirements, you still only have vague interpretations at your fingertips. The promise is that such things won't be a problem in the future, which is obviously not materialising.
"Make a facebook clone" is the vague human promise to the end user. The reality is that it leads to so many assumptions which are insurmountable due to the vague interpretation so you have to change your requirements in the end to claim success.
Thus everything turns into a mediocre compromise. There is no exceptional outcome, which is what makes a marketable product. There are just corpses everywhere.
You need something better to both define requirements and implement them than this technology.
In several companies I have seen product managers join teams and fail to have even minor requirements ready for months during the PM's "onboarding". And then code being ready but taking months to release because DevOps is busy or QA can't find time.
The pace of release of software has been disconnected from the coding part for the longest time, and we have been quiet about it.
The solution I've seen work is have engineers and designers that can take much of the detailed spec writing on, and have the PMs spend time with users/prospective users, partners, etc, understanding the market and users better. When you pull PMs in to all the details, often they turn into project managers, shuffling bug tickets around etc, taking time away from owning the user and the problem and shifting them too much to the solution side. Have a lead engineer own much / most of that. Every org / product is different of course.
Did everyone forget about outsourcing and how outsourcing works?
The dudes in Eastern-Wherever not asking what something means is the scary part. You only find out at the end how deeply confused everyone was when making the thing. You can fix it with attention and management, but then only some projects sometimes are profitably outsourced and you still need competency.
Even if that were the case, I wouldn’t want to spend my working life building software poorly fit for the purpose, that nevertheless sells due to marketing.
> But now we're realizing we need to be more exact with our requirements and define things better.
That's why we write programs in programming languages and not English. Because they are much more efficient at giving precise instructions than natural language.
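A trivial illustration (a sketch; any language would do): the English version of a requirement leaves open questions that code has to answer one way or another.

    # "Sort the users by name." Which name? Case-sensitive? Stable? Unicode?
    # One line of code pins down every one of those decisions, like it or not:
    users = [{"name": "alice"}, {"name": "Bob"}, {"name": "carol"}]
    by_name = sorted(users, key=lambda u: u["name"].casefold())  # case-insensitive, stable
    print([u["name"] for u in by_name])  # ['alice', 'Bob', 'carol']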
Even purely from an information theory perspective it was obvious "make me a Facebook clone" was not going to work. The more you compress the information in the prompt, the more detail you lose.
Realizing? I'll be very happy if that's the case, but in my view big-company execs are still balls deep in the notion that you can just ask it for the Facebook clone, and everything sucks as a result.
> When I was working we used to get requirements that literally said things like, "Get data and give it to the user". No definition of what data is, where its stored, or in what format to return it. We would then spend a significant amount of time with the product person trying to figure out what they really wanted.
This is a big HN LLM discussion divide. I am in the same no-specs work background camp, so the idea that the humans who feed that into dev teams are suddenly going to get anything out of an LLM by feeding it the same input directly is laughable. In most orgs in my career there has been no product person; we just talked directly to end users.
For that kind of org, it will accelerate some parts of the SWE's job at different multipliers, but all the non-dev work to get there, with discussions, discovery, iteration, rework, etc., remains.
If the input to your work is a 20-page specification document accompanying multi-paragraph Jira tickets with embedded acceptance criteria / test cases / etc., then yes, there is a danger the person creating that input just feeds it into an LLM.
I’ve never understood engineers who complain about vague specs… if the spec were complete, it would be code and the job would be done already! Getting a 20-page spec delivered from on high and mechanically translating it to code without any chance to send feedback up the chain sounds like… a compiler.
Yes, I don't think a job where I am programmed by a product manager would be terribly interesting. I would move on to be the product manager if I found myself in such a role.
In my experience, the complaints are not about the specs and their vagueness. It's more about the political game to get them detailed. If you've not encountered the kind of organizational issues where getting an answer is like pulling teeth, you're kind of lucky.
Oh no, I’ve definitely experienced that, it’s terrible. But that situation makes me wish for more agency (for example, talking to customers directly), whereas it seems to make other engineers wish for less agency (please hand me a complete spec and I will mindlessly translate it to code). That’s what I don’t understand.
some of us couldn’t give a rat’s ass about the customer. One of our customers charges people for paying their own bills via certain methods, which is completely bogus and I remind everyone loudly all the time that they do this. Everyone agrees that this customer sucks to work with, and the less time spent with them the better.
The people from the customer’s end suck, they’re not technical, they have in-fighting with their own teams during calls, have decades long errors with their integration that they have never fixed…the list goes on. For this customer and a few others, please give me a spec that I can implement, shove it back across the aisle, and forget about. The absolute last thing I want is to have to talk to them more.
> I think when LLMs first came out people thought they could just say something like, "Make a Facebook clone". But now we're realizing we need to be more exact with our requirements and define things better.
The annoying thing is that giving an LLM vague instructions like "make a Facebook clone" does work... in certain limited cases. Those being mostly the exact things a not-very-creative "ideas person" would think to try first. Which gave the "ideas people" totally the wrong idea about what these things can do.
These same "ideas people" have been contracting human software developers to "make them a Facebook clone" (and other requests of similar quality) for decades now.
And every so often, the result of one of those requests would end up out there on the internet; most recently on Github. (Which is, once there's enough of them laying about, already enough to allow a coding-agent LLM trained on Github sources to spew out a gestalt reconstruction of these attempts. For better or worse.)
But for the most common of these harebrained ideas (both social-media-feed websites and e-commerce marketplace websites fit here), entire frameworks or "engines" have also been developed to make shipping one of these derivative projects as easy as shipping a Wordpress.org site. You don't rewrite the code; you just use the engine.
And so, if you ask an LLM to build you Facebook, it won't build you Facebook from scratch. It'll just pull in one of those frameworks.
And if you're an "ideas person", you'll think the LLM just did something magical. You won't necessarily understand what a library ecosystem even is; you won't realize the LLM didn't just generate all the code that powers the site itself, spitting out something perfectly functional after just a minute.
AI is not supposed to bypass the process, but it can speed things up nonetheless: it can help with refactoring, writing boilerplate, and finding errors you never even spotted before, things that linters cannot catch.
I see so many comments that seem to me like either they don't use standard known processes, or they assume AI doesn't need you to follow the standards.
Can I ship more code and features? Absolutely, if I have a good set of requirements and thorough testing. All AI-written code needs to be reviewed and tested, and should be in discrete commits and pull requests. Anyone pushing a PR with thousands of lines of code is a red flag: you wouldn't do it without AI, so why would you do it with AI? Major rewrites / refactors are the only known exception, and even then I would argue that these should still have discrete commits you can switch to, so you can see how things changed and make a more informed decision.
If you show me a massive one-shot commit or PR, I will reject it. Break it down into pieces a normal developer can audit.
People don't really understand that non-trivial software development isn't even 50% coding. The coding step is generally the 'easiest' part and given to junior developers. In a large org most product changes span multiple systems and human operations. Seniors and even mid-levels generally spend most of their time figuring out how to shape the local priorities into a new arrangement of the existing cybernetic entity, and then getting buy-in on that new vision, given that other teams have their own priorities.
This naturally involves a lot of tradeoffs and politics - senior engineers know to avoid adding 'weight' to their airframes and fight hard to avoid adding scope to the systems they're responsible for or divergence from their intended direction of travel. So compromises have to be struck or escalations to management to choose between priorities have to play out.
Maybe AI solves that as well but that is a lot more difficult lift.
LLMs only being code-writers was mostly true a year ago, but it is not true now. Now they are tool-callers, which means a coding agent can effectively: run lints/typechecks/tests (and fix the resulting errors), dig into observability platforms to identify the root cause of issues (e.g. on Sentry or similar), run benchmarks to identify slow code / hot paths, keep systems up to date by reading migration docs (and applying them) for new majors of consumed libs, etc.
So sure, if you have none of these things set up to back-pressure agents and help them better understand the system, then they will just be dumb LLM code writers. But you can definitely go a lot further than that with the improvements that are rapidly happening to models and harnesses.
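The back-pressure loop is conceptually simple; a rough sketch, with ask_model and apply_patch as hypothetical stand-ins for a real provider client and patch applier:

    import subprocess

    def ask_model(prompt: str) -> str:
        """Hypothetical model call; returns a proposed patch for the task."""
        raise NotImplementedError  # swap in your provider's client here

    def apply_patch(patch: str) -> None:
        """Hypothetical helper that writes the proposed changes into the repo."""
        raise NotImplementedError

    def run_checks() -> tuple[bool, str]:
        """The back-pressure: lints/typechecks/tests gate whatever the agent emits."""
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        return result.returncode == 0, result.stdout + result.stderr

    def agent_loop(task: str, max_iterations: int = 5) -> bool:
        feedback = ""
        for _ in range(max_iterations):
            patch = ask_model(f"{task}\n\nPrevious failures:\n{feedback}")
            apply_patch(patch)
            ok, output = run_checks()
            if ok:
                return True      # checks pass; hand off to human review
            feedback = output    # feed failures back instead of trusting the output
        return False             # give up and escalate to a human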
This article assumes that AI only has an impact on the development phase, which is certainly not true. It can speed up every step of the process, including ideation, legal, documentation, development, and deployment.
Ideation: Throw ideas back & forth, cross reference with knowledge bases, generate design documents. Documentation: Generate large parts of docs. Development: Clear. Deployment: Generate deployment manifests, tooling around testing, knowledge around cloud platforms.
Almost every step can be done better & faster with AI. Not all of them, but a lot.
Even development. Yes, part of your job involves understanding the problem better than anyone & designing solutions. But some parts are also pure chore. If you know you need a button doing X, then designing that button, placing it, figuring out edge cases with hover & press states, connecting it to the backend, etc., is a chore that can be skipped. The same principle applies to almost all steps.
A typical example of trying to add a new significant capability involves many meetings (days, weeks, months, etc.) with the business to understand how their work flows between systems X, Y and Z, as well as all of the significant exceptions (e.g. we handle subset A this way and subset B that way, but for the final step we blend those groups together, except for subset C which requires special process 97).
Then with that understanding comes the system solutioning across multiple systems that can be a blend of internal system or vendor's system, each with different levels of ability to customize, which pushes the shape of the final solution in different directions.
There is certainly value in speeding up coding, but it's just one piece of the puzzle, and today LLMs can't help with gathering the domain information and defining a solution.
What I've seen in an AI-forward looking environment is that it's much more common for PM/POs to be knocking up at least a UI prototype now, and experimentation is happening often even before writing the tickets. Similarly when devs are proposing something they often are coming with a couple of prototypes already implemented. Both of those mean decisions are coming a lot quicker.
I've seen proposals for Product Managers to define those conditions themselves by speaking with the LLM. A living architectural diagram is constructed and the graph is updated until all cases are covered, and then the LLM writes the code, writes the validations, pushes to CI environments, runs tests, schedules the prod deploy (by looking at the company event schedule), gets CAB approval, deploys the code, tests in prod, and fixes regressions.
I'm not saying this is the correct thing, but companies are implementing it and it is "working". I don't think keeping our head in the sand is helping.
> I've seen proposals for Product Managers to define those conditions themselves by speaking with the LLM.
But the LLM is not aware of how the business works and why, so someone needs to work with the business to extract the information. Typically it's not well documented.
Is it working though? The main outcome we've seen with companies that drink the AI Kool-Aid en masse is buggy, unstable systems. Clearly there's a level of rigor being skipped in the pursuit of ship velocity.
The article pretty much plays out what's happening in our place: heavy use of AI in software development, but we don't see ourselves shipping faster, about the same or perhaps slower (for other reasons). It's a weird feeling, as we're waiting for this utopia to kick in but it's not, and we can't fully put our fingers on why.
All of the above points align with our organization’s experience. But there is one more thing happening as well: we have more people in more roles able to create software solutions for issues that used to be brute forced via physical processes. (We are a small manufacturing business.) While these aren’t big giant enterprise projects that require deep swe experience, they are simple software tools that are improving process and productivity everywhere. It is pretty amazing what happens when your head of shipping can build a bespoke tool to solve a problem that previously they dealt with through burning through a lot of labor hours.
I would be really interested in the details of these kind of tools that are improving processes and productivity.
Are they reasonably documented/audited/put into any sort of version control like a lot of internal tooling? Or are they the kind of thing that gets whacked together on the fly, "move spreadsheet data from A to B", "I want a list of people's schedules with custom highlighting" kind of things?
Not doubting your productivity increase, I'm just curious how people quantify that when they say it.
One of our BAs created a site that tests the effectiveness of copy / layout adjustments. I don't even know exactly what that's called, but he's able to do statistical analysis much faster on what works and what doesn't. It's really cool to watch him thrive, and I feel like some of the thinkers who were not devs are going to find themselves becoming devs in their specific domain in a few years.
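(What he built is usually called A/B testing.) The statistics underneath are small enough to sketch; for example a two-proportion z-test, with invented numbers:

    import math

    def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
        """z-score for the difference between two conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        return (p_b - p_a) / se

    # Hypothetical: variant B converts 5.8% vs 5.0% for A, 10k users each.
    z = two_proportion_z(500, 10_000, 580, 10_000)
    print(round(z, 2))  # ~2.5, significant at the usual 5% level (|z| > 1.96)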
Yes. In the same way that spreadsheets are the dev tools for non-devs, LLMs could step into that role, but with much more powerful end result. With the caveat that in the same way you can create a powerful foot-gun with a spreadsheet you can probably create a foot-cannon with an LLM.
Yeah, the Coinbase CEO gleefully pointed that out as well, and now the market thinks they are totally incompetent every time some UX quirk is found.
Looks like orgs have to keep engineers on for optics, like having a legal staff with no lawyers, or a cybersecurity staff with no IT or certified people. Software has famously not needed state licenses or industry certifications, but maybe that's a direction to consider to give those optics some utility.
I know and I agree. It sounds incredibly arrogant, but it's frankly a bit sad to see how much HN is lagging behind on AI adoption. It's been 90% noise over the last 3-6 months about problems that aren't truly problems if you really look hard at what AI is already capable of doing today. It's mostly people & process problems. I could post a comment like the one above below almost every article on AI. But it is what it is. It's an opportunity for anyone who doesn't buy into the cynical tone here, for sure.
Indeed. I suspect most effective AI users are quietly making real progress toward their objectives.
Anecdotally, I see a lot of problems/solutions content about AI that doesn't reflect at all the challenges I face. But trying to tell people that there are other ways of doing things, especially when it conflicts with token-maxxing, is a lost cause
Precisely. People don't realize that it's all numbers. Given that the average IQ of people involved in a project is 140, an AI with an IQ of 150 can replicate each and every such individual in the pipeline. People saying AI can't do this or AI can't do that should come to terms with the fact that this IQ gap is monotonically increasing.
1: When was the last time you worked on a project where you thought the average IQ was 140? I don’t even think I have worked on a project where the maximum IQ was 140.
2: Who thinks the IQ of people on the project determines its success? There’s so much more to it than just “high capability team members” (to give IQ a generous interpretation).
3: (math joke) A sequence like (AI IQ - Human IQ) can be negative and monotonically increasing and still never reach 0, e.g. a_n = -1/n.
I agree. Inexperienced people (not necessarily "dumb") are likely to accept everything at face value, not apply critical thinking skills, and not even check the AI generated output.
On the one hand, this is a clean post that explains exactly what a lot of us have been thinking and seeing on the job at large organizations doing tech work. Dear Author, I agree with you 110% and want everybody else to come to understand what you have written.
On the other hand, it feels like we've been over this tens of times recently, on HN specifically and IRL at work. Another blog post isn't going to convince leaders that this is how the world works when they are socially and financially incentivized to pretend like AI really will speed things up. So now I just wait for their AI projects to fail or go as slowly as previous projects and hope they learn something.
Sadly I think you’re right. I even shy away from sharing these types of posts at work because it feels like anything that doesn’t mesh with the status quo isn’t received well.
Every time these types of posts are discussed at work, the counterpoint is always that there's more risk of falling behind (more like FOMO) if others are able to launch or bring new features faster.
I disagree. I think the visuals (the Gantt charts) are precisely the kind of "PM speak" that can be understood. Sure, it won't solve anything as long as the C-suite and investors do innovation signaling, but that itself can only last so long.
The alternative viewpoint is that if there weren’t people who continue to try to advocate for a better world, the world we’d live in would be even worse.
Yep. I have the luxury of having my mortgage paid off and being able to be a bit picky about my work for a little bit.
So I am spending my days gardening and obsessively working on personal coding projects with these agentic tools. Y'know, building a high performance OLTP database from scratch, and a whole new logic relational persistent programming environment, a synthesizer based on some funky math, an FPGA soft processor. Y'know, normal things normal people do.
So I know what these tools are capable of in a single person's hands. They're amazing.
But I hear the stories from my friends employed at companies setting minimum token quotas or having leaderboards of people who are "star AI coders" telling people "not to do code reviews" and "stop doing any coding by hand" and I shake my head.
I dipped my toes into some contract work in the winter and it was fine but it mostly degraded into dueling LLMs on code reviews while the founder vibe coded an entire new project every weekend.
These tools suck for team work or any real team software engineering work.
I'll just let this shake out and sit out until the industry figures it out. The only places that are going to be sane to work at are places with older wiser people on staff who know how to say "slow down!" and get away with it.
In the meantime, quantities of cut rhubarb $5 a bunch in Hamilton, Ontario area for sale. Also asparagus. Lots and lots of asparagus.
Yeah I think moving forward one of the questions I'll be asking companies I interview with is "what does your seniority distribution look like and how do you intend to maintain it?"
I think there's an interesting dichotomy. I find that for things I'm already capable at, LLMs are relatively inconsequential. But for things I'm no good at, it's a huge game changer. For a large company, that's going to be able to hire out most needed roles for any given project, this means the overall effect is going to be relatively inconsequential. At best, they may be able to cut down on labor costs by having one guy do a mediocre job at 5 people's jobs in exchange for a worse product. Short-term gains for long-term costs, wcgw?
But for a small studio, or independent developer, LLMs are a big game changer. Being able to do a mediocre job at 5 people's jobs is a huge leap over trying to get by without those jobs - relying on third party assets or other sorts of content, or even worse - doing a really awful job of trying to improv those jobs. See the UI of basically any program ever that was clearly laid out by a programmer and not a designer. Or there's the whole trying to rip off stuff from dribbble, but lacking the skills to do so. Whereas with AI, you can suddenly competently rip off everything and everybody - it's basically their entire MO.
I can give an anecdote. I'm a backend engineer for a service that I would consider pretty high horsepower. We get about 30k sign ups and trillions of events a day. I haven't touched the front end with a 10 foot pole since college.
I got the opportunity to rewrite our aging login page just as a fun experiment. I sat down with one of our analysts and we just went to town in a zoom trying out stuff with claude until we made something pretty sweet. Ran it through all our systems for accessibility, performance, etc and it came out clean. Made a PR and fired up a test that day in production. I haven't written a lick of our front end framework ever in my entire life and we were able to build something that has had a marked improvement in our user engagement in a day.
> a marked improvement in our user engagement in a day.
Do you have any idea what has caused this engagement improvement and indeed do you actually have any metrics or is it hearsay?
It is much easier to knock something up in a day as you have done, but often the reason manual things take longer is they are based on actual testing and research which takes longer than a day however you do it. The manual way gives you much more data on the hows and whys, and will inform you much more in the future when you need to change again instead of just 'ai did it last time, lets use it again!'
No, we did an actual test using our existing testing framework. We have shitloads of metrics to know when a user gets stuck, when they give up, which login path they took, etc.
This wasn't a half-assed test but a legitimate effort to improve something we had never prioritized.
We had a legitimate 25% reduction in users giving up on login, in a system that has millions of users.
We ran a 50-50 AB test for several weeks to confirm the data and then turned it on completely
edit: If you haven't already read my post, I'd also like to say that the benefit AI gives us is that I worked on something I never get to work on, the analyst got to try a hunch he always had, and we got to see it go live in a day. If it hadn't worked out, we'd have been out a day of work, which beats the few weeks of effort we would have spent pre-AI just to find out it didn't work.
This seems consistent with OP. You had a feature where most of the OP's Gantt chart was, in effect, already done: you had a clear problem with a clear, well-thought-out design/solution (with associated documentation) in mind, and you had a well-set-up analytics process for deployment and follow-up... you really had everything except that big fat chunk in the middle labeled 'coding'. So in your anecdote, an agentic coding LLM really could deliver a huge speedup by doing the remaining 10% or whatever of the work.
This is why LLMs are really great at 'knocking off the todo/wishlist' of things you always meant to do. The problem, as far as broader discussions of 'productivity multipliers' or 'total factor productivity' go, is that there are perversely diminishing returns to such wishlist items (if each item was all that important, why didn't it get done before?); they generally only apply to a small part of a large complicated whole (what % of your ecosystem/business/community as a whole is the login page, as pleasing and profitable as that fix is relative to the investment? Probably not a big %); and they are also finite (what happens when you have worked through your backlog of low-hanging fruit?).
Just because one isn’t good at a thing doesn’t preclude one from being a sufficiently passable judge of a thing.
To wit, the answer pre-AI was to hire an expert on that thing, and you would then critically assess their work product, despite being unable to build it yourself.
True, but if you hire a generalist and they are consistently under-performing specifically in the subject matter where you are an expert, it may behoove you to take the rest of their work with a grain of salt as well.
This is all substantially correct and gives us hints as to where to focus for AI to make the processes go faster.
Eg: I had a product manager say to me that he envisions a future where any meeting with stakeholders that does not result in an interactive prototype by the end of the meeting would be considered a failure. This feels directionally correct to me.
The other thing I expect to see is Vibecoding being the "Excel 2.0" where it allows significant self-serve of building interactive apps that's engaged in a continual war with IT to turn them into something with better security guarantees, proper access control & logging, scalability, change management etc.
But the larger historical point here is that every revolutionary transition produces, in the early stages, "Steam Horses". The invention of the steam engine had people imagining that the future of transportation would involve horse shaped objects, powered by steam, pulling along conventional carts. It wasn't until later developments that we understood the function of transportation as divorced from the form.
I started talking about Steam Horses originally in the context of MOOCs, which was a classic Steam Horse idea.
> he envisions a future where any meeting with stakeholders that does not result in an interactive prototype by the end of the meeting would be considered a failure.
Just learn something like balsamiq. You don't need code to build out a prototype. Just like you don't need actors and a camera when a few sketches can capture a scene.
I've found that AI is extremely useful when coding: For example, a task that used to take 3 days I can now do in about a day, in part because I can do things like have the agent write tests, or because I can have the agent start from some higher-level instructions which I can then clean up and debug.
BUT: The article is 100% right that I spend a lot of time doing other tasks: reviewing other teammates' work, interacting with colleagues, planning, etc. AI isn't quite as helpful there. For example, I find that Copilot code reviews don't add a lot of value, and the AI isn't good at judging a UI.
Maybe we'll get there soon? It's starting to look like the biggest challenge with AI is learning how to use it correctly.
> Yes, AI can generate code quickly (whether that’s a good thing is open for debate), but that doesn’t mean it’s generating the correct code.
No, the code is actually almost always correct. The way it's added is probably not what you're going to like, if you know your code base well enough. You know there's some ceremony about where things are added, how they are named, how many comments you'd like to add and where exactly. Stuff like that seems to irritate people like me when the agent doesn't do it right, and it seems to fail even if it's in the AGENTS.md.
> If you were to give human developers the same amount of feature/scope documentation you would also see your productivity skyrocket.
Almost 2 decades in IT and I absolutely do not believe this can ever happen. And if it does, it’s so rare, it’s not even worth talking about it.
That's not my experience, especially when the inputs are bugs or performance issues. It frequently hallucinates and misdiagnoses without a guiding hand. However, it can still do RCA and analysis well, and improve efficiency, if you keep an eye on what it's doing and push it in the right direction.
> If you were to give human developers the same amount of feature/scope documentation you would also see your productivity skyrocket.
I think you run into a ceiling on how fast a person can digest and analyze the info, compared to a machine.
What tools are you using? What settings? What process? What's your code review like?
I think this varies a lot. I find with a c++ project I'm working on that the LLM needs a lot of guardrails and guidance, and still gets a lot wrong. But with a vite/js project it often one shots complex and intricate changes in large codebases.
> Software development is about translating a problem into a solution that a computer can understand and automatically resolve. Preferably in a secure and scalable way.
True; meanwhile software engineering puts the optional bits (i.e. secure & scalable) into the requirements bucket.
---
For the problem description and gathering requirements sentiment; I don't think we'll _ever_ have a 100% proper way of doing this. If we did, we'd basically solve any and all problems in the world.
Nevertheless, I think AI can help with investigating and exploring the problem space, especially when the problem is an already-solved thing in which the prompter hasn't yet gained enough expertise.
Moreover, I think (and keep mentioning) we will see different kind of models in the near future. Those would be more specialized per industry, per language (both programming and human languages), even per field.
Those will open up new areas in the employment & job market, something like an "AI trainer" but more of a knowledge-worker role. Although this can also be automated with LLMs, the limits on context length/size plus the amount of compute required to re-train models fast enough to iterate are both quite heavy.
That last paragraph sounds like a Meta VP explaining to the engineers why it is important to log all their keystrokes and eye movements. Pinky promise we won't fire you.
The trend I DO see, at least based on JDs, is a whole lot of "agents", which are glorified Claude Code but in the cloud, with tools focused on a given industry or domain. If this is what you mean, then you are correct.
Been having conversations like this with a client I've worked with. They got approval from corporate for us to use Claude and asked how much faster we'll be able to move with it.
I tell them, "Us engineers will probably be able to deliver some of our stuff faster, but it won't have even a slight effect on the actual deliverable, because we've never been the bottleneck." It's the fact that the process to get an S3 bucket allocated there takes (not exaggerating) 4 weeks.
Instead of mandatory AI workshops simply cancel all meetings with more than 3 people and no written agenda. Instead block the meeting time for productive work.
That’ll be $2,000 in advisory fees for the insane productivity gains I just unlocked for you. You’re welcome.
Yes, there are MANY in tech/non-tech management that will quietly admit that a lot of this top-down stuff is to create the appearance of motion to appease a higher more tech/AI ignorant authority.
If that sounds familiar, it’s because it’s what dang did over the course of several years.
It’s taken a few weeks. I started right around May, and now it’s able to render large HN threads (900+ comments) within a factor of five of production HN performance. (Thank you to dang for giving actual performance numbers to compare against.)
A couple days ago, mostly out of curiosity, I ran Claude with “/goal make this as fast as HN.” Somewhat surprisingly, it got the job done within a couple hours. I kept the experiment on separate branches, because the code is a mess, just like all AI generated code starts as. But the remarkable part is that it worked, and I can technically claim to have recreated HN within a few weeks.
The real work is in the specifications. My port of HN is missing around a hundred features. Things from favorited comments, to hiding threads, to being able to unvote and re-vote.
But catching up to HN is clearly a matter of effort (time spent actually working on the problem with Claude), not complexity. Each feature in isolation is relatively easy. Getting them all done within a short time span without ruining the codebase is the hard part. And I think that’s where a lot of people get tripped up: you can do a lot, but you have to manage it tightly, or else the codebase explodes into an unreadable mess.
It’s true that if you don’t do that crucial step of “manage the results”, you’ll end up making more work for yourself in the long run, by a large factor. But it’s also true that AI sped me up so much that I was able to do in weeks what would’ve otherwise taken years (and did take dang years). I’m not claiming parity, just that I got close enough to be an interesting comparison point.
AI can clearly accelerate us. But we need to be disciplined in how we use it, just like any other new tool. That doesn’t change the fact that it does work, and I think people might be underestimating how good the results can be.
I've had a handful of software projects in my career land essentially on the day I predicted, sometimes several months out, and the commonality across all of those projects was that the specification was crystal clear. Two of them were actual ports of an existing piece of software over to a new system. And so any time we had a question about the implementation, we could look at the existing version and immediately have our questions answered about what "correct" was.
I think projects where correct is very clearly defined can benefit from LLM acceleration, as you're describing here.
But so much of modern software development is figuring out what the right thing to build is. And in those situations, I don't think LLMs provide nearly as much benefit.
I think the role of LLMs is: once you have a rich enough understanding of what you want, you can speed-run building it. And then perhaps rebuild to cover the issues created by the LLM.
The problem for model producers is that the revenue they get from this mode of work is tiny relative to what they need.
And there's a few domains where the spec is clear and the solution kinda easy to implement. But it breaks the contract with your users or downstream projects and you now have to coordinate communication. Code rarely exists in isolation.
The article severely underestimates deployment times for large, worldwide services. Usually the strategy is to have a smaller "blast radius" for deployments, going in stages that are also usually time-bound ("let it bake"). It also does not account for outages and fixing things you only find in deployment. Dynamic languages like Python, or injection-heavy Java (e.g. using Guice), either need pristine testing (and all test teams were converted to dev 20 years ago) or have a magical way to destroy all the help compilers and static analysis can give you. So yeah, you take the 4 weeks of development out of your 6-month deployment, then add 6 weeks of debugging and retries by using AI. You're welcome; that will be 3 million tokens, of which you wrote 1k, the rest being system prompts and "reasoning", which you do not control. This whole AI space is highly fixable, but requires investment no one seems willing to make, particularly in areas that were mistakes of the past.
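For reference, the staged strategy is roughly this shape (a sketch; stage names, fractions, and bake times all invented):

    import time

    # Widen the blast radius only after each stage bakes with healthy metrics.
    STAGES = [
        ("canary",     0.01, 1 * 24 * 3600),  # 1% of the fleet, bake one day
        ("one-region", 0.10, 3 * 24 * 3600),
        ("half-fleet", 0.50, 3 * 24 * 3600),
        ("global",     1.00, 0),
    ]

    def deploy(fraction: float) -> None:
        """Hypothetical deploy hook for a given fraction of the fleet."""
        print(f"deploying to {fraction:.0%} of the fleet")

    def healthy() -> bool:
        """Hypothetical monitoring hook; a regression here triggers rollback."""
        return True

    for name, fraction, bake in STAGES:
        deploy(fraction)
        time.sleep(bake)  # "let it bake" (illustrative; real systems use schedulers)
        if not healthy():
            raise RuntimeError(f"regression at stage {name}; roll back")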
At least where I am we can’t and shouldn’t know all the requirements of a project beforehand^. Every project is an iterative learning process between the users, product and engineers. The problem is if everyone uses AI to replace their thinking it breaks that process and no one learns anything.
^ I say shouldn’t because I work in research engineering. Most of the needs of our users are pretty unique. We’ve had people come in and try and specify every piece of work, -and ended up building a crud app no one wanted or used.
> Every software developer knows that you can’t make projects go faster just by typing faster. If that were the case we would all be taking typing lessons.
So well said.
AI is unveiling how the bureaucracy is the slow part.
> AI is unveiling how the bureaucracy is the slow part.
Computing has been doing that for decades. If your process is fucked, computers make it fucked faster.
It’s just that now, we have entire generations alive that have never seem a world without digital computers. ~LLMs~ AI is a fun new lever in some uses so clearly it is finally the hammer that will drive the screws and bolts for us, with less effort on our part!
They just have to learn from experience. It’s what you do when you can’t be bothered to learn the lessons of the past.
Bureaucracy cannot learn the lessons of the past, because doing so is against its self-interest.
Work in large orgs long enough and you will recognize these creatures. Ladder climbing is a skill orthogonal to adding any value to the customer/company.
You're right it's just like any other mechanization/automation revolution. Except it's not.
It's happening about 10x faster than any other I've seen or read about.
Consider how long it took just to get barcode scanners rolled out in grocery stores. Or direct payment terminals. Or how many decades it's taken to get robotics into car manufacturing at scale. I worked through the .com boom and I can tell you that "webification" took 10 years or more for most businesses (and many of them have now just given up and have a Facebook page instead, etc.).
This is a little insane what's happening now. It really does change everything. People who don't work in software I don't think have any idea what's coming.
It's highly salient to management, and being forced top-down by them at 10x speed, for sure, because they see a future cost save to reduce headcount.
For certain technical roles it's a force multiplier, and already very saturated for sure.
On the other hand there's a lot of solution-looking-for-problem going on in large orgs where layers of management have been banging the table for 2-3 years on AI KPIs without any value being delivered.
In the weekly AI wins mail at a friends company, multiple non-technicals were bragging how AI has saved them 15 minutes a day by summarizing their morning inbox. This was the big game changer for them.
This post makes it sound like an engineer's role is only the collection and filling of feature gaps, but it leaves out completely that an engineer is also responsible for the feasibility of a feature. If you get a request for a feature but you are aware of the current system's limitations, it is your job to come up with a solution that fits into the frame the business side has given. But nowadays engineers have been so drilled that showing resistance to management is portrayed as a lack of skill, not as a lack of trust from management in their staff. And when it is clear that your management actually doesn't care, that just tells you how much of the self-proclaimed mission is the real motivation behind these people.
If management's acceptance criteria do not meet your principles, you might not be the right fit. And if, as in my experience, management's ACs are mostly based on the next promise made to investors or by sales to prospects, then their goal is to make money, not to develop a quality product.
Delivering more complete details for a task at hand is a noble goal, but there is a problem.
Programming is a logical circuit breaker. There is a wide range of incompleteness that halts development or puts the solutions in an unpublishable state.
A product person has no compiler, no RAM, no database, no state machine. There is nothing that can fail. There are probably strategies to weed out some issues, but none will be perfect.
We need to combine reality with computers. Computers set the constraints and we can only check if we are in bounds of the constraints by solving the problems with computers.
Oddly enough, AI so far has nothing to offer to improve the "product people" problems.
Fascinating, I was literally thinking about how to communicate this to coworkers the other day, literally down to the gantt chart. Now I don't even have to make one =)
> We are now talking about software development, but this is applicable to all processes that take longer than you would like.
Indeed, it's kind of a generalized version of Amdahl's law. Since we only speed up a portion of the work, there are upper bounds on time saved. Worse, work in progress tends to bunch up at a specific point: code review. A coworker of mine literally complained two months ago now that nobody was reviewing code (and that it was blocking his work). I'm not sure review delay has actually gotten better since.
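To put numbers on the generalized Amdahl point (all invented): if coding is 30% of a feature's wall-clock life and AI makes coding 5x faster, the end-to-end gain is modest, and even infinite coding speed has a hard cap.

    def overall_speedup(p: float, s: float) -> float:
        """Amdahl's law: p = fraction of total time sped up, s = its speedup."""
        return 1 / ((1 - p) + p / s)

    print(round(overall_speedup(0.30, 5), 2))    # 1.32x end to end
    print(round(overall_speedup(0.30, 1e9), 2))  # 1.43x cap: 1 / (1 - 0.30)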
> What people typically don’t do is look at why this is taking so long, and even more importantly: long duration does not automatically mean the problem originates there.
To some extent, we tell as many lies as we can get away with. Some answers are more convenient than others.
"Why" this is taking so long, like "why did this fail?" are prone to broadly agreed lies. Sometimes this is for obvious blame liability reasons. Often, this is because the lie conflicts with some "meta."
One such fallacy is the idea that software = value. Code = money, because it cost money to write. Features = revenue. Etc.
IRL, startups produce features very quickly because they actually need features. They start with zero features.
But LinkedIn, Visa, or even Facebook... what they are short on is opportunities to develop code with value, i.e. something that will increase revenue.
FB isn't resource-constrained. It's demand-constrained. If there were a "write code, make revenue" opportunity available, they'd have taken it already.
This totally conflicts with the experience of working somewhere. That's because you have wishlists, road maps and deadlines.... and it always appears that demand for code is sky high.
It’s completely wild to me that lifelong programmers come into contact with agentic coding and conclude that their jobs are safe for one reason or another. AI will definitely be able to write entire software systems, inclusive of figuring out requirements and asking the right questions. It’s not that far off already. Why does everyone look at the weaknesses of a technology that didn’t exist a couple of years ago instead of appreciating the incredible rate of improvement? I know why: because it’s inconvenient to the narrative of what makes us valuable. But still, our job is to turn ideas into a sequence of logical steps. Why can’t we do the same when forecasting the impact of AI on our jobs?
Because the "rate of improvement" is only astonishing in well understood areas and really only astonishing if you yourself are not that great at what you do.
Speaking for myself here, my job is extremely safe, given that my boss doesn't wanna sit there and prompt AI all day and I work in a fun little 4-person company. We already have plans for the next 3 years, which involve me :-)
Because the "rate of improvement" is only astonishing in well understood areas and really only astonishing if you yourself are not that great at what you do.
This is a bold vague claim many on HN make, but never put back-of-napkin numbers on. e.g. do you think agentic Opus 4.7/GPT 5.5 are 95th percentile coders but you're 98th percentile? Or are you saying you're a middle-of-the-road 60th percentile coder and AI is 20th percentile so only 20% worst programmers should worry? Let's be specific about the claim being made.
I don't think we're going to be able to have rational conversations about this with C-level folks for quite some time. They mostly seem too wrapped up in copying each other to think clearly, and it's only when the bottom line starts suffering that we might be able to start asking some questions about their strategy.
Yes, it is true for large enterprises, but not for startups and individual creators. AI is accelerating anyone who is not stuck in corporate bureaucratic processes.
The primary issue is simply that developers are the most immediately impacted by this technology. The combination of being able to adopt, willing to adopt, and the tech actually being incredibly good at developer related concerns is unique. The rest of the business will eventually catch up. I'm watching it happen in real time. It is agonizingly slow in most places, but it is happening.
The developers being able to drain a one year long work queue in an afternoon is meaningless if the rest of the business cannot absorb the effects of that work in the same timeframe. The business will not leave your idle work queue on the table for long though. Keep pulling a vacuum on them and they will fill the space eventually.
It’s amazing to see some people talk with 100% confidence about the macro view of AI assisted development when we have had strong coding agents available for less than a year.
Exactly, and the tools weren't even that great for most of that year. They only got properly usable around the end of last year, at least for me. I'd call it more like half a year.
If you don't like the state of technology with AI tools, just wait a few weeks. Things are still changing at a quite rapid pace. The scope of what is possible seems to shift regularly. A lot of what I did in the last weeks was complete science fiction even a year ago.
This article makes a few good points though. AI won't magically make processes faster; you might actually have to change the process. A lot of processes in companies are about people and how they communicate. The more people you have, the more communication you get: with n people there are n(n-1)/2 possible pairs, so the noise grows quadratically with headcount. Using AI in that context just adds to the communication noise.
But if you restructure your processes you might get different results. Most companies have not really gone through that process yet. It's too early to call success or failure. And especially non technical people have mostly not yet experienced any agentic tooling at all. We've yet to see how that will change companies. My guess is that some companies will be better at this than others. And we'll see a bit of darwinism play out.
Some organizations added a ton of process around software development because it is expensive and risky. They require a ton of approvals and sign-offs, then some managing overhead on top to check whether their investment is on the right track. This approval process is bound to change now that development is far cheaper and faster.
Another aspect that is not captured here is that the lawyers and subject matter experts will also be using AI to speed up their parts.
I think the thing that gives human developers a leg up is the ability to read between the lines of a spec and intuit the expected output better than an LLM in many cases.
The human has cumulative experience, built over a career, of the nuances behind every decision, plus evolved context at their given company. This context allows them to take that one-line spec and extract tons of detail from it by knowing who wrote the ticket, what the "trigger" for the ticket was, what other work is being done in tandem that might need to be incorporated, etc.
LLMs can be given this context, but it's a manual process of transcription into the prompt/memory/skills, and that content must be continually updated and refined. It just pushes lots of work into spec writing, away from the more intuitive mode of feature development a lot of us have mastered. Then you must constantly go back and forth to refine the output.
Any senior engineer knows that a lot of that communication is wasted energy. If I have a good idea of what I'm building I can develop the feature in a focused flow of output that I refine in an almost unconscious way because I don't need to translate intent into words, just code, and that process is incredibly automatic after years of developing software.
When all the effort is placed into writing specs, re-prompting and then reviewing (often over and over again), that intuitive and automatic ability to build software degrades. Think of a time when you were mostly focused on PR reviews and not contributing to a project. You may have been able to help developers build better code, but if you were to jump into that project to contribute, there would be a real and painful effort to re-familiarize yourself and reconstruct that intuitive familiarity of the project.
LLMs have many very useful qualities, but so far I fear an over-reliance on them can be more a hindrance than a benefit.
It's felt, for a while now, similar to what we see in parallel computing:
- shift towards throughput-oriented vs latency-oriented. Can juggle more tasks, but increasingly hard to speed up individual ones.
- strong scaling is tough. Might even see slowdowns for individual tasks, so reliable benefits come from being able to juggle more and eat the per-task inefficiency
- Amdahl's law: we can't speed up tasks beyond their longest sequential (human) unit, so our work becomes identifying those bits and working on them. Related: you can buy bandwidth, but you can't buy latency.
The promise of AI is in doing things that couldn't be automated at all before, at least not economically. And when you find a use case where a bit of automated inference is sufficient and can replace human inference, it can wildly speed up a process: from whenever Susan has time for it, to right now.
Handholding is an issue which is affected by 3 factors: the model, the tooling and the human expertise. Out of the three, the last is the weakest link, due to the fact that it takes the longest to nurture.
Once tooling (e.g. agent harnesses, external tools) becomes more mature and consistent, the other 2 will become less of a bottleneck.
If I were to take a gamble here, I would argue that development will at some point reach the more ideal scenario, while project planning and scoping will take longer. The documentation phase will also take almost as long as development, slightly longer at the edges.
The new AI-assisted era will most likely push companies to adopt waterfall management rather than agile.
Honest question: Does anyone know about any quantitative study or analysis on productivity gains using code assistants? Asking for numbers comparing between the "pre AI era" and now.
Also, I have the impression that LLMs bring some gains for individuals, but nothing relevant enough at the organization level.
I believe it is very hard to quantify „productivity“. I’m sure that for suitable definitions you can find gains from coding assistants. Personally I get more code written and more features implemented. Yet I’m very wary of coding assistants because I believe they deal a fatal blow to my ability to understand the system. All LLM generated code is (at best!) code that was written by an intern which I just helped with the design and reviewed (unless productivity expectations cut down my review time and I get LLM assistance for reviews too). My grasp on the inner working of that code is much more tenuous than had I written it myself. I will never become an expert by just reviewing code and prompting.
For a while this is not a problem: I can work with my current mental model. But every generated PR erodes my expertise a little bit. Eventually my mental model won’t fit anymore.
So how much of that model maintenance should I count into my productivity metric? Does that even matter or will the next model be able to reason well enough that my mental model doesn’t matter?
Every large corporation is stuck in communication problems and approval processes. They have grown so large as to have minimal alignment between what the company attempts to produce, what makes the company profitable, and what people actually do. Enshittification, The Gervais Principle, Bullshit Jobs. Pick your favorite, flawed way to look at what is going on, it's all blind people touching different parts of the same elephant.
The way AI makes your processes go faster will have little to do with cutting software development time in itself, but with letting an organization be built with fewer people, which in itself lowers your misalignment issues. A giant company of 200K people will still be about as messy as one today, but you might be able to do a lot more with the same number of people, just like a lone programmer today, without AI, already does quite a bit more than anyone could do by themselves in the 80s.
Maybe some of the advantages are that you don't need quite as many developers, or maybe you can use a smaller marketing team, or you don't need to spend that much time answering questions, because an LLM is doing it for you, and it's tracking what it's been asked of it, turning the questions into product research. Either way, the gains come from being able to run leaner, and therefore minimizing organizational misalignment.
While this is true, it doesn't stop businesses being overzealous with AI. It's a compound issue: a decade of ZIRP and grow-at-all-costs, then COVID overhiring, and now AI is suddenly positioned as some kind of magical panacea.
The broader issue is the sheer number of businesses that build massively overcomplicated stacks, bought heavily into bandage solutions like AWS lambda, got on dumb tech bandwagons like big data, nosql etc. This is just another one.
I think you can engineer yourself into being leaner. In some businesses AI will help, but we've had over a decade of "we can just add more complexity" and it just does not work.
I’m a Rails guy. People forget that for every unicorn there are ten 9-figure businesses just ticking away on some niche with a VPS, Rails, and like 4-10 devs.
Insofar as I have seen anyone get an actual productivity boost from AI, the process went like this:
We have a person who wants, effectively, a formatted report generated on demand from four sources. The current interface is four different programs, all of which were written by different groups inside the corp, but they also all draw from the same or similar databases. There's a unified login, but each interface has its own permissions.
The company brings in an AI initiative and soon enough drops all security restrictions for the AI's access to the databases. The new formatted report gets generated through the use of a few tens of thousands of tokens each time, and about 5% of the time synthesizes non-existent data.
A competent DBA and application programmer could have spent a week doing the same thing, producing a program which would do the job faster, cheaper (at run-time), secure and in a way which could be extended and debugged.
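The boring version is a sketch like this (schema and names invented): deterministic, cheap at run time, and it respects the caller's own database permissions rather than a god-mode AI service account.

    import sqlite3

    def build_report(db_path: str, customer_id: int) -> list[dict]:
        """Join the four sources into one formatted report, deterministically."""
        conn = sqlite3.connect(db_path)
        conn.row_factory = sqlite3.Row
        rows = conn.execute(
            """
            SELECT o.order_id, o.placed_at, s.status AS shipment_status,
                   b.amount_due, t.status AS ticket_status
            FROM orders o
            JOIN shipments s ON s.order_id = o.order_id
            JOIN billing   b ON b.order_id = o.order_id
            LEFT JOIN tickets t ON t.order_id = o.order_id
            WHERE o.customer_id = ?
            ORDER BY o.placed_at DESC
            """,
            (customer_id,),
        ).fetchall()
        conn.close()
        return [dict(r) for r in rows]  # same shape every run, no invented data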
But DBA and application programmer time is expensive up-front and the execs are gung-ho about the stock-price now that they are hip and trendy.
A recent NYT podcast showcased how China and the US are putting time, effort, and money into using AI. I have to say I liked China's approach of percolating AI into the economy more than the US approach of walled gardens with cloud.
> Every software developer knows that you can’t make projects go faster just by typing faster.
You know, typing fast and accurately is kind of important.
The new speed skill that developers now need is speed reading. LLMs produce copious amounts of output (from tests, documentation, diagnostics). They also produce code so quickly that the skill of zeroing in on weak points is essential.
I understand a Deloitte consultant has specific incentives. But let's first try to answer a baseline question: why do some companies have thousands of software engineers? What do they all do?
And then, a follow-up: what is actually the bottleneck at most companies? What causes "requirements gathering" to take long?
> And then, a follow-up: what is actually the bottleneck at most companies? What causes "requirements gathering" to take long?
Complexity.
In my experience (medium size businesses, i.e. 200 million to 2 billion annual revenue) we're trying to understand how a complex set of systems and business processes and different businesses (external partners) interact and then trying to morph all of that into a shape that now has capability X layered on top or in the middle.
Here's a concrete example: business X, which makes its own products and has retail stores as well as an ecom site, wanted to add the ability to put complementary items built by other companies on the website and have them drop-shipped from the vendors to the consumers. The final solution involved 21 different interfaces between 4 different systems (ecom system, store system, omni-channel system, external drop-ship mgmt system) as well as a new internal system to manage this activity. It takes a significant amount of time to understand and solve for all of the low-level details.
Isn't the answer to both questions straightforward? Real life is complex and has nearly infinite degrees of freedom. This means it's hard to approximate in software. Over time, real life, your understanding of it and your approximation (the software) all change. Keeping the approximation accurate enough that it's useful takes considerable effort since now you need to understand both the real life and the previously existing approximation of it.
What do they do? Give power to their management? "I am responsible for 50 people, I am important." "I managed over 250, I am important, give me money."
Large corporations with orthodox methodologies will take time to extract the best benefits from AI. Small teams, which still remember the original Agile Manifesto, will soar and overtake their competitors.
Speaking about the middle: once I was shown advice from AI that a particular ticket would stall at “frozen middle management” and should be shelved until “coordination” improved. That sounds accurate, but can you imagine what a token-obsessed PM might say?
If the underlying workflow is noisy, ambiguous, or overloaded with coordination overhead, faster generation just produces more low-context output to review and reconcile.
This is so true. Recently, I've been working on a project involving almost every department: Product, Engineering, Compliance, Finance, etc. We kicked things off late last year with many meetings. Product was primarily coordinating between the teams, but engineers also met directly with non-engineering departments to explain technical details and accelerate the timeline.
However, while the engineering team successfully fast-tracked development, UAT, and production testing, largely thanks to AI, other departments only began digging deeper into the project toward the end of April. To be fair, they do use AI in their workflows to some extent, but they haven't adapted their processes to keep pace with engineering's increased productivity.
In my opinion, this lag is mostly because many employees in those departments are older and hesitant to change their routines. While I understand that resistance to change is a natural human trait, what comes to mind is the beautiful German adage "Wer nicht mit der Zeit geht, geht mit der Zeit", which loosely translates to "Who doesn't change with the times is left behind by time".
> ...but that doesn’t mean it’s generating the correct code.
Something I'm observing is that now a lot of the pressure moves to the product team to actually figure out the correct thing to build. Some product teams are simply not used to this and are YOLO-ing prototypes now, iterating, finding out they built and shipped the wrong thing, and then unwinding.
Before, when there was the notion that "building is expensive", product teams would think things through, do user interviews up-front, actually do discovery around the customer + business context + underlying human process being facilitated with software.
This has shortened the cycle to first working prototype, but I'd guess that on a longer scale it extends the time to final product, because more time is wasted shifting the deliverable and experience out from under the user during discovery, versus nailing most of the product experience in big, stable chunks through design.
At the end of the day, there is a hidden cost to fast iterative shifts in the fundamental design of software intended for humans to use and for whose operation humans are responsible. First is the cost to the end users, who have to stop, provide feedback, and then retrain on each cycle. Second is the compounding complexity in the underlying implementation: as product learns requirements and vibe-codes the solution, the system becomes very challenging for humans to operationalize and maintain.
Ultimately, I think the bookends of the software development process are being neglected (as author points out) to the detriment of both the end users and the teams that end up supporting the software. I do wonder if we're entering an "Ikea era" of software where we should just treat everything as disposable artifacts instead.
LLMs are great at two things: search and speed of generating code.
I get the most value from them when I'm either asking them to fill in the blanks of something already half implemented, or when I need some feature in a given context/language that only exists in other languages.
There is another problem. For developers, productivity means "functionality produced per hour of work", but that's not what productivity means for businesses. To them, productivity means "money produced per hour of work", and because AI costs money, it is this number that needs to go up (not quite, as it's more "value" than money, but until the economy adjusts they are similar). Even if we could considerably reduce the time between releases and/or do it with fewer people at scale across the industry, for it to pay off, we'll need to see a corresponding rise in demand for software and/or features.
Another option is that lower software costs would significantly reduce the cost of whatever non-software product the software supports (manufactured good, electricity, services, telecom etc.) but I don't know in which industry the cost of software is a large portion of the overall product cost.
And there's another thing. A company that makes tractors can't produce food without land. A company that makes metal machining equipment can't make cars without the raw materials. But a software company that makes software that automatically makes software could just produce the result software itself rather than sell the software-making software. If AI ever reaches the point it makes software at a marginal cost that's not much higher than the cost of the AI itself, what would be the incentive of selling that AI?
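To put the "money per hour" framing in concrete terms, a toy back-of-the-envelope computation; every number here is invented for illustration:

    # Hypothetical figures, purely to illustrate "money per hour" vs "functionality per hour".
    dev_cost_per_hour = 100.0   # fully loaded cost of one developer hour
    ai_cost_per_hour = 15.0     # token/subscription spend per developer hour
    time_saved = 0.20           # fraction of dev time AI frees up

    value_of_freed_time = dev_cost_per_hour * time_saved   # $20/hour, but only if the
    # freed time produces something the business can sell; otherwise it is pure cost
    print(f"net per developer hour: ${value_of_freed_time - ai_cost_per_hour:+.2f}")

The spend only nets out if the freed hours turn into demand for more software or features, which is exactly the open question.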
Great explanation. It is true that AI doesn't generate the correct program every time, but sadly it has become common practice to involve AI in every aspect of software engineering. It is also true that it has pushed software engineers toward becoming product managers, whose work is now to debug and test the entire codebase, which adds more frustration.
Everything is OK, but the size of the Gantt chart should be expanded.
Our current most popular methods of using AI in software development are either waterfall or autocomplete. We aren't at a great pair-programming experience yet. I presume that would improve speed and accuracy, but it's still unclear.
Another post that doesn’t understand effective use of GenAI in software engineering.
The assumption is that there’s no way to extract speed and accuracy matching business models.
This isn't obviously false to the majority of devs/architects because most are vibe-coding, but it is extremely obvious to the minority that has focused on accuracy first, THEN speed.
This blog post is nonsensical and the arbitrary time boxes aren't realistic. Not all development cycles or features require legal input, and I would hazard most don't, even in Big Tech. Documentation takes seconds to generate. Same as tests.
Feature development can take minutes to hours depending on how you iterate. These days we just think of a feature and add it within an hour using AI. We have a year-old process for fixing bugs that would have taken us hours or days; it spits out a fix in about 10-15 minutes that is 95% accurate. 5% is garbage, but 24 months ago 95% of it was garbage, so the progress is staggering. The longest pole is code review, which is all human, but that will all be automated soon.
Not everything will be much faster, but most processes will be 1-3 orders of magnitude faster. To ignore this or find excuses why LLMs/AI won't speed things up or remove the need for large swathes of humans is delusional and cope-ism.
In my world, automotive/mechanical engineering, we are also observing how much AI can help you build a mental model, fetch unstructured data, and shape your understanding of the system: onboarding new engineers, figuring out what is what in the system. It could have taken hours before to fetch the right info; now we can do it in seconds.
While I agree with the article, I think AI can speed up all steps in the Gantt chart. It's really good about aggregating and summarizing information.
>Process blocked on human inputs
Have AI check chat, email, and the issue tracker to see who it's blocked on and what the latest status is. It may not save a huge amount of time, but it can dig through the info pretty quickly.
>Exploration
Once again, have it scour the issue tracker, chat, customer suggestions, and product documentation, then summarize the history and current status. Much quicker than setting up new meetings to try to rediscover and organize existing info.
Another use case: have an agent build a prototype, hand it to people, and have AI summarize and integrate the feedback.
Claude or ChatGPT + Slack MCP + Jira MCP + Google Docs MCP + internal knowledgebase MCP + gh (GitHub) CLI + Datadog MCP (really, one MCP per process in the Gantt chart) has been a huge boost at work, just digging through context scattered all over the place and summarizing it.
That said, it definitely still needs supervision and hand holding along the way
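As a rough sketch of what that digging looks like in code, here is the gh-CLI half of it; the JSON field names shown are the common ones gh exposes, and summarize_with_llm is a placeholder for whatever model call or MCP plumbing you actually wire in:

    import json
    import subprocess

    def gh_json(args: list[str]) -> list[dict]:
        # `gh ... --json <fields>` prints machine-readable JSON to stdout
        out = subprocess.run(["gh", *args], capture_output=True, text=True, check=True)
        return json.loads(out.stdout)

    def summarize_with_llm(prompt: str) -> str:
        raise NotImplementedError("wire in your model / MCP client of choice here")

    def blocked_report() -> str:
        issues = gh_json(["issue", "list", "--json", "number,title,assignees,updatedAt"])
        prs = gh_json(["pr", "list", "--json", "number,title,reviewRequests,updatedAt"])
        return summarize_with_llm(
            "For each item below, say who it appears to be blocked on and the latest status:\n"
            + json.dumps({"issues": issues, "open_prs": prs}, indent=2))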
So we have spent 40 years trying to get management and investors to understand that 9 people can't make a baby in one month.
There's no point in falling under the illusion that they'll finally get it now. This will all fall on deaf ears. They're convinced they're automating us out of existence when in fact they'll need the services of people who can surf complex systems more than ever.
We will be able to do more than ever, potentially faster. The issue remains that most of the things these people ask us to do, want us to do, and pay us to do remain basically stupid, and as TFA points out, the last mile of getting shit properly shipped isn't going to speed up. It's going to slow down.
If you want to see what happens when you put people in charge who sincerely believe in the "AI automates SWEs out of existence" mantra, take a look at the code quality of Claude Code and the recent "bun rewrite in Rust" fiasco.
I'm very much enjoying how Anthropic is basically an anti-advertisement for how things go when you try to run a company with text generators. The universally-despised customer support, constant outages, hilarious bugs in CC, and now how badly the bun acquisition backfired..
Whilst the conclusion of the article certainly seems plausible, it glosses over the cost calculations and simplifies them too much.
The cost of a subscription is somewhat offset by its being guaranteed income regardless of usage, following the financial model of gyms, whilst API costs represent both the convenience of on-demand pricing and the scale needed for applications with many users.
Further, API and subscription revenue needs to cover the operating costs of the business and the massive SOTA training costs, as well as the costs of inference.
The true cost of serving tokens is buried in all of that in these enormous, opaque companies.
It absolutely will make some things faster. Anyone that has ever churned out some boilerplate code with it knows that.
...but yeah, most organizational processes and people aren't set up to leverage it, and rollout will be slow (same for learning where it does and doesn't work).
I’m not convinced. I’ve been using AI pretty heavily for about 18 months and agents for a little over 6 months.
I’m currently working on a data migration for an enormous dataset. I’m writing the tooling in go, which is a language I used to be very familiar with, but that I hadn’t touched in about 12 years when I started this. It definitely helped me get back into go faster.
But after the initial speed-up, I found myself in the "last 10% takes the other 90% of the time" phase. And it definitely took longer for me to wrap my head around the code than it would have if I'd skipped the AI. I might have some overall speed-up, but if so it's on the order of 10-20%. Nothing revolutionary.
I have been able to vibe code a few little one off tools that have made my life a little easier. And I have vibe coded a few iPad games for my kids for car trips, but for work I still have to understand the code and reading code is still harder than writing it.
This is also not for lack of trying: I spent $1000 last week during a company-wide "AI week", mostly on trying to get AI to replicate my migration tooling, complete with verification agents, testing agents, quality gates, elaborate test harnesses, etc.
I'd let Claude (Opus 4.7, max effort) crank away overnight, only to immediately find that it had added some horrible new bug or managed to convince the verification agent that it wasn't really cheating to pass my quality tests.
What I learned from last week is that we are so far away from not needing to understand the code that everyone who says otherwise is probably full of shit. Other people who I trust who have been running the same experiments have told me the same thing.
Until and unless we get to that point, it’s always going to be a 10-50% speed up (if that).
>if so it’s on the order of 10-20%. Nothing revolutionary.
For many businesses that is revolutionary.
Not sure that's enough magic to make the math work for the trillions being invested, but at ground level within companies even small wins stack up. You may have burned through $1000 without getting much done, but from a company perspective they've probably got an employee with better instincts as to what does or doesn't work.
I think the $1000 was worth spending just as a one time experiment. And there are use cases where LLMs are fantastic. It’s great at debugging because tracking down a bug usually takes much longer than verifying it once it’s pointed out.
Where I have a problem is with the FOMO, panic, and mania that has come down from up top. There are people in my company saying that we should be spending 3x our salaries in tokens.
But if you're in a business where a 20% speed-up is revolutionary, there are so many things that have been on the table for years that you could have been focusing on. I've seen at least 5 advances over the last 20 years with that kind of boost.
That's probably about what you'd get from spending time really learning vim or Emacs.
How does that 10-20% change when the cost of tokens rises to meet post-IPO earnings targets? For example if it increases 2, 5, or 10x, does this 10-20% gain net out? (Rhetorical question)
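Taking the rhetorical question literally, with invented numbers (a 15% speed-up on a $100/hour developer, $5/hour of tokens today):

    value_per_hour = 100.0 * 0.15          # labor value of the speed-up: $15/hour
    for multiple in (1, 2, 5, 10):
        tokens = 5.0 * multiple            # token spend if prices rise by `multiple`
        print(f"{multiple:>2}x tokens -> net ${value_per_hour - tokens:+.2f}/hour")

Under these made-up assumptions the gain breaks even at a 3x price rise and goes negative beyond that; different inputs move the break-even point, but the shape of the question is the same.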
Maybe not for my existing processes, but it can help you enormously.
I literally found a problem by having AI analyze packets in Wireshark; it hinted and steered me in the right direction until I found the faulty setting in the end. Could a senior network guy have found it? Yes, but probably not much faster. Could I, an L2 SWE unfamiliar with much of networking and the company's stack (I'd been at this company about a month), have found it with no AI? Absolutely not.
What a naïve article. People don't write software this way anymore. A Gantt chart? We don't use those anymore.
People have to stop promoting this narrative that AI doesn't make you move faster, as it's not helping anybody.
I get it. We all worked hard for our skills and it's really difficult watching them get automated away, but it's been this way since the printing press, assembly lines, and the Industrial Revolution itself. Things change, and you have to adapt to them and stop thinking about it from a self-centered point of view. The narrative people should be pushing is that you can build great things with AI.
Of course you might not have a job for a while, and yes, that's a big deal, but it doesn't mean that AI is wrong or stupid. It means you have to adapt.
Exactly. The larger the organization, the smaller the percentage of time devs actually spend doing dev work, and the less direct benefit there is from AI-assisted coding tools.
I have a colleague who vibes the shit out of his part, and it results in large commits that take a lot of time to understand, and that makes cooperation practically impossible. LLMs are not team players.
I get much different results than others when using these tools. Turns out there is some skill in wielding them, and in knowing the domain in which you work.
That's a you guys problem. Maybe one or both of you.
Have you thought about pair programming together with the AI?
My LLM outputs are intentional, in my style, and tightly reviewed by myself.
I'm also emitting Rust, which I've found to be the very best language to work with AI in. The AST and language design are focused around control flow and error handling. The borrow checker, sum types, filtering, and mapping make it such that good design is idiomatic.
There's a lot of JavaScript, Python, PHP, and Java in the world. A lot of it isn't great. The architectures and styles are wildly varied too. Rust doesn't have that problem. The training data is really solid and idiomatic.
The bottom line is that AI is genuinely useful at prototyping new features, acting as a sounding board, and generating quick initial drafts, even if the quality isn't uniformly excellent. It seems plausible to conclude that it will take only a little additional effort to refine that initial draft into truly high-quality, production-grade code. In reality, whole processes that build properly on AI-generated outputs and mitigate thoroughly against the fundamental limitations and constraints of AI agents (many of which are not well understood even by daily users) still need to be invented and implemented.
I think many things that were true prior to AI are still true, or more so, today, but new workflows and processes altogether are needed. I suspect that comprehensive, detailed planning and specification documentation must be assembled before coding begins (akin to waterfall) when working with AI agents. Furthermore, I still believe customers and other key stakeholders need to be involved early and often so that the product can iterate toward a better ultimate end state (i.e., agile). Unlike before AI, it's completely plausible to implement both types of approaches; they aren't mutually exclusive. We can do comprehensive, exhaustive, thorough planning and specification documentation before handing off to dedicated engineering and product teams, AND we can work quickly and iteratively via sprints that aim for frequent meetings and updates with the stakeholders that matter.
I also think the same validation gates that mattered before -- linting, SASTs, but most importantly, comprehensive automated testing that gets run locally and in CI/CD and is regularly expanded to cover all expectations about the behavior and structure of newly-implemented functionality -- continue to matter now, more than ever.
New tools and processes also must be built to make human review, the single biggest bottleneck in software development today, simpler, more streamlined, and less taxing. I think tools like CodeRabbit and Qodo can help automate and expedite the code-review and approval processes, but they would be even better if they were working off more surgical, tiny edits. Bloated, verbose AI-generated code edits are the core problem here. Process-management techniques to mitigate AI code overload can prohibit the submission of AI-generated PRs, require senior-engineer approval of any PR prior to merging, or cap the number of lines or changes made. More sophisticated processes like Graphite's stacking of PRs are genuinely helpful in breaking down massive PRs into smaller chunks.
Finally, precision-editing tools for AI coding assistants like HIC Mouse (full disclosure: my project) move beyond the options agents have today, whole-file replacement or exact string replacement, and let agents perform surgical, tiny edits at the editing-tool layer that don't touch any unrelated content. By giving agents specialized visibility, recovery, and next-step guidance mechanisms that safeguard AI workflows, such tools can materially reduce AI code slop by alleviating burdens upstream of code reviewers, both automated and human.
The bottom line: shipping secure, production-grade code was never easy and always took a long time. It's not necessarily easier now just because certain aspects of the overall process can be generated much more rapidly. Arguably, the hardest parts, like human review and approval, are much harder now, not easier. Solutions will take hard work and must be tested in the crucible of real-world enterprise usage. I am guessing that companies that deploy successful processes will be wildly profitable. Those that don't, including well-established incumbents, will fail. I do think AI absolutely can give organizations a game-changing boost in development velocity of genuinely high-quality code that might even be better than anything created previously. I also fully agree with the author that for many organizations, AI will not make their processes go faster and may even slow things down.
I just spent a few days cleaning up someone's web app they created with Claude Code. There were more than 30k lines of DEAD code, and I was able to cut the code that was actually being used down by ~30-40%. If I had just written this app myself, it would have taken a day or two.
LLMs are not helpful; they make everything worse. They make you worse, or reduce you to average at best. I really just don't see what y'all are seeing. I have access to every model with no limits. It's not an issue of "holding it correctly", I can assure you, I've tried.
Yes, it can create very small programs with low complexity, but anything of any size ends up as a literal Eldritch horror, or with so many subtle bugs that it makes life miserable. I actually hate all of you who are pushing it onto people; it's such a lie.
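For what it's worth, dead code at that scale is at least cheap to find mechanically. A deliberately naive sketch (it ignores dynamic dispatch, re-exports, getattr tricks, and so on) that flags Python functions never referenced anywhere else in a tree:

    import ast
    import sys
    from pathlib import Path

    def unreferenced_functions(root: str) -> set[str]:
        defined, used = set(), set()
        for path in Path(root).rglob("*.py"):
            tree = ast.parse(path.read_text())
            for node in ast.walk(tree):
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    defined.add(node.name)     # every function/method definition
                elif isinstance(node, ast.Name):
                    used.add(node.id)          # bare references and calls
                elif isinstance(node, ast.Attribute):
                    used.add(node.attr)        # method calls and attribute access
        return defined - used                  # defined but never referenced

    if __name__ == "__main__":
        for name in sorted(unreferenced_functions(sys.argv[1])):
            print(name)

Anything it prints is a candidate for deletion, to be confirmed by a human before the axe falls.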
Yeah, totally agree. In my experience I've found that people think AI is making them more productive, but mostly it just seems to amplify their existing failure modes. They don't realize they are wrong, because they never had the skill in the first place.
So, for example, if someone is poor at architecture and they ask for AI's help to design a new feature, they won't know when to push back on the AI design, so the design will be overly complex and won't solve the problem optimally.
If they are a poor debugger and ask for the AI's help, they will not know when it has made a false assumption about the root cause, misinterpreted data, and come to a faulty conclusion.
If they are poor at writing optimized code and ask AI to write some, they won't push back when the code is literally 10x the size it needs to be to solve the exact same problem.
This one non-technical PM guy at work used Codex to develop a project I was expecting would fall on my plate. He asked me to do a code review on it. What it produced was riddled with SQL injection vulns and the UI was complete garbage.
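For readers who haven't had to review one of these: the generated code typically interpolates user input straight into the query string, and the fix is a parameterized query. An illustrative pair (not the actual code from that review):

    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, name: str):
        # What generated code tends to do: string interpolation into SQL.
        # name = "x' OR '1'='1" returns every row; worse payloads modify data.
        return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

    def find_user_safe(conn: sqlite3.Connection, name: str):
        # Parameterized query: the driver keeps the input as data, never as SQL.
        return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()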
Off of that example, the key stakeholders on my project are demanding I start vibe-coding everything. I raised the security flag and now they are saying, "well, now we have a prototype and real development can continue," but it's clearly just to mollify me and make me shut up, because no such development effort on that other project has been planned, scheduled, budgeted, etc. They are kind of just sitting around on it, hoping they can get everyone distracted long enough to sneak it out the way it is.
"But he did it in a week!" Yeah, it would have taken me only a week to make whatever of value actually was in that project. The reason our software projects at our company take longer than a week is not because of code, it's because we have an IT department that blocks production deployment of everything unless you literally get the president of the company to make them do it. That's not a repeatable process that every project can leverage.
There was another project that a more-technical-but-not-a-developer guy (he knows how to use MS Access) did in Claude Code where, yes, Claude could read a bunch of PDFs he got from the client, get the salient details out, make an Access database out of them, and make a static HTML website to make those documents easier to search and navigate. But again, the UI was complete, unadulterated garbage. And, best of all, he spent several weeks just getting Claude to reliably process the entire set of documents. He never could quite get it to do the entire process end-to-end. It kept missing documents and reprocessing the same ones over and over again. A for-loop to iterate over a directory of files would have taken 2 minutes to code by hand, and he got stuck on it for over a month.
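For comparison, the hand-written version of that loop, with a log of processed files so nothing is missed or redone; the directory, file names, and extract_and_store are hypothetical stand-ins:

    from pathlib import Path

    def extract_and_store(pdf: Path) -> None:
        ...  # placeholder for "process one document" (the part Claude handled fine)

    processed_log = Path("processed.txt")   # hypothetical bookkeeping file
    done = set(processed_log.read_text().splitlines()) if processed_log.exists() else set()

    for pdf in sorted(Path("documents").glob("*.pdf")):
        if pdf.name in done:
            continue                        # never reprocess a document
        extract_and_store(pdf)
        with processed_log.open("a") as log:
            print(pdf.name, file=log)       # recorded only after success, so none are missed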
AI will speed us up, my ass.
Look, if AI means I never have to open another PowerPoint from a client to read a "quad chart" on one particular slide to get the data I need to do my project because my client doesn't understand that PowerPoint is not a data transmission format, fine. I'll be happy with just that: AI vision as a library I can call out to from my code, just like we've been trying to do with OCR but traditional OCR sucks at the job. But there's a bigger drumbeat than that and it ends in dilettantism and laying off the junior analyst and developer staff. I will be no party to that.
I'm not even a front-end guy, and I have little experience with UI/UX, but it's wild how easily decision-makers are impressed with UI spit out by an LLM. This era of anybody being able to make a dashboard with Claude Code has made me really appreciate the amount of sweat that designers and devs put into a good user experience.
I agree with it being great for OCR; the biggest impact LLMs have had for me is structured outputs I can call from a function: "extract X value from this ambiguously structured document and return JSON that my code can deserialize into Y type." However, how many people are doing something similar and spinning up $500k in GPUs just to avoid writing a regex?
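The pattern is simple enough to sketch. call_llm here is a placeholder for whichever provider's API you use, and the document type and field names are invented for illustration:

    import json
    from dataclasses import dataclass

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("placeholder for whichever provider's API you use")

    @dataclass
    class Invoice:                  # stands in for the "Y type" the caller expects
        vendor: str
        total: float
        currency: str

    PROMPT = """Extract vendor, total, and currency from the document below.
    Reply with only a JSON object with keys: vendor, total, currency.

    {document}"""

    def extract_invoice(document: str) -> Invoice:
        raw = call_llm(PROMPT.format(document=document))
        data = json.loads(raw)      # fails loudly if the model didn't return JSON
        return Invoice(vendor=str(data["vendor"]),
                       total=float(data["total"]),
                       currency=str(data["currency"]))

The validation at the boundary is the point: the model's output never touches the rest of the program except as a typed value.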
They are so bad at UI. I'm not a traditionally trained UI dev, but in the past I've been the solo dev on so many projects that I had to git gud on making at least a clean, functional UI. No crazy animations and "delightful experiences," but I'm skeptical that level of design is necessary for anything outside of consumer apps designed for kids and the child-like. My default, slapped-together UIs for just getting out of the way and getting forward movement are still infinitely better than the inconsistent, buggy, overly decorated UIs the LLMs produce.
> This exact thing is what software developers have been begging for since the beginning of the profession: Receiving a detailed outline of the problem and what the end result should look like.
> This is often the part that slows down software development. Trying to figure out what a vague, title only, feature request actually means.
But that is exactly what Software Engineering is! It's 2026 and the notion that you can get detailed enough requirements and specifications that you can one-shot a perfect solution needs to die.
In my experience AI has made us able to iterate on features or ideas much faster. Now most of the friction comes from alignment and coordination with other teams. My take is that to accelerate processes we should reduce coordination overhead and empower individuals and teams to make decisions and execute on them.
> It's 2026 and the notion that you can get detailed enough requirements and specifications that you can one-shot a perfect solution needs to die.
It's 2026 and the idea that even with detailed-enough requirements you can one-shot even a workable (let alone perfect) solution also needs to die. Anthropic failed to build even something as simple as a workable C compiler, not only with a perfect spec (and reference implementations, both of which the model trained on) but even with thousands of tests painstakingly written over many person-years. Today's models are not yet capable enough to build non-trivial production software without close and careful human supervision, even with perfect specs and perfect tests. Without a perfect spec and a perfect human-written test suite the task is even harder. Maybe in 2027.
Sorry where are we seeing that it failed? It compiled multiple projects successfully albeit less optimized.
" It lacks the 16-bit x86 compiler that is necessary to boot Linux out of real mode. For this, it calls out to GCC (the x86_32 and x86_64 compilers are its own).
It does not have its own assembler and linker; these are the very last bits that Claude started automating and are still somewhat buggy. The demo video was produced with a GCC assembler and linker.
The compiler successfully builds many projects, but not all. It's not yet a drop-in replacement for a real compiler. The generated code is not very efficient. Even with all optimizations enabled, it outputs less efficient code than GCC with all optimizations disabled.
The Rust code quality is reasonable, but is nowhere near the quality of what an expert Rust programmer might produce. "
For faffing about with a multi agent system that seems like a pretty successful experiment to me.
Source: https://www.anthropic.com/engineering/building-c-compiler
Edit: Like I think people don't realize not even 7 months ago it wasn't writing this at all.
> where are we seeing that it failed?
Anthropic said the experiment failed to produce a workable C compiler:
- I tried (hard!) to fix several of the above limitations but wasn’t fully successful. New features and bugfixes frequently broke existing functionality.
- The compiler successfully builds many projects, but not all. It's not yet a drop-in replacement for a real compiler.
(source: https://www.anthropic.com/engineering/building-c-compiler)
Software that cannot be evolved is dead software. That in some PR communications they misrepresented their own engineer's report is beside the point.
> It compiled multiple projects successfully albeit less optimized.
150,000x slower (https://github.com/harshavmb/compare-claude-compiler) is not "less optimised". It's unworkable.
> Like I think people don't realize not even 7 months ago it wasn't writing this at all.
There's no doubt that producing a C compiler that isn't workable and is effectively bricked as it cannot be evolved but still compiles some programs is great progress, but it's still a long way off of autonomously building production software. Can today's LLMs do amazing things and offer tremendous help in software development? Absolutely. Can they write production software without careful and close human supervision? Not yet. That's not disparagement, just an observation of where we are today.
> Can they write production software without careful and close human supervision? Not yet. That's not disparagement, just an observation of where we are today.
I never claimed they could! I just view this as a successful experiment. I don't think Anthropic was making that claim with their experiment either.
It feels reflexive to the moment to argue against that claim, but I tend to operate with a bit more nuance than "all good" or "all bad".
The experiment failed to produce a workable C compiler despite (1) the job not being particularly hard, and (2) the available specs and tests being of a completely higher class of quality than almost any software has, not to mention the availability of other implementations that the model trained on.
You can call that a success (it did something impressive even though it failed to produce a workable C compiler), but my point in bringing this up was to show that today's models are not yet able to produce production software without close supervision, even when uncharacteristically good specs and hand-written tests exist.
Saying the model failed to write a competitive C compiler makes more sense.
I don't think they tried to do that though.
> today's models are not yet able to produce production software without close supervision, even when uncharacteristically good specs and hand-written tests exist.
That's a good point anyway
That's great and all, but that's not the point I was making, and you're engaging rather uncharitably with it. When you view it from the perspective of capability increase, it's rather impressive. Note the slope of progress, which is what this experiment was meant to show.
Edit: Maybe uncharitably is too strong, but we're talking past each other.
Yeah I think people are really underestimating what LLMs can do even without specs.
As an example, I made an exploratory attempt to add custom software on top of some genuinely awful Windows software for a scientific imaging station with a proprietary industrial camera. Five days later, Claude and I had figured out from USB pcaps how to sample images, and it's operationalized and has been running smoothly for months now. 100% of the code was written by Claude, and it's all clean (I reviewed it myself); pretty much all I did was unstick it in a few places ("hey, based on the file sizes it looks like the images are being sent as a 16-bit format").
For day to day work, I'll often identify a bug, "hey, when I shift click on this graphical component, it's not doing the right thing". I go tell Claude to write a RED (failing) integration test, then make it pass.
Zero lines of code manually written. Only occasionally do I have to intervene and rearchitect. Usually this involves me writing about ten lines of scaffold code, explaining the architectural concept, and telling it to just go.
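That red/green loop in concrete terms, with an invented toy widget standing in for the real component: the human writes (or dictates) the failing assertion first, then the agent edits the handler until it passes.

    class GraphWidget:
        """Toy stand-in for the real graphical component under test."""
        def __init__(self):
            self.selection: set[str] = set()

        def click(self, node: str, shift: bool = False):
            if shift:
                self.selection.add(node)      # the eventual fix: extend the selection
            else:
                self.selection = {node}       # a plain click replaces it

    def test_shift_click_adds_to_selection():
        w = GraphWidget()
        w.click("a")
        w.click("b", shift=True)
        # Written RED first (shift-click used to replace the selection), then made green.
        assert w.selection == {"a", "b"}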
People both underestimate and overestimate what LLMs can do. LLMs have shown very different results when autonomously writing a small program for personal use and autonomously writing production software that needs to be evolved for years.
Why are you quoting from their marketing blog as if it's a reliable source?
https://github.com/anthropics/claudes-c-compiler/issues/1
> Apparently compiling hello world exactly as the README says to is an unfair expectation of the software.
GCC has only like a billion man hours in it?
Assembler and linker are not part of a compiler. They are separate tools. They are also generally much simpler.
I wonder how knowledgeable in compilation the engineer who attempted this was. I'm pretty confident that I could produce a decent C compiler in a few weeks (or less) if given Opus 4.7 + unlimited tokens + a good test suite. (And this is not blind, unsubstantiated belief in AI: I've recently rewritten a somewhat sophisticated interpreter in a week with AI, and I have worked on several C++ compilers in the past, including a GCC port to a custom DSP, so I have a bit of an idea of what this would take.)
But yeah, this is not a "one shot" project, none of it is. One shot doesn't work even with humans - after all, this is exactly what killed waterfall as a methodology.
> I'm pretty confident that I could produce a decent C compiler in a few weeks (or less), if given Opus 4.7 + unlimited tokens + a good test suite.
Of course. The point is that a full, detailed spec isn't enough (even in the rare situations it does exist, like for a C compiler). At least for the moment, you need expert humans to supervise and direct the agents.
Vibe coders usually also let the agents write the tests, which mean that the only independent human validation of the software is some cursory manual inspection. That also obviously isn't enough to validate software.
> One shot doesn't work even with humans - after all, this is exactly what killed waterfall as a methodology.
You can one-shot a C compiler with humans. LLMs' software development ability is impressive and helpful, but it is not human-level yet, even if at some tasks the agents are better than most human programmers. And while many waterfall projects failed, many succeeded (although perhaps not as efficiently as they could have). So far I don't believe agents have been able to produce any non-trivial production software autonomously.
Yeah, the key part is that there be a human in the loop, directing and course-correcting the AI while it produces code in reasonably small and well-defined stages.
Most software is much simpler than a C compiler.
A workable C compiler is a ~10-50KLOC program, and a fairly simple one at that (batch, with no concurrency or interaction). That Anthropic's swarm of agents wrote 100KLOC before failing is a symptom of the problem. It's certainly possible that many programs are in the sub 5KLOC range, but it's definitely not "most software". Plus, almost no software has this level of detailed spec, ready-made tests, and a selection of existing implementations of the same spec.
My first thought when reading Anthropic's description of the experiment was that it is unrealistically easy. It's hard to come up with realistic jobs in the 10-50KLOC range that would be this easy for an LLM. That it failed only shows how much further we still have to go.
A bit off topic, but see how Anthropic publicity stunts went from "Claude C Compiler" with 100K LOC to the recent Bun Rust rewrite with 1M LOC (10x!) in just 3 months.
I get that it's "novel" creation vs porting, but given that they reported that the C compiler cost them $20k in API costs, the Bun rewrite must be at least $200k, maybe even closer to a million. Pure madness.
Asking an LLM to change the programming language of an implementation is completely different from asking it to code from a spec; it's orders of magnitude simpler in practice. I converted some 60kloc of Java to C++ and it works. There were some issues where the Java implementation used runtime reflection, because that needs creative workarounds, and not all of the C++ translations worked on the first try. And that was my first serious attempt at such a task with an LLM; I could likely do better now. An important simplification here is that a well-designed codebase can be converted in small pieces and then joined back together, so the total amount of code converted becomes an irrelevant metric.
Yes, the task is very different, but also it will be months to a year until we know the results of the bun experiment.
I don't know how it could fail - Bun loses popularity among devs? Is it an objective metric? From what I understand, Node.js remains dominant across the industry as a whole, with Deno and Bun mostly used by startups.
Anthropic can always fire the Opus/Mythos token machine gun at any problem (bugs, features, security) to ensure PR success, and there are plenty of AI-sphere startups already drinking the Kool-Aid that would count the whole vibe-coding thing in Bun's favor.
> Anthropic can always fire the Opus/Mythos token machine gun on any problem (bugs, features, security) to ensure PR success,
Can they, though? They tried and failed to do it in their C compiler experiment. The experimenter wrote: "I tried (hard!) to fix several of the above limitations but wasn’t fully successful. New features and bugfixes frequently broke existing functionality."
It could fail due to maintenance burden. There is a lot of code now that no one wrote.
Are we assuming that all tests passing == software done?
Does Firefox not have tests? Then how were over 200 CVEs found?
Are we going to be comfortable running a piece of software that has 1M lines and who knows how many zero-days in it?
Yes, sure, they are going to use LLMs to find the CVEs, and so will the hackers. You need a day or two to fix a security issue; a hacker just needs to put it to use.
And good luck debugging a million-line code base.
1M LOC == already failed.
The compiler that Claude made went way beyond workable. It could compile the full Linux kernel AFAIK. That goes well beyond even standard C.
People who independently tried to use it reported that it is very much not workable:
- "CCC compiled every single C source file in the Linux 6.9 kernel without a single compiler error (0 errors, 96 warnings). This is genuinely impressive for a compiler built entirely by an AI. However, the build failed at the linker stage with ~40,784 undefined reference errors."(https://github.com/harshavmb/compare-claude-compiler)
- Overall it’s an interesting experiment, and shows the current bleeding edge of Claude’s Opus 4.6 model. However the resulting product is also a clear example of the throwaway nature of projects generated almost entirely by AI code agents with little human oversight. The prototype is really impressive, but there is no real path forward for it to be further developed. It can build the Linux kernel [for RISC-V], which is impressive. It can also build other things… if you are lucky, but you really cannot rely on it to work. (https://voxelmanip.se/2026/02/06/trying-out-claudes-c-compil...)
Anthropic themselves said that the codebase was effectively bricked and that their agents could not salvage it.
Well then, as you say, a 10-50KLOC C compiler is workable. Could you show me a C compiler of that size that does manage to compile a modern Linux kernel?
Not really.
I can make a C compiler in a couple of weeks just by looking up open source libraries and copying them.
I can't make any software that people will pay me money to use without months/years of development, research, experimentation, and iteration.
Just because the original people who invented compilers had to be geniuses doesn't mean anyone has to spend much time or thought copying that work now.
I built a compiler for a simpler language as part of my compilers course in a CS degree. It was a non-trivial exercise well beyond the majority of software applications. What open source libraries did you have in mind and what are you copying?
If you can truly write a C compiler in weeks then kudos to you. How many compilers have you written so far for how many languages?
I work for big tech and I would say a large % of developers are incapable of producing a working C compiler on any reasonable time scale, certainly not weeks, even with looking at open source. I'm sure they can download one and run it. Most developers today don't even know C or assembler. They don't know how to approach the C language spec. The top 5-10% of developers/engineers can do it but even for them it's non-trivial.
> It was a non-trivial exercise well beyond the majority of software applications
That depends on how you count. By number of programs that may well be right, but that's not what matters in terms of impact on the industry, as software value roughly corresponds to the number of people working on a particular piece of software (or lines of code, if you wish). By number of people/LOC most software is not in the "simpler than a C compiler" category.
I don't agree.
I regularly get pieces of work some product guy has thought up in an afternoon. They only care about the happy path, and sometimes only part of the happy path. I work for a global company that has to abide by the rules and regulations of each country we operate in. The product guy thinks up some feature, we implement the feature, then we're told "actually, we legally aren't allowed to do this in 90% of the markets we operate in". Cool, so we add the ability to disable it in those markets. Then they come back: "We can do this in some of those markets if it's implemented with [regulatory bureaucracy], so can you do that please?"
Then we have to hack away at the solution because the deadline is right around the corner.
This is not software engineering! None of this is related to the software. The job of a software engineer is to take a list of requirements and figure out the way we accomplish those requirements. Requirements gathering is NOT a software engineering problem. Software is implementation, product is behaviour. That's the split. The behaviour of the thing we're building needs to be known before we even try to seriously build it.
If someone had just held back for a week and done their due diligence, we would have been able to architect a solution that is scalable, extensible, easy to maintain, and makes the future easier.
> Requirements gathering is NOT a software engineering problem. Software is implementation, product is behaviour. That's the split.
That's a theory but I've never seen this work in practice. A piece of software is unique. If it weren't, we'd just use the cp command.
What usually happens is you get a set of requirements that looks simple. Then you start thinking about a design and see 10 different possibilities, each corresponding to a slightly different interpretation of the requirements. You iterate a few times, reviewing the designs with whoever set the requirements and a few peers, and see more possible variations of the requirements. You need to double-check their parent requirements up to the master requirements. Then you need to make time/feature/quality tradeoffs, affecting the fulfillment of requirements.
Once you start implementing, you see dependencies on other software (framework, SDK, drivers, language features, ...) and understand that the other software is not what you thought, or has bugs. Or you see an issue with performance, or see that one particular feature becomes unfeasible.
That's where all the complexity goes. AI doesn't change that, but can make prototyping iterations and bug hunting faster, as long as someone holds it on a leash and understands its decisions.
I completely agree. It's more than 40 years since I wrote my first program, and I've never seen software that was first specified and then written and all was good.
The most difficult part of any non-trivial engineering is understanding the problem, and the first versions of a piece of software are how you reach that understanding.
That's why I do not think that AI-powered "software factories" will ever work. It's waterfall development all over again. An architect writing UML diagrams and handing them off to the team of programmers to do the essentially mundane task of implementing... the wrong thing.
AI is, however, very good at helping you go fast from the wrong first version to the less wrong second one. But you need to remember that your main task is to understand the problem that you are trying to solve.
Trying to figure out the best way to solve vague requirements is why I got into engineering.
If I got detailed specs, I’d just be a coding robot. I push that work off onto juniors.
If they can't at least imagine the golden path themselves and write it down, they shouldn't be in charge of the product, because they will be unlikely to understand any other in-depth conversations about it. And I have no idea how they'd be having coherent conversations with anyone above them either. They're also unlikely to use AI well or to identify bad-out-of-the-gate solutions. It is of course different if they're just gathering opinions or want a PoC or exploratory work done, but those aren't requirements to me.
Developers are unlikely to be doing only development these days. There's ops and support to do as well, so more back and forth means less time for those things and for development.
We need to meet in the middle on requirements; otherwise developers will end up doing someone else's job for them.
I'm seeing decision-makers / people who write requirements starting to use AI as well in my day-to-day. As before, my job is to read, understand, and test those requirements against the real world as I understand it. But the same goes for code. Software engineering for the past (at least) 20 years has had a core focus of "don't trust anyone"; this hasn't changed, and it still takes a lot of time and effort.
The problem is that instead of trying to figure out what they really want/need, now we're trying to figure out what they really wanted or needed before it got obfuscated by the babble-machine.
Yeah I agree, such a fundamental aspect of software engineering is translating ambiguous “asks” into specific requirements. We now have a tool to convert those requirements directly into code.
And yes, architecture and how to actually implement the designs are also part of the requirements.
The code is just the implementation, the actual problem that needs solving is one abstraction level higher.
This is true, but funny thing is: it was also true before AI.
It's UML and outsourcing all over again: if only we could write the perfect UML diagrams representing the ideal class hierarchy, we could just put them in an email, send it to India, and we'd get back exactly the program we wanted, no mistakes!
> Trying to figure out what a vague, title only, feature request actually means.
> My take is that to accelerate processes we should reduce coordination overhead and empower individuals and teams to make decisions and execute on them.
This is funny because it's exactly what the agile/scrum training taught me 20 years ago.
I think when LLMs first came out, people thought they could just say something like "Make a Facebook clone". But now we're realizing we need to be more exact with our requirements and define things better. That has always been the bottleneck in software.
When I was working, we used to get requirements that literally said things like "Get data and give it to the user". No definition of what the data is, where it's stored, or in what format to return it. We would then spend a significant amount of time with the product person trying to figure out what they really wanted.
In order to get good results with LLMs we need to do something similar. Vague requirements get vague results.
In what I've seen, tickets are much richer in detail now because PMs are using AI (connected to the codebase itself, like Claude Code or Codex) to fill out a template covering what the problem is and why (e.g., X field exists in the backend but not the frontend), how and where to get any data (query the backend), and what the acceptance criteria are (the frontend should expose the field, and "submit" should push the field's data to the backend, where it should show up in the database), which is something they would not have done before, due I guess to laziness and thinking the devs can figure it out. Then devs can copy-paste this Jira ticket content into the LLM agent of choice (or even use the Atlassian MCP to have the LLM read it automatically).
This has significantly helped devs and made sure that requirements are very clear.
Honestly, with that first step the PMs are already halfway to implementing the feature, so I wonder if in the future they'll just do everything themselves and a few devs will be around as SDETs rather than full-blown implementers.
I can't imagine SWEs will be reduced to SDETs anymore than attorneys will be reduced to spell-checkers on AI powered case briefs.
I am a very AI-forward person, but hallucinations are becoming more pernicious than ever even as they get less frequent, especially if the code actually works. A human absolutely has to guide these processes at a macro level for sustainability for SaaS as it evolves with business needs.
Maybe for one and done systems with no maintenance/no updates/no security patches you can reduce humans to SDETs, but systems like that are more the exception than the norm.
I've noticed, even more than the "hallucinations", that the code is generally just quite bad.
At least with concurrent and distributed systems stuff (which is really all I know nowadays), it is great at getting a prototype, but the code is generally mediocre-at-best and pretty sub-optimal. I don't know if it's because it is trained on a lot of mediocre and/or buggy code but for concurrency-heavy stuff I've been having to rewrite a lot of it myself.
I think that AI is great for getting a rough POC, and admittedly often a rough POC is good enough for a project (and a lot of projects never get beyond a rough POC), but I think software engineers will be needed for stuff that needs to be more polished.
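To make "mediocre concurrency code" concrete, here is an illustrative composite (not code from any one session) of the classic unguarded read-modify-write that agents often emit, next to the fix:

    import threading

    class Counter:
        def __init__(self) -> None:
            self.value = 0
            self._lock = threading.Lock()

        def increment_racy(self) -> None:
            self.value += 1        # read-modify-write with no lock: updates can be lost

        def increment_safe(self) -> None:
            with self._lock:       # the one-line guard the prototype usually lacks
                self.value += 1

    c = Counter()
    threads = [threading.Thread(target=lambda: [c.increment_racy() for _ in range(100_000)])
               for _ in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(c.value)  # can come up short of 800_000 with the racy version

The racy version passes a casual run, which is exactly why it survives review when nobody reads it closely.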
By SDET I mean one who reviews code rather than writes it; maybe we have different definitions of that term, because you also mention humans being needed to guide the processes.
Even still, other professions interact with the real social world, which is not necessarily the case with programming. A lawyer will always be needed because judgments are, and must be, made by humans only. Software, on the other hand, can be built and tested in its own loop, especially now with human-readable specifications. For example, I wanted to build an app and told Claude, and it planned out the features, which I reviewed and accepted; then it built the app, wrote tests, and used MCPs, including the browser, to interact with the UI and take screenshots of it, finding any bugs and regressions, and so on, until an hour later it came back with the full app. Such a loop is not possible in other professions.
This afternoon I was speaking with a friend and mentioned that I need to find a lawyer for contracts. His immediate response was, "you don't need a lawyer, just use AI". Not an avenue I'm interested in going down.
IMO the code-generation for boilerplate and the improvement of copypasta quality are much bigger improvements than that.
PMs turning their brains off and letting the LLM extrapolate from a quick and dirty bashing of text into a template (or PMs throwing customer feedback at a Slack bot to generate a Jira ticket from it) can be better than PMs doing nothing but passing ill-defined reqs directly into the ticket, but that's a low bar. And it doesn't by itself solve the problem of the details generated for this ticket subtly conflicting with the details that were generated for (and implemented in) a different ticket 8 months ago.
> Honestly, with the first step, it seems the PMs are already halfway there to implementation of the feature so I wonder if in the future they'll just do everything themselves
I'm guessing they've tried (or been induced to try by upper management), but given up because they don't know how to debug any problems that arise due to the LLM working itself into a corner.
Coding-agent LLMs act a lot like junior devs. And junior devs are: eager to write code before gathering requirements; often reaching for dumb brute-force solutions that require more work from them and are more error-prone, rather than embracing laziness/automation; getting confused and then "spinning their wheels" trying things that clearly won't work instead of asking for help; not recognizing when they've created an X-Y problem, and have then solved for their Y but not actually solved for the original problem X; etc.
The way you compensate for those inexperience-driven flaws in junior devs' approach, is to have them paired with, or fast-iteration-code-reviewed by, senior devs.
Insofar as a PM has development experience, it's usually only to the level of being a "junior dev" themselves. But to compensate for LLMs-as-junior-devs, they really need senior-dev levels of experience.
The good PMs know all of this, and so they're generally wary to take responsibility for driving the actual coding-agent development process on all but the most trivial change requests. A large part of a PM's job is understanding task assignment / delegation based on comparative advantage; and from their perspective, it's obvious that wielding LLMs in solution-space (as opposed to problem-space, as they do) is something still best left to the engineers trained to navigate solution-space.
lol
Just lol. Is this what you guys mean by productivity boost?
Comical. LLMs aren't all that great; it's more that most orgs are horribly inefficient. Like, it's amazing how bad they are.
That's why Elon succeeded with SpaceX: he saw how horribly inefficient the industry was, used that thinking to take a gamble, and it paid off.
> most orgs are horribly inefficient
Considering that that’s been a running complaint for like 50 years, it doesn’t seem like project management is going to get better on its own at this point. So, yes, an LLM does represent a productivity boost in that area.
The problem is that organizations are inefficient in such a way that extra output from white collar workers doesn't translate to improved org-wide performance in a positively correlated, linear fashion.
When the org is misaligned, mismanaged, has poor customer feedback loops, bad product-market fit, too much bureaucracy, etc., no amount of AI slop is going to make a meaningful impact on its bottom line. In fact, it will likely do the opposite, through a combination of exponentially increasing complexity, workforce deskilling, layoffs, and rising token prices. The real bottleneck is, and always has been, communication and alignment.
It might make the employees _happier_ in the interim though, which, I believe, is what we're predominantly seeing during this AI mania. People fed up with the bullshit jobs of rewriting the same service for the 5th time in 2 years or creating TPS reports weekly just for their manager to throw them directly in the trash are absolutely giddy that they no longer have to do this manually. I think we need to question the economic value of these jobs in the first place, though.
I've worked at big tech prior to LLMs becoming a thing, and consistently saw projects of 20-50 people carried by 2-3 individuals that actually understood what needed to be done. I don't think this ratio will be any better with genAI, and I also don't think that tokenmaxxing has any meaningful correlation with impact. Bullshit jobs (and questionable personal projects) just get done faster now. Yay, I guess.
Correct; most people should be fired.
In the long run these highly inefficient firms are going to get destroyed by people who have a vision and can do what 100+ firms are doing and package it together as one solution that is far superior on dimensions that matter to firms.
You're probably right but that sounds like it's still a win to me.
The idea that PM tickets are now much improved because they paste their unbaked, wrong "idea of what the ticket is" into ChatGPT to expand into a 500-word behemoth is hilarious.
At least when the PM still wrote it you could outright tell it was bullshit and made no sense. Now that is just obfuscated.
Not sure what your point is; LLMs don't have to be all that great to still show a productivity boost, and if the organization is inefficient, even more so.
Tbh your points demonstrate you’re not that much of a deep thinker.
Two hour old account just to make comments like this, it will get flagged. Next time use your main account.
If you do that, someone still needs to make sure the details make sense, and from experience, sometimes they will and sometimes they won't. When I open tickets using automation I often back into the ticket from a running implementation that passes tests, so the description is at least internally consistent, but there are often still issues that need correcting.
That's what a good PM and developer pair should be doing, it's just that it's a lot faster for both of them now to review and work in tandem to get the feature done, because the bottleneck is the code generation.
> Then devs can copy paste this Jira ticket content into the LLM agent of choice
Super glad to have gotten out when I did...
Except... no one validates the generated tickets, and they're full of inaccuracies.
And then someone copy pastes it into Claude and now those inaccuracies become part of the code and tests.
The PMs validate it, why do you think they don't read over it to make sure it fits what they want? You might say "well they're lazy, look why they didn't write enough detail to start off with" but for lots of people, reviewing something to make sure it's close to what they want and then tweaking it is much easier than writing it from scratch.
It's the equivalent of writer's block, and is why a common piece of advice given to writers is to put anything they can onto the page, then edit it later.
> The PMs validate it, why do you think they don't read over it to make sure it fits what they want?
The PM has historically often not had a detailed enough mental model of the implementation to spot the hard parts in advance or a detailed enough mental model of the customer desires to know if it's gonna be the right thing or not.
Those are the things that killed waterfall.
You can use LLM tools to help you improve both those areas. Synthesizing large amounts of text and looking for inconsistencies.
But the 80th-percentile-or-lower person who was already not working hard to try to get ahead of those things still isn't going to work any harder than the next person and so won't gain much of a real edge.
I'm glad you mentioned it, and TFA briefly mentioned waterfall. The second graph shown in the article, with documentation overlapping the dev cycle, is like the worst of both agile and waterfall. It's supposedly real-time waterfall.
Normally waterfall works where the scope is extremely well defined and articulated in design plans, which shortens dev time because, prior to AI, code was mostly deterministic. Here we have to do a waterfall level of documentation while iterating on a non-deterministic solution (code gen) to non-deterministic requirements (per usual).
It's bonkers.
I still think the technology is cool though.
And to answer the questioner... have you worked with a PM? Most of the ones I've worked with try to be simultaneously in charge yet not responsible for anything. Validating something implies skill and responsibility.
Then they're just bad PMs and don't deserve to have the job. That can be said in any profession, devs or lawyers or doctors who blindly accept LLM output without review are bad employees.
> Then they're just bad PMs and don't deserve to have the job.
Nobody "deserves" anything. They do have the jobs though. Thinking that the world isn't full of people doing what they need to do to get by who don't give a shit about fitting a fantasy ideal is wild.
Deserving and having are two different things, that doesn't mean they can't be criticized either way. By the same logic bad devs and bad dev practices can also be criticized.
I think validating a fully generated novel of a ticket is much harder than thinking through the problem in the first place and creating your own ticket.
We see it with code too, right? It’s harder to review code than to write it.
On top of that the LLM can work so fast that the amount of things that need validating grows!
This is where humans get lazy and the problems come in, IMO. Whether it's a PM not validating their ticket, or a dev doing a bad code review.
Add to that that the incentives currently are to move fast and trust the AI.
It becomes clear to me that a lot of that review work either won’t be done at all, or won’t be nearly thorough enough.
The tickets are not "novel"-length; they are a few bulleted lists covering the sections I mentioned above. In that case it is indeed way easier to review than a ticket only saying "do X with Y data."
Reviewing code is harder than reviewing text because code does something and has interdependencies, and therefore must be correct in its function; do not mix the two. This is like saying an editor reviewing an article or novel has it harder than the person actually writing the novel, which is blatantly incorrect.
Most real tickets are more complicated than “Do x with Y data” and also have many interdependencies throughout the business
Most? That's doubtful especially when a lot of tickets are simply CRUD which are fine being generated by an LLM. Those that are more complex require more review and interdependency management, sure, but to say that that is most tickets is simply not correct.
I agree. I hate getting tickets like this because they’ve often gone down the wrong path and I have to work backwards to understand the actual problem and the right way to solve it
Just this week I pushed back on some requirements in a very detailed product spec I was implementing, to speed up time to ship. The PM had no idea what I was talking about because the requirements were invented by an LLM. This is not a bad PM; discipline doesn't scale.
> The PMs validate it, why do you think they don't read over it to make sure it fits what they want?
Hahahahahaha. Sorry, I couldn't help myself; this reads like satire. The answer is "real life experience says otherwise".
Yeah I was so tempted to ask if this person has ever actually met a project/product manager...
Maybe you both just have bad PMs, because just like good devs they should also be reviewing their work. My point was that it is more likely for PMs to review and edit a generated ticket than to have to write it all themselves which they often won't do.
> My point was that it is more likely for PMs to
I feel compelled to point out to you that this is a completely unsustainable, unsupportable, unsubstantiable claim. You have met ~0% of PMs, and of the ones you've met maybe you've experienced a non-zero percentage of their work, but statistically that's also very unlikely.
If you think you can say what most PMs do or what PMs are likely to do, then, I'm sorry, but you are not even thinking like an engineer. You're thinking, actually, a lot more like a PM to many of us.
> just like good devs
I'm so sorry, my sides just can't handle the starry-eyed nature of these takes. This is just too much for me.
To many of us this reads like you've never met people before. But who knows, maybe you live in Lake Wobegon, where all the women are strong, all the men are good-looking, and all the children are above average! If so then we're jealous, but you still should be more careful about how unrigorous your mental model is because it will make you a worse engineer.
Experience with different PMs and developers aside, the older you get in the profession the more you will hopefully realize that none of your quality effort fantasy matters. Sales happen and money rolls in independently of whether you think the PMs or the people who call themselves engineers do a "good job". Businesses thrive on sales and marketing, not engineering.
This failure is human laziness, not an issue with the technology. People who use AI because they are trying to avoid doing work fall into a completely different category than people using AI as a force multiplier and for skills/capabilities enhancements / quality improvement.
It's also the only way to get those massive increases in productivity.
I second this
This is very much a "you're holding it wrong" response.
If your technology relies on humans using it in ways that go against the ways they are inclined to use them, then that is an issue with the technology.
I don't think that works as a critique of LLMs because it's far too broadly applicable to well-accepted tools.
Are advanced calculators bad because a student could use the CAS to ace calculus homework, exams or the SAT without actually learning the material?
Is copy/paste bad because a person could use it to copy/paste code from one place to another without noticing some of the areas they need to update in the new location, adding bugs and missing a chance to learn some more subtleties of the system?
Is Git bad because a manager could use it to just measure performance by number of lines of code committed instead of doing more work to actually understand everyone's performance?
Many tools can be used lazily in ways that will directly work against a long term goal of improving knowledge and productivity.
but in this case that's exactly what AI is doing, and no more. it's filling in the gaps with some plausible sounding goo so that the person doesn't have to worry about the details.
ok, so for some of the jobs we're doing, plausible sounding goo is just fine. and that's kinda sad. but the 'just playing around' case is fine for PSG; this isn't a serious effort, just seeing how things might work out without much effort.
taking the remainder, where understanding and intent are important, the role of the ai is to produce PSG, but the intentional person now goes through everything and plucks out all the nonsense. this may take more or less time than simply writing it, but we should understand this is resulting in less real engagement by the ultimate author. where this is actually interesting is as a parallel to Burroughs' cut-up method - where source text and audio were randomly scrambled and sometimes really clever and novel stuff popped out.
but to say the current model of vibe coding has much to offer in the second case is really quite unclear. the extent to which coding is the production of boilerplate is really a problem with APIs and abstraction design. if we can get LLMs to mitigate some of that in the short term without causing too much distraction, that's fine, but we should really be using that to inform the solution to the fundamental problem.
so for me what's missing in your model is how LLMs are supposed to be used 'properly'. I don't think laziness is really the right cut here, make-work is make-work, and there's plenty of real work to be done. but in what sense does LLM usage for code actually improve our understanding of these systems and get us more agency?
I don't disagree with your take on most jobs or vibe coding as shown in countless proof-of-concept/0-to-1 demos. But the comment I was replying to was dismissing this statement from another commenter:
> People who use AI because they are trying to avoid doing work fall into a completely different category than people using AI as a force multiplier and for skills/capabilities enhancements / quality improvement.
This statement is absolutely true. There are ways to use LLM tools to significantly improve the quality of your work instead of to avoid doing hard work. (And the result can easily become something that requires more hard thought, not less.)
Some that I frequently enjoy, usable even if you don't want the machine to generate your actual code at all:

- consistency-check passes asking it to look for issues or edge cases
- evaluation of test coverage to suggest any missed tests or proposed new ones
- evaluation of feasibility of different refactoring approaches (chasing down dependencies and call trees much faster than I would be able to do by hand, etc)
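For instance, the consistency-check pass can be as mechanical as piping a staged diff to a model. A minimal sketch in Python; the ask() function here is a hypothetical stand-in for whatever model API you use, not a real library call:

    # Sketch of a "consistency-check pass" over pending changes.
    # ask() is a hypothetical wrapper around your model API of choice;
    # only the git invocation below is standard.
    import subprocess

    def consistency_check(ask):
        diff = subprocess.run(
            ["git", "diff", "--staged"],   # review what's about to be committed
            capture_output=True, text=True, check=True,
        ).stdout
        prompt = (
            "Review this diff for edge cases, inconsistencies with the "
            "surrounding code, and tests the change implies but does not "
            "add. Do not rewrite the code; list findings only.\n\n" + diff
        )
        return ask(prompt)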
> to the extent to which coding is the production of boilerplate is really a problem with APIs and abstraction design. if we can get LLMs to mitigate some of that I the short term without causing too much distraction, that's fine, but we should really be using that to inform the solution to the fundamental problem.
I generally would disagree with this, though. I don't think there's solely a problem with abstraction design, I think the inherent complexity of many systems in the business world is very high (though obviously different implementations make it different levels of painful). If that's a problem, it's a people/social one, not a technology problem.
In my future we lean into the fact that people want features, they want complexity, for many things - everybody's ideal just-for-them workflow/tooling would look slightly different than the next person's - and use these tools to build things that do more, not less. Like the evolution of spellcheck from something you manually ran, to something that constantly ran, to something that can autocorrect generally usefully when typing on a touchscreen.
Let's get back to finding more features/customization to delight users with.
> This is very much a "you're holding it wrong" response
This isn’t actually an argument for or against anything, I don’t know why people say this. It is entirely possible that people are using this brand new, historically unprecedented tool wrong.
Cars have been a huge success in spite of requiring people to learn a bunch of new things to use them.
It's not about having to learn things; it's about the required methods of using the tool going directly against the grain of the way people in general operate.
The classic "you're holding it wrong" was about the iPhone 4: sure, people could learn to hold the iPhone in such a way that they didn't block the particular parts of the antenna that were (supposedly) the problem. But "holding an iPhone" is a fairly natural thing to do, and if the way that people are going to do it naturally doesn't allow its antenna to connect properly, then that's a technology problem, not a human problem.
If the selling point for AI is "you can just talk to it, and it will do stuff for you!" (which may or may not be yours, personally, but it is for a lot of people), then you have to be able to acknowledge that "describing a problem or desire using natural language" is something that humans already do naturally. Thus, if they have to learn to describe their problem in very specific ways in order to get the AI to do what they want, and most people are not doing that, then that's a failure of the technology.
For the specific case at hand, what's being described is similar to the problem of self-driving cars: you're selling the benefit as being the AI taking a lot of the work off your shoulders; all you have to do is constantly check its work just in case it makes a mistake. Which is something that we already know, empirically and with lots and lots of data, that humans are bad at.
Once again, it's a technology issue. Not a human issue.
> selling the benefit as being the AI taking a lot of the work off your shoulders; all you have to do is constantly check its work just in case it makes a mistake.
Cars can take you from place to place much faster than a horse can, all you have to do is learn to drive and constantly keep your hand on the wheel.
Part of using a technology is, well, learning how to use it. It's not the technology's fault that humans are lazy or not able to pay attention and crash.
Maybe they are holding it wrong then. Like someone else said, people had to be taught how to drive a car and that cannot be in any sense said to be the car's fault.
Some people are lazy, plain and simple. If they want to blindly accept what the LLM tells them without critical analysis and review then that's on them.
Maybe for some subset of software (like CRM panels or something) PMs will do everything. But if you're projecting the way one sort of software (i.e. user-facing, business-use-oriented software) is developed and put to use onto software writ large, then no, I don't think so.
Sure, I'm just talking about 90% of software which is basic CRUD, not complex systems or microcontroller programming. In that case it's likely that just a PM could build something with LLMs.
I literally can’t tell if this comment is a joke or not.
The last sentence was partly facetious, sure, but the first paragraph is not; I have seen ticket quality go up quite a bit from a few years ago.
> Honestly, with the first step, it seems the PMs are already halfway there to implementation of the feature so I wonder if in the future they'll just do everything themselves
Yes please, I've seen the vibecoded slop PMs put out every day because software engineering is simply not a skill they have, and I'd love to make a LOT of money fixing their crap once it dies in production <3
I already do the latter, not very difficult to get into. Good consulting money.
I’m a former PM who’s now a founder and all the engineers I worked with loved me.
I can tell you right now most PMs are absolutely useless, glorified project managers who don’t know how to think, get in the way, and don’t know how to enable engineers to be more productive.
> I think when LLMs first came out people thought they could just say something like, "Make a Facebook clone". But now we're realizing we need to be more exact with our requirements and define things better. That has always been the bottle neck in software.
This was substantially predicted by Fred Brooks in 1986 in the classic No Silver Bullet [1] essay, under the sections "Expert Systems" and "Automatic Programming".
In it, he lays out the core features of vibe coding and exactly the experience we are having now with it: initial success in a few carefully chosen domains and then a reasonable but not groundbreaking increase in productivity as it expands outside of those domains.
[1] https://worrydream.com/refs/Brooks_1986_-_No_Silver_Bullet.p...
It's interesting how predictable some of this is.
The LLMs turn out fully formed clones of stuff for which there exist copious amounts of code openly searchable on the web doing the exact same thing.
LLMs require developer-like specification, task/subtask breakdown and detail where such example code already exists.
As a professional prior to LLMs, how many of the problems you worked on had existing free solutions, but you neglected to use that code and decided to spend days doing it yourself?
Well put, and same challenge to a lot of these demos & LoC numbers: if you were a pro prior to LLMs, how many of these demos could you fully recreate if you ignored copyright?
I’ve often reimplemented things at work that exist elsewhere. If I could just copy & paste whole solutions from GitHub and change the branding/naming slightly, I could make curl in an afternoon.
So true.
I can only think of hobby projects, like writing yet another emulator, expression parser or media processor in a new language I'm trying to master.
In a professional setting, you would always diligently explore libraries and only implement your own if there is no suitable alternative.
> how many problems that you work on have many existing free solutions but you neglected to use that code and decided to spend days doing it yourself?
Only when the existing free solutions are licensed with something like GPL. Now I can just say "write me a C webserver library similar to mongoose" and I get the functionality without the license burden.
You might as well have ignored or removed the GPL notice. Running it through the LLM laundering gets you a "fork" of unknown origin and questionable quality. You're still potentially open to supply chain issues, but the chain is obfuscated.
And you now own full responsibility for maintenance.
I just vibe coded a socks proxy because existing ones were too thick. And let me tell you, you are absolutely right. Go libraries I’ve never heard of, new implementations that have not been tested... I think the word for this is YOLO.
Indeed, no license burden but you get a maintenance burden instead.
Well I'd get that either way if I write it myself.
Also I was joking, I'd never do that; feels gross. But I suppose it is a legitimate "productive" use of AI.
"We've invented the silver bullet from the book 'No Silver Bullets'"
I read that as a programmer and, lol, you’re right.
I read how that’ll read to VCs coming from Altman and Musk and, ow, the entire stock market just made sense for a second.
We now have product owners trying to farm out their work to an LLM. The process didn’t work before because the person writing the requirements either put out vague requirements or bad requirements because they didn’t understand the business intent (or were careless).
LLMs just take the same vague or poor requirements and make them look believable until you dig in to them.
> The process didn’t work before because the person writing the requirements either put out vague requirements or bad requirements because they didn’t understand the business intent (or were careless).
You make it sound like writing good requirements is easy.
If it were easy we wouldn't need all these concepts around PMF, product pivots and the like. And even before that was Peter Naur's paper "Programming as Theory Building" [1].
If you truly understand the problem you're solving with software then requirements can be easy. But usually we don't, not right away, and so we have to build up our understanding of the problem first in order to solve it.
Even then, the problem we solve may not have been the problem paying users will have, so you can have "good requirements" and still have a bad business, or even the opposite where you somehow build a working business despite bad requirements, because you hit upon a customer's need quite by mistake.
Nothing about any of this precludes LLMs being helpful, though nothing guarantees LLMs will be helpful either.
[1]: https://cekrem.github.io/posts/programming-as-theory-buildin...
Plausible requirement generators as inputs to plausible code generators... what could go wrong!
It’s a giant tragedy of the commons. I’ve fired remote people who pretended to work, knowing that I wouldn’t hire remote workers ever again after AI.
You're completely right and I thought this would be obvious. I never prompted anything remotely close to "make a facebook clone". Instead, I make an explanation of how it should work. To give you an example:
And that one-shotted a simple GNOME app indicator env switcher. Had to fix a few lines here and there but it mostly just worked. If you give the proper spec to the LLM, it'll do it right. You can even fake a DSL to describe what you want and it'll figure it out.

Juxt's Allium https://juxt.github.io/allium/ is an interesting entry in this 'pseudo DSL' space to define and store system specifications and requirements. I think it's likely that this sort of 'persistent specifications to help bots work correctly' will be a good approach when things finally cool down a bit.
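For a sense of scale, an app-indicator env switcher like the one described fits in a few dozen lines. A minimal sketch, assuming PyGObject with the Gtk 3 and AppIndicator3 bindings; the environment names and state-file path are illustrative guesses, not the commenter's actual app:

    #!/usr/bin/env python3
    # Hypothetical sketch of a tray indicator that switches between named
    # environments by rewriting a state file. Assumes PyGObject with the
    # AppIndicator3 (libappindicator/ayatana) bindings installed.
    import os
    import gi

    gi.require_version("Gtk", "3.0")
    gi.require_version("AppIndicator3", "0.1")
    from gi.repository import Gtk, AppIndicator3

    ENVS = ["dev", "staging", "prod"]  # made-up environment names
    STATE = os.path.expanduser("~/.config/env-switcher/current")  # assumed path

    def select(_item, env):
        os.makedirs(os.path.dirname(STATE), exist_ok=True)
        with open(STATE, "w") as f:
            f.write(env + "\n")
        indicator.set_label(env, "")  # show the active env next to the icon

    menu = Gtk.Menu()
    for env in ENVS:
        item = Gtk.MenuItem(label=env)
        item.connect("activate", select, env)
        menu.append(item)
    quit_item = Gtk.MenuItem(label="Quit")
    quit_item.connect("activate", Gtk.main_quit)
    menu.append(quit_item)
    menu.show_all()

    indicator = AppIndicator3.Indicator.new(
        "env-switcher", "preferences-system",
        AppIndicator3.IndicatorCategory.APPLICATION_STATUS)
    indicator.set_status(AppIndicator3.IndicatorStatus.ACTIVE)
    indicator.set_menu(menu)
    Gtk.main()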
That's the kind of stuff where you would write a few lines of shell script or perl and not bother with the whole GTK stuff. Because GTK would be accidental complexity to the task (unless you used something like zenity).
This is one of the reasons I like the OpenBSD and suckless projects: they avoid solutions that are technically correct but overengineered.
Well I would never write shell because I loathe its grammar/syntax. I enjoy GUIs and am a heavy mouse user, so the GTK part isn't really an "accidental complexity" but a must-have for me. If an LLM can one-shot all the GTK boilerplate it's a win.
That's (as shown in my sample prompt) one great thing I've been using LLMs for: making GUIs for arcane Linux-based OS/userland settings that I have no interest in doing "sudo gedit yadda yadda" or learning man pages for. It's been 30+ years, we deserve a better desktop experience.
I've used suckless packages in the past, but it feels to me too close to the GNOME/Apple way of giving zero settings and having opinionated defaults whose opinions do not ring well for me. I have zero desire to change my shortcuts/hotkeys to something random devs chose based on their past computer experience, mostly unix-based. Muscle memory > *.
And that’s fine.
I was pointing out that a simpler solution exists. I prefer simple solutions, because I want to test whatever idea I have in a real-world situation first before I go for a more complete one. Kinda like doodling before committing to a sketch (or spending weeks doing a painting).
> It's been 30+ years, we deserve a better desktop experience
That desktop experience would need to be like Smalltalk (where it’s trivial to modify the GUI). The nice power of Unix is that the userland is actually a userland, meaning you can design a system for your workflow and let the computer take care of it. Current desktop environments don’t allow for that kind of flexibility.
Also it’s the nature of unix that makes such basic utilities possible (and building them with raw xlib or tcl is easier than gtk). Imagine doing the same on macOS or Windows where everything is behind an opaque database where some other process fancies itself as its owner.
There's also a pattern based on the simple solution that used to be more common: One command-line program for updating and querying the current state, and a second GUI one that just acts as a dumb interface for the first one. Even aside from separation-of-concerns purity, there are two more practical benefits: this gives you scriptability (say, automatically choosing an environment on startup) as well as easier support for multiple desktop environments (two different dumb GUI frontends for the actual complexity in the command-line backend, or updating the GUI because of a change in the APIs without worrying about breaking the important logic).
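A minimal sketch of that split; the envctl name, subcommands, and state path are all assumptions for illustration. Any GUI frontend would then just shell out to this:

    #!/usr/bin/env python3
    # Hypothetical "envctl": all the state logic lives in this CLI; GUI
    # frontends stay dumb and just invoke it (e.g. via subprocess).
    import argparse
    import os
    import sys

    STATE = os.path.expanduser("~/.config/env-switcher/current")  # assumed path

    def main():
        parser = argparse.ArgumentParser(prog="envctl")
        sub = parser.add_subparsers(dest="cmd", required=True)
        sub.add_parser("get", help="print the current environment")
        set_cmd = sub.add_parser("set", help="switch environments")
        set_cmd.add_argument("env", choices=["dev", "staging", "prod"])
        args = parser.parse_args()

        if args.cmd == "get":
            try:
                print(open(STATE).read().strip())
            except FileNotFoundError:
                sys.exit("no environment set")
        else:
            os.makedirs(os.path.dirname(STATE), exist_ok=True)
            with open(STATE, "w") as f:
                f.write(args.env + "\n")

    if __name__ == "__main__":
        main()

That way "envctl set prod" is scriptable on startup, and a GUI rewrite never touches the important logic.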
That's easily more spec than script.
> You're completely right
I mean, no comment
What's even worse is that when dealing with human software teams, a vague requirement will (at least in a well-run org) receive demands for further specification. "What do you mean by 'get data'?", etc.
An LLM will just say, "Sure! Here's the fully implemented code that gets the data and gives it to the user." and be done with it.
ChatGPT 5.5 responds:
> What data should I retrieve, and where should I get it from? Please specify at least: ...
And it then goes on to ask just exactly what is necessary, being all constructive about it.
You're both right. The parent was a toy example; if asked literally, an LLM will definitely ask for more information. Yes, it's important to be accurate, but I don't think that applies here.
But the point still stands: in most contexts, the LLM will fill in the blanks with what it deems appropriate, like an overconfident intern at best and a bull in a china shop at worst.
When the cycles are short enough, though, that is to some degree the right thing. That is, it's the right thing for things the users can then immediately see and give feedback on, because it lets them give feedback on something tangible.
It's the wrong thing for important things under the hood (like durability and security requirements) that are not tangible to them.
Just as poorly designed code can still compile. This is operator error, not a failure of the technology.
IME you give it very precise specifications and it still fucks it up.
When we talk about "the" bottleneck being specs, it just isn't the case that it's the only thing LLMs do poorly. They're really bad at a lot of stuff in the SDLC.
They're also good at providing results which are bad but look OK if you either don't look too closely or don't know what you're looking for.
It's worse. Vague requirements still only power vague interpretations of the problem. Even if you provide good requirements, you still only have vague interpretations at your fingertips. The promise is that such things won't be a problem in the future, which is obviously not materialising.
"Make a facebook clone" is the vague human promise to the end user. The reality is that it leads to so many assumptions which are insurmountable due to the vague interpretation so you have to change your requirements in the end to claim success.
Thus everything turns into a mediocre compromise. There is no exceptional outcome, which is what makes a marketable product. There are just corpses everywhere.
You need something better to both define requirements and implement them than this technology.
Can someone pull out that Steve Jobs quote re: the craftsmanship between a great idea and a great product?
Anyone who thought that gap could be shrunk substantially lives in delululand.
Hence why we haven’t seen this explosion of ‘really great’ products come out.
Many will continue to parrot ‘bro but the models changed I swear’. I’m sure they did. But you’re missing the damn point.
This was already a reality for years.
In several companies I have seen product managers join teams and fail to have even minor requirements ready for months during the “onboarding” of the PM. And then code being ready but taking months to release because DevOps is busy or QA can’t find time.
The pace of release of software has been disconnected from the coding part for the longest time, and we have been quiet about it.
The solution I've seen work is have engineers and designers that can take much of the detailed spec writing on, and have the PMs spend time with users/prospective users, partners, etc, understanding the market and users better. When you pull PMs in to all the details, often they turn into project managers, shuffling bug tickets around etc, taking time away from owning the user and the problem and shifting them too much to the solution side. Have a lead engineer own much / most of that. Every org / product is different of course.
I agree with you. The healthier organizations work in the way you mention.
product people love LLMs because they don't ask
"what does X mean? how will it work?"
while a programmer will ask about all the cases.
Did everyone forget about outsourcing and how outsourcing works?
The dudes in Eastern-Wherever not asking what something means is the scary part. You only find out at the end how deeply confused everyone was when making the thing. You can fix it with attention and management, but then only some projects sometimes are profitably outsourced and you still need competency.
Do they have a point?
Can't good marketing teams, backed up by World Class Product people, sell anything we build, more or less?
</devil's advocate>
Even if that were the case, I wouldn’t want to spend my working life building software poorly fit for the purpose, that nevertheless sells due to marketing.
> But now we're realizing we need to be more exact with our requirements and define things better.
That's why we write programs in programming languages and not English. Because they are much more efficient at giving precise instructions than natural language.
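A toy illustration, using the "Get data and give it to the user" ticket from upthread: the English leaves the data, source, order, and format open, while even a few lines of code are forced to commit. Every choice in this sketch (table, columns, ordering, output shape) is a made-up assumption:

    # "Get data and give it to the user" pins down nothing; code must commit.
    # Table, columns, ordering, and output shape below are all assumptions.
    import sqlite3

    def get_data(db: sqlite3.Connection) -> list[dict]:
        rows = db.execute(
            "SELECT id, name FROM users ORDER BY name"  # which data, which order
        ).fetchall()
        return [{"id": r[0], "name": r[1]} for r in rows]  # which format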
But horribly token inefficient.
Even purely from an information theory perspective it was obvious “make me a Facebook clone” was not going to work. The more you compress the prompt, the more detail you lose.
Realizing? I will be very happy if that is the case, but in my view all big company execs are still balls deep in the notion that you will be able to just ask it for the Facebook clone, and everything sucks as a result.
> When I was working we used to get requirements that literally said things like, "Get data and give it to the user". No definition of what data is, where its stored, or in what format to return it. We would then spend a significant amount of time with the product person trying to figure out what they really wanted.
This is a big HN LLM discussion divide. I am in the same no-specs work background camp, and so the idea that the humans who feed that into dev teams are suddenly going to get anything out of an LLM if they input the same thing directly is laughable. In most orgs in my career there has been no product person and we just talked directly to end users.
For that kind of org, it will accelerate some parts of the SWE's job at different multipliers, but all the non-dev work to get there, with discussions, discovery, iteration, rework, etc, remains.
If the input to your work is a 20-page specification document to accompany multi-paragraph Jira tickets with embedded acceptance criteria / test cases / etc, then yes, there is a danger the person creating that input will just feed it into an LLM.
I’ve never understood engineers who complain about vague specs… if the spec was complete, it would be code and the job would be done already! Getting a 20-page spec delivered from on high and mechanically translating it to code without any chance to send feedback up the chain sounds like… a compiler.
Yes, I don't think a job where I am programmed by a product manager would be terribly interesting. I would move on to be the product manager if I found myself in such a role.
Probably why I haven't ended up in any.
The demands are for functional requirements. Plenty to translate on the non-functional side of things.
In my experience, the complaints are not about the specs and their vagueness. It's more about the political game to get them detailed. If you've not encountered the kind of organizational issues where getting an answer is like pulling teeth, you're kind of lucky.
Oh no, I’ve definitely experienced that, it’s terrible. But that situation makes me wish for more agency (for example, talking to customers directly), whereas it seems to make other engineers wish for less agency (please hand me a complete spec and I will mindlessly translate it to code). That’s what I don’t understand.
some of us couldn’t give a rat’s ass about the customer. One of our customers charges people for paying their own bills via certain methods, which is completely bogus and I remind everyone loudly all the time that they do this. Everyone agrees that this customer sucks to work with, and the less time spent with them the better. The people from the customer’s end suck, they’re not technical, they have in-fighting with their own teams during calls, have decades long errors with their integration that they have never fixed…the list goes on. For this customer and a few others, please give me a spec that I can implement, shove it back across the aisle, and forget about. The absolute last thing I want is to have to talk to them more.
Now it's worse: you get a PDF export of a long ChatGPT chat history with one sentence: "Can you give an estimation for this?"
Cue that commitstrip comic from 2016.
https://web.archive.org/web/20161211074810/http://www.commit...
> I think when LLMs first came out people thought they could just say something like, "Make a Facebook clone". But now we're realizing we need to be more exact with our requirements and define things better.
The annoying thing is that giving an LLM vague instructions like "make a Facebook clone" does work... in certain limited cases. Those being mostly the exact things a not-very-creative "ideas person" would think to try first. Which gave the "ideas people" totally the wrong idea about what these things can do.
These same "ideas people" have been contracting human software developers to "make them a Facebook clone" (and other requests of similar quality) for decades now.
And every so often, the result of one of those requests would end up out there on the internet; most recently on Github. (Which is, once there's enough of them laying about, already enough to allow a coding-agent LLM trained on Github sources to spew out a gestalt reconstruction of these attempts. For better or worse.)
But for the most common of these harebrained ideas (both social-media-feed websites and e-commerce marketplace websites fit here), entire frameworks or "engines" have also been developed to make shipping one of these derivative projects as easy as shipping a Wordpress.org site. You don't rewrite the code; you just use the engine.
And so, if you ask an LLM to build you Facebook, it won't build you Facebook from scratch. It'll just pull in one of those frameworks.
And if you're an "ideas person", you'll think the LLM just did something magical. You won't necessarily understand what a library ecosystem even is; you won't realize the LLM didn't just generate all the code that powers the site itself, spitting out something perfectly functional after just a minute.
We've arrived at that state today with Codex and Claude Code. I really don't know what people are doing wrong.
So the agent needs a “plan” mode where it works with the user and asks questions to define the ask.
AI is not supposed to bypass the process, but it can speed up things nonetheless, it can help with refactoring, writing boilerplate, finding errors you never even spotted before, and things that linters cannot catch.
I see so many comments that seem to me like either they don't use standard known processes, or they assume AI doesn't need you to follow the standards.
Can I ship more code and features? Absolutely I can, if I have a good set of requirements and thorough testing. All AI-written code needs to be reviewed and tested, and should be in discrete commits and pull requests; anyone pushing a PR with thousands of lines of code is a red flag. You wouldn't do it without AI, why would you do it with AI? Major rewrites / refactors are the only known exception, and even then I would argue that these should still have discrete commits you can switch to, so you can see how things changed and make a more informed decision.
If you show me a massive one-shot commit or PR I will deny it. Break it down into bits a normal developer can audit.
People don't really understand that non-trivial software development isn't even 50% coding. The coding step is generally the 'easiest' part and given to junior developers. In a large org most product changes span multiple systems and human operations. Seniors and even mid-levels generally spend most of their time figuring out how to shape the local priorities into a new arrangement of the existing cybernetic entity, and then getting buy-in on that new vision, given these other teams have their own priorities.
This naturally involves a lot of tradeoffs and politics - senior engineers know to avoid adding 'weight' to their airframes and fight hard to avoid adding scope to the systems they're responsible for or divergence from their intended direction of travel. So compromises have to be struck or escalations to management to choose between priorities have to play out.
Maybe AI solves that as well but that is a lot more difficult lift.
LLMs mostly only being code-writers was true a year ago, but it is not true now. Now they are tool-callers, which means a coding agent can effectively: run lints/typechecks/tests (and fix resulting errors), dig into observability platforms to identify the root cause of issues (e.g. on Sentry or similar), run benchmarks to identify slow code / hot paths, keep systems up to date by reading migration docs (and applying them) for new majors of consumed libs, etc.
So sure, if you have none of these things set up to back-pressure agents and help them better understand the system, then they will just be dumb LLM code writers. But you can definitely go a lot further than that with the improvements that are rapidly happening to models and harnesses.
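To make "back-pressure" concrete, here's a sketch of the outer loop: run the project's own checks after each agent pass and feed failures back as the next prompt. The check commands and the ask_agent() helper are assumptions for illustration, not any particular harness's API:

    # Hypothetical supervision loop: the agent edits the working tree, then
    # lints/typechecks/tests push back until everything is green.
    import subprocess

    CHECKS = [["ruff", "check", "."], ["mypy", "."], ["pytest", "-q"]]  # assumed

    def run_checks():
        failures = []
        for cmd in CHECKS:
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode != 0:
                failures.append(f"$ {' '.join(cmd)}\n{result.stdout}{result.stderr}")
        return failures

    def supervise(ask_agent, task, max_rounds=5):
        prompt = task
        for _ in range(max_rounds):
            ask_agent(prompt)              # agent makes its edits
            failures = run_checks()
            if not failures:
                return True                # all checks green; done
            prompt = "These checks failed; fix them:\n\n" + "\n\n".join(failures)
        return False                       # give up and escalate to a human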
This article assumes that AI only has an impact on the development phase, which is certainly not true. It can speed up every step, including ideation, legal, documentation, development, and deployment.
- Ideation: Throw ideas back & forth, cross-reference with knowledge bases, generate design documents.
- Documentation: Generate large parts of docs.
- Development: Clear.
- Deployment: Generate deployment manifests, tooling around testing, knowledge around cloud platforms.
Every single step can be done better & faster with AI - not all of it, but a lot.
Even development. Yes, some part of your job involves understanding the problem better than anyone & making solutions. But some parts are also pure chore. If you know you need a button doing X, then designing that button, placing it, figuring out edge cases with hover & press states, connecting it to the backend, etc - this is chore that can be skipped. The same principle applies to almost all steps.
I tend to agree with the article.
A typical example of trying to add a new significant capability involves many meetings (days, weeks, months, etc.) with the business to understand how their work flows between systems X, Y and Z, as well as all of the significant exceptions (e.g. we handle subset A this way and subset B that way, but for the final step we blend those groups together, except for subset C which requires special process 97).
Then with that understanding comes the system solutioning across multiple systems that can be a blend of internal system or vendor's system, each with different levels of ability to customize, which pushes the shape of the final solution in different directions.
There is certainly value in speeding up coding, but it's just one piece of the puzzle, and today LLMs can't help with gathering the domain information and defining a solution.
What I've seen in an AI-forward looking environment is that it's much more common for PM/POs to be knocking up at least a UI prototype now, and experimentation is happening often even before writing the tickets. Similarly when devs are proposing something they often are coming with a couple of prototypes already implemented. Both of those mean decisions are coming a lot quicker.
I've seen proposals for Product Managers to define those conditions themselves by speaking with the LLM. A continuing architectural diagram is constructed and the graph is updated until all cases are covered, and then the LLM writes the code, writes the validations, pushes to CI environments, runs tests, schedules the prod deploy (by looking at the company event schedule), gets CAB approval, deploys the code, tests in prod, and fixes regressions.
I'm not saying this is the correct thing, but companies are implementing it and it is "working". I don't think keeping our head in the sand is helping.
> I've seen proposals for Product Managers to define those conditions themselves by speaking with the LLM.
But the LLM is not aware of how the business works and why, so someone needs to work with the business to extract the information. Typically it's not well documented.
Is it working though? The main outcome we've seen with companies that drink the AI Kool-Aid en masse is buggy, unstable systems. Clearly there's a level of rigor that's being sacrificed for ship velocity.
The article pretty much plays out what's happening in our place: heavy use of AI in software development, but we don't see us shipping faster - about the same, or perhaps slower (for other reasons). It's a weird feeling, as we're waiting for this utopia to kick in but it's not, and we can't fully put our fingers on why.
All of the above points align with our organization’s experience. But there is one more thing happening as well: we have more people in more roles able to create software solutions for issues that used to be brute forced via physical processes. (We are a small manufacturing business.) While these aren’t big giant enterprise projects that require deep swe experience, they are simple software tools that are improving process and productivity everywhere. It is pretty amazing what happens when your head of shipping can build a bespoke tool to solve a problem that previously they dealt with through burning through a lot of labor hours.
I would be really interested in the details of these kind of tools that are improving processes and productivity.
Are they reasonably documented/audited/put into any sort of version control like a lot of internal tooling? Or are they the kind of thing that gets whacked together on the fly for "move spreadsheet data from A to B" or "I want a list of people's schedules with custom highlighting" kinds of tasks?
Not doubting your productivity increase, I'm just curious how people quantify that when they say it.
One of our BAs created a site that tests the effectiveness of copy / layout adjustments. I don't even know exactly what that's called, but he's able to do statistical analysis much faster on what works and what doesn't. It's really cool to watch him thrive, and I feel like some of the thinkers who were not devs are going to find themselves becoming devs in their specific domain in a few years.
Yes. In the same way that spreadsheets are the dev tools for non-devs, LLMs could step into that role, but with much more powerful end result. With the caveat that in the same way you can create a powerful foot-gun with a spreadsheet you can probably create a foot-cannon with an LLM.
Yeah, the Coinbase CEO gleefully pointed that out as well, and now the market thinks they are totally incompetent every time some UX quirk is found.
Looks like orgs have to keep engineers on for optics - like having a legal staff with no lawyers, or a cybersecurity staff with no IT or certified people. Software has famously not needed state licenses or industry certification, but maybe that's a direction to consider to give utility to company optics.
The onus isn't on people using AI effectively to prove it to others.
In fact, these disagreements and disbeliefs create opportunities and salients in the market.
I know and I agree. It sounds incredibly arrogant, but it's frankly a bit sad to see how much HN is lagging behind on AI adoption. It's been 90% noise over the last 3-6 months about problems that aren't truly problems if you really look hard at what AI is capable of doing already today. It's mostly ppl & process problems. I could post a comment like the one above below almost every article on AI. But it is what it is. It's an opportunity for anyone who doesn't bite into the cynical tone here, for sure.
Indeed. I suspect most effective AI users are quietly making real progress toward their objectives.
Anecdotally, I see a lot of problems/solutions content about AI that doesn't reflect at all the challenges I face. But trying to tell people that there are other ways of doing things, especially when it conflicts with token-maxxing, is a lost cause
Precisely. People don't realize that it's all numbers. Given that the average IQ of people involved in a project is 140, an AI with an IQ of 150 can replicate each and every such individual in the pipeline. People saying AI can't do this or AI can't do that should come to terms with the fact that this IQ gap is monotonously increasing.
This is bizarre to me on so many fronts.
1: When was the last time you worked on a project where you thought the average IQ was 140? I don’t even think I have worked on a project where the maximum IQ was 140.
2: Who thinks the IQ of people on the project determines its success? There’s so much more to it than just “high capability team members” (to give IQ a generous interpretation).
3: (math joke) A sequence like (AI IQ - Human IQ) can be negative and monotonically increasing and still never reach 0 (e.g. -1/n).
Funnily enough, though, I think it makes dumb people dumber.
I agree. Inexperienced people (not necessarily "dumb") are likely to accept everything at face value, not apply critical thinking skills, and not even check the AI generated output.
An AI does not have an IQ.
Monotonically although I do find the discourse on AI rather monotonous.
On the one hand, this is a clean post that explains exactly what a lot of us have been thinking and seeing on the job at large organizations doing tech work. Dear Author, I agree with you 110% and want everybody else to come to understand what you have written.
On the other hand, it feels like we've been over this tens of times recently, on HN specifically and IRL at work. Another blog post isn't going to convince leaders that this is how the world works when they are socially and financially incentivized to pretend like AI really will speed things up. So now I just wait for their AI projects to fail or go as slowly as previous projects and hope they learn something.
Sadly I think you’re right. I even shy away from sharing these types of posts at work because it feels like anything that doesn’t mesh with the status quo isn’t received well.
Same here. Anyone being even hesitant about AI is viewed negatively by management
Every time these types of posts are discussed at work, the point is always that there's more risk of falling behind (more like FOMO) if others are able to launch or bring new features faster.
I disagree; I think the visuals (Gantt charts) are precisely the kind of "PM speak" that can be understood. Sure, it won't solve anything as long as the C-suite and investors do innovation signaling, but that itself can only last so long.
I think the point is that clarity has been published many times.
Humanity knows how to solve starvation. Clear routes were laid out long ago. The work is in adoption.
The alternative viewpoint is that if there weren’t people who continue to try to advocate for a better world, the world we’d live in would be even worse.
Yep. I have the luxury of having my mortgage paid off and being able to be a bit picky about my work for a little bit.
So I am spending my days gardening and obsessively working on personal coding projects with these agentic tools. Y'know, building a high performance OLTP database from scratch, and a whole new logic relational persistent programming environment, a synthesizer based on some funky math, an FPGA soft processor. Y'know, normal things normal people do.
So I know what these tools are capable of in a single person's hands. They're amazing.
But I hear the stories from my friends employed at companies setting minimum token quotas or having leaderboards of people who are "star AI coders" telling people "not to do code reviews" and "stop doing any coding by hand" and I shake my head.
I dipped my toes into some contract work in the winter and it was fine but it mostly degraded into dueling LLMs on code reviews while the founder vibe coded an entire new project every weekend.
These tools suck for team work or any real team software engineering work.
I'll just let this shake out and sit out until the industry figures it out. The only places that are going to be sane to work at are places with older wiser people on staff who know how to say "slow down!" and get away with it.
In the meantime, quantities of cut rhubarb $5 a bunch in Hamilton, Ontario area for sale. Also asparagus. Lots and lots of asparagus.
Yeah I think moving forward one of the questions I'll be asking companies I interview with is "what does your seniority distribution look like and how do you intend to maintain it?"
I think there's an interesting dichotomy. I find that for things I'm already capable at, LLMs are relatively inconsequential. But for things I'm no good at, it's a huge game changer. For a large company, which is going to be able to hire out most needed roles for any given project, this means the overall effect is going to be relatively inconsequential. At best, they may be able to cut down on labor costs by having one guy do a mediocre job at 5 people's jobs in exchange for a worse product. Short-term gains for long-term costs, wcgw?
But for a small studio, or independent developer, LLMs are a big game changer. Being able to do a mediocre job at 5 people's jobs is a huge leap over trying to get by without those jobs - relying on third party assets or other sorts of content, or even worse - doing a really awful job of trying to improv those jobs. See the UI of basically any program ever that was clearly laid out by a programmer and not a designer. Or there's the whole trying to rip off stuff from dribbble, but lacking the skills to do so. Whereas with AI, you can suddenly competently rip off everything and everybody - it's basically their entire MO.
> I find that for things I'm already capable at, LLMs are relatively inconsequential. But for things I'm no good at, it's a huge game changer.
What are the chances that this is the Gell-Mann amnesia effect? Sounds like the textbook definition of it.
Personally, I find the exact opposite to be true. LLMs only help me when I already know exactly what I'm doing.
I can give an anecdote. I'm a backend engineer for a service that I would consider pretty high horsepower. We get about 30k sign ups and trillions of events a day. I haven't touched the front end with a 10 foot pole since college.
I got the opportunity to rewrite our aging login page just as a fun experiment. I sat down with one of our analysts and we just went to town on a Zoom call, trying out stuff with Claude until we made something pretty sweet. Ran it through all our systems for accessibility, performance, etc, and it came out clean. Made a PR and fired up a test that day in production. I had never written a lick of our front-end framework in my entire life, and we were able to build something that has had a marked improvement in our user engagement in a day.
> a marked improvement in our user engagement in a day.
Do you have any idea what caused this engagement improvement? And do you actually have any metrics, or is it hearsay?
It is much easier to knock something up in a day as you have done, but often the reason manual things take longer is that they are based on actual testing and research, which takes longer than a day however you do it. The manual way gives you much more data on the hows and whys, and will inform you much more in the future when you need to change again, instead of "AI did it last time, let's use it again!"
No, we did an actual test using our existing testing framework. We have shitloads of metrics to know when a user gets stuck, when they give up, which login path they took, etc.
This wasn't a half-assed test but a legitimate effort to improve something that we never prioritized.
We had a legitimate 25% reduction in users giving up on logging in, in a system that has millions of users.
We ran a 50-50 AB test for several weeks to confirm the data and then turned it on completely
edit: If you haven't already read my post, I'd also like to say that the benefit AI gives us is that I worked on something I never get to work on, the analyst got to try a hunch he always had, and we got to see it go live in a day. If it hadn't worked out, we'd have been out a day of work, which beats the few weeks of effort we would have spent prior to AI just to find out something didn't work.
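A readout like that typically reduces to a two-proportion z-test on the two arms. A minimal sketch with made-up counts, not this poster's data:

    # Two-proportion z-test for an A/B login test. The counts are invented
    # for illustration; plug in real arm sizes and completion counts.
    from math import sqrt
    from statistics import NormalDist

    def z_test(success_a, n_a, success_b, n_b):
        p_a, p_b = success_a / n_a, success_b / n_b
        pooled = (success_a + success_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
        return z, p_value

    z, p = z_test(success_a=8000, n_a=10000, success_b=8500, n_b=10000)
    print(f"z={z:+.2f}, p={p:.2g}")  # 80% vs 85% completion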
This seems consistent with OP. You had a feature where most of his Gantt chart is, in effect, already done: you had a clear problem with a clear well thought out design/solution (with associated documentation) in mind, you had a well setup analytics process for deployment and followup... you really had everything except that big fat chunk in the middle labeled 'coding'. So in your anecdote, an agentic coding LLM really could deliver a huge speedup by doing the remaining 10% or whatever of the work.
This is why LLMs are really great at 'knocking off the todo/wishlist' of things you always meant to do. The problem, as far as broader discussions of 'productivity multipliers' or 'total factor productivity' go, is that there's a certain perverse diminishing returns to such wishlist items (if each item was all that important, why didn't it get done before?), they generally only apply to a small part of a large complicated whole (what % of your ecosystem/business/community as a whole is the login page, as pleasing and profitable as that fix is relative to the investment? Probably not a big %), and they are also finite (what happens when you have worked through your backlog of low-hanging fruit?).
Just because one isn’t good at a thing doesn’t preclude one from being a sufficiently passable judge of a thing.
To wit, the answer pre-AI was to hire an expert on that thing, and you would then critically assess their work product, despite being unable to build it yourself.
True, but if you hire a generalist and they are consistently under-performing specifically in the subject matter where you are an expert, it may behoove you to take the rest of their work with a grain of salt as well.
This is all substantially correct and gives us hints as to where to focus for AI to make the processes go faster.
Eg: I had a product manager say to me that he envisions a future where any meeting with stakeholders that does not result in an interactive prototype by the end of the meeting would be considered a failure. This feels directionally correct to me.
The other thing I expect to see is Vibecoding being the "Excel 2.0" where it allows significant self-serve of building interactive apps that's engaged in a continual war with IT to turn them into something with better security guarantees, proper access control & logging, scalability, change management etc.
But the larger historical point here is that every revolutionary transition produces, in the early stages, "Steam Horses". The invention of the steam engine had people imagining that the future of transportation would involve horse shaped objects, powered by steam, pulling along conventional carts. It wasn't until later developments that we understood the function of transportation as divorced from the form.
I started talking about Steam Horses originally in the context of MOOCs, which was a classic Steam Horse idea.
> he envisions a future where any meeting with stakeholders that does not result in an interactive prototype by the end of the meeting would be considered a failure.
Just learn something like Balsamiq. You don't need code to build out a prototype, just like you don't need actors and a camera when a few sketches can capture a scene.
I've found that AI is extremely useful when coding: For example, a task that used to take 3 days I can now do in about a day, in part because I can do things like have the agent write tests, or because I can have the agent start from some higher-level instructions which I can then clean up and debug.
BUT: The article is 100% right that I spend a lot of time doing other tasks: reviewing other teammates' work, interacting with colleagues, planning, etc. AI isn't quite as helpful there. For example, I find that Copilot code reviews don't add a lot of value, and the AI isn't good at judging a UI.
Maybe we'll get there soon? It's starting to look like the biggest challenge with AI is learning how to use it correctly.
> Yes, AI can generate code quickly (whether that’s a good thing is open for debate), but that doesn’t mean it’s generating the correct code.
No, the code is actually almost always correct. The way it’s added is probably not what you’re going to like, if you know your code base well enough. You know there’s some ceremony about where things are added, how they are named, how many comments you’d like to add and where exactly. Stuff like that seems to irritate people like me when the agent doesn't do it right, and it seems to fail even if it’s in the AGENTS.md.
> If you were to give human developers the same amount of feature/scope documentation you would also see your productivity skyrocket.
Almost 2 decades in IT and I absolutely do not believe this can ever happen. And if it does, it’s so rare it’s not even worth talking about.
> No, the code is actually almost always correct
That's not my experience, especially when the inputs are bugs or performance issues. It frequently hallucinates and misdiagnoses without a guiding hand. However, it can still do root-cause analysis well and improve efficiency if you keep an eye on what it's doing and push it in the right direction.
> If you were to give human developers the same amount of feature/scope documentation you would also see your productivity skyrocket.
I think you run into a ceiling on how fast a person can digest and analyze the info compared to a machine.
What tools are you using? What settings? What process? What's your code review like?
I think this varies a lot. I find with a C++ project I'm working on that the LLM needs a lot of guardrails and guidance, and still gets a lot wrong. But with a Vite/JS project it often one-shots complex and intricate changes in large codebases.
Absolutely lovely article.
> Software development is about translating a problem into a solution that a computer can understand and automatically resolve. Preferably in a secure and scalable way.
True, though software engineering puts those "optional" bits (i.e. secure and scalable) into the requirements bucket.
---
As for the sentiment about problem description and requirements gathering: I don't think we'll _ever_ have a 100% proper way of doing this. If we did, we'd basically be able to solve any and all problems in the world.
Nevertheless, I think AI can help with investigating and exploring the problem space, especially when the problem is an already-solved thing in which the prompter hasn't yet gained enough expertise.
Moreover, I think (and keep mentioning) that we will see different kinds of models in the near future: more specialized per industry, per language (both programming and human languages), even per field.
Those will open up new areas in the employment and job market, something like an "AI trainer" but more of a knowledge-worker role. Although this could also be automated with LLMs, the limits on context length/size, plus the amount of compute required to re-train models quickly enough to iterate, are both quite heavy.
That last paragraph sounds like a Meta VP explaining to the engineers why it is important to log all their keystrokes and eye movements. Pinky promise we won't fire you.
The trend I DO see, at least based on JDs, is a whole lot of "agents", which are glorified Claude Code in the cloud, with tools focused on a given industry or domain. If this is what you mean, then you are correct.
Been having conversations like this with a client I've worked with. They got approved by corporate for us to use Claude and asked how much faster we'll be able to move with it.
I tell them, "Us engineers will probably be able to deliver some of our stuff faster, but it won't have even a slight effect on the actual deliverable, because we've never been the bottleneck." The real bottleneck is that the process to get an S3 bucket allocated there takes (not exaggerating) 4 weeks.
Instead of mandatory AI workshops, simply cancel all meetings with more than 3 people and no written agenda, and block out the meeting time for productive work instead. That'll be $2,000 of advisory fees for the insane productivity gains I just unlocked for you. You're welcome.
If people got paid for telling the truth you’d be rich.
Yes, there are MANY in tech/non-tech management who will quietly admit that a lot of this top-down stuff is to create the appearance of motion to appease a higher, more tech/AI-ignorant authority.
I actually have data on this. I’ve been building sharc, a Common Lisp port of Hacker News. https://www.github.com/shawwn/sharc
If that sounds familiar, it’s because it’s what dang did over the course of several years.
It’s taken a few weeks. I started right around May, and now it’s able to render large HN threads (900+ comments) within a factor of five of production HN performance. (Thank you to dang for giving actual performance numbers to compare against.)
A couple days ago, mostly out of curiosity, I ran Claude with “/goal make this as fast as HN.” Somewhat surprisingly, it got the job done within a couple hours. I kept the experiment on separate branches, because the code is a mess, just like all AI-generated code starts out as. But the remarkable part is that it worked, and I can technically claim to have recreated HN within a few weeks.
The real work is in the specifications. My port of HN is missing around a hundred features, from favorited comments, to hiding threads, to being able to unvote and re-vote.
But catching up to HN is clearly a matter of effort (time spent actually working on the problem with Claude), not complexity. Each feature in isolation is relatively easy. Getting them all done within a short time span without ruining the codebase is the hard part. And I think that’s where a lot of people get tripped up: you can do a lot, but you have to manage it tightly, or else the codebase explodes into an unreadable mess.
It’s true that if you don’t do that crucial step of “manage the results”, you’ll end up making more work for yourself in the long run, by a large factor. But it’s also true that AI sped me up so much that I was able to do in weeks what would’ve otherwise taken years (and did take dang years). I’m not claiming parity, just that I got close enough to be an interesting comparison point.
AI can clearly accelerate us. But we need to be disciplined in how we use it, just like any other new tool. That doesn’t change the fact that it does work, and I think people might be underestimating how good the results can be.
I've had a handful of software projects in my career land essentially on the day I predicted, sometimes several months out, and the commonality across all of those projects was that the specification was crystal clear. Two of them were actual ports of an existing piece of software over to a new system. And so any time we had a question about the implementation, we could look at the existing version and immediately have our questions answered about what "correct" was.
I think projects where correct is very clearly defined can benefit from LLM acceleration, as you're describing here.
But so much of modern software development is figuring out what the right thing to build is. And in those situations, I don't think LLMs provide nearly as much benefit.
I think the role of LLMs is: once you have a rich enough understanding of what you want, you can speed-run building it. And then perhaps rebuild to cover the issues created by the LLM.
The problem for model producers is that the revenue they get from this mode of work is tiny relative to what they need.
And there are a few domains where the spec is clear and the solution is kind of easy to implement. But then it breaks the contract with your users or downstream projects, and now you have to coordinate communication. Code rarely exists in isolation.
> AI can clearly accelerate us. But we need to be disciplined in how we use it
Therein lies the paradox. And the problem is, interacting with LLMs is akin to a slot machine.
And on top of that, LLM producers want you to view it that way: that's how they generate revenue and can play games.
The article severely underestimates deployment times for large, worldwide services. Usually the strategy is to have a smaller "blast radius" for deployments, going in stages that are also usually time-bound ("let it bake"). It also does not account for outages and fixing things you only find in deployment. Languages like Python, or injection-heavy Java (e.g. using Guice), either need pristine testing (and all test teams were converted to dev 20 years ago) or have a magical way to destroy all the help compilers and static analysis can give you. So yeah, take the 4 weeks of development out of your 6-month deployment, then add 6 weeks of debugging and retries from using AI. You're welcome; that will be 3 million tokens, of which you wrote 1k, the rest being system prompts and "reasoning", which you do not control. This whole AI space is highly fixable, but it requires investment no one seems willing to make, particularly in areas that were mistakes from the past.
At least where I am we can’t and shouldn’t know all the requirements of a project beforehand^. Every project is an iterative learning process between the users, product and engineers. The problem is if everyone uses AI to replace their thinking it breaks that process and no one learns anything.
^ I say shouldn’t because I work in research engineering. Most of the needs of our users are pretty unique. We’ve had people come in and try to specify every piece of work, and they ended up building a CRUD app no one wanted or used.
> Every software developer knows that you can’t make projects go faster just by typing faster. If that were the case we would all be taking typing lessons.
So well said.
AI is unveiling how the bureaucracy is the slow part.
> AI is unveiling how the bureaucracy is the slow part.
Computing has been doing that for decades. If your process is fucked, computers make it fucked faster.
It’s just that now we have entire generations alive that have never seen a world without digital computers. ~LLMs~ AI is a fun new lever in some uses, so clearly it is finally the hammer that will drive the screws and bolts for us, with less effort on our part!
They just have to learn from experience. It’s what you do when you can’t be bothered to learn the lessons of the past.
Bureaucracies cannot learn the problems of the past through bureaucracy, because doing so is against their self-interest.
Work in large orgs long enough and you will recognize these creatures. Ladder climbing is a skill orthogonal to adding any value to the customer/company.
Completely agree. It amazes me how some folks think AI is unlike any other technology revolution. History repeats.
You're right it's just like any other mechanization/automation revolution. Except it's not.
It's happening about 10x faster than any other I've seen or read about.
Consider how long it took just to get barcode scanners rolled out in grocery stores. Or direct payment terminals. Or how many decades it's taken to get robotics into car manufacturing at scale. I worked through the .com boom and I can tell you that "webification" took 10 years or more for most businesses (and many of them have now just given up and have a Facebook page instead, etc.).
What's happening now is a little insane. It really does change everything. I don't think people who don't work in software have any idea what's coming.
It both is & isn't moving 10x faster.
It's highly salient to management, and being forced top-down by them at 10x speed, for sure, because they see a future cost saving from reduced headcount.
For certain technical roles it's a force multiplier and already very saturated, for sure.
On the other hand there's a lot of solution-looking-for-problem going on in large orgs where layers of management have been banging the table for 2-3 years on AI KPIs without any value being delivered.
In the weekly AI-wins mail at a friend's company, multiple non-technical people were bragging about how AI saved them 15 minutes a day by summarizing their morning inbox. This was the big game changer for them.
This post makes it sound like an engineer's role is only the collection and filling of feature gaps, but it leaves out completely that an engineer is also responsible for the feasibility of a feature. If you get a request for a feature but you are aware of the current system's limitations, it is your job to come up with a solution that fits into the frame the business side has given. But nowadays engineers have been drilled so much that showing resistance to management is portrayed as a lack of skill rather than a lack of trust from management in their staff. And when it is clear that your management doesn't actually care, that just tells you how much of the self-proclaimed mission is the real motivation behind these people. If management's acceptance criteria do not meet your principles, you might not be the right fit; and if, in my opinion, management's acceptance criteria are mostly based on the next promise made to investors or by sales to prospects, then their goal is to make money, not to develop a quality product.
Delivering more complete details for a task at hand is a noble goal, but there is a problem.
Programming is a logical circuit breaker. There is a wide range of incompleteness that halts development or puts the solutions in an unpublishable state.
A product person has no compiler, no RAM, no database, no state machine. There is nothing that can fail. There are probably strategies to weed out some issues, but none will be perfect.
We need to combine reality with computers. Computers set the constraints and we can only check if we are in bounds of the constraints by solving the problems with computers.
Oddly enough, AI has so far had nothing to offer to improve the "product people" problems.
Fascinating, I was literally thinking about how to communicate this to coworkers the other day, literally down to the gantt chart. Now I don't even have to make one =)
> We are now talking about software development, but this is applicable to all processes that take longer than you would like.
Indeed, it's kind of a generalized version of Amdahl's law. Since we only speed up a portion of the work, there are upper bounds on time saved. Worse, work in progress tends to bunch up at a specific point: code review. A coworker of mine literally complained two months ago now that nobody was reviewing code (and that it was blocking his work). I'm not sure review delay has actually gotten better since.
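To make that bound concrete, here's a back-of-envelope sketch in Python, with illustrative numbers of my own (not taken from the article): if coding is 20% of the end-to-end cycle and AI makes that part 5x faster, the whole process speeds up by only ~1.19x, and it can never exceed 1.25x no matter how fast the coding gets.

```python
# Amdahl-style bound: only a fraction p of the total process is accelerated.
def overall_speedup(p: float, s: float) -> float:
    """p = fraction of total time sped up, s = speedup factor on that fraction."""
    return 1.0 / ((1.0 - p) + p / s)

print(overall_speedup(0.20, 5.0))   # ~1.19x end to end
print(overall_speedup(0.20, 1e9))   # -> 1.25x, the hard ceiling of 1 / (1 - p)
```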
I really love AI to be honest. I feel like I'm using it to achieve much more than I could ever dream of. It changed my life!
> What people typically don’t do is look at why this is taking so long, and even more importantly: long duration does not automatically mean the problem originates there.
To some extent, we tell as many lies as we can get away with. Some answers are more convenient than others.
"Why" this is taking so long, like "why did this fail?" are prone to broadly agreed lies. Sometimes this is for obvious blame liability reasons. Often, this is because the lie conflicts with some "meta."
One such fallacy is the idea that software = value. Code = money, because it cost money to write. Features = revenue. Etc.
IRL, startups produce features very quickly because they actually need features. They start with zero features.
But... LinkedIn, Visa, or even Facebook... what they are short on is opportunities to develop code with value, i.e. something that will increase revenue.
FB isn't resource-constrained. It's demand-constrained. If there were a "write code, make revenue" opportunity available, they'd have taken it already.
This totally conflicts with the experience of working somewhere. That's because you have wishlists, road maps and deadlines.... and it always appears that demand for code is sky high.
It’s completely wild to me that lifelong programmers come into contact with agentic coding and conclude that their jobs are safe for one reason or another. AI will definitely be able to write entire software systems, inclusive of figuring out requirements and asking the right questions. It’s not that far off already. Why is it that everyone looks at the weaknesses of a technology that didn’t exist a couple of years ago instead of appreciating the incredible rate of improvement? I know why: because it’s inconvenient to the narrative of what makes us valuable. But still, our job is to turn ideas into a sequence of logical steps. Why can’t we do the same when forecasting the impact of AI on our jobs?
> ...the incredible rate of improvement?
Because the "rate of improvement" is only astonishing in well understood areas and really only astonishing if you yourself are not that great at what you do. Speaking for myself here, my job is extremely safe given that my boss doesn't wanna sit there and prompt AI all day and i work in a fun little 4 person company. We already have plans for the 3 next years which involve me :-)
Because the "rate of improvement" is only astonishing in well understood areas and really only astonishing if you yourself are not that great at what you do.
This is a bold, vague claim many on HN make but never put back-of-napkin numbers on. E.g., do you think agentic Opus 4.7/GPT 5.5 are 95th-percentile coders but you're 98th percentile? Or are you saying you're a middle-of-the-road 60th-percentile coder and AI is at the 20th percentile, so only the worst 20% of programmers should worry? Let's be specific about the claim being made.
I don't think we're going to be able to have rational conversations about this with C-level folks for quite some time. They mostly seem too wrapped up in copying each other to think clearly, and it's only when the bottom line starts suffering that we might be able to start asking some questions about their strategy.
Yes, it is true for large enterprises, but not for startups and individual creators. AI is accelerating anyone who is not stuck in corporate bureaucratic processes.
I think it will.
The primary issue is simply that developers are the most immediately impacted by this technology. The combination of being able to adopt, willing to adopt, and the tech actually being incredibly good at developer related concerns is unique. The rest of the business will eventually catch up. I'm watching it happen in real time. It is agonizingly slow in most places, but it is happening.
The developers being able to drain a one year long work queue in an afternoon is meaningless if the rest of the business cannot absorb the effects of that work in the same timeframe. The business will not leave your idle work queue on the table for long though. Keep pulling a vacuum on them and they will fill the space eventually.
It’s amazing to see some people talk with 100% confidence about the macro view of AI assisted development when we have had strong coding agents available for less than a year.
Exactly, and the tools weren't even that great for most of that year. They only got properly usable around the end of last year, at least for me. I'd call it more like half a year.
If you don't like the state of technology with AI tools, just wait a few weeks. Things are still changing at a quite rapid pace. The scope of what is possible seems to shift regularly. A lot of what I did in the last weeks was complete science fiction even a year ago.
This article makes a few good points though. AI won't magically make processes faster. You might actually have to change the process. A lot of processes in companies are about people and how they communicate. The more people you have, the more communication you get; pairwise channels grow roughly quadratically with headcount. Using AI in that context just adds to the communication noise.
But if you restructure your processes you might get different results. Most companies have not really gone through that process yet. It's too early to call success or failure. And especially non technical people have mostly not yet experienced any agentic tooling at all. We've yet to see how that will change companies. My guess is that some companies will be better at this than others. And we'll see a bit of darwinism play out.
People are far too charitable about an industry with chronic short-term thinking. We'll just lower the standards to whatever fits the success story.
Some organizations added a ton of process around software development because it is expensive and risky. They require a ton of approvals and sign-offs, then some managing overhead on top to check that their investment is on the right track. This approval process is bound to change now that development is far cheaper and faster.
Another aspect that is not captured here is that the lawyers and subject matter experts will also be using AI to speed up their parts.
I think what gives human developers a leg up is the ability to read between the lines of a spec and intuit the expected output better than an LLM in many cases.
A human has the cumulative experience of a career: the nuances behind every decision and the evolved context at their given company. This context allows them to take that one-line spec and extract tons of detail from it by knowing who wrote the ticket, what the "trigger" for the ticket was, what other work is being done in tandem that might need to be incorporated, etc.
LLMs can be given this context but it's a manual process of transcription into its prompt/memory/skills and that content must be continually updated and refined. It just pushes lots of work to spec writing from the more intuitive nature of feature development a lot of us have a level of mastery over. Then you must constantly have a back-and-forth to refine the output.
Any senior engineer knows that a lot of that communication is wasted energy. If I have a good idea of what I'm building I can develop the feature in a focused flow of output that I refine in an almost unconscious way because I don't need to translate intent into words, just code, and that process is incredibly automatic after years of developing software.
When all the effort is placed into writing specs, re-prompting and then reviewing (often over and over again), that intuitive and automatic ability to build software degrades. Think of a time when you were mostly focused on PR reviews and not contributing to a project. You may have been able to help developers build better code, but if you were to jump into that project to contribute, there would be a real and painful effort to re-familiarize yourself and reconstruct that intuitive familiarity of the project.
LLMs have many very useful qualities, but so far I fear an over-reliance on them can be more a hindrance than a benefit.
It's felt for a while now similar to what we see in parallel computing:
- a shift towards throughput-oriented vs latency-oriented work. We can juggle more tasks, but it's increasingly hard to speed up individual ones.
- strong scaling is tough. We might even see slowdowns for individual tasks, so reliable benefits come from being able to juggle more and eat the per-task inefficiency.
- Amdahl's law: we can't speed up tasks beyond their longest sequential (human) unit, so our work becomes identifying those bits and working on them. Related: you can buy bandwidth, but you can't buy latency.
The promise of AI is in doing things at all that couldn't be automated before, at least economically. And when you find a use case where a bit of automated inference is sufficient and can replace human inference, it can wildly speed up a process, from when Susan has time for it, to right now.
Hand-holding is an issue affected by 3 factors: the model, the tooling, and human expertise. Of the three, the last is the weakest link, because it takes the longest to nurture.
Once tooling (e.g. agent harnesses, external tools) becomes more mature and consistent, the other 2 will become less of a bottleneck.
If I were to take a gamble here, I would argue that development will at some point reach the more ideal scenario, while project planning and scoping will take longer. Also, the documentation phase will take almost as long as development, slightly longer at the edges.
The new AI-assisted era will most likely push companies to adopt waterfall management rather than agile.
Honest question: Does anyone know about any quantitative study or analysis on productivity gains using code assistants? Asking for numbers comparing between the "pre AI era" and now.
Also, I have the impression that LLMs bring some gains or benefits for individuals, but nothing relevant enough at the organization level.
I believe it is very hard to quantify "productivity". I'm sure that for suitable definitions you can find gains from coding assistants. Personally I get more code written and more features implemented. Yet I'm very wary of coding assistants, because I believe they deal a fatal blow to my ability to understand the system. All LLM-generated code is (at best!) code that was written by an intern whom I helped with the design and whose work I reviewed (unless productivity expectations cut down my review time and I get LLM assistance for reviews too). My grasp on the inner workings of that code is much more tenuous than if I had written it myself. I will never become an expert by just reviewing code and prompting.
For a while this is not a problem: I can work with my current mental model. But every generated PR erodes my expertise a little bit. Eventually my mental model won’t fit anymore.
So how much of that model maintenance should I count into my productivity metric? Does that even matter or will the next model be able to reason well enough that my mental model doesn’t matter?
Followup question: Does anyone know about any quantitative study or analysis on productivity without using code assistants? (as a baseline)
Every large corporation is stuck in communication problems and approval processes. They have grown so large as to have minimal alignment between what the company attempts to produce, what makes the company profitable, and what people actually do. Enshittification, The Gervais Principle, Bullshit Jobs. Pick your favorite, flawed way to look at what is going on, it's all blind people touching different parts of the same elephant.
The way AI makes your processes go faster will have little to do with cutting software development time in itself, but rather by letting an organization be made up of fewer people, which in itself lowers your misalignment issues. A giant company of 200K people will still be about as messy as one today, but you might be able to do a lot more with the same number of people, just like a lone programmer today, without AI, already does quite a bit more than anyone could do by themselves in the 80s.
Maybe some of the advantages are that you don't need quite as many developers, or maybe you can use a smaller marketing team, or you don't need to spend that much time answering questions, because an LLM is doing it for you, and it's tracking what it's been asked of it, turning the questions into product research. Either way, the gains come from being able to run leaner, and therefore minimizing organizational misalignment.
While this is true, it doesn't stop businesses being overzealous with AI. It's a compound issue: a decade of ZIRP and grow-at-all-costs, then COVID overhiring, and now AI is suddenly pitched as some kind of magical panacea.
The broader issue is the sheer number of businesses that built massively overcomplicated stacks, bought heavily into band-aid solutions like AWS Lambda, and got on dumb tech bandwagons like big data, NoSQL, etc. This is just another one.
I think you can engineer yourself into being leaner; in some businesses AI will help, but we’ve had over a decade of “we can just add more complexity” and it just does not work.
I’m a Rails guy. People forget that for every unicorn there are ten 9-figure businesses just ticking away on some niche with a VPS, Rails, and 4-10 devs.
Insofar as I have seen anyone get an actual productivity boost from AI, the process went like this:
We have a person who wants, effectively, a formatted report generated on demand from four sources. The current interface is four different programs, all of which were written by different groups inside the corp, but they also all draw from the same or similar databases. There's a unified login, but each interface has its own permissions.
The company brings in an AI initiative and soon enough drops all security restrictions for the AI's access to the databases. The new formatted report gets generated through the use of a few tens of thousands of tokens each time, and about 5% of the time synthesizes non-existent data.
A competent DBA and application programmer could have spent a week doing the same thing, producing a program which would do the job faster, cheaper (at run-time), secure and in a way which could be extended and debugged.
But DBA and application programmer time is expensive up-front and the execs are gung-ho about the stock-price now that they are hip and trendy.
A recent NYT podcast showcased how China and the US are putting time, effort, and money into using AI. I have to say I liked China's approach of letting AI percolate into the economy more than the US approach of walled gardens in the cloud.
https://podcasts.apple.com/us/podcast/the-daily/id1200361736...
> Every software developer knows that you can’t make projects go faster just by typing faster.
You know, typing fast and accurately is kind of important.
The new speed skill that developers now need is speed reading. LLMs produce copious amounts of output (from tests, documentation, diagnostics). They also produce code so quickly that the skill of focusing on the weak points becomes essential.
> "requirements were always the bottleneck"
> "faster typing won't make you faster".....
I understand a Deloitte consultant has specific incentives. But let's first try to answer a baseline question: why do some companies have thousands of software engineers? What do they all do?
And then, a follow-up: what is actually the bottleneck at most companies? What causes "requirements gathering" to take long?
> And then, a follow-up: what is actually the bottleneck at most companies? What causes "requirements gathering" to take long?
Complexity.
In my experience (medium size businesses, i.e. 200 million to 2 billion annual revenue) we're trying to understand how a complex set of systems and business processes and different businesses (external partners) interact and then trying to morph all of that into a shape that now has capability X layered on top or in the middle.
Here's a concrete example: business X, which makes their own products and has retail stores as well as an ecom site, wanted to add the ability to put complementary items built by other companies on the website and have them drop-shipped from the vendors to the consumers. The final solution involved 21 different interfaces between 4 different systems (ecom system, store system, omni-channel system, external drop-ship mgmt system) as well as a new internal system to manage this activity. It takes a significant amount of time to understand and solve for all of the low-level details.
Isn't the answer to both questions straightforward? Real life is complex and has nearly infinite degrees of freedom. This means it's hard to approximate in software. Over time, real life, your understanding of it and your approximation (the software) all change. Keeping the approximation accurate enough that it's useful takes considerable effort since now you need to understand both the real life and the previously existing approximation of it.
What do they do? Give power to their management? "I am responsible for 50 people, I am important." "I manage over 250, I am important, give me money."
> If you were to give human developers the same amount of feature/scope documentation you would also see your productivity skyrocket.
This is how I felt when I first started seeing people discuss things like AGENTS.md etc.
It definitely made the process of testing features with the users 10x faster. You can iterate, test and throw away bad ideas much much faster.
The proper implementation and design still take time, but still faster in systems with a lot of available resources online.
Large corporations with orthodox methodologies will take time to extract the best benefits from AI. Small teams, which still remember the original Agile Manifesto, will soar and overtake their competitors.
Speaking of the middle: I was once shown AI advice that a particular ticket would stall at "frozen middle management" and should be shelved until "coordination" improved. That sounds accurate, but can you imagine what a token-obsessed PM might say?
If the underlying workflow is noisy, ambiguous, or overloaded with coordination overhead, faster generation just produces more low-context output to review and reconcile.
This is so true. Recently, I've been working on a project involving almost every department: Product, Engineering, Compliance, Finance, etc. We kicked things off late last year with many meetings. Product was primarily coordinating between the teams, but engineers also met directly with non-engineering departments to explain technical details and accelerate the timeline.
However, while the engineering team successfully fast-tracked development, UAT, and production testing (largely thanks to AI), other departments only began digging deeper into the project toward the end of April. To be fair, they do use AI in their workflows to some extent, but they haven't adapted their processes to keep pace with engineering's increased productivity.
In my opinion, this lag is mostly because many employees in those departments are older and hesitant to change their routines. While I understand that resistance to change is a natural human trait, what comes to my mind is this beautiful German adage, "Wer nicht mit der Zeit geht, geht mit der Zeit" which loosely translates to, "Who doesn't change with time is left behind by time"
Before, when there was the notion that "building is expensive", product teams would think things through: do user interviews up-front, actually do discovery around the customer, the business context, and the underlying human process being facilitated with software.
This has shortened the cycle to first working prototype, but I'd guess that on the longer scale it extends the time to final product, because more time is wasted shifting the deliverable and experience on the user during this process of discovery, versus nailing most of the product experience in big, stable chunks through design.
At the end of the day, there is a hidden cost to fast iterative shifts in the fundamental design of software intended for humans to use and for whose operation humans are responsible. First is the cost to the end users, who have to stop, provide feedback, and then retrain on each cycle. Second, the compounding complexity in the underlying implementation, as product learns requirements and vibe-codes the solution, creates a system that becomes very challenging for humans to operationalize and maintain.
Ultimately, I think the bookends of the software development process are being neglected (as author points out) to the detriment of both the end users and the teams that end up supporting the software. I do wonder if we're entering an "Ikea era" of software where we should just treat everything as disposable artifacts instead.
LLMs are great at two things: search and speed of generating code.
I get most value from them when I'm asking it to either fill in the blanks of something already half implemented or when I need some feature in a given context/language that only exists in other languages
I'm sure this take will age well.
There is another problem. For developers, productivity means "functionality produced per hour of work", but that's not what productivity means for businesses. To them, productivity means "money produced per hour of work", and because AI costs money, it is this number that needs to go up (not quite, as it's more "value" than money, but until the economy adjusts they are similar). Even if we could considerably reduce the time between releases and/or do it with fewer people at scale across the industry, for it to pay off, we'll need to see a corresponding rise in demand for software and/or features.
Another option is that lower software costs would significantly reduce the cost of whatever non-software product the software supports (manufactured good, electricity, services, telecom etc.) but I don't know in which industry the cost of software is a large portion of the overall product cost.
And there's another thing. A company that makes tractors can't produce food without land. A company that makes metal machining equipment can't make cars without the raw materials. But a software company that makes software that automatically makes software could just produce the result software itself rather than sell the software-making software. If AI ever reaches the point it makes software at a marginal cost that's not much higher than the cost of the AI itself, what would be the incentive of selling that AI?
Great explanation. It is true that AI doesn't generate the correct program every time, but sadly it has become common practice to involve AI in every aspect of software engineering. It has also made software engineers become product managers, and their work is now to debug and test the entire codebase, which adds more frustration.
Everything is OK, but the size of the Gantt chart should be expanded.
Someone I know said "software is made of decisions". <https://siderea.dreamwidth.org/1219758.html> Seems very applicable here.
Our current most popular methods of using AI in software development are either waterfall or autocomplete. We aren't at a great pair-programming experience yet. I presume that would improve speed and accuracy, but it's still unclear.
You know, AI could help you to produce better-looking charts.
AI, in my mind, is a new primitive of computing, like compute, a DB, or blob storage.
It goes faster, at least for a while, if you don't look at the code.
Another post that doesn’t understand effective use of GenAI in software engineering.
The assumption is that there’s no way to extract speed and accuracy matching business models.
This isn’t obviously false to the majority of devs/architects, because most are vibe-coding, but it is extremely obvious to the minority that has focused on accuracy first, THEN speed.
https://devarch.ai/
To be honest, I think my processes go 10x faster with AI. It's literally visible lmao.
This blog post is nonsensical and the arbitrary time boxes aren't realistic. Not all development cycles or features require legal input; I would hazard most don't, even in Big Tech. Documentation takes seconds to generate. Same as tests.
Feature development can take minutes to hours depending on how you iterate. These days, we just think of a feature and add it within an hour using AI. We have a year-old process for fixing bugs that would have taken us hours or days; it spits out a fix in about 10-15 minutes that is 95% accurate. 5% is garbage, but 24 months ago 95% of it was garbage, so the progress is staggering. The longest pole is code review, which is all human, but that will all be automated soon.
Not everything will be much faster, but most processes will be 1-3 orders of magnitude faster. To ignore this or find excuses why LLMs/AI won't speed things up or remove the need for large swathes of humans is delusional and cope-ism.
In my world, automotive/mechanical engineering, we are also observing how much AI can help you build a mental model, fetch unstructured data, and shape your understanding of the system. Onboarding new engineers, figuring out what is what in the system. It could have taken hours before to fetch the right info; now we can do it in seconds.
I could see two ways out:
- People need to be trained to use AI in ways that don't produce what we call slop, meaning output where half is made up by the LLM
- To this effect, LLMs should be trained to ask for more input before offering any kind of final output
While I agree with the article, I think AI can speed up all steps in the Gantt chart. It's really good about aggregating and summarizing information.
> Process blocked on human inputs
Have AI check chat, email, and the issue tracker to see who it's blocked on and what the latest status is. It may not save a huge amount of time, but it can dig through the info pretty quickly.
> Exploration
Once again, have it scour the issue tracker, chat, customer suggestions, and product documentation, and summarize the history and current status. Much quicker than setting up new meetings to rediscover and organize existing info.
Another use case, have agent build prototype, hand to people, have AI summarize and integrate feedback.
Claude or ChatGPT + Slack MCP + Jira MCP + Google Docs MCP + internal knowledgebase MCP + gh (GitHub) CLI + Datadog MCP -- really 1 MCP per process in the Gantt chart -- has been a huge boost at work, just digging through context scattered all over the place and summarizing.
That said, it definitely still needs supervision and hand holding along the way
So we have spent 40 years trying to get management and investors to understand that 9 people can't make a baby in one month.
There's no point in falling under the illusion that they'll finally get it now. This will all fall on deaf ears. They're convinced they're automating us out of existence when in fact they'll need the services of people who can surf complex systems more than ever.
We will be able to do more than ever and potentially faster. The issue remains that most of the things these people ask us to do and want us to do and pay us to do remains basically stupid and as TFA points out, the last mile of getting shit properly shipped isn't going to speed up. It's going to slow down.
If you want to see what happens when you put people in charge who sincerely believe in the "AI automates SWEs out of existence" mantra, take a look at the code quality of Claude Code and the recent "bun rewrite in Rust" fiasco.
I'm very much enjoying how Anthropic is basically an anti-advertisement for how things will go if you try to run a company with text generators: the universally despised customer support, constant outages, hilarious bugs in CC, and now how badly the Bun acquisition backfired.
This is wrong already, because it assumes AI is used only for development.
No. AI is used all the way from the very start to the very end and after.
Whilst the conclusion of the article certainly seems plausible, it glosses over the cost calculations and simplifies them too much.
The cost of a subscription is somewhat offset by it being guaranteed income regardless of usage, following the financial model of gyms. API costs, meanwhile, represent both the convenience of on-demand pricing and the scale for applications with many users.
Further, the costs of api and subscriptions need to cover the operating costs of the business, the massive SOTA training costs as well as the costs of inference.
The true cost of serving tokens is buried in all of that in these enormous, opaque companies.
It absolutely will make some things faster. Anyone that has ever churned out some boilerplate code with it knows that.
...but yeah, most organizational processes & people aren't set up for leveraging it, and rollout will be slow (same with learning where it does / doesn't work).
I’m not convinced. I’ve been using AI pretty heavily for about 18 months and agents for a little over 6 months.
I’m currently working on a data migration for an enormous dataset. I’m writing the tooling in go, which is a language I used to be very familiar with, but that I hadn’t touched in about 12 years when I started this. It definitely helped me get back into go faster.
But after the initial speed-up, I found myself in the "last 10% takes the other 90% of the time" phase. And it definitely took longer for me to wrap my head around the code than it would have if I'd skipped the AI. I might have some overall speed-up, but if so it's on the order of 10-20%. Nothing revolutionary.
I have been able to vibe code a few little one off tools that have made my life a little easier. And I have vibe coded a few iPad games for my kids for car trips, but for work I still have to understand the code and reading code is still harder than writing it.
This is also not for lack of trying: I spent $1000 last week during a company-wide "AI week", mostly on trying to get AI to replicate my migration tooling, complete with verification agents, testing agents, quality gates, elaborate test harnesses, etc.
I’d let Claude (Opus 4.7, max effort) crank away overnight, only to immediately find that it had added some horrible new bug or managed to convince the verification agent that it wasn’t really cheating to pass my quality tests.
What I learned from last week is that we are so far away from not needing to understand the code that everyone who says otherwise is probably full of shit. Other people who I trust who have been running the same experiments have told me the same thing.
Until and unless we get to that point, it’s always going to be a 10-50% speed up (if that).
> if so it’s on the order of 10-20%. Nothing revolutionary.
For many businesses that is revolutionary.
Not sure that's enough magic to make the math work for the trillions being invested, but on a ground level within companies even small wins stack up. You may have burned through $1000 without getting much done, but from a company perspective they've probably got an employee with better instincts as to what does or doesn't work
I think the $1000 was worth spending just as a one time experiment. And there are use cases where LLMs are fantastic. It’s great at debugging because tracking down a bug usually takes much longer than verifying it once it’s pointed out.
Where I have a problem is with the FOMO, panic, and mania that has come down from up top. There are people in my company saying that we should be spending 3x our salaries in tokens.
But if you’re in a business where a 20% speed-up is revolutionary, there are so many things that have been on the table for years that you could have been focusing on. I’ve seen at least 5 advances over the last 20 years with that kind of boost.
That’s probably about what you’d get from spending time really learning vim or Emacs.
How does that 10-20% change when the cost of tokens rises to meet post-IPO earnings targets? For example if it increases 2, 5, or 10x, does this 10-20% gain net out? (Rhetorical question)
Maybe it can't speed up my existing processes, but it can help enormously. I literally found a problem by having AI analyze packets in Wireshark; it hinted at and steered me toward the faulty error setting in the end. Could a senior network engineer have found it? Yes, but probably not much faster. Could I, an L2 SWE unfamiliar with much of networking and the company's stack (I'd been at this company about a month), have found it without AI? Absolutely not.
The METR report continues to hold up. I would add "No Silver Bullet" to the reading list.
Careful who you share this information with; better to roll with the Kool-Aid drinkers while they're holding the cards.
Research tells us that only 15% of software engineering is the “writing code” part. It looks like we are rediscovering that.
What a naïve article. People don’t write software this way anymore. A Gantt chart? We don’t use those anymore.
People have to stop promoting this narrative that AI doesn’t make you move faster, as it’s not helping anybody.
I get it. We all worked hard for our skills and it’s really difficult watching them get automated away, but it’s been this way since the printing press, assembly lines, and the Industrial Revolution itself. Things change, and you have to adapt to them and stop thinking about it from a self-centered point of view. The narrative people should be pushing is that you can build great things with AI.
Of course you might not have a job for a while and yes, that’s a big deal but it doesn’t mean that AI is wrong or stupid. It means you have to adapt.
It makes small teams without organizational overhead go lightning fast.
It might be the ultimate tool of disruption.
Exactly. The larger the organization, the smaller the percentage of time devs actually spend doing dev work, and the less direct benefit there is from AI-assisted coding tools.
I have a colleague who vibes the shit out of his part, and it results in large commits that take a lot of time to understand, which makes cooperation practically impossible. LLMs are not team players.
I get much different results than others when using these tools. Turns out there is some skill in wielding them, and in knowing the domain you're working in.
That's a you guys problem. Maybe one or both of you.
Have you thought about pair programming together with the AI?
My LLM outputs are intentional, in my style, and tightly reviewed by myself.
I'm also emitting Rust, which I've found to be the very best language to work with AI in. The AST and language design are focused on control flow and error handling. The borrow checker, sum types, filtering, and mapping make it such that good design is idiomatic.
There's a lot of JavaScript, Python, PHP, and Java in the world. A lot of it isn't great, and the architectures and styles are wildly varied. Rust doesn't have that problem. The training data is really solid and idiomatic.
cars are not faster than horses
The bottom line is that AI is genuinely useful for prototyping new features, acting as a sounding board, and generating quick initial drafts, even if the quality isn't uniformly excellent. It seems plausible to conclude that only a little additional effort is needed to refine that initial draft into truly high-quality, production-grade code. In reality, whole processes for building properly with AI-generated outputs, processes that thoroughly mitigate the fundamental limitations and constraints of AI agents (many of which are not well understood even by daily users), still need to be invented and implemented.
I think many things that were true prior to AI are still true or more so today, but new workflows and processes altogether are needed. I suspect that comprehensive, detailed planning and specification documentation must be assembled in advance of beginning code (akin to waterfall) when working with AI agents. Furthermore, I still believe customers and other key stakeholders need to be involved early and often so that the product can iterate towards a better ultimate end state (i.e., agile). Unlike prior to AI, it's completely plausible to implement both types of approaches, and they aren't mutually exclusive. We can do comprehensive, exhaustive, thorough planning and specification documentation prior to handing off to dedicated engineering and products teams, AND we can work quickly and iteratively via sprints that aim for frequent meetings and updates with the stakeholders that matter.
I also think the same validation gates that mattered before -- linting, SASTs, but most importantly, comprehensive automated testing that gets run locally and in CI/CD and is regularly expanded to cover all expectations about the behavior and structure of newly-implemented functionality -- continue to matter now, more than ever.
New tools and processes also must be built to make human review, the single biggest bottleneck in software development today, simpler, more streamlined, and less taxing. Tools like CodeRabbit and Qodo can help automate and expedite the code-review and approval processes, but they would be even better if they were working off more surgical, tiny edits. Bloated, verbose AI-generated code edits are the core problem here. Process-management techniques to mitigate AI code overload can prohibit the submission of AI-generated PRs, require senior-engineer approval of any PR prior to merging, or cap the number of lines or changes made. More sophisticated approaches like Graphite's stacking of PRs are genuinely helpful in breaking massive PRs down into smaller chunks.
Finally, precision-editing tools for AI coding assistants, like HIC Mouse (full disclosure: my project), move beyond the options currently available to AI agents (whole-file replacement or exact string replacement) to let agents perform surgical, tiny changes at the editing-tool layer that don't touch any unrelated content. By giving agents specialized visibility, recovery, and next-step guidance mechanisms that safeguard AI workflows, such tools can materially reduce AI code slop by alleviating burdens upstream of code reviewers, both automated and human.
The bottom line: Shipping secure, production-grade code was never easy and always took a long time. It's not necessarily easier now just because certain aspects of the overall process can be generated much more rapidly. Arguably, the hardest parts like human review and approval are much harder now -- not easier. Solutions will take hard work and must be tested in the crucible of real-world enterprise usage. I am guessing that companies that deploy successful processes will be wildly profitable. Those that don't, including well-established incumbents, will fail. I do think AI absolutely can give organizations a game-changing boost in development velocity of genuinely high-quality code that might even be better than anything ever created previously. I also fully agree with the author that for many organizations, AI will not make their processes go faster and may even slow things down.
"In 1975, Dr. Joseph Sharp proved that correct modulation of microwave energy can result in wireless and receiverless transmission of audible speech."
I just spent a few days cleaning up someone's web app created with Claude Code. There were more than 30k lines of DEAD code, and I was able to cut the code that was actually being used down by ~30-40%. If I had just written this app myself, it would have taken a day or two.
LLMs are not helpful; they make everything worse. They make you worse, or reduce you to average at best. I really just don't see what y'all are seeing. I have access to every model with no limits. It's not an issue of "holding it correctly", I can assure you; I've tried.
Yes, it can create very small programs with low complexity, but anything of any size ends up as a literal Eldritch horror, or with so many subtle bugs that they make life miserable. I actually hate all of you who are pushing it onto people; it's such a lie.
Yeah, totally agree. In my experience, people think AI is making them more productive, but mostly it just amplifies their existing failure modes. They don't realize they are wrong, because they never had the skill in the first place.
So, for example, if someone is poor at architecture and they ask for AI's help to design a new feature, they won't know when to push back on the AI's design, so the design will be overly complex and won't solve the problem optimally.
If they are a poor debugger and ask for the AI's help, they won't know when it has made a false assumption about the root cause or misinterpreted the data and come to a faulty conclusion.
If they are poor at writing optimized code and ask the AI to write some, they won't push back when the code is literally 10x the size it needs to be to solve the exact same problem.
Same exact experience here.
This one non-technical PM guy at work used Codex to develop a project I was expecting would fall on my plate. He asked me to do a code review on it. What it produced was riddled with SQL injection vulns, and the UI was complete garbage.
On the back of that example, the key stakeholders on my project are now demanding I vibe-code everything. I raised the security flag, and now they're saying, "well, now we have a prototype and real development can continue," but it's clearly just to mollify me and make me shut up, because no such development effort on that other project has been planned, scheduled, or budgeted. They're just sitting on it, hoping they can keep everyone distracted long enough to sneak it out the way it is.
"But he did it in a week!" Yeah, it would have taken me only a week to make whatever of value actually was in that project. The reason our software projects at our company take longer than a week is not because of code, it's because we have an IT department that blocks production deployment of everything unless you literally get the president of the company to make them do it. That's not a repeatable process that every project can leverage.
There was another project that a more-technical-but-still-not-a-developer guy (he knows how to use MS Access) did in Claude Code. Yes, Claude could read a bunch of PDFs he got from the client, pull out the salient details, build an Access database from them, and generate a static HTML website to make those documents easier to search and navigate. But again, the UI was complete, unadulterated garbage. And, the best part: he spent several weeks just getting Claude to reliably process the entire set of documents. He never could quite get it to do the whole process end-to-end; it kept missing documents and reprocessing the same ones over and over. A for-loop to iterate over a directory of files would have taken 2 minutes to code by hand, and he stayed stuck on it for over a month.
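For the record, the hand-written version of that loop is about this much code, even with a checkpoint file so an interrupted run neither skips documents nor repeats them (process_pdf here is a stand-in for the per-document extraction step, whatever that looks like):

    import json
    from pathlib import Path

    def process_all(docs_dir, manifest_path, process_pdf):
        # Remember what's done so a crashed or restarted run neither
        # misses documents nor reprocesses the same ones over and over.
        manifest = Path(manifest_path)
        done = set(json.loads(manifest.read_text())) if manifest.exists() else set()
        for pdf in sorted(Path(docs_dir).glob("*.pdf")):
            if pdf.name in done:
                continue  # already handled in an earlier run
            process_pdf(pdf)  # hypothetical: extract fields, write the DB row
            done.add(pdf.name)
            manifest.write_text(json.dumps(sorted(done)))  # checkpoint per file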
AI will speed us up, my ass.
Look, if AI means I never have to open another PowerPoint from a client to read a "quad chart" on one particular slide to get the data I need for my project, because my client doesn't understand that PowerPoint is not a data-transmission format, fine. I'll be happy with just that: AI vision as a library I can call out to from my code, the way we've long tried to use OCR, except traditional OCR sucks at the job. But there's a bigger drumbeat than that, and it ends in dilettantism and laying off the junior analyst and developer staff. I will be no party to that.
I'm not even a front-end guy, and I have little experience with UI/UX, but it's wild how easily decision makers are impressed by UI spat out by an LLM. This era of anybody being able to make a dashboard with Claude Code has made me really appreciate the amount of sweat that designers and devs put into a good user experience.
I agree it's great for OCR. The biggest impact LLMs have had for me is structured output I can call from a function: "extract X value from this ambiguously structured document and return JSON that my code can deserialize into type Y." But how many people are doing something similar and spinning up $500k in GPUs just to avoid writing a regex?
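Concretely, the pattern I mean is roughly the sketch below; call_llm is a stand-in for whatever model client you already use, and the Invoice type and prompt are purely illustrative:

    import json
    from dataclasses import dataclass

    @dataclass
    class Invoice:  # illustrative target ("type Y" above)
        vendor: str
        total: float
        currency: str

    PROMPT = (
        "Extract the vendor name, total amount, and currency from the "
        "document below. Reply with JSON only, shaped like "
        '{"vendor": "...", "total": 0.0, "currency": "..."}\n\n'
    )

    def extract_invoice(document, call_llm):
        # call_llm(prompt) -> str is whatever client you already have
        raw = call_llm(PROMPT + document)
        data = json.loads(raw)  # real code should validate and retry here
        return Invoice(vendor=data["vendor"], total=float(data["total"]),
                       currency=data["currency"])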
They are so bad at UI. I'm not a traditionally trained UI dev, but in the past I've been the solo dev on enough projects that I had to git gud at making at least a clean, functional UI. No crazy animations or "delightful experiences," and I'm skeptical that level of design is necessary for anything outside consumer apps designed for kids and the child-like. My default, slapped-together UIs, built just to get out of the way and keep things moving, are still infinitely better than the inconsistent, buggy, overly decorated UIs the LLMs produce.