It doesn't have to be only extremes. How about using a little bit of AI? I also like coding, but on the other hand, writing the 1000th loop to iterate over some array is not exactly fulfilling either.
It's the same with wood craft as a hobby. At one end of the spectrum is a CNC router; that would somehow defeat the purpose (for me). I do use an electric drill/screwdriver, because it would be tedious to do everything manually. At the other end, I like to saw with a Japanese saw, because it is so good that I can work fast. Your mileage may vary.
Thinking about it, we might reconsider this whole philosophy of "software as a craft".
What is the main reason you think this? I have literally zero experience with coding, but I've used AI to build stuff I need. What would you say the major concerns are?
I think they're basically the same. "AI is useless for everything, therefore I will never use it", or "AI will solve everything, so I'm never going to even look at the code it produces". Both are extreme, absolutist positions, and both are impractical and foolish.
The ideas are alluring because they're extreme. However, the relatively boring stuff in the middle is far more likely to reflect reality, or to actually be useful.
I'm all in favor of talking about drawbacks of AI coding and potential future problems. No problem. But at this point just the blanket statement that you'll never use it is not reasonable. It's the equivalent of a master car mechanic seeing a robot that can pretty reliably rebuild a transmission in a few minutes saying "I'll never use that; I'll always do it myself." Okay, sure buddy. You keep taking 8 hours to do what now takes everyone else 5 minutes. Knock yourself out.
Almost everything he says is reasonable and correct, though. Using AI does undermine understanding, and companies hiring fewer juniors will be the death of them. Juniors using AI will also be the death of deep understanding if they continue. Robots fixing cars is not an apt analogy, because that's a rote task; LLMs are being used far beyond rote tasks, and that's where the danger lies. People forget that most frustration and struggle is crucial, not something to remove. And people, especially beginners, do not have the judgment to know when struggling is appropriate.
By analogy, you can imagine a mid-century human computer shouting "I will never use computing machines to perform numerical calculations! I must perform every addition by hand!" You can even imagine a commune forming around "hand-made calculation" and trying to sell services that are certified "automation-free".
> People used to proudly use Vi to write code. But now IDE is commonplace.
This is funny, because I've met many software developers who did not have, let's call it, a "Unix hacker background", and who only ever used Git through an IDE. Now, when they occasionally need to log into some system and use the command line, they are lost, because they are familiar neither with a shell environment (readline and such) nor with the Git commands.
I guess you can work that way but that's not for me, because I want to understand stuff. Of course I also use an IDE but I don't rely on it.
As much as I also enjoyed the actual coding part, a lot of it is just... boring plumbing. I enjoy solving the problems: designing the solutions, the algorithms, choosing the right tech, coming up with nice abstractions.
When doing agentic development, you need to be in control, at least for now. Every frontier model will still do incredibly stupid stuff, and if you let it cook unchallenged, you'll have a codebase that doesn't scale. Claude will happily keep piling turds upon your tower of turds, but at some point, even an LLM will have a hard time working in it.
When you are at the wheel, the engineering hasn't changed. You're still solving all the same problems, but you can iterate a lot faster. Code is now ~free, and the cost of having a bad idea is now much cheaper, because you can quite literally speak the solution out loud and fix it in a few minutes.
Back in the days of the early industrial revolution, when roads were being improved, I imagine there were quite a few "horses forever" people. Some people embrace progress, some hate it. No one, however, is comfortable with change if they have skin in the game.
And everyone having a calculator from grade 4 in school hasn't made everyone an accountant.
But to be fair, no one has ever experienced change as fast as our profession has.
Passion for code, dedication to the art of it... that is what has defined me since 1980 on my Magnavox Odyssey. So I understand perfectly what he is talking about, and I share most of it. Still, he sometimes makes me smile, a bit condescendingly, at his stubbornness (and I know what I'm talking about: I am stubborn).
Well now, he just makes me smile, not laugh. I keep my laughs for those who embrace AI, yelling "hooray" that they no longer need to code after having pretended to love it for so many years. No, you didn't. You thought you did, the way many people think they love their partner. Or their children. But to what point? What would you sacrifice for it? Whatever you say, you don't really know, and you probably have to say it anyway, so as not to look weird, or like a so-called bad person.
I don't care what people think here, so let's do it: I sacrificed my social life to my passions. My professional life too. I turned down promotions, even early on, because not coding, or coding less, was not worth any salary. Besides, I'm not made to manage teams anyway; they would blame me for being harsh, too demanding, so no, forget it. I want to remain happy, and your employees do too. Let me do what I love and everything will be fine (though don't take me for a grunt; I have things to say in my field, this is MY field). Yes, I sacrificed my life to it. Did you? No, you're not dedicated enough. That's not a shame; maybe I'm the one to blame, maybe I'm the pointless one, the one too much this or too much that, but I am what I am.
So I won't blame him for this article. We're probably the same kind of nerds in that regard, and nerds are just that: living in another dimension. Not merely different from so-called conformity, but something more unfathomable. That's why they only marginally work together: they can't even understand each other completely.
However, I would not have written that I don't use AI. Because I do use AI, but undoubtedly and definitely not the way most people do (or pretend to). And probably in a way the author never really tried. No need for the damn Claude and such, come on; the free options are enough for that way of using it. Need to refactor? Why would I ask the AI? I prefer to do it with LSP in my Emacs editor. Takes longer? Maybe. But I stay aware of the whole thing. My brain cells refresh, like RAM.
AI does not write my code. It often suggests, so often that it's not rare for me to ask it firmly to stop writing code and only talk about it, about some logic in a specific area. That's quite a different approach. And even if its code is good, I would be ashamed to kill/yank it (you thought I would copy/paste? Come on!). First, it's not my style, not my naming conventions, etc. I know you can lead it to use your style (Claude users always talk about config files for such things), but I don't fucking care. I don't want to depend on this, needless to say what I think about paying for it.
I can say it now: AI is the best companion the lonely nerd has EVER had. I wish the author would find that at some point. Not to write code for him, but to help when in doubt about something. Oh damn, I always have doubts, in many ways. It's sane to doubt. Never leave the thinking aside, no way! My brain cells need that.
Also to get clues about the options. Clues about the newer paradigms. Or simply to chit-chat about... code. Common practices. Algorithms (more often, it just responds with things I already know; so what? It wasn't fresh in my mind, let's continue). For example, it's very good for embracing modern C++; so many things have changed in that field. It's also good for making sense of sometimes over-verbose compiler errors, especially when you go crazy with your own templates (omg, yes, in that damn context it's a typename, such things). Or for working with unreadable regexps. Such things again.
That's not an approach for work, for jobs, for making a living, where we have deadlines and must respect them as much as we can. I haven't worked for a year or two; I'm just keeping an eye on technology, as I always did anyway. I always work much more on my own projects; it's too satisfying to stop, or to find an excuse to stop; no boredom. The best times are when I do things for myself, for my own dedication. In that context, without deadlines, the use of AI is sooo different from what they're all mumbling about. TBH, I've read many things about it, and NOT A SINGLE time have I read something that closely matches how I use it.
I don't buy time with AI. Most people do, but I definitely don't. On the contrary, I lose time... but for the better. The same way I always lost time digressing from my current goal. What's that new thing it mentioned? Wait a minute... wtf? Let's dig into it... wow, that's cool! One hour lost, yet... not really lost. And again, and free, no charge, not for that. Yes, I need to recall some context sometimes, just enough for what a fresh session needs to know. It does not need to know my whole codebase, just that bit, and maybe that bit too, and off we go.
So yes, you see, I have much more fun reading all of them than reading that article, which in many ways makes sense (and that's a breeze in the AI hype), but which is still too stubborn, and believe me, I rarely say that about other people in my beloved field ;)
AI helps me learn. Not to code, I've been doing that for ~45 years! No, to learn new things, new paradigms, or old ones I may have missed, because the coding field is so large you can't know everything. You just have to insist when it seems to only tell you what you already know; there's always something to learn at some point. One doesn't have to trust it all the way, we have tabs! Dig into the docs, the manuals, the APIs... just like before, except the AI often keeps you from searching too long for the right terms.
I could write more about it, a lot more, but I'm not writing an article, so let's stop now.
Two months ago I would have written the same thing.
I have 50 years of experience programming. I have adapted to change over time to stay employable. And I have cultivated programming as a craft, taking pride in my experience and expertise and in knowing how to write working code "by hand."
Then a couple of months ago my employer adopted AI, and I saw almost immediately that I couldn't keep up with it. I could mock it, criticize, point out the silly mistakes it makes, but I found it hard to argue with the results. The programmers using AI (Claude Code in our case) got their work done faster, and I couldn't honestly say their work looked any worse than it had before AI -- in fact I noticed more unit tests, fewer regressions, and abilities enhanced even from the more junior programmers. I had to get on the bus or get off, so I learned how to use AI and have seen my own productivity increase at least 3x.
I think we need to distinguish between programming as a craft -- the thing the author says he enjoys and won't give up -- and programming as labor someone else pays for. Anyone who has worked in the software development business for very long understands that our employers and customers don't care about our craft. They don't care about readability, maintainability, technical debt, best practices. They care about getting things done that address the business problems they have, or think they have.
For a long time we -- programmers or whatever euphemism you prefer -- have held the upper hand. Our bosses and customers had no alternative but to pay us to write code for them. They have had to put up with shockingly unpredictable processes that lead to chronic schedule and budget overruns. They have paid for low-quality software, then paid us to do it over. Only a fraction of software projects succeed (go into production and/or result in profit or cost savings), and an even smaller fraction get delivered on time and within budget. I don't mean to imply that we have done that on purpose, but programmers do like to pat themselves on the back and talk about best practices and clean code and every other method and tool "stack" we present as silver bullets, but have little to show for it, for decades.
Now AI comes along and the curtain gets pulled back, and we're indignant, threatened, defensive. A mere bot can't possibly write code as good as I can! The AI companies reek of fraud, corruption, environmental destruction.
No matter what happens to the current crop of AI companies, or how much money gets wasted or grifted, or how much pollution they cause, the LLMs and the coding tools they enable won't go away. They work, regardless of their owners and the damage they cause. Programming will look like this from now on whether we like it or not.
We can retreat into our craft, like the guy with hand tools carving tables in his garage. But I know I can't feed myself or my family with my software craftsmanship, because no one will pay for that anymore. Faced with this reality I had to decide to either leave the business (I am at retirement age anyway) or adapt and continue to get paid. We will all have to make that choice.
In my so-far limited but overall good experience with AI programming I think knowing how to program, and having a lot of experience, gives me a significant advantage over a non-technical manager or a newb programmer. I know how to tell the tool what I want it to do in clear unambiguous terms, and I know how to decide among alternative approaches, and how to judge the result. I won't call myself a "prompt engineer" anytime soon but that describes what I do now. The author can wait for this all to blow over and for programming to go back to hand-crafted code, but I don't think that will happen.
This writing is terrible and immediately put me off. The ‘superfluous’ swearing that the author seems to be proud of is instead going to put off a lot of his potential audience. Anyway, the ideas are nothing new that people haven't read before, as far as arguments against AI and the AI industry go.
I prefer this writing style 100x over the bland AI assisted garbage that I have to read every day.
Give me something with an opinion, personality and evidence of battle scars any day. There’s actually extra signal here that helps me process what I’m reading. When I understand where the author is coming from I can extrapolate, attenuate and compare/contrast the content with my existing mental model far better.
> These are not the same thing. You don't develop skills by reading about them. You have to use them, to process the information, integrate what you've learnt into your existing mental schema,
My mental AI detector would classify that passage as AI-generated with confidence around 85%. It would be 95% if the list had stopped at three items. Regardless of who wrote it, it's the same style.
The final sentence says it all:
> The thing is that even if I was wrong (I'm not) and AI was somehow helpful for software engineering (it isn't), I still wouldn't want to use it.
So even if you were wrong on the facts (you are) you still wouldn't change your mind? In other words, you're unreasonable and know you're unreasonable and think that's totally fine?
Well, cool. Next time, lead with that.
> I actually like writing code. Why would I want to give up something I enjoy?
This line was good and lines up well with why I use minimal AI. But indeed the rest of the article shouldn't have really been needed then if this was the point.
The bar for insulting people who have anything negative to say about AI is getting scarily low.
You can certainly disagree, but that sentence isn't unreasonable - you cut out context.
Thanks, helped me save some time
It's fine if people don't want to use AI for anything, and honestly I don't even believe you need to justify it. The justification given here is interesting and I think shows misunderstanding.
At one point the author writes
> AI is a tool that can only produce software liabilities
which I would argue is entirely caused by misuse of AI. Sure, you can have AI write a ton of code that often comes with subtle bugs. But using AI doesn't mean it has to write any code for you at all. I've often been using LLMs for security analysis, and the results are quite good: they surfaced vulnerabilities we had collectively missed, and we could fix them ourselves.
In this case, instead of creating liabilities, we were able to use the LLM to get more information about our code. It's entirely possible we could have deduced this information on our own, but we didn't, and an LLM can do it much more quickly than humans.
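The "LLM as reviewer, not author" workflow described above can be sketched without committing to any particular model API. A minimal sketch follows; the function name and the instruction wording are invented for illustration, and the resulting prompt string would be sent to whatever model you use:

```python
# Sketch of "LLM as reviewer, not author": assemble a security-review
# prompt over source files and ask for findings only, not patches.
# All names here (build_review_prompt, the instruction text) are
# illustrative, not taken from any specific tool.

from pathlib import Path

INSTRUCTIONS = (
    "Review the following code for security vulnerabilities. "
    "Report findings as file:line plus a short explanation. "
    "Do NOT rewrite or generate code."
)

def build_review_prompt(paths):
    """Concatenate source files into a single review prompt."""
    parts = [INSTRUCTIONS]
    for p in paths:
        text = Path(p).read_text()
        parts.append(f"--- {p} ---\n{text}")
    return "\n\n".join(parts)
```

The key design choice is the last instruction line: by forbidding generated code, the model's output stays advisory, and any fixes are still written (and understood) by a human.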
I suspect people with opinions like the author’s haven’t been in a project where people use LLMs responsibly. We had a senior dev basically just prompt and push, with very little oversight and minimal instructions, causing a lot of bad PRs and even prod bugs. That made me a sceptic of agents for many months.
Then some of us (myself included) started actually planning out the tasks for the bot: good specs, good acceptance criteria, file context, better self-review, better "agentic practices" (e.g., asking it to review its own work can sometimes help). Suddenly I noticed you really can use agents in a real-world 1M LOC project, if you do it well and responsibly (which also means you still retain some sense of ownership and actually review the shit).
Nope. My point is not about the quality of the generated code, it's the fact that it was generated. No matter how good it is it will always be a liability without the accompanying asset which is the understanding produced by undertaking the effort of writing the code. Generated code is exclusively cognitive debt. It is also, by definition, legacy code since no one wrote it.
I was a hand-tool woodworker, but the first time I had to rip 56 six-foot boards into 7 strips each, I immediately purchased a table saw. Now I use hand tools rarely, because I find the speed and quality of my cuts are better with power tools. I still use hand tools for things that require certain standards, but electric tools almost always produce better-quality results.
It’s about the same for AI coding, I just get better results.
Similar to woodworking, sometimes I use the LLM to rough out the concept quickly, then refine it. The initial roughing looks awful, and this seems to bother some people a lot. It's fine for me, because I still have the right tools to pull it all together. It saves me immense amounts of time.
Another analog is using power tools to make jigs for hand tools. I'm constantly rigging up test or data-wrangling harnesses to improve my ability to verify and refine solutions. It's ridiculously useful for improving outputs, even if it isn't writing the code that makes it to production.
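The jig analogy can be made concrete. A minimal verification harness, with invented names, might look like this: it runs any candidate implementation (hand-written or LLM-roughed-out) against known cases and collects mismatches, so refinement is fast.

```python
# Tiny verification harness sketch: exercise a candidate function
# against known (input, expected) cases and report every mismatch.
# The name `check` and the case format are illustrative only.

def check(candidate, cases):
    """Return (inputs, expected, got) triples for each failing case."""
    failures = []
    for args, expected in cases:
        got = candidate(*args)
        if got != expected:
            failures.append((args, expected, got))
    return failures

# Example usage: verify a sorting helper against a small oracle.
cases = [(([3, 1, 2],), [1, 2, 3]), (([],), [])]
assert check(sorted, cases) == []
```

Nothing here ships to production; like a shop jig, its only job is to make the real workpiece easier to cut accurately.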
Does your saw require you to pay for each use?
Your power tools run out of tokens and you have to open yet another online account to get around the daily sawing limits in order to finish the task today?
You can use qwen 3.5 for genuinely useful stuff without worrying about subscriptions and tokens. The 35b model works well on my Mac Studio and handles all kinds of menial tasks, so I can save my subscriptions for more important or complex things. I don’t think it’ll be long until models comparable to today’s Sonnet run on my machine.
I have no idea what the frontier will look like in a few years but I don’t doubt local models like qwen will still be a staple of my workflows.
And for what it’s worth, there are people out there who lose their sawing ability because a safety brake totals their blade, which then needs to be replaced for something like $100. Sometimes we pay extra for features we value. We can always pull out the hand tools if we have to. In the meantime, make hay, I guess.
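The local-versus-subscription split described above amounts to a trivial router. A toy sketch, where the model names and the keyword heuristic are entirely made up for illustration:

```python
# Toy sketch of the split described above: route menial tasks to a
# local model, reserving the paid subscription for complex work.
# Model names and the keyword list are invented, not real endpoints.

LOCAL_MODEL = "local-qwen"      # hypothetical local model
PAID_MODEL = "frontier-model"   # hypothetical subscription model

def pick_model(task, hard_keywords=("architecture", "security", "refactor")):
    """Return the model a task should go to; local by default."""
    text = task.lower()
    return PAID_MODEL if any(k in text for k in hard_keywords) else LOCAL_MODEL
```

In practice the routing signal could be anything (token count, file count, your own judgment); the point is simply that menial work never touches the metered service.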
I’m a professional so I don’t mind paying for tools.
Local models exist.
I think we have to be careful with such analogies. One does not have to have sweated for years with hand tools to understand what an accurate rip cut through ply looks like. On the other hand, if you just gave someone some rough cut wood and an electric sander, how would they even understand what that wood could look like having never used a good, sharp hand plane?
With AI coding we're talking about people producing abstract artifacts that most people do not understand and do not know how to test. These aren't just strips of board. They are little machines. So you shouldn't be asking whether you'd trust a table saw to cut your boards, you should be asking whether you'd trust someone who has never cut boards to build your table saw.
Everyone is talking about AI coding as if only brainless idiots are using it. I’m a professional; I can judge and fix the clanker’s output. I don’t give a shit whether some other idiot is using their tools right.
The vast majority of people using AI to code, even in production, are brainless idiots. Not knowing anything about the process and not needing to care is the entire draw of AI for most people regardless of the medium, and particularly for employers. Processes are moving to eliminate humans from the loop of AI production, not to require them.
People like you are an anomaly, not the norm. "I wrote an entire production quality SaaS without knowing what a function is" is the norm.
A table saw does not make decisions for you.
Is it? Isn't it the inverse? The speed of your cuts improves a bit with AI, but aren't the cuts all rough and in need of additional work? Isn't the quality less than what you would do by hand?
Because that's what every AI usage I've experienced has been.
Faster, yes. Useful, yes. Not better "finish".
I love using AI to code, as it saves me a lot of boring and repetitive typing.
I only commit code that is roughly the same as I would have written anyway.
It feels as good for developer ergonomics as the move away from CRT monitors.
I think I’m lucky that I never enjoyed programming; I enjoyed thinking about problems. That makes AI coding great, because I’m good enough at programming that I can describe what I want easily to an LLM, and I can judge the results very well for myself. I read and understand each line, so I know I’m not committing crap.
I feel similarly. I wanted to develop software, I didn’t want to “program”. I want my code to fix problems, I want the end result to feel great to use, I want it to be able to fix problems and feel great a year from now, too.
I want to be better month after month, I want to be able to discover new areas.
Using AI tools makes sense to me. It’s important that you don’t believe everything the hype men are telling on Twitter, but it would also be a mistake to believe there is nothing valuable in this technology.
> It feels as good for developer ergonomics as the move away from CRT monitors.
I kind of think CRT monitors were much better for developer ergonomics than LCDs, because modern monitors tend to be set much deeper into the desk, so you have to lean forward to see them. CRTs forced you to sit with better posture.
Just how much boilerplate have people been putting up with for this to be an oft-cited advantage of LLM usage? I know boilerplate has to exist somewhere, but I've been labouring these past couple of decades under the assumption that boilerplate should be rare and avoided.
We still know how to ride horses but we also drive cars, now.
You want a delivery service that takes 2 days instead of 30 minutes to bring you pizza, so that you don't forget how to ride your horse..?
Most people would not be able to ride a horse properly; it would end in catastrophe (or in nothing at all, the horse just standing around or wandering off in random directions). So your analogy is good, but not in the way you probably intended.
My point was, if you still want to know how to code you can, without having to lose all your customers.
[dead]
What's bad about AI:
- Vendors get to know everything about you
- Chips are becoming more politicized; I fear chips will be subjected to artificial scarcity, as housing has been, driving up prices.
- It causes a lot of centralisation. No, I cannot run DeepSeek at home. I don't have $100,000+ lying around; 1TB of VRAM is not chump change.
- It can be a threat to the flourishing of open source. There is no longer a reason for me to work with other devs to build something in public together. I just have the LLM write what I need. It isolates.
These are the only drawbacks. Everything else is clearly the artisans' ego getting in the way. That being said, if a piece of code is critical infra on which many other things hinge, I will still hand-code it.
EDIT: I think software will centralize heavily eventually; all the individual software devs we have now, and all the little custom shops, will coalesce into a few megacorps per state. Clothing used to be made by families (micro scale), for the village, not produced centrally. It's not unthinkable the same will happen with software. The vendors have unprecedented access to all software being made; not just the code, but all the reasoning and iteration behind it. Plus, they can use their own model for development, allowing them to undercut any software house they want. The software world will be completely unrecognizable in about two decades, I estimate.
It doesn't have to be only extremes. How about using a little bit of AI? I also like coding but on the other hand writing down the 1000th loop to iterate over some array is not exactly fulfilling either.
It's the same with woodcraft as a hobby. On one end of the spectrum is a CNC router; that would somehow defeat the purpose (for me). I use an electric drill/screwdriver because it would be tedious to do everything manually. On the other hand, I like to saw with a Japanese saw because it is so good that I can work fast. Your mileage may vary.
Thinking about it, we might want to reconsider this whole philosophy of "software as a craft".
> writing down the 1000th loop to iterate over some array is not exactly fulfilling either.
Code completion and snippets work well for this use case without AI.
Or just using a language that doesn’t make it hard to iterate over arrays.
TBF I haven't met one that does yet.
C comes to mind. Also Lua and Go, kind of.
Heh, ok. :) C and Go were on my mind when I wrote GP.
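To make the contrast concrete, here is a hypothetical C++ sketch (the function names are mine, purely for illustration): the index-juggling style that C-family code encourages, next to the range-based loop and the `<numeric>` one-liner that make "the 1000th loop over some array" a non-issue.

```cpp
#include <numeric>
#include <vector>

// C-style: you manage the index yourself; this is the boilerplate
// people reach for completion, snippets, or an LLM to avoid.
int sum_indexed(const std::vector<int>& xs) {
    int total = 0;
    for (std::size_t i = 0; i < xs.size(); ++i) {
        total += xs[i];
    }
    return total;
}

// Range-based for (C++11): the language handles the iteration.
int sum_ranged(const std::vector<int>& xs) {
    int total = 0;
    for (int x : xs) {
        total += x;
    }
    return total;
}

// <numeric>: no explicit loop at all; states intent, not mechanics.
int sum_stl(const std::vector<int>& xs) {
    return std::accumulate(xs.begin(), xs.end(), 0);
}
```

All three do the same thing; the point is only how much ceremony each version asks of you.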
What is the main reason you think this? I have literally zero coding experience, but I've used AI to build stuff I need. What would you say the major concerns are?
> Maybe the only useful thing about AI coding is making it easy to identify engineers that don't enjoy writing code.
Not true. I love coding. Much like I love walking and bicycling. I still own and use a car and an electric bike.
I also like having the option of getting to where I need to be fast.
AI discourse perfectly illustrates why you should just ignore people with viewpoints on the extreme ends of the spectrum.
It's not the extremism I mind, it's the absolutism.
Right now, I couldn't declare "I'm right" about any part of what's going on in this space, and I'm surprised when others do.
I think they're basically the same. "AI is useless for everything, therefore I will never use it", or "AI will solve everything, so I'm never going to even look at the code it produces". Both are extreme/absolutist positions, and both are impractical/foolish.
I don’t love that take. Extreme positions are often where the interesting ideas are, even if you don’t agree with them.
The ideas are alluring because they're extreme. However the relatively boring stuff in the middle is far more likely to reflect reality or be actually useful.
Exactly. I find these proclamations pointless and sanctimonious. We don't know if he is using these AI tools privately.
But yes this is a very extreme position.
I'm all in favor of talking about drawbacks of AI coding and potential future problems. No problem. But at this point just the blanket statement that you'll never use it is not reasonable. It's the equivalent of a master car mechanic seeing a robot that can pretty reliably rebuild a transmission in a few minutes saying "I'll never use that; I'll always do it myself." Okay, sure buddy. You keep taking 8 hours to do what now takes everyone else 5 minutes. Knock yourself out.
LLMs are fundamentally chaotic and unpredictable. They cannot do anything reliably.
Almost everything he says is reasonable and correct, though. Using AI does undermine understanding, and companies hiring fewer juniors will be the death of them. Juniors using AI will also be the death of deep understanding if they continue. Robots fixing cars is not an apt analogy, because that is a rote task; LLMs are being used far beyond rote tasks, and that's where the danger lies. People forget that most frustration and struggle are crucial, not something to remove. And people, especially beginners, do not have the judgement to know when struggling is appropriate.
By analogy, you can imagine a mid-century human computer shouting "I will never use computing machines to perform numerical calculations! I must perform every addition by hand!" You can even imagine a commune forming around "hand-made calculation" and trying to sell services that are certified "automation-free".
I’ll never use a tractor. Real farming is done with a horse and plow.
Not the best analogy. It's more as if industry had an ox pulling the plow and we just checked back at the end of the day to see what had happened to the field.
Not everyone vibecodes
Similarly: I will never use a high-level language. I enjoy writing ASM.
This kind of writing makes you sound immature.
People used to drive manual. Now it’s all automatic transmission. Some cars even drive themselves.
People used to proudly use Vi to write code. But now IDE is commonplace.
People used to write asm by hand. Transport Tycoon was written in assembly. But these days that would be insane.
Technological progress is an absolute thing. It produces too much convenience and wealth to ignore.
> People used to drive manual. Now it’s all automatic transmission.
Maybe in the USA. Here in Czechia, people generally hate automatic transmission cars.
> People used to proudly use Vi to write code. But now IDE is commonplace.
Plenty of developers use Vim and Neovim, which have numerous advantages over “modern” IDEs.
> People used to proudly use Vi to write code. But now IDE is commonplace.
This is funny, because I've met many software developers who did not have a, let's call it "Unix hacker background", and who only ever used Git through an IDE. Now, when they occasionally need to log into some system and use the command line, they are lost, because they are familiar with neither a shell environment (readline and such) nor the Git commands.
I guess you can work that way but that's not for me, because I want to understand stuff. Of course I also use an IDE but I don't rely on it.
People still have fun riding horses. Doesn’t mean we use them to get anywhere anymore
Why is this submission flagged? It's on topic isn't it?
This.
I've noticed that my friends who like AI don't like coding much.
I'm not an AI enthusiast, but a great recent use case has been puzzling out the pyqgis interface.
In a seemingly endless class hierarchy of non-Pythonic objects, using SuperGrok to figure out how to do QGIS automation has been a timesaver.
Basically it's an improved Google search in this mode, but hey: credit where due.
See: https://open.substack.com/pub/smitty1e/p/qgis-styles-for-mul...
I Will Never Swear Again After Realizing How Cringey It Looks In Reality
Good for you?
Meanwhile written by AI.
As much as I also enjoyed the actual coding part, a lot of it is just... boring plumbing. I enjoy solving the problems: designing the solutions, the algorithms, choosing the right tech, coming up with nice abstractions.
When doing agentic development, you need to be in control, at least for now. Every frontier model will still do incredibly stupid stuff, and if you let it cook unchallenged, you'll have a codebase that doesn't scale. Claude will happily keep piling turds upon your tower of turds, but at some point, even an LLM will have a hard time working in it.
When you are at the wheel, the engineering hasn't changed. You're still solving all the same problems, but you can iterate a lot faster. Code is now ~free, and the cost of having a bad idea is now much cheaper, because you can quite literally speak the solution out loud and fix it in a few minutes.
I share the author's love of coding and thus don't use AI for my own personal for-fun projects.
When it comes to employment and other people paying you to code, though, not using AI is increasingly a non-starter for most of us.
"I will never use AI to code" (publicly)
Not the hill I would die on.
Back in the day of the early industrial revolution, when roads were being improved, I imagine there were quite a few "horses forever" people. Some people embrace progress, some hate it. No one, however, is comfortable with change if they have skin in the game.
And everyone having a calculator from grade 4 in school, hasn't made everyone an accountant.
But to be fair, no one has ever experienced change as fast as our profession has.
Passion for code, dedication to the art of it... is what has defined me since 1980 on my Magnavox Odyssey. So I perfectly understand what he is talking about, and I share most of it. Still, he makes me smile sometimes with the condescension in his stubbornness (I know what I'm talking about; I am stubborn).
Well now, he just makes me smile, not laugh. I keep my laughs for those who embrace AI, yelling "hooray" that they no longer need to code after pretending to love it for so many years. No, you didn't love it. You thought you did, the way many people think they love their partner. Or their children. But to what point? What would you sacrifice for it? Whatever you say, you don't really know, and you probably have to say it anyway for fear of looking weird, or like a so-called bad person.
I don't care what people think here, so let's do it: I sacrificed my social life to my passions. My professional life too. I turned down promotions, even early on. Because not coding, or coding less, was not worth any salary. Besides, I'm not made to manage teams anyway; they would blame me for being harsh, too demanding, so no, forget it. I want to remain happy, and your employees do too. Let me do what I love and everything will be fine (though don't take me for a grunt; I have things to say in my field, this is MY field). Yes, I sacrificed my life to it. Did you? No, you're not dedicated enough. That's not a shame; maybe I'm the one to blame, maybe I'm the pointless one, the one too much this or too much that, but I am what I am.
So I won't blame him for this article. We're probably the same kind of nerds in that regard, and nerds are just that: living in another dimension. Not merely different from so-called conformity, but something more unfathomable. That's why they only marginally work together: they can't even understand each other completely.
However, I would not have written that I don't use AI. Because I do use AI, but undoubtedly and definitely not the way most people do (or pretend to). And probably in a way the author has not really tried. No need for the damn Claude and such, come on; free options are enough for that way of using it. Need to refactor? Why would I ask the AI? I prefer to do it with LSP in my Emacs editor. Takes longer? Maybe. But I'm still aware of the whole thing. My brain cells refresh, like RAM.
AI does not write my code. It often suggests, so often that it's not rare that I firmly ask it to stop writing code and only talk about it, about some logic in a specific area. That's a quite different approach. And even if its code is good, I would be ashamed to kill/yank it (you thought I would copy/paste? Come on!). First, it's not my style, not my naming conventions, etc. I know you can lead it to use your style (users of Claude always talk about config files for such things), but I fucking don't care. I don't want to depend on this, needless to say what I think about paying for it.
I can say it now: AI is the best companion the lonely nerd has EVER had. I wish the author would find that at some point. Not to write code for him, but to help when in doubt about something. Oh damn, I always have doubts, in many ways. It's sane to doubt. Never set the thinking aside, no way! My brain cells need that.
Also to get clues about the options. Clues about newer paradigms. Or simply to chit-chat about... code. Common practices. Algorithms (more often, it just tells me things I already know; so what? Never mind, let's continue). For example, it's very good for embracing modern C++; there are so many things that have changed in that field. It's also good for making sense of sometimes over-verbose compiler errors, especially when you go crazy with your own templates (omg, yes, in that damn context it's a typename, such things). Or for working with unreadable regexps. Such things again.
That's not an approach for work, for jobs, for making a living, where we have deadlines and must respect them as much as we can. I haven't worked in a year or two; I'm just keeping an eye on technology, as I always did anyway. I've always done much more work on my own projects; it's too satisfying to stop, or to find an excuse to stop; no boredom. The best times are when I do things for myself, out of my own dedication. In that context, without deadlines, the use of AI is sooo different from what they're all mumbling about. TBH, I've read many things about it, and NOT A SINGLE time have I read something that closely matches how I use it.
I don't buy time with AI. Most people do, but I definitely don't. On the contrary, I lose time... but for the better. The same way I always lost time digressing from my current goal. What's that new thing it mentioned? Wait a minute... wtf? Let's dig into it... wow, that's cool! One hour lost, and still... not really lost. And again, and free, no charge, not for that. Yes, I need to recall some context sometimes, just enough for what a fresh session needs to know. It does not need to know my whole codebase, just that bit, and maybe that bit too, and off we go.
So yes, you see, I have much more fun reading them all than reading that article which, in many ways, makes sense (and that's a breath of fresh air in the AI hype), but which is still too stubborn, and believe me, I rarely say that about other people in my beloved field ;)
AI helps me learn. Not to code, I've been doing that for ~45 years! No, to learn new things, new paradigms, or old ones I may have missed, because the coding field is so large you can't know everything. You just have to insist when it seems to only say what you already know; there's always something to learn at some point. One doesn't have to trust it all the way; we have tabs! Dig through the docs, the manuals, the APIs... just like before, except the AI often saves you from searching too long for the right terms.
I could write more on it, a lot more, but I'm not writing an article, so let's stop now.
Two months ago I would have written the same thing.
I have 50 years experience programming. I have adapted to change over time to stay employable. And I have cultivated programming as a craft, taking pride in my experience and expertise and knowing how to write working code "by hand."
Then a couple of months ago my employer adopted AI, and I saw almost immediately that I couldn't keep up with it. I could mock it, criticize, point out the silly mistakes it makes, but I found it hard to argue with the results. The programmers using AI (Claude Code in our case) got their work done faster, and I couldn't honestly say their work looked any worse than it had before AI -- in fact I noticed more unit tests, fewer regressions, and abilities enhanced even from the more junior programmers. I had to get on the bus or get off, so I learned how to use AI and have seen my own productivity increase at least 3x.
I think we need to distinguish between programming as a craft -- the thing the author says he enjoys and won't give up -- and programming as labor someone else pays for. Anyone who has worked in the software development business for very long understands that our employers and customers don't care about our craft. They don't care about readability, maintainability, technical debt, best practices. They care about getting things done that address the business problems they have, or think they have.
For a long time we -- programmers or whatever euphemism you prefer -- have held the upper hand. Our bosses and customers had no alternative but to pay us to write code for them. They have had to put up with shockingly unpredictable processes that lead to chronic schedule and budget overruns. They have paid for low-quality software, then paid us to do it over. Only a fraction of software projects succeed (go into production and/or result in profit or cost savings), and an even smaller fraction get delivered on time and within budget. I don't mean to imply that we have done that on purpose, but programmers do like to pat themselves on the back and talk about best practices and clean code and every other method and tool "stack" we present as silver bullets, but have little to show for it, for decades.
Now AI comes along and the curtain gets pulled back, and we're indignant, threatened, defensive. A mere bot can't possibly write code as good as I can! The AI companies reek of fraud, corruption, environmental destruction.
No matter what happens to the current crop of AI companies, or how much money gets wasted or grifted, or how much pollution they cause, the LLMs and the coding tools they enable won't go away. They work, regardless of their owners and the damage they cause. Programming will look like this from now on whether we like it or not.
We can retreat into our craft, like the guy with hand tools carving tables in his garage. But I know I can't feed myself or my family with my software craftsmanship, because no one will pay for that anymore. Faced with this reality I had to decide to either leave the business (I am at retirement age anyway) or adapt and continue to get paid. We will all have to make that choice.
In my so-far limited but overall good experience with AI programming I think knowing how to program, and having a lot of experience, gives me a significant advantage over a non-technical manager or a newb programmer. I know how to tell the tool what I want it to do in clear unambiguous terms, and I know how to decide among alternative approaches, and how to judge the result. I won't call myself a "prompt engineer" anytime soon but that describes what I do now. The author can wait for this all to blow over and for programming to go back to hand-crafted code, but I don't think that will happen.
This writing is terrible and immediately put me off. The ‘superfluous’ swearing that the author seems to be proud of is going to put off a lot of his potential audience. Anyway, the ideas are nothing people haven't read before as far as arguments against AI and the AI industry go.
I prefer this writing style 100x over the bland AI assisted garbage that I have to read every day.
Give me something with an opinion, personality and evidence of battle scars any day. There’s actually extra signal here that helps me process what I’m reading. When I understand where the author is coming from I can extrapolate, attenuate and compare/contrast the content with my existing mental model far better.
> These are not the same thing. You don't develop skills by reading about them. You have to use them, to process the information, integrate what you've learnt into your existing mental schema,
My mental AI detector would classify that passage as AI-generated with confidence around 85%. It would be 95% if the list had stopped at three items. Regardless of who wrote it, it's the same style.
Not exactly anonymous, but even so:
https://www.penny-arcade.com/comic/2004/03/19/green-blackboa...
I'm with you. What is the point of that?
[dead]
[dead]