I'm very thankful I came of age during the golden age of personal computing. I was able to own my own computer(s) and earn a living writing software on them and for them. Fifty years was a good run, and I consider myself lucky to have participated in it.
IMO we've gone full circle: from dumb terminals chained to mainframes and the whimsy of someone else's rules, restrictions, and rent-seeking; to my own bought-and-paid-for computer sitting on my desk that did exactly what I told it to do, running software that never changed unless I wanted it to change; and now back to dumb terminals (browsers) that talk to mainframes (the cloud) that not only harvest and sell my personal information to the highest bidder but constantly change the rules and restrictions on my software, rent the software back to me, and push changes that I never asked for and never wanted in the first place.
I will never use spicy autocomplete for anything, and I find it depressing that people are being forced to use it in order to keep their job. I see a very dark future for computing if real skills are all replaced with garbage being vomited out by rules engines that harvested their "guess the next word" results from today's internet.
> I've come up with a set of rules that describe our reactions to technologies: Anything that is in the world when you're born is normal and ordinary and is just a natural part of the way the world works. Anything that's invented between when you're fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you're thirty-five is against the natural order of things.
It's funny because I watched a lot of amazing new tech appear after I turned 35, and most of it was exciting. Learning these things was fun and rewarding. You could say it made me happier.
Not sure why LLMs feel the opposite. Maybe it's because of the terrible marketing and pushing it down everyone's throats. Maybe it's because of the personality of people like sama, or how it's being used to produce the so-called AI slop globally. Maybe something completely different. But there's something bleak and off-putting in it.
I’m not so sure. Douglas Adams was an avid technologist who worked on two interactive fiction games: the famously cruel Infocom Hitchhiker’s Guide and Starship Titanic. I don’t remember whether there was anything free-form about the dialogue in the HHGG game, but Starship Titanic had many bots you could talk to. It was immensely fun, and I suspect he would have loved the ability to spin out dialogue a little more naturally.
On the other hand, the HHGG universe is just packed to the brim with deranged robots. Everybody loves Marvin, of course, but my favorites were the sycophantic ones, like the elevators that sigh with pleasure upon delivering you to your destination. Adams always seemed to perfectly anticipate the insanity of marketeers, and I expect that we’ll actually get some of this someday…
A huge part of what attracted me to programming was how free and open it was. The fact that literally anyone with a computer could install Python/Javascript/etc for free and create virtually any software they wanted, limited only by their own abilities and determination, was wildly exciting to me. I would say empowering, if that weren't such a cheesy overused term. If you were any good at it, you could get a great job at an interesting company.
Now like you said we're entering a world where anyone with a computer can pay a giant tech company thousands of dollars a year to spin up some agents for them. That's much less exciting to me, and I'm certain I would not enter the field if I were just starting out right now (assuming there even was a junior job available).
We've seen how big tech monopolies treat domains they control like search and social media. They try to extract all of the value, leaving nothing for the individual or common good, and they're quite effective at it. I'm not looking forward to them gatekeeping the field of software development as a whole.
> A huge part of what attracted me to programming was how free and open it was. The fact that literally anyone with a computer could install Python/Javascript/etc for free and create virtually any software they wanted, limited only by their own abilities and determination, was wildly exciting to me.
But you can still do that; AI is not preventing you from doing any of it in any way.
Good to see another Luddite (as they’d call us) on here! I am quitting tech in a month. I chose to go through six months of identity crisis, depression, and reinventing my life after 20 years in software rather than have those bullshit generators imposed upon me and compete with those for whom thinking is démodé.
Meanwhile the front page is people complaining that using a particular word causes their evil genie to go haywire. You guys still call this stuff engineering? Writing requirements in prose, because programming languages are too hard? Fuck that, I’m out.
Going full circle is what we do, it's everywhere throughout human history. Actually, one could argue it's how life works. Nature has seasons to help life grow and be balanced. We're only starting to understand how this affects us in a larger scheme of things. Who knows, maybe we will wipe ourselves to dust and be discovered by the next iteration until we reach v1.0.0
> that not only harvest and sell my personal information to the highest bidder but constantly change the rules and restrictions on my software
yeah i'm gonna call BS on that. what you describe was happening well before modern-day AI (LLM, agentic stuff etc) became mainstream: think of google accounts binding your identity to your searches, gmail, google adsense, facebook, instagram and twitter (and others).
And the products and services that do what you describe can do it just as well without AI.
So yeah the problem is absolutely real but AI is not the culprit here.
I mean, that works for you since you're retiring. But for people still working in the industry, you adapt or die. As it's always been.
The fact of the matter is, a person working with a bunch of agents is a lot more productive than just a person. It makes research faster. It makes experimentation faster. It makes output cleaner. And this is true across many disciplines, not just tech.
Also, it is a skill. Yes, anyone can chat with an LLM. But understanding the optimal workflow for what to delegate and what to do yourself is difficult. Understanding the need for precision in the language used, and learning how to elegantly phrase things that were previously just abstract thoughts, is absolutely a talent that can be refined.
If I had to guess, I'd say we'll probably see major breakthroughs across multiple disciplines within the next decade, largely because researchers and engineers can cover much more ground individually now, freed from the slow-moving coordination mechanisms that team dynamics require. Pretty good for "spicy autocomplete", as you put it.
Are they? I remember when heavyweight IDEs were all the rage; there was a similar sentiment that if you weren't using one of them, you would eventually be so much slower that you'd be out of a job. It only took maybe five years until people started asking themselves if the dependency on a big IDE (and its cost) was worth it. I don't think anyone would look at someone who prefers a stripped-down text editor today and think they are backward or doing it wrong.
We have yet to see hard numbers on time saved by those who use LLM tooling extensively. It could be it doesn't turn out as compelling as we might expect.
Just sayin', I never forced software developers to use NetBeans or Intellij IDEA. I'm certainly not changing my tune and forcing them to use LLM tooling either.
Vim and Emacs can do a lot of what IDEs used to offer, thanks to language servers and build servers. Before those, they were largely useless for large-scale development (believe me, I tried).
Maybe it depends. If what you want to build is one-shot crap anyway, then micromanaging LLMs to make them vomit what you need for that is "productive". I wouldn't know, because I prefer real work over the make-believe and leave the AI coding acolytes to be left behind and die when their ingenious plans explode in their faces.
> a person working with a bunch of agents is a lot more productive than just a person
[citation needed]
I try LLMs for something every couple of months, and I have yet to see them produce anything actually correct. Calling non-existing library methods, confabulations, etc.
But sure, they produce a lot of stuff in a short while. The utility of any of it is another question.
> I try LLMs for something every couple of months, and I have yet to see them produce anything actually correct. Calling non-existing library methods, confabulations, etc.
That's too pessimistic, the productivity gains are real and substantial.
OTOH, the hype train is out of control. It is nowhere near perfect and requires a lot of handholding and guardrails to avoid going sideways.
You need to adopt it to stay relevant, but don't fall for the excessive hype. At the end of the day the limitations are significant.
> I mean, that works for you since you're retiring. But for people still working in the industry, you adapt or die. As it's always been.
There are jobs outside of IT. They are harder, they have less benefits, they pay less. It's a whole project to switch your lifestyle so you can even afford them.
I know nobody who regrets making the jump. I hope to make it within this year. I'll be poor, but at least I won't work in IT.
> But understanding the optimal workflow for what to delegate and what to do yourself is difficult.
No it's not, you can learn it in less than a day. I've done it a few times while evaluating how much the agents have progressed (despite what people keep saying, not much).
> Understanding the need for precision in the language used, and learning how to elegantly phrase things that were previously just abstract thoughts is absolutely a talent that can be refined.
Some of us learned technical writing to communicate with _humans_ before, and we're sitting here alternating crying and laughing as y'all scramble to figure it out just to put all that into a hallucination machine.
Respect. I moved countries for a lower cost of living, and I’m gonna become a starving artist, so to speak, trying to use my software skills to make myself useful and earn enough to buy food, in a field where human ingenuity still reigns supreme.
And if I ever find money under the mattress, I’ll make a solar farm. Something useful for the world, for once.
Better to be content and poor than to live in golden handcuffs.
If your worry is that you won’t be able to “keep up” and you’ll be laid off, or fired, just wait for that to happen. Keep making a paycheck until then. Then you can start your barista job.
If the problem is that you hate the work, then fine. But why barista? Fine, if that’s what makes you happy. But there are a million jobs out there _if you are willing to relocate_.
Bluntly? Because working with y'all is becoming insufferable. Because I don't want to work in IT. Note this isn't "I don't want to program" or whatever. That's cool and fun. But the people in here? Oh gods.
Also, I'm sick and tired of working on projects where the best social benefit from my work would come if I stopped. And IT has this talent for doing this to even the most superficially useful projects. I worked on solar panel software that got turned into a scam by marketing. That takes a talent, of sorts.
The best time to jump out of IT was to never get into it. The second best time is now.
As for why barista? People need food and drink and coffee is great.
It depends on where you land. Not all programmers (and their managers) are brain-amputated zombies. But I do admit that finding that rare pocket of sanity requires a good portion of luck.
It's a bit arrogant and borderline Luddite to suggest that 'your era was legitimate' and that somehow these new things which you don't understand are somehow 'lesser' or illegitimate.
In the long arc of history, I'm doubtful we'll see 'the last 50 years' as 'the Golden Age' - that's just a personal, contemporary romanticization. More than likely, the advent of computers -> web -> AI etc. will be one block of the 'informational industrial revolution'.
The people who made the ostensible 'Golden Era' were pioneers, just as those breaking new ground are pioneers today. It's honestly 'depressing' that people who consider themselves 'Engineers' wouldn't see that as clear as day and be hopeful for the future on some level.
AI is a very real phenomenon, obviously vastly over-hyped in many ways, and it doesn't feel nice to get caught up in a tectonic shift against one's will, but it is bringing about legitimate progress in every sense that the Engineers and Creators before us did.
In the exact same spirit as DaVinci or Babbage.
If one wants to keep a horse in the stable, or a typewriter around for posterity or any other reason that's fine, but not under the notion that somehow they are better or more useful.
That the Luddites were acting on principle doesn't mean people wouldn't use the term.
Also, if you want to 'go there' you could find a much better word than 'immature' to say what you're trying to say.
The OP's posture is not tonally arrogant, but it's definitely intellectually arrogant.
The OP is claiming heritage of the 'Golden Era', which is a dramatic, egoic romanticization.
To place one's 'own story at primacy' above all others, insinuating that 'his skills' are the ones that are 'true and relevant' and that those using new tools are 'lesser' or 'not substantial', while being grossly myopic about the truly great Engineering that's going on... is frankly arrogant and insulting.
We can empathize with having to yield to a changing world, or with being too out of scope to even fathom the 'new tech', but that's very different from saying 'Frank Sinatra was the Only Great Singer; those who came after him had no talent'.
If it were the case, then fine, but it's obviously not. AI is a legitimate advancement that narrow minded people are struggling to fathom, and it's coming out in some ugly ways.
A true creator would probably take the magnanimous position that, after having made their contribution, they are sad not to be able to participate in what may be an even more substantial era of progress, and all of the wonders that will come of it.
Good gripes - we're all about to have robots in our homes (!), probably within 5-15 years; we're witnessing sci-fi unfold in front of us ...
There is no doubt that AI is world changing technology. I'm not sure I want to go back to the world before LLMs. However, he's right to lament the impending demise of personal computing. Our computing freedom is being attacked on all fronts by governments and trillion dollar corporations alike, and things are not looking good for us. Our machines are increasingly locked down by rent-seeking corporations. Software is increasingly in the cloud. Thanks to remote attestation, we get ostracized from digital society if we take ownership of our machines. It's starting to look like the "you'll own nothing" future is actually coming.
I do hope the open weight models keep distilling the frontier models, and that powerful and unlocked computer hardware remains accessible to us mere mortals so that we may run them with no limitations in our own homes. That's optimistic though.
Any engineer (any person, actually) can “learn to use AI” in a couple of days. It’s not rocket science; there’s no chance of being left behind. If you haven’t used LLMs at all, a weekend would be enough to be on par with everyone else in the industry.
Others are disagreeing with you here, and I do too.
The difference is profound, and takes more than a couple of days to get your head around the implications. I'd summarise it as: "if you give a computer the same input it always produces the same output, but if you give a model the same input it always produces different output". Add to that the output is often wrong and it can't reliably follow instructions, and the difference is so great it breaks most of your intuitions.
The reward of working with this piece of unreliable jelly is that it can be far smarter than you (think the difference between a man with a shovel and a 20-ton excavator - it can literally find bugs in minutes that would take a human hours or days), and it knows far more than you.
The engineering challenge is to make this near random machine produce a reliable product. It isn't easy.
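To make the "same input, different output" point concrete, here's a minimal sketch (assuming the OpenAI Python SDK, an API key in the environment, and a placeholder model name) that sends an identical prompt twice; with sampling enabled, the two completions will usually differ:

    from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

    client = OpenAI()
    prompt = "Suggest a name for a caching library."

    for run in (1, 2):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,      # sampling on: identical input, likely different output
        )
        print(f"run {run}: {resp.choices[0].message.content}")

Run it a few more times and you'll likely get yet more answers - that's the intuition-breaking part.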
The hype you see around them exists because it's trivially easy to get them to produce a feature-rich but very unreliable product, as Anthropic demonstrates with their vibe-coded claude-cli. I refuse to use it now. Among its other charms, it triggers a BSOD on Windows: https://github.com/anthropics/claude-code/issues/30137 (Granted, it's just another Windows bug: https://learn.microsoft.com/en-ca/answers/questions/5814272/..., but if you are shipping to Windows you should be working around such bugs.)
The better you are at architecting, or even at directing a junior developer, the better your output. Don't let AI make decisions; it's supposed to take your decisions and turn them into code. When AI makes decisions, the unexpected outcome is always on you.
> Don't let AI make decisions; it's supposed to take your decisions and turn them into code.
I let the AI make decisions all the time. I often approve them, and I sometimes revert them. Most of the time they’re really good decisions based on my initial intent, but followed by analysis I didn’t make but agree with.
I think there's a spectrum of where to draw the line.
There's clearly some level where you want a human making decisions for even the most vibey of project, because without some kind of a spec about what you're trying to build and what features you want you'd get nonsense.
But like... maybe don't stress the details too much.
> clearly some level where you want a human making decisions
Yes, clearly. There was a meme out there, "just make something cool idk".
Statements like "Don't let AI make decisions" are made because of the loss of control we experience as mechanical parts of our work (such as writing to files) get automated.
I always found it easier to write code myself than to direct a junior developer.
The level of teaching involved would always mean the overall velocity of work slowed down.
Some people say you can throw them the drudge work, but I find that if you're doing coding right (e.g. you don't let your code base degenerate into a mess of boilerplate), there is barely any drudge work to do.
You're missing the real goal of directing a junior, which is that you're teaching them to be a team player. Junior devs will surpass your expectations; the rate at which they goof, or are about to goof, should decrease over time the more you mentor them. If you do it right, you now have a strong ally and coder under your belt. Or would you rather someone else teach them their bad habits?
Perhaps but at least when you are directing a junior developer, even if badly, you'll eventually get a non-junior developer on the other side. With an AI agent, you'll get ... what?
With current models, you're right, there will be nothing to show for the effort except the code itself.
I suspect that will change sooner or later. Models will be cultivated over time the way we cultivate full-time employees now, with an acquired awareness of what they're building, new skills picked up in the process, and insight into how the larger system works.
There's a long-running instance of the model on the provider that's allocated to my organisation? Or are you thinking more of a server-side memory system, similar to the (currently very fallible) ones like Honcho and Mem0?
What happens when my org stops paying that provider? Do I get to take the now senior agent with me to the next provider? Does the provider have to delete it (and all that learning is lost forever)? Does that now become a free agent that can be hired by the next organisation like an employee (one that probably doesn't know how to keep industry secrets to itself)?
> If you haven’t used LLMs at all, a weekend would be enough to be on par with everyone else in the industry
Disagree. It takes a lot of experimenting to find the right balance between sufficient guardrails and insane hallucinations. And it'll be different depending on the work domain.
I'm still refactoring AI workflows every week after more than a year or so and still working on it. Will probably be a perpetually ongoing effort as models change.
But does this translate as "one year of cumulative work" or rather "one year of rearranging your workflow and discarding obsolete ideas"?
If you spend a year walking in circles, someone can easily close the gap with one step. Especially if models and harnesses are supposedly getting more powerful all the time.
can you share any examples of these "new and better ways to use them"? because the only way I've used LLM and seen other people use it is to literally just talk to it, which doesn't require any skills beyond basic conversational abilities.
They're difficult and hard to predict because they're still primitive, despite what their companies say. When (or if) they get advanced enough to deliver consistently, there will be no chance of being left behind, because even a kid will be able to use them effectively. Right now they're still at the gimmick level, although a very impressive one.
If the models get to a point of total consistency there's still a LOT that we need to figure out and learn about how to use them.
Let's say models can exactly and correctly write any code you ask of them.
- How do you break down a project into a sequence of requests to models?
- How can you most effectively parallelize the work - models will never be instant, so there will always be benefits in working out how best to use several agents at once
- Now that the models can handle the details of Lean, and SwiftUI, and Oracle stored procedures, and thousands of other technologies that you never got around to learning in the past... what can you do with those, and how do you pick which projects to go after?
- How do you collaborate with other engineers and designers and product people in a world where you can churn out the right code reliably in a few minutes?
The models we have today are already effective enough to change the shape of our work as software engineers. As the models continue to improve, figuring out and adapting to whatever that new shape is becomes even more complicated.
Just learn, sure. But the difference in my efficiency between day 2 and month 6 of using it is significant. Yet I feel I am barely scratching the surface of it.
> a weekend would be enough to be on par with everyone else in the industry
I kind of agree in general that it is a learned skill, but considering how unclear people generally are when they communicate, I'm guessing it'll take longer than a weekend to be able to catch up, especially catch up to people who've been working on precise and careful communication and language for years already in a professional environment.
A weekend is enough to get going, but not nearly enough to 'be on par' with everyone else.
That said - what we have learned in the last year could be compressed quite a lot - there are a lot of steps we could skip, and 'learn by failure' that need not be repeated.
It takes a while to get the subtleties of it, it's among the most highly nuanced things we've ever encountered.
If one has been reading a wide variety of books/papers/articles/whatever their whole life, and one has been mindful of how to communicate with the "written word" as it were, it takes about 3 hours to be wildly effective with this technology. I think it took longer to learn google-fu than it did to learn how to use this technology effectively.
The statement is absurd because the skill curve for AI tooling is so shallow you can mess around for a day or two and get "caught up" with the zeitgeist. And what you need to know to get started is actually far less these days than it was 1.5 years ago, thanks to all the product refinement that has taken place in the space.
The only real risk is that today there's an expectation from employers that you've got some AI experience under your belt you can articulate. But you can get that experience today.
6-12 months ago I felt like I was constantly behind the curve with all the different things people were doing to get more out of their Claude Code. As the year has progressed, though, all of those features keep making their way into vanilla Claude Code at a faster and faster rate. Now someone working on the bleeding edge is using things that I'll be using, without having to think about them, a month from now. It has really reduced my anxiety about being left behind.
That's the thing: any "advancement" you might discover will be integrated into the main tools soon enough. In fact, I'd say you probably shouldn't even learn them before they are integrated; that helps you filter through the noise and avoid wasting time on something that isn't going to take off.
Most good programmers are good at writing. If you’re capable of simultaneously writing instructions for a dumb abstract machine and keeping those instructions understandable for humans, you’re clearly good at expressing at least technical ideas.
I feel like I use AI this way, but a majority of my peers lean too much into it. There used to be the saying "we don't think, we google", and I see the same with AI usage. As soon as a roadblock appears, the situation is pasted into GPT without further engaging with it; then they pick up the phone and open an app while GPT does its thing 0_o
I have a coworker like that too, my pet theory is that they're not passionate about their job to begin with. It's just something that can pay their bills.
While waiting for Claude to finish, we talked about our hobbies outside of work, and the same guy went into deep detail on how steroids and the HPG axis work, and even gave me a spreadsheet with several NCBI PubMed links on the topic.
I think we are all naturally more creative and opinionated about things we are interested in.
An orthogonal observation: Bearblog seems to have become an anti-AI echo chamber. Their community responds very positively to posts exactly like this one [1] [2] [3]
I think it's just important context to keep in mind that these sorts of takes are very typical to top https://bearblog.dev/discover/ in the same way that certain types of posts are designed to rank well here. I considered migrating my blog there earlier this year and ended up deciding that, while I loved the product, the community was not healthy.
People are worse at mental arithmetic than they were in the recent past, so it's not clear that they aren't "dumber" in the sense people meant at the time.
And did our thinking about the importance of being good at arithmetic change in response? I think so.
We also used to be much better at remembering things: when we relied on oral histories, our memory skills were far sharper than they are now. And there's a quote from Socrates criticizing writing as a crutch that degrades our skill (https://www.perseus.tufts.edu/hopper/text?doc=Perseus:text:1... , the last bit). Over time, we've just moved to valuing other things more.
Well, with anything, practice is key. When I was in school, I was in a math competition where you had to do everything in your head. There was no scratch paper, you could not modify your answer once written, and erasing was obviously not allowed either. I wasn't the greatest at it, but I didn't suck at it either. That was decades ago, and I no longer do math in my head that way. What I used to do in seconds now takes a couple of seconds just to think about what needs to be done, plus the time to come up with the result.
Students score lower on standardized tests in the 2020s than they did in the 1990s, so your stance feels misguided. Although I don’t think Google and calculators are the main culprits; I do think it’s due to the larger technology/internet landscape.
I once worked for a guy who typed 7 + 4 into a calculator, after freezing for 1.5 secs trying to work it out in his head. It was in a "stressful" situation (not something extreme, we just were in a hurry), and I'm sure the guy could add those numbers in his head, generally... he owns his own business, after all. It took so much out of me to not move a face muscle.
Sounds like you haven't used it much. It starts small with you forgetting the arcane params to commonly used tools that you don't need to type anymore. Where it will stop nobody knows.
"The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man."
Richard Stallman was unreasonable. So was Linus Torvalds. I'm hoping that something wonderful and entirely human-centered will come out of the anti-AI movement, up to and including a bifurcation of the internet.
Take electricity adoption, for example: by your adage, it was unreasonable pro-electricity people vs. unreasonable anti-electricity people. We know how that turned out; I don't see a lot of people joining the Amish.
It's okay to have strong opinions (be "unreasonable"), but in the end humanity as a whole (the "reasonable" people) will judge whether your opinion is a good one or not. Only time will tell.
Once you write something and publish it, it's just out there; it doesn't really get healthy or unhealthy. I do not think all writing is meant to be, or needs to be, a representation of someone's mind and its health. We write to have the opinion exist outside of ourselves. Why would we even read things if what we read didn't have strong beliefs or opinions? It sounds so boring!
In my practice, I've found AI more useful in adversarial mode ("criticize this concept", "find a possible bug in this code", "challenge me", "quiz me on this knowledge"), because the knowledge found that way adds to your own skills.
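As a minimal sketch of that adversarial mode (assuming the OpenAI Python SDK; the model name and the file being reviewed are placeholders), the trick is pinning the critic role in the system prompt so the model attacks the work instead of completing it:

    from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

    client = OpenAI()
    code_under_review = open("patch.py").read()  # hypothetical file to critique

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": (
                "You are a hostile code reviewer. Find concrete bugs, edge cases, "
                "and counterexamples. Do not praise and do not rewrite the code.")},
            {"role": "user", "content": code_under_review},
        ],
    )
    print(resp.choices[0].message.content)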
You don't need super big brain IQ to be creative and expressive, all you need is simply a strong opinion on something, and you don't let AI (or other people) dictate otherwise.
Now the skill issue lies in whether your opinion is a good idea or not lol.
Some people who don't use AI will be left behind: those who work on things where LLMs are capable of a substantial share of the tasks, and who simply refuse to leverage the superhuman properties that LLMs have.
I don't think it's hard to catch up if such a person changes their mind, though.
Some people who do use AI will also be left behind: those who use it to replace their skills without developing new ones, and those who use it to do the same or worse work more cheaply. They will be left behind in a competitive world where others work out how to use it to do more or better work with no reduction in effort.
If LLMs mean I never have to open a PowerPoint from a client to pull out their "data" again, that'd be great. I gain nothing from being a manual data entry monkey for people who don't understand the concept that presentation-ready output formats are not data transmission formats.
But if I'm to be expected to employ vibecoding in my day to day job as a software engineer, I'll dismantle my house and go live off grid somewhere in Alaska. I have enough power tools and knowledge to do it. Probably massively healthier for my kids.
For now at least, I think it really depends on what type of coding it is.
I don't have any particular predictions going forward about it, but something I think about right now is, do I want to focus my time where the interesting decisions, the valuable contributions I make, are product-level thinking about what to build and what problems to solve? Or do I want to focus my time where the interesting decisions are technical ones, fully wrapping my head around a technical problem and coming up with a solution?
I do think both options are still available, and personally I love them both. But I don't know what types of coding would involve significant amounts of both activities anymore.
There’s still plenty of room for both, because they are just shifts of perspective around the same thing: solving a problem for someone.
Product is when you’re seeing things as the one who has the problem and designing the solution in a way that is usable. Technical is when you shift to seeing how the solution can be implemented, and then balancing tradeoffs (mostly costs in time and monetary resources).
While the code is valuable (it is the solution), building it is quite easy once you have good knowledge of both sides.
The issue with AI is not in its capabilities, but in people rushing to accept the first version when there are still unknowns in the project. And then changes cost almost as much as redoing the project properly.
I think it's pretty obvious that people who offload their thinking to an LLM will eventually get used to not thinking hard about things. Anything you stop doing regularly eventually atrophies. Thinking hard about things and performing at work is as much a skill as it is an innate property of being smart, as evidenced by the many "prodigy" sorts of folk who languish in obscurity later in life.
A lot of people do outsource their thinking to AI, so it's not that weird to bring up. That's effectively how many AI companies are marketing the technology.
But it's definitely possible to use AI without letting it think for you. OP should at least acknowledge that.
Those who dogmatically refuse AI outright may be disadvantaged for some things in the future. But it's also probably hyperbolic to say they will be "left behind".
I agree with OP that it's the other way around: while some will gradually lose basic skills by relying more and more on AI for productivity's sake and out of laziness, the value of those "people who don't use AI" will go up, because they chose to simply keep "learning the hard way".
> Why wouldn't you aim to be better, to learn how to be or do something that AI would never?
Because it doesn't make sense to try to be better than a tool. A woodworker could use a hand saw and take an hour to cut wood... or he could use a buzz saw and cut it in a few minutes. Is the woodworker any less of a woodworker when he uses a buzz saw vs. a hand saw?
Outsourcing thinking to AI is not healthy, and certainly if everyone used AI like this we're doomed.
I still think it's true that those who don't use AI will be left behind, but it's a bit tautological, because the thing they're left behind on is AI. A lot of the biggest companies on earth are putting a lot of money into AI, but if you're OK with working for a company that is not putting all its money into AI, that's perfectly fine.
Just like blockchain was everywhere ten years ago and now is just kinda _there_. If you got in before the hype, you could have made a lot of money. If you didn't, you were left behind. I was left behind, and I'm OK with that.
I think what we're seeing is it's just an amplification of whatever intrinsic motivations people have; the whole mirror to the self thing, on steroids.
Obviously people who are motivated by curiosity will have a different view, and those who value creativity will end up thinking otherwise.
Also, it's basically impossible to separate the technical capabilities with the big money fascists pushing it.
A good job is one that brings you joy and improves your creativity; by definition, such jobs can't be in hell. If you mean well-paying, that's a different thing entirely; ditch the fancy car and adjust your lifestyle.
I have a fancy car? News to me. I'm just trying to pay my bills and live a sensible and reasonably comfortable life. These days that requires a lot of money.
I'll be in my home for longer than the recession lasts. Seems a little silly to sell it and walk away from my hard-earned lifestyle (which I quite enjoy) to save some money for just a few years.
Mind you, given the topic of the article, I took the jobs in question to be largely in the software industry; we're not talking about minimum-wage workers here.
But as a programmer I am quite baffled when peers my age, often without kids, struggle in this kind of way. My essentials are rent, food, a metro card, a library card. When I hear people who make what I make say that living requires a lot of money, that usually includes two dozen subscriptions, a few grand in useless electronics per year, and ordering food most of the week.
So, are you suggesting that an acceptable lifestyle would be an empty studio apartment with nothing inside of it, no pets, no partner, no meaningful possessions?
Personally, I have pets, a partner, and thoughtfully selected and meaningful possessions. I don't collect crap, and nothing I own or do is particularly extravagant. But I'm not exactly living an ascetic life either. A pretty typical lifestyle, I'd say. And I don't consider any of this a moral failure, if that's what you're getting at. Though admittedly it's not as affordable as living like a monk.
I don't advocate living like a monk. I play in a band, play football every weekend, and play chess; you can go to church; most of that is practically free.
I'm very much in favour of participating in culture - real culture, though. You can ditch the 1k Taylor Swift tickets, Disneyland visits, and high-end gym for the 20-buck local jazz festival.
I think the people who live socially like ascetic monks are, ironically, the people who build themselves a home gym for 10k and then complain about not having any time for friends because everything is too expensive.
Organizations and communities that embrace both AI users and AI abstainers will be the true winners.
A bimodal strategy gets you the best of both worlds: the ability to rapidly explore and develop ideas, and the ability to critically and cautiously assess them.
I have a feeling that a big risk of using AI all the time is that our own neurological capacity starts to dwindle.
Just as many people leading sedentary lifestyles have to make a deliberate effort to exercise, because inactivity is really bad for our bodies, I think we're going to realise that a similar process is necessary for our minds.
You really want to be spending a bit of time every day operating at your cognitive limits - trying to fully engage your System 2 - if you want to avoid brain atrophy. Coding used to kind of give you this exercise for free, but you can go really far with just your System 1 nowadays - literally get things done while scrolling Reddit.
I'm trying to allocate 30-60 minutes a day to doing something difficult, like writing code by hand for an unfamiliar problem or reading and summarising difficult papers without AI.
People who can only use AI will be left behind. It is easy to shut off your brain when using AI and then get overwhelmed by the amount of code it produces. Worse, though, is when people replace programming experience with AI. I have seen a lot of really bad AI code. I can spot and repair it. Others cannot. And that is a problem. And I am not talking about purist principles; I am talking about bad, unoptimized code that I can spot with just one look.
It is a tool just like syntax highlighting, code completion and refactoring tools before it. You need to know how to use them, where their usefulness ends and you should probably have an idea how to do it yourself without the tool. It is okay if you will be less efficient, but it's bad if you just can't.
I don't (yet) use AI in the way I am expected to. I have not integrated it into my IDE - can't be bothered, plus I code in Notepad++. Rather, I use it in a browser and have raging arguments with it over the course of three days about a design. After we come to an agreement, I write the code.
This energy would be better directed anywhere else.
The author chose to take offense at a false dichotomy, presented vaguely, that serves no purpose other than dividing and poorly labeling everyone in an area where much nuance applies.
I think this is perhaps a side effect of consuming too much content and feeling overwhelmed with it.
Engaging with stuff like this only amplifies its effects. How about doing anything else instead? Maybe learn something new, like how to channel your anger.
Trading practice of primary skills for indirect skills like AI is like a writer deciding they should stop writing directly and get really good at Microsoft Word.
Personally I just accept that all technologies are great and must be embraced! This way I do not have to think about ethics and potential implications for society.
My take on it is I would rather code than ask the machine to code. It's frustrating though how many open source projects now are overrun with massive PRs and nobody to code review them. This feels like fallout from too much reliance on AI.
> My take on it is I would rather code than ask the machine to code.
Same. I don't really care about productivity or if AI is so much more productive, tbh. I'd rather just change careers at this point. I'd prefer not to just be a full time code reviewer while my agents go do the actual work.
But I'm also tired of this in between state. Either rip the bandaid off already, fire everyone, and force governments to implement UBI so I can finally be free, or finally admit that the productivity gains have been vastly oversold and the LLM apocalypse is only a half truth, half grift and get on with our lives.
> But I'm also tired of this in between state. Either rip the bandaid off already, fire everyone, and force governments to implement UBI so I can finally be free, or finally admit that the productivity gains have been vastly oversold and the LLM apocalypse is only a half truth, half grift and get on with our lives.
I'm also tired. I wish this would happen too.
I just don't think it will because it would devastate the market.
Well of course too much is bad for you, that's what "too much" means you blithering twat. If you had too much water it would be bad for you, wouldn't it? "Too much" precisely means that quantity which is excessive, that's what it means. Could you ever say "too much water is good for you"? I mean if it's too much it's too much. Too much of anything is too much. Obviously. Jesus.
Or a manifestation of having risk aversion that isn't easily swayed by peer pressure...
Everyone seems to know you can't trust the AI output, and that it is on you to review it. But whenever I talk to people who claim to be getting big benefits, there is always a moment they reveal that they are not really reviewing the output. They are just going with it.
Similarly, so many who claim to use AI as a search index eventually seem to just trust the summary instead of checking the references to figure out whether it is regurgitating fact or fiction.
I don't really know if these users always had low quality standards or low diligence, or whether the tool usage degrades them. But I see the correlation among the friends-of-friends network I can observe.
Yes, but it goes both ways. Using AI can be a great way to be productive while purposefully NOT learning how the sausage is made—say, boilerplate code in some devops system that you don't care about—allowing your attention to be focused on the part of the stack you actually care about.
On the contrary, using AI is like outsourcing your DIY to a professional joiner.
Sure, he'll get it done twice as fast and you might notice some tricks as you look over his shoulder. But when you need a second door hung, you'll either have to start learning from scratch or call him again.
I think of AI as just another abstraction layer, somewhat similar to what high-level programming languages provide compared to writing machine code. Deciding how deep to understand the abstraction layers is a choice the user has to make, which could be optional if they don't really need to.
Nevertheless, the responsibility of whatever a human produces with AI is still on the human.
With that said, knowing how to use AI in the way that's right for you can give you a huge advantage. You don't have to, though. And there is no standard way of doing it.
What I recommend to everyone is: give it a try and see if and how it could help you. In the end, you have to make the decision based on your constraints and on what you're aiming for and can sacrifice, including but not limited to speed, accuracy, learning, etc.
Just like every single trend that came before, they said you would be left behind:
If you didn't embrace OOP
Test-driven development
Behavior-driven development
Event-driven development
Pants-in-head driven development
SOLID
DRY
Cloud first
Virtualize everything
Microservices
Serverless
Everything JS
Everything TS
Everything Microsoft
This will never stop.
You either let someone be in the middle of you and what you want to accomplish, or you will be left behind.
Think about the most mediocre person you know. Now remember that 50% of the people around you are dumber than that.
Friendly reminder that we're still in the hype phase, even if it's the late stages.
To me the idea that a GPU which costs as much as a car must read its entire VRAM just to output a word sounds incredibly wasteful. I'm exaggerating here, but it is literally reading gigabytes of data and processing it to produce relatively little information.
Some data is truly worth the effort, but the majority won't be able to afford this long term - especially when those who capture the market increase prices.
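To put rough numbers on that (a back-of-the-envelope sketch; the model size and bandwidth figures below are illustrative assumptions, not measurements): generating each token requires reading every weight once, so memory bandwidth caps decoding speed.

    # Back-of-the-envelope decoding speed when memory bandwidth is the bottleneck.
    # Both figures are illustrative assumptions, not measurements.
    weights_gb = 70            # e.g. a 70B-parameter model quantized to 8 bits ~= 70 GB
    bandwidth_gb_per_s = 2000  # roughly 2 TB/s of HBM on a high-end accelerator

    # Every weight is read once per generated token:
    tokens_per_s = bandwidth_gb_per_s / weights_gb
    print(f"upper bound: ~{tokens_per_s:.0f} tokens/s")  # ~29 tokens/s

Batching many users' requests amortizes those reads, which is part of why the economics favor big centralized providers over single-user hardware.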
> a GPU which costs as much as a car must read its entire VRAM just to output a word sounds incredibly wasteful
Kind of how I feel about Bitcoin at this point. The coins take so incredibly long to mine if you aren't in a pool that it could cost hundreds of dollars in electricity to own a fraction of the coin months later.
It’s gonna be painful, and many firms will go bust - not because of being left behind, but because they got so deep into pushing LLMs that their competitors came in and offered their customers what they wanted. The customers don’t care how you produce it; they want the thing that leaves them in the best economic state. And this LLM mania is gonna cause many firms to forget this and go down paths they will later regret.
The author makes a great point about learning. Learning is what increases your intelligence, and if we substitute AI lookup for learning, we will literally get dumber. That said, AI models have a lot of information and can assist in learning. It's a tool; how will people use it? My fear is they won't use it to help them learn.
I'm no AI hypeman (nor the opposite, I guess), and I agree that replacing critical thinking and writing with AI will only turn out badly in the long term.
But "your dignity"? Do you mean something like "I feel ashamed that people saw my writing was actually AI", or something else?
I meant the indignity of trying to have a conversation with someone who at first seems like a reasonable professional, but who at some point in the conversation insults you with something like "I asked Claude and ..."
I suppose. But I still don't understand why users of AI should feel like they've lost their dignity. Does it matter where the AI runs, or is using AI just shameful regardless?
I sympathise with the author and the argument. I know the text is a rant, and as such, I can understand that the proposed consequences might not make sense. Still, there is a fun game you can play where you replace AI with "chess engine", and you get a text that would be fitting for a late-90s chess grandmaster but totally anachronistic today:
"Chess players who don't use engines will be left behind", they say. I can't emphasize enough how much I hate it when I hear/read shit like that because I'm pretty sure, in fact, that what will happen is the exact opposite.
People who rely on engines are the ones who will be left behind. They'll forget how to think, how to move the pieces, how to solve a simple straightforward mate in 3, how to tell victory from stalemate... they'll forget how to fucking LEARN. I think that's the part that makes me the saddest. What a beautiful thing it is just to play chess.
If you think Deep Blue can do better than you, why would you just let it? Why wouldn't you aim to be better, to learn how to be or do something that a chess computer would never do?
The problem is AI is being pushed and used as the equivalent of using a chess engine to tell you the best move during a match.
Maybe there's a way AI can be used to make developers better but it mostly just seems to be the equivalent of grand masters saying how great vibe playing is because now they can play 1000x more games every day. But don't worry, they're still steering the games.
Sounds like something Magnus Carlsen might say. I hear he's doing quite well out of the game of chess, and pointedly not playing how a computer would play, even though Deep Blue is clearly capable of winning more than he is and from more difficult positions.
Also, the world isn't as trivially solved by computation as a game of chess, so maybe delegating your job or how to be a better human to ChatGPT isn't as much of a winning strategy as getting the computer to suggest chess moves.
Deeper reasoning, longer term planning, and more efficient solutions have always separated amateurs from experts. That experience cannot be applied asynchronously or reduced to supervision. It has to be "in the loop" and there is always a lot of out-of-band information that only an experienced eye would notice and can't be trained into a model.
And I'm sorry to nitpick - but "People who rely on AI are the ones who will be left behind" is NOT the opposite of "People who don't use AI will be left behind".
> "People who don't use AI will be left behind", they say. I can't emphasize enough how much I hate it when I hear/read shit like that because I'm pretty sure, in fact, that what will happen is the exact opposite.
> [...] they'll forget how to fucking LEARN. I think that's the part that makes me the saddest. What a beautiful thing it is just to learn stuff.
I love learning. My life of self-education is so much richer with LLMs to help me.
There are dozens of other arguments for not engaging with AI. If your reason is "I love learning" I recommend at least dipping your toes in before you declare that AI is a hindrance, not a help, to people who love to learn new things.
Seriously, these are an autodidact's dream. I've been having an absolute blast learning about stuff from government structures and the different approaches to fusion power to what types of electrical conduit are used for what applications and appropriate connectors, heat pump sizing, etc. It's so ridiculously empowering. All this info that you had to use an enormous amount of time synthesizing and studying is now available at everyone's fingertips. I think we're going to see an explosion in productivity on all sorts of fronts, not just writing code.
Maybe it's a generational thing, but I'm old enough to remember when personal and office computers were really hitting the mainstream in the late 70s and 80s; the messaging was a lot friendlier - how they would save you time, help you, etc. - even though, practically speaking, they eliminated a lot of manual jobs.
This AI/LLM push from leadership is so damn tone deaf, like "you better do this", "ai layoffs", etc. I feel like they are jumping way too hard and fast into the "post-employee" thinking and deserve every bit of scorn from laymen.
It was always obvious that LLMs are bullshit. It's blockchain, but far, far worse. The US is investing too much in it, and the collapse has already begun: half of the planned data center builds across the country have been delayed or canceled.
I just don't get even the presumed risk here. How can something be so revolutionary in its capacity to increase productivity but still so esoteric or specialized that there is a risk of being "left behind"? Like all these things people talk about are, at the end of the day, products that want you to use them; they aren't gonna make it hard for someone to onboard in the future. Sure if all coding became ecommerce overnight and I'd never "learned" Salesforce, there might be brief friction there, but I could still just, like, learn Salesforce. It's gonna be a lot easier than learning good software engineering in general.
Why spend your life "learning" something whose whole deal is not needing to learn? Even if you gamble incorrectly, it's not going to be hard to get into!
Like, what, if I don't start practicing now I am not going to be able to... express concepts with natural language as well?
In the late 1950s, COBOL was introduced with the idea that programs could be written almost as if one were speaking English. But eventually people realized that writing COBOL well, in a style that resembles English conversation, was itself difficult.
Today, we are hearing a similar claim: “If you can describe the program in natural language, programming is basically finished.” But the industry is now discovering that describing the program well is the hard part.
This is also why ideas like harness engineering are appearing: methods for controlling the range of outputs, from poor to excellent, that can emerge from minimal input.
And honestly, I do not think the “vibe coding” phenomenon is entirely bad. The essence of programming is automation. Many people were previously limited because they did not know programming languages. Now, through AI, they can express themselves and turn that expression into working apps. Seeing this, I understand how deeply people have wanted to create.
I write industrial software that runs in large factory environments, and because of the nature of my work, it is difficult for me to use AI directly. These environments are usually closed networks, so AI does not really benefit my own production work. Even so, I still defend AI, because it functions as a new kind of voice that allows more people to express themselves.
Of course, capitalism distorts this. Many people use AI to chase money and capital, and as a result, a lot of low-quality apps are being produced. But on the other hand, what is wrong with the motivation of wanting to make something one wants to make?
I have been studying the history of programming, and I like Dijkstra’s famous line:
> Computer science is no more about computers than astronomy is about telescopes.
To me, this means that computing is fundamentally about automation.
AI has existed as a research topic almost since the birth of computers. We tend to think of it as recent, but it is a field with a history of more than sixty years. Starting from early work such as the Perceptron, there have always been people claiming that AI was a fraud or an illusion.
But now a new seed has germinated. The amount of complexity that a single human can handle has increased. Historically, the techniques for managing that complexity were things like programming patterns and software architecture. And even people who strongly argued for software architecture also warned that if architecture becomes detached from code, then something has gone wrong.
Memes always damage the essence of ideas. As information circulates, it degrades, and eventually the original meaning disappears.
The Dunning-Kruger effect is a good example. The original paper was not simply saying, “ignorant people show off, while knowledgeable people do not.” It was more about how both less competent and more competent people can have difficulty accurately assessing their own metacognition. But the idea became distorted.
The same thing happens to many famous ideas in programming. Knuth’s statement about premature optimization is also constantly distorted as it circulates.
In that situation, can we really say it is always bad to step away from online communities and learn through AI while cross-checking against books?
When I see people making extreme claims about this, I sometimes find it absurd. Of course, many people may flag or downvote my comment. But this is how I see it.
this discussion is so stupid. no one who isn't a moron is offloading all work and thought to LLMs. no one who isn't a moron is seriously afraid of their thinking and learning skill "atrophying", whatever tf that means.
it's clear that LLMs are unique in that you actually do have the capability to turn your brain off and blindly trust whatever it does for you. but it should be equally clear that that's a stupid approach. people will still use their minds, and this use gets empowered with proper use of LLMs. it's that simple. ffs, we take the fact that they pass the Turing Test routinely for granted now. let's not forget that this technology is legitimately incredible. it stands to reason that you are seriously handicapping yourself by not trying to use it.
People not using AI will 100% get left behind, as surely as those who refused to use 'cars' or 'computers'.
There is absolutely no doubt; it will be as impossible to avoid as 'plastic' or 'electricity'.
The narrow challenges of 'AI aided development' or 'AI aided creative work' are legitimate - that part is real and fair, but it'd be an overstatement to contemplate 'not using it'.
The cyclists who keep their muscles strong the 'hard way' ... will win the delivery war vs. cars!?
The carpenter who hammers every nail and saws every plank by hand 'the hard way' ... will win over the guys using power saws and nail guns!?
No - AI is changing the landscape.
What is 'hard and easy' are changing.
We won't need some skills, we will need others.
It may be harder to maintain some critical skills, but the upside is obvious.
What is fundamentally missing from this treatise is that 'there is always a hard way'.
Personally - I have never been more 'cognitively overloaded' than I am now. The AI 'amplifies' the depth of complexity one can reach; it's just at 1/2 a layer of abstraction above the code.
Driving a 'race car' at the highest speeds - is as challenging - and perhaps more so - than riding a horse.
The 'instinct to push back' is fair and there are innumerable legit criticisms ...
... but AI is just a new part of the stack and it will be as horizontally applied as 'software or the transistor' - it's not reasonable to think one could or should avoid it entirely.
With AI agents, you're obtaining a mildly lossy perspective of the code itself, whereas if you wrote it by hand, you'd have a more concrete understanding.
This is not too different from an engineering manager directing junior developers.
The stereotype of the engineering manager who has forgotten how to write a line of code is not wrong.
That's a fair point, but you're i) radically understating the broader impact of AI, ii) underestimating the power it will have over the short horizon, and iii) missing the fact that 'abstractions are real'.
i - AI is going to insert itself into so many things and in so many ways beyond 'helping you write some modules', so consider that.
ii - AI 2 years ago was useless for code; you can see how well it works now, and this progress is still very real. By this time next year, the power will be more evident, making the position harder to take.
iii - to your point - the real answer is 'abstractions'. We used to write machine code by hand as well, until someone came up with FORTRAN and C etc. Now, people have 'forgotten' how to do that, largely, because we don't need people to do it.
AI is, crudely, that abstraction. You don't have to know a lot about some things.
Now - it's very fair to highlight the fact that the abstraction isn't very clean (!!!) but that will come over time.
So yes - for writing software today, we're '1/2 a layer abstraction up' - and it's 100% essential to keep an eye on the code, the architecture etc. - it's 'not fully there' but it's better to look at this through the lens of growing capabilities because over the horizon, the argument starts to tilt.
I made a very concrete claim: that AI will be universal and widespread - embedded within all of the technology and systems we use.
It's so completely obvious that anyone denying it has to be living in some kind of rhetorical bubble.
It's truly a feature of 'online rhetoric' like HN/Reddit that people can strike these asymptotic postures and take themselves seriously.
We will use AI like we use plastics, cars, electricity, computers, etc.
That's it.
I'm sure there were a few people who thought that 'hand writing machine instructions' was the 'one true way' of writing software, but hey, what would we call them in hindsight?
There are so many legitimate ways to be a curmudgeon about, or wary of, AI, but this reactionary stuff is anti-reason. It's not an argument, it's gut-level, and we should just ignore it.
Yes, now that's a reasoned thought on how AI will affect us, but fortunately the AI is not 'doing our thinking for us' any more than 'calculators did', and that's not going to stop us from using AI.
People not using AI will be about as useful as those refusing to use e-mail or computers.
AI is a broad term, and ML algos for playing chess have fallen under it since the 1920s.
AI may replace some cognitive activity, but it also required cognitive skill to use 'slide rules' - which have been replaced, and we have not looked back.
It's not a bad rhetorical question - but it's moot in the face of the question of 'should we use it or not'.
It will do a lot of things for us - that part is inevitable and unavoidable.
This is an abacus-to-calculator situation. Some people still use an abacus. The vast majority do not. It's wild living through one of these technological transitions. People just eschew all common sense and critical thinking as it relates to the adoption of new technologies.
If it's good, lots of people will use it commercially. If it's generationally good, everybody will use it commercially because commercial use is about competition. It either gets banned outright, like steroids, or — if it doesn't get banned — those who use it will have a clear advantage and that will lead to a very small number of people who don't use it (in business).
This is not really something that opinions are required for because if you think LLMs are going away, your opinion is historically incorrect. Things that reduce toil and increase output do not go away.
Thank ghod I'm retiring in six months.
> I've come up with a set of rules that describe our reactions to technologies: Anything that is in the world when you're born is normal and ordinary and is just a natural part of the way the world works. Anything that's invented between when you're fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you're thirty-five is against the natural order of things.
Douglas Adams, The Salmon of Doubt
True. Even Adams himself would have been sickened by GenAI.
Happy Service!
Kinda funny that you're quoting a real author who would never in a million years have resorted to using slop generators.
"I strongly feel that this is an insult to life itself." -Hayao Miyazaki
We've seen how big tech monopolies treat domains they control like search and social media. They try to extract all of the value, leaving nothing for the individual or common good, and they're quite effective at it. I'm not looking forward to them gatekeeping the field of software development as a whole.
Fortunately, we do have more or less open models, and they get better and better each year.
Unfortunately, sama & co's hunger for global domination makes them more and more expensive to run.
> A huge part of what attracted me to programming was how free and open it was. The fact that literally anyone with a computer could install Python/Javascript/etc for free and create virtually any software they wanted, limited only by their own abilities and determination, was wildly exciting to me.
But you can still do that; AI is not preventing you from doing any of that in any way.
True, but this is like saying 10 years ago: you don't need to learn React, you can continue coding in Angular.
People do want to learn and use new tech, but what is promoted instead is access to a proprietary and (increasingly) expensive API.
Good to see another Luddite (as they'd call us) on here! I am quitting tech in a month. I chose to go through six months of identity crisis, depression, and reinventing my life after 20 years in software rather than have those bullshit generators imposed upon me, to compete with those for whom thinking is démodé.
Meanwhile the front page is people complaining that using a particular word causes their evil genie to go haywire. You guys still call this stuff engineering? Writing requirements in prose, because programming languages are too hard? Fuck that, I’m out.
Going full circle is what we do, it's everywhere throughout human history. Actually, one could argue it's how life works. Nature has seasons to help life grow and be balanced. We're only starting to understand how this affects us in a larger scheme of things. Who knows, maybe we will wipe ourselves to dust and be discovered by the next iteration until we reach v1.0.0
> that not only harvest and sell my personal information to the highest bidder but constantly change the rules and restrictions on my software
yeah i'm gonna call BS on that. what you describe was happening well before modern-day AI (LLM, agentic stuff etc) became mainstream: think of google accounts binding your identity to your searches, gmail, google adsense, facebook, instagram and twitter (and others).
And the products and services that do what you describe can do that just as well without AI.
So yeah the problem is absolutely real but AI is not the culprit here.
Take me with you, please.
Where he goes, he has to go alone.
That sounds like it could be a quote from a book or a movie. Any hints?
Basically a generic movie trope.
The Road by Cormac McCarthy
I mean, that works for you since you're retiring. But for people still working in the industry, you adapt or die. As it's always been.
The fact of the matter is, a person working with a bunch of agents is a lot more productive than just a person. It makes research faster. It makes experimentation faster. It makes output cleaner. And this is true across many disciplines, not just tech.
Also, it is a skill. Yes, anyone can chat with an LLM. But understanding the optimal workflow for what to delegate and what to do yourself is difficult. Understanding the need for precision in the language used, and learning how to elegantly phrase things that were previously just abstract thoughts, is absolutely a talent that can be refined.
If I had to guess, I'd say we'll probably see major breakthroughs across multiple disciplines within the next decade, largely because researchers and engineers can cover much more ground individually now, freed from the slow-moving coordination mechanisms that team dynamics require. Pretty good for "spicy autocomplete", as you put it.
Are they? I remember when heavyweight IDEs were all the rage; there was a similar sentiment that if you weren't using one of them, you would eventually be so much slower that you'd be out of a job. It only took maybe five years until people started asking themselves if the dependency on a big IDE (and its cost) was worth it. I don't think anyone would look at someone who prefers a stripped-down text editor today and think they are backward or doing it wrong.
We have yet to see hard numbers on time saved by those who use LLM tooling extensively. It could be it doesn't turn out as compelling as we might expect.
Just sayin', I never forced software developers to use NetBeans or Intellij IDEA. I'm certainly not changing my tune and forcing them to use LLM tooling either.
Keep in mind, we don't do a lot of things that big IDEs used to do.
Dumb example: graphical user interfaces. Heavyweight IDEs used to have a GUI designer (Netbeans had a very nice one).
GUI development is niche nowadays.
Also we have much better cross-editor tooling, just think of language servers (https://microsoft.github.io/language-server-protocol/) and build servers (https://build-server-protocol.github.io/). Back in the day each IDE had their own.
Vim and Emacs can do a lot of what IDEs used to offer thanks to language servers and build servers. Before those existed, they were largely useless for large-scale development (believe me, I tried).
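To make the cross-editor point concrete, here's a minimal sketch of what "speaking LSP" actually means on the wire: a Content-Length header followed by a JSON-RPC 2.0 body, the same for every client from Vim to VS Code. The server command is an assumption (pyright-langserver from the pyright package; any language server that speaks stdio is framed identically):

    import json
    import os
    import subprocess

    # Assumed server binary; swap in any stdio language server.
    server = subprocess.Popen(
        ["pyright-langserver", "--stdio"],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )

    # Every LSP message is a Content-Length header plus a JSON-RPC 2.0 body.
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {"processId": os.getpid(), "rootUri": None, "capabilities": {}},
    }).encode()
    server.stdin.write(f"Content-Length: {len(body)}\r\n\r\n".encode() + body)
    server.stdin.flush()

That one shared framing convention is why a single server can back Vim, Emacs, and VS Code at once, instead of each IDE re-implementing its own analysis engine.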
Maybe it depends. If what you want to build is one-shot crap anyway, then micromanaging LLMs to make them vomit what you need for that is "productive". I wouldn't know, because I prefer real work over the make-believe, and I leave the AI coding acolytes to be left behind and die when their ingenious plans explode in their faces.
> a person working with a bunch of agents is a lot more productive than just a person
[citation needed]
I try LLMs for something every couple of months, and I have yet to see them produce anything actually correct. Calling non-existing library methods, confabulations, etc.
But sure, they produce a lot of stuff in a short while. The utility of any of that is another question.
> I try LLMs for something every couple of months, and I have yet to see them produce anything actually correct. Calling non-existing library methods, confabulations, etc.
That's too pessimistic, the productivity gains are real and substantial.
OTOH, the hype train is out of control. It is nowhere near perfect and requires a lot of handholding and guardrails to avoid going sideways.
You need to adopt it to stay relevant, but don't fall for the excessive hype. At the end of the day the limitations are significant.
> I mean, that works for you since you're retiring. But for people still working in the industry, you adapt or die. As it's always been.
There are jobs outside of IT. They are harder, they have fewer benefits, they pay less. It's a whole project to switch your lifestyle so you can even afford them.
I know nobody who regrets making the jump. I hope to make it within this year. I'll be poor, but at least I won't work in IT.
> But understanding the optimal workflow for what to delegate and what to do yourself is difficult.
No it's not, you can learn it in less than a day. I've done it a few times while evaluating how much the agents have progressed (despite what people keep saying, not much).
> Understanding the need for precision in the language used, and learning how to elegantly phrase things that were previously just abstract thoughts is absolutely a talent that can be refined.
Some of us learned technical writing to communicate with _humans_ before, and we're sitting here alternating crying and laughing as y'all scramble to figure it out just to put all that into a hallucination machine.
> I hope to make it within this year.
What's your plan?
Changing flats so it's cheaper (it's hard but still possible here), then going for an entry-level "barista" job.
I'm gonna be very broke, but I'm not the first one in my friend circle to make the jump, so I have some support.
Edit: I probably will keep coding. Just... nobody else is ever going to see or use my code again.
Respect. I moved countries for a lower cost of living, and I'm gonna become a starving artist, so to speak, trying to use my software skills to make myself useful and earn enough to buy food, in a field where human ingenuity still reigns supreme.
And if I ever find money under the mattress, I’ll make a solar farm. Something useful for the world, for once.
Better to be content and poor than to live in golden handcuffs.
Why jump now?
If your worry is that you won’t be able to “keep up” and you’ll be laid off, or fired, just wait for that to happen. Keep making a paycheck until then. Then you can start your barista job.
If the problem is that you hate the work, then fine. But why barista? Fine, if that’s what makes you happy. But there are a million jobs out there _if you are willing to relocate_.
Bluntly? Because working with y'all is becoming insufferable. Because I don't want to work in IT. Note this isn't "I don't want to program" or whatever. That's cool and fun. But the people in here? Oh gods.
Also I'm sick and tired of working on projects where the best social benefit from my work would be if I stopped. And IT has this talent of doing this to even the most superficially useful projects. I worked on solar panel software that got turned into a scam by marketing. That takes a talent, of sorts.
The best time to jump out of IT was to never get into it. The second best time is now.
As for why barista? People need food and drink and coffee is great.
> working with y'all is becoming insufferable.
It depends on where you land. Not all programmers (and their managers) are brain-amputated zombies. But I do admit that finding that rare pocket of sanity requires a good portion of luck.
Probably goose farmer https://www.linkedin.com/in/dryuan/
That sounds quite expensive to start, to be honest. But if you can? Sounds fun.
It's a bit arrogant and borderline Luddite to suggest that 'your era was legitimate' and that these new things which you don't understand are somehow 'lesser' or illegitimate.
In the long arc of history, I'm doubtful we'll see 'the last 50 years' as 'the Golden Age' - that's just a personal, contemporary romanticization. More than likely, the advent of computers -> web -> AI etc. will be one block of the 'informational industrial revolution'.
The people who made the ostensible 'Golden Era' were pioneers, just as those breaking new ground are pioneers today. It's honestly 'depressing' that people who consider themselves 'Engineers' wouldn't see that as clear as day, and be hopeful for the future on some level.
AI is a very real phenomenon, obviously vastly over-hyped in many ways, and it doesn't feel nice to get caught up in a tectonic shift against one's will, but it is bringing about legitimate progress in every sense that the Engineers and Creators before us did.
In the exact same spirit as DaVinci or Babbage.
If one wants to keep a horse in the stable, or a typewriter around for posterity or any other reason that's fine, but not under the notion that somehow they are better or more useful.
The Luddites were rational. It's immature to use that word as an insult.
Nothing the parent said was arrogant.
That the Luddites were acting on principle doesn't mean the term can't be used.
Also, if you want to 'go there' you could find a much better word than 'immature' to say what you're trying to say.
The OP's posture is not tonally arrogant, but it's definitely intellectually arrogant.
The OP claims heritage of the 'Golden Era', which is a dramatic, egoic romanticization.
To place one's 'own story at primacy' above all others, insinuating that 'his skills' are those which are 'true and relevant' and that those using new tools are 'lesser' or 'not substantial', while being grossly myopic to the truly great Engineering that's going on ... is arrogant and insulting, frankly.
We can empathize with having to yield to a changing world, or being too out of scope to even fathom the 'new tech', but that's very different from saying 'Frank Sinatra was the Only Great Singer; those that came after him had no talent'.
If it were the case, then fine, but it's obviously not. AI is a legitimate advancement that narrow minded people are struggling to fathom, and it's coming out in some ugly ways.
A true creator would probably take the magnanimous position that, after having made their contribution, they are sad not to be able to participate in what is maybe an even more substantial era of progress, and all of the wonders that will come of it.
Good grief - we're all about to have robots in our homes (!), probably within 5-15 years; we're witnessing Sci Fi unfold in front of us ...
There is no doubt that AI is world changing technology. I'm not sure I want to go back to the world before LLMs. However, he's right to lament the impending demise of personal computing. Our computing freedom is being attacked on all fronts by governments and trillion dollar corporations alike, and things are not looking good for us. Our machines are increasingly locked down by rent-seeking corporations. Software is increasingly in the cloud. Thanks to remote attestation, we get ostracized from digital society if we take ownership of our machines. It's starting to look like the "you'll own nothing" future is actually coming.
I do hope the open weight models keep distilling the frontier models, and that powerful and unlocked computer hardware remains accessible to us mere mortals so that we may run them with no limitations in our own homes. That's optimistic though.
Any engineer (any person, actually) can "learn to use AI" in a couple of days. It's not rocket science; there's no chance of being left behind. If you haven't used LLMs at all, a weekend would be enough to be on par with everyone else in the industry.
Others are disagreeing with you here, and I do too.
The difference is profound, and takes more than a couple of days to get your head around the implications. I'd summarise it as: "if you give a computer the same input it always produces the same output, but if you give a model the same input it always produces different output". Add to that the output is often wrong and it can't reliably follow instructions, and the difference is so great it breaks most of your intuitions.
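To make the "same input, different output" point concrete, here's a minimal sketch with made-up logits (a toy, not a real model): decoders typically sample from a temperature-scaled softmax, so identical input yields different tokens across runs.

    import math
    import random

    def sample_next_token(logits, temperature=0.8):
        # Temperature-scaled softmax: higher temperature flattens the distribution.
        scaled = [l / temperature for l in logits.values()]
        m = max(scaled)
        weights = [math.exp(s - m) for s in scaled]
        # Sampling (rather than taking the argmax) is where the
        # run-to-run variation comes from.
        return random.choices(list(logits.keys()), weights=weights)[0]

    # The "prompt" never changes, yet the output usually does:
    toy_logits = {"bug": 2.0, "feature": 1.5, "undefined": 0.5}
    print([sample_next_token(toy_logits) for _ in range(5)])

Run it twice and the lists differ; push the temperature toward zero (or fix the random seed) and it collapses toward deterministic argmax, which is why "temperature 0" is the usual workaround when repeatability matters.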
The reward of working with this piece of unreliable jelly is that it can be far smarter than you (think the difference between a man with a shovel and a 20 ton excavator - it can literally find bugs in minutes that would take a human hours or days), and it knows far more than you.
The engineering challenge is to make this near random machine produce a reliable product. It isn't easy.
The hype you see around them is that it's trivially easy to get them to produce a feature-rich but very unreliable product, as Anthropic demonstrates with their vibe-coded Claude Code CLI. I refuse to use it now. Among its other charms, it triggers a BSOD on Windows: https://github.com/anthropics/claude-code/issues/30137 (Granted, it's just another Windows bug: https://learn.microsoft.com/en-ca/answers/questions/5814272/..., but if you are shipping to Windows you should be working around such bugs.)
The better you are at architecting or even directing a junior developer, the better your output, too. Don't let AI make decisions; it's supposed to take your decisions and turn those into code. When AI makes decisions, well, the unexpected outcome is always on you.
> Don't let AI make decisions; it's supposed to take your decisions and turn those into code.
I let the AI make decisions all the time. I often approve them, and I sometimes revert them. Most of the time they're really good decisions based on my initial intent, followed by analysis I didn't make but agree with.
I think there's a spectrum of where to draw the line.
There's clearly some level where you want a human making decisions for even the most vibey of projects, because without some kind of a spec about what you're trying to build and what features you want, you'd get nonsense.
But like... maybe don't stress the details too much.
> clearly some level where you want a human making decisions
Yes, clearly. There was a meme out there, "just make something cool idk".
Statements like "Don't let AI make decisions" are made because of the loss of control we experience as mechanical parts of our work (such as writing to files) get automated.
I always found it to be easier to write code myself than to direct a junior developer.
The level of teaching involved would always mean the overall velocity of work slowed down.
Some people say you can throw them the drudge work, but I find that if you're doing coding right (e.g. you don't let your code base degenerate into a mess of boilerplate), there is barely any drudge work to do.
You're missing the real goal of directing a junior, which is teaching them to be a team player. Junior devs will surpass your expectations; the rate at which they goof, or are about to goof, should decrease over time the more you mentor them. If you do it right, you now have a strong ally and coder under your belt. Or would you rather someone else teach them their bad habits?
> I always found it to be easier to write code myself than to direct a junior developer.
Me, too. But that doesn't mean I'm a great developer, just a shitty manager.
Perhaps, but at least when you are directing a junior developer, even if badly, you'll eventually get a non-junior developer on the other side. With an AI agent, you'll get ... what?
With current models, you're right, there will be nothing to show for the effort except the code itself.
I suspect that will change sooner or later. Models will be cultivated over time the way we cultivate full-time employees now, with an acquired awareness of what they're building, new skills picked up in the process, and insight into how the larger system works.
How do you reckon?
Is there a long-running instance of the model at the provider, allocated to my organisation? Or are you thinking more of a server-side memory system, similar to the (currently very fallible) ones like Honcho and Mem0?
What happens when my org stops paying that provider? Do I get to take the now senior agent with me to the next provider? Does the provider have to delete it (and all that learning is lost forever)? Does that now become a free agent that can be hired by the next organisation like an employee (one that probably doesn't know how to keep industry secrets to itself)?
> If you haven't used LLMs at all, a weekend would be enough to be on par with everyone else in the industry
Disagree. It takes a lot of experimenting to find the right balance between sufficient guardrails and insane hallucinations. And it'll be different depending on work domains.
I'm still refactoring AI workflows every week after more than a year or so and still working on it. Will probably be a perpetually ongoing effort as models change.
But does this translate as "one year of cumulative work" or rather "one year of rearranging your workflow and discarding obsolete ideas"?
If you spend a year walking in circles, someone can easily close the gap with one step. Especially if models and harnesses are supposedly getting more powerful all the time.
Firmly disagree. Learning how to use these tools effectively is unintuitively difficult.
They're great at some stuff and terrible at other stuff in ways that are very hard to predict.
I'm figuring out new and better ways to use them on a daily basis, and I've been an almost daily user for nearly three years.
can you share any examples of these "new and better ways to use them"? because the only way I've used LLM and seen other people use it is to literally just talk to it, which doesn't require any skills beyond basic conversational abilities.
They're difficult and hard to predict because they're still primitive, despite what their companies say. When (or if) they get advanced enough to deliver consistently, there will be no chance of being left behind, because even a kid will be able to use them effectively. Right now they're still at the gimmick level, although a very impressive one.
If the models get to a point of total consistency there's still a LOT that we need to figure out and learn about how to use them.
Let's say models can exactly and correctly write any code you ask of them.
- How do you break down a project into a sequence of requests to models?
- How can you most effectively parallelize the work? Models will never be instant, so there will always be benefits in working out how best to use several agents at once (see the sketch below)
- Now that the models can handle the details of Lean, and Swift-UI, and Oracle stored procedures, and thousands of other technologies that you never got around to learning in the past... what can you do with those and how do you pick which projects to go after?
- How do you collaborate with other engineers and designers and product people in a world where you can churn out the right code reliably in a few minutes?
The models we have today are already effective enough to change the shape of our work as software engineers. As the models continue to improve figuring out and adapting to whatever that new shape is becomes even more complicated.
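On the parallelization question in the list above: agents are latency-bound from the caller's side, so the baseline pattern is fanning independent subtasks out concurrently and gathering the results. A minimal sketch, where run_agent is a hypothetical stand-in for whatever agent API you'd actually call:

    import asyncio

    async def run_agent(task):
        # Hypothetical stand-in for a real agent call (an API request in practice).
        await asyncio.sleep(1.0)  # simulate model latency
        return f"done: {task}"

    async def main():
        tasks = [
            "write the parser module",
            "add tests for the cache layer",
            "draft the migration script",
        ]
        # Three one-second "agents" finish in about one second total, not three.
        results = await asyncio.gather(*(run_agent(t) for t in tasks))
        print(results)

    asyncio.run(main())

The hard part isn't this plumbing; it's deciding which subtasks are actually independent enough to hand out in parallel.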
If these tools stopped drastically improving, what justifies the crazy valuations?
Not much. The valuations are wild.
Just learn, sure. But the difference between my efficiency using it on day 2 and at month 6 is significant. Yet I feel I am barely scratching the surface of it.
> a weekend would be enough to be on par with everyone else in the industry
I kind of agree in general that it is a learned skill, but considering how unclear people generally are when they communicate, I'm guessing it'll take longer than a weekend to be able to catch up, especially catch up to people who've been working on precise and careful communication and language for years already in a professional environment.
A weekend is enough to get going, but not nearly enough to 'be on par' with everyone else.
That said - what we have learned in the last year could be compressed quite a lot; there are a lot of steps we could skip, and 'learning by failure' that need not be repeated.
It takes a while to get the subtleties of it; it's among the most highly nuanced things we've ever encountered.
/thread
If one has been reading a wide variety of books/papers/articles/whatever their whole life, and one has been mindful of how to communicate with the "written word" as it were, it takes about 3 hours to be wildly effective with this technology. I think it took longer to learn google-fu than it did to learn how to use this technology effectively.
The unspoken (and utterly antisocial) subtext is "we are aiming for an exponential leveraging of our labor and complete domination of the market."
The statement is absurd because the skill curve for AI tooling is so shallow that you can mess around for a day or two and get "caught up" with the zeitgeist. And what you need to know to get started is actually far less these days than it was 1.5 years ago, thanks to all the product refinement that has taken place in the space.
The only real risk is that today there's an expectation from employers that you've got some AI experience under your belt you can articulate. But you can get that experience today.
6-12 months ago I felt like I was constantly behind the curve with all the different things people were doing to get more out of their Claude Code. As the year has progressed, though, all of those features keep making their way into vanilla Claude Code, at a faster and faster rate. Now someone working on the bleeding edge is using things that I'll be using without having to think about them a month from now. It has really reduced my anxiety about being left behind.
That's the thing: any "advancement" you might discover will be integrated into the main tools soon enough. In fact, I'd say you probably shouldn't even learn them before they are integrated. It helps you filter through all the noise and avoid wasting time on learning something that isn't going to take off.
You're discounting the "being able to write properly and put ideas into intelligible text" skill piece here.
Most people who have been programming for a while should have those skills. If they don't then learning AI is not the issue but communication is.
Most good programmers are good at writing. If you’re capable at simultaneously writing instructions for a dumb abstract machine and have those instructions being understandable for humans, you’re clearly good at expressing at least technical ideas.
Yeah, never had a problem with explaining to AI what I want from it. That doesn't mean AI always follows what I tell it to do ...
Which AI are you using?
Black-and-white thinking like this is not healthy.
You can still do creative thinking while using AI as a powerful tool at your disposal.
Some mathematicians like Terence Tao are comfortable doing this, for example.
I feel like I use AI this way, but a majority of my peers lean too much into it. There used to be the saying "we don't think, we google", and I see the same with AI usage. As soon as a roadblock appears, the situation is pasted into GPT without further engaging with it, then they pick up the phone and open an app while GPT does its thing 0_o
I have a coworker like that too, my pet theory is that they're not passionate about their job to begin with. It's just something that can pay their bills.
While waiting for Claude to finish, we talked about our hobbies outside of work, and the same guy would go into deep detail on how steroids and the HPG axis work, and even gave me a spreadsheet with several NCBI PubMed links on the topic.
I think we are all naturally more creative and opinionated about things we are interested in.
"We don't think" - sounds like the same crowd as the people who think creativity doesn't exist.
An orthogonal observation: Bearblog seems to have become an anti-AI echo chamber. Their community responds very positively to posts exactly like this one. [1] [2] [3]
I think it's just important context to keep in mind that these sorts of takes typically top https://bearblog.dev/discover/, in the same way that certain types of posts are designed to rank well here. I considered migrating my blog there earlier this year and ended up deciding that, while I loved the product, the community was not healthy.
[1] https://forkingmad.blog/ai-summary-blog-post/
[2] https://blog.spu.io/you-dont-want-to-make-things-you-want-to...
[3] https://blog.happyfellow.dev/simulacrum-of-knowledge-work/
People also used to say that Google or calculators will make you dumber. Neither happened. Won't happen with this either.
People are worse at mental arithmetic than they were in the recent past, so it's not clear that they aren't "dumber" in the sense people meant at the time.
And did our thinking about the importance of being good at arithmetic change in response? I think so.
We also used to be much better at remembering things, when we relied on oral histories; our memory skills have degraded quite a bit. And there's a quote from Socrates criticizing writing as a crutch that degrades our skill (https://www.perseus.tufts.edu/hopper/text?doc=Perseus:text:1... , the last bit). Over time, we've just moved to valuing other things more.
Well, with anything, practice is key. When I was in school, I was in a math competition where you had to do everything in your head. There was no scratch paper, you could not modify your answer once written, and erasing was obviously not allowed either. I wasn't the greatest at it, but I didn't suck at it either. That was decades ago, and I no longer do math in my head that way. What I used to do in seconds for a result now takes a couple of seconds to think about what needs to be done and then the time to come up with the result.
Students score lower on standardized tests in the 2020s than they did in the 1990s, so your stance feels misguided. Although I don't think Google and calculators are the main culprits, I do think it's due to the larger technology/internet landscape.
> Although I don’t think Google and Calculators are the main culprits, I do think it’s due to larger technology/internet landscape.
That's extremely speculative, especially given there was a major event in 2020 which massively disrupted education worldwide.
It has been declining since the 1990s, not just since 2020.
I once worked for a guy who typed 7 + 4 into a calculator, after freezing for 1.5 secs trying to work it out in his head. It was in a "stressful" situation (not something extreme, we just were in a hurry), and I'm sure the guy could add those numbers in his head, generally... he owns his own business, after all. It took so much out of me to not move a face muscle.
Sounds like you haven't used it much. It starts small with you forgetting the arcane params to commonly used tools that you don't need to type anymore. Where it will stop nobody knows.
Well, I forget arcane params all the time before AIs too. I rely on terminal history search and google.
it clearly did make "people" dumber because now "people" believe in AI ;)
"The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man."
Richard Stallman was unreasonable. So was Linus Torvalds. I'm hoping that something wonderful and entirely human-centered will come out of the anti-AI movement, up to and including a bifurcation of the internet.
But...
Take electricity adoption, for example: with your adage, it means unreasonable pro-electricity people vs unreasonable anti-electricity people. We know how this turns out; I don't see a lot of people joining the Amish.
It's okay to have strong opinions (be "unreasonable"), but in the end humanity as a whole (the "reasonable" people) will judge whether your opinion is a good one or not. Only time will tell.
> up to and including a bifurcation of the internet.
Enforced how?
Once you write something and publish it, it's just out there; it doesn't really get healthy or unhealthy. I do not think all writing is meant to be, or needs to be, the representation of someone's mind and its health. We write to have the opinion exist outside of ourselves. Why would we even read things if what we read didn't have strong beliefs or opinions? It sounds so boring!
Well I'm not saying that the blog writer shouldn't have written the article.
I've read the article and to me it reads like a very angry rant, which is why I commented with something akin to "bro calm down"
> You can still do creative thinking while using AI as a powerful tool at your disposal.
It remains the case that AI _erodes_ your ability to do that.
So, eventually, after a few years, no, you can't.
Edit: meanwhile you're making yourself disposable. So, have fun with that.
Thanks for the warm wishes, stranger
I see no point in denying the technology, it's best to do what we humans do best: adapt with it.
Most people aren't anywhere close to Terence Tao on intelligence scale. Even most of HN commenters aren't that close to Terence Tao.
I don't think balancing AI use with creativity and thought is a matter of IQ. It comes down to how you use the tool.
In my practice, I've found AI more useful in adversarial mode ("criticize this concept", "find a possible bug in this code", "challenge me", "quiz me on this knowledge"), because the knowledge found adds to your own skills.
100%. One of the things I use it for most is to steel-man an argument I hate or criticize a conviction I have.
You don't need super big brain IQ to be creative and expressive, all you need is simply a strong opinion on something, and you don't let AI (or other people) dictate otherwise.
Now the skill issue lies in whether your opinion is a good idea or not lol.
Some people who don't use AI will be left behind - those who work on things where LLMs are capable of a substantial amount of the tasks will be left behind if they just refuse to leverage the superhuman properties that LLMs have.
I don't think it's hard to catch up if such a person changes their mind, though.
Some people who do use AI will also be left behind - those who use it to replace their skills without developing new ones, and those who use it to do the same or worse work more cheaply. They will be left behind in a competitive world where others will work out how to use it to do more or better work with no reduction in effort.
> those who work on things where LLMs are capable of a substantial amount of the tasks will be left behind
It sounds more like there is no chance that most of those people will stay employed, regardless of how "ahead" they try to stay.
If LLMs mean I never have to open a PowerPoint from a client to pull out their "data" again, that'd be great. I gain nothing from being a manual data entry monkey for people who don't understand the concept that presentation-ready output formats are not data transmission formats.
But if I'm to be expected to employ vibecoding in my day to day job as a software engineer, I'll dismantle my house and go live off grid somewhere in Alaska. I have enough power tools and knowledge to do it. Probably massively healthier for my kids.
For now at least, I think it really depends on what type of coding that is.
I don't have any particular predictions going forward about it, but something I think about right now is, do I want to focus my time where the interesting decisions, the valuable contributions I make, are product-level thinking about what to build and what problems to solve? Or do I want to focus my time where the interesting decisions are technical ones, fully wrapping my head around a technical problem and coming up with a solution?
I do think both options are still available, and personally I love them both. But I don't know what types of coding would involve significant amounts of both activities anymore.
There’s still a lot of place for both. Because they are just a shift of perspective around the same thing: Solving a problem for someone.
Product is when you’re seeing things as the one who have the problem and designing the solution in a way that is usable. Technical is when you shift to see how the solution can be implemented and then balancing tradeoffs (mostly costs in time and monetary resources).
While the code is valuable (as it is the solution). Building it is quite easy once you have a good knowledge on both side.
The issue with AI is not in their capabilities, but in people rushing to accept the first version when there are still unknowns in the project. And then, changes costs almost as much as redoing the project properly.
Weird fallacy that if you use a tool you can't use your brain anymore
Like people who use machines are always physically strong enough to do the job the machine does, right?
This is not what a fallacy is.
I think it's pretty obvious that people who offload their thinking to an LLM will eventually get used to not thinking hard about things. Any skill you stop exercising regularly eventually atrophies. Thinking hard about things and performing at work is as much a skill as it is an innate property of being smart, as evidenced by the many "prodigy" sort of folk who languish in obscurity later in life.
- https://publichealthpolicyjournal.com/mit-study-finds-artifi...
If you use a tool that replaces your brain, then you won't use your brain.
A lot of people do outsource their thinking to AI, so it's not that weird to bring up. That's effectively how many AI companies are marketing the technology.
But it's definitely possible to use AI without letting it think for you. OP should at least acknowledge that.
Those who dogmatically refuse AI outright may be disadvantaged for some things in the future. But it's also probably hyperbolic to say they will be "left behind".
I agree with OP; it's the other way around: while some will gradually lose basic skills by relying more and more on AI for productivity's sake and out of laziness, the value of those "people who don't use AI" will go up, because they chose to simply keep "learning the hard way".
> Why wouldn't you aim to be better, to learn how to be or do something that AI would never?
Because it doesn't make sense to be better than a tool. A woodworker could use a hand saw and take an hour to cut wood... or he could use a buzz saw and cut it in a few minutes. Is the woodworker any less of a woodworker when he uses a buzz saw vs a hand saw?
Outsourcing thinking to AI is not healthy, and certainly if everyone used AI like this we're doomed.
I still think it's true that those who don't use AI will be left behind, but it's a bit tautological, because the thing they're left behind on is AI. A lot of the biggest companies on earth are putting a lot of money into AI, but if you're OK with working for a company that is not putting all their money into AI, that's perfectly fine.
Just like blockchain was everywhere ten years ago and now is just kinda _there_. If you got in before the hype you could have made a lot of money. If you didn't, you were left behind. I was left behind and I'm OK with that.
I find that good people get better with AI, but I'm not sure more average people really do.
I've seen some produce stuff without really understanding it, barely review anything, and pretty much suffer from imposter syndrome.
I think what we're seeing is it's just an amplification of whatever intrinsic motivations people have; the whole mirror to the self thing, on steroids.
Obviously people who are motivated by curiosity will have a different view, and those who value creativity will end up thinking otherwise.
Also, it's basically impossible to separate the technical capabilities from the big-money fascists pushing it.
When on the road to hell, it's OK to be left behind.
But what if all the good jobs are only in hell?
Then it seems you have a flawed concept of what constitutes the "good jobs".
A job from hell is a bad job by definition.
A good job is one that brings you joy and improves your creativity; by definition it can't be in hell. If you mean well-paying, that's a different thing entirely; ditch the fancy car and adjust your lifestyle.
I have a fancy car? News to me. I'm just trying to pay my bills and live a sensible and reasonably comfortable life. These days that requires a lot of money.
Leave the city.
I'd rather be poor in the city than rich and bored outside of it.
What is sensible in a literal economic recession if not “boring”?
Anyway...
I'll be in my home for longer than the recession lasts. Seems a little silly to sell it and walk away from my hard-earned lifestyle (which I quite enjoy) to save some money for just a few years.
Mind you, given the topic of the article, I took the jobs in question to mean largely those in the software industry; we're not talking about minimum-wage workers here.
But as a programmer I am quite baffled when peers my age, often without kids struggle in this kind of way. My essentials are rent, food, metro card, library card. When I hear people who make what I make say that living requires a lot of money that usually includes two dozen subscriptions, a few grand in useless electronics per year and ordering food most of the week.
So, are you suggesting that an acceptable lifestyle would be an empty studio apartment with nothing inside of it, no pets, no partner, no meaningful possessions?
Personally, I have pets, a partner, and thoughtfully selected and meaningful possessions. I don't collect crap, and nothing I own or do is particularly extravagant. But I'm not exactly living an ascetic life either. A pretty typical lifestyle, I'd say. And I don't consider any of this a moral failure, if that's what you're getting at. Though admittedly it's not as affordable as living like a monk.
I don't advocate living like a monk. I play in a band, play football every weekend, and play chess; you can go to church; most of that is practically free.
I'm very much in favour of participating in culture - real culture, though. You can ditch the 1k Taylor Swift tickets, Disneyland visits, and high-end gym for the 20-buck local jazz festival.
I think the people who live socially like ascetic monks are, ironically, the people who build themselves a home gym for 10k and then complain about not having any time for friends because everything is too expensive.
Well, I'm neither of those extremes. Like most people, I'm somewhere between two ends of a spectrum. ;-)
That's an excuse you made for yourself to feel better.
If that's the case, so be it.
Organizations and communities that embrace both AI users and AI abstainers will be the true winners.
A bimodal strategy gets you the best of both worlds: the ability to rapidly explore and develop ideas, and the ability to critically and cautiously assess them.
Diversity fuels evolution.
I have a feeling that a big risk of using AI all the time is that our own neurological capacity starts to dwindle.
Just as many people leading sedentary lifestyles have to make a deliberate effort to exercise, because inactivity is really bad for our bodies, I think we're going to realise that a similar process is necessary for our minds.
You really want to be spending a bit of time every day operating at your cognitive limits - trying to fully engage your System 2 - if you want to avoid brain atrophy. Coding used to kind of give you this exercise for free, but you can go really far with just your System 1 nowadays - literally get things done while scrolling Reddit.
I'm trying to allocate 30-60 minutes a day to doing something difficult, like writing code by hand for an unfamiliar problem or reading and summarising difficult papers without AI.
People who can only use AI will be left behind. It is easy to shut off your brain when using AI and then get overwhelmed by the amount of code it produces. Worse, though, is when people replace programming experience with AI. I have seen a lot of really bad AI code. I can spot and repair it. Others cannot. And that is a problem. And I am not talking about purist principles; I am talking about bad, unoptimized code that I can spot with just one look.
It is a tool, just like syntax highlighting, code completion, and refactoring tools before it. You need to know how to use them and where their usefulness ends, and you should probably have an idea how to do the work yourself without the tool. It is okay to be less efficient without it, but it's bad if you just can't do it at all.
Perhaps programming will once again become primarily a hobby.
I don't (yet) use AI in the way I am expected to. I have not integrated it into my IDE -- can't be bothered, plus I code in Notepad++. Rather, I use it in a browser and have raging arguments with it over the course of three days about a design. After we come to an agreement, I write the code.
This energy would be better directed anywhere else.
The author chose to take offense at a vaguely presented false dichotomy, one that serves no purpose other than dividing and poorly labeling everyone in an area where much nuance applies.
I think this is perhaps a side effect of consuming too much content and feeling overwhelmed with it.
Engaging with stuff like this only amplifies its effects; how about doing anything else instead? Maybe learn something new, like how to channel your anger.
Trading practice of primary skills for indirect skills like AI is like a writer deciding they should stop writing directly and get really good at Microsoft Word.
That's why I write my own assembly language. Compilers just atrophy your skills!
Personally I just accept that all technologies are great and must be embraced! This way I do not have to think about ethics and potential implications for society.
The author is making a very coarse reasoning mistake.
Using AI and knowing "how to think, how to write, how to do a simple reliable search, how to tell fact from fiction" are not mutually exclusive.
Yeah, I don't think this line of reasoning is to be taken seriously at all. It can be... left behind.
If your job can't easily be done by AI, then you can pick it up and get "up to speed" any time you like.
If it can be done by AI, then you have no hope of competing with the quantity of AI output that anyone can trigger in very little time.
As my job seems pretty secure, I can ignore AI for as long as I like.
My take on it is I would rather code than ask the machine to code. It's frustrating though how many open source projects now are overrun with massive PRs and nobody to code review them. This feels like fallout from too much reliance on AI.
> My take on it is I would rather code than ask the machine to code.
Same. I don't really care about productivity or if AI is so much more productive, tbh. I'd rather just change careers at this point. I'd prefer not to just be a full time code reviewer while my agents go do the actual work.
But I'm also tired of this in between state. Either rip the bandaid off already, fire everyone, and force governments to implement UBI so I can finally be free, or finally admit that the productivity gains have been vastly oversold and the LLM apocalypse is only a half truth, half grift and get on with our lives.
> But I'm also tired of this in between state. Either rip the bandaid off already, fire everyone, and force governments to implement UBI so I can finally be free, or finally admit that the productivity gains have been vastly oversold and the LLM apocalypse is only a half truth, half grift and get on with our lives.
I'm also tired. I wish this would happen too.
I just don't think it will because it would devastate the market.
Can't you do both? Use AI and still learn and strive to be better.
People who don’t use online search engines will be left behind.
No, it is not an issue that they might „forget“ how to search for the information in a book from a public library.
That reminds me of an old Fry and Laurie sketch.
Well of course too much is bad for you, that's what "too much" means you blithering twat. If you had too much water it would be bad for you, wouldn't it? "Too much" precisely means that quantity which is excessive, that's what it means. Could you ever say "too much water is good for you"? I mean if it's too much it's too much. Too much of anything is too much. Obviously. Jesus.
The sketch in question: https://www.youtube.com/watch?v=XewVicFzRxw&t=2m43s
AI was always slowing me down; only recently has it become somewhat useful.
This is a HN comment reply masquerading as a novel submission.
I think not using AI is a manifestation of one's inability or unwillingness to LEARN. To your point, if you can't learn, you will fall behind.
Or a manifestation of having risk aversion that isn't easily swayed by peer pressure...
Everyone seems to know you can't trust the AI output, and that it is on you to review it. But whenever I talk to people who claim to be getting big benefits, there is always a moment they reveal that they are not really reviewing the output. They are just going with it.
Similarly, so many who claim to use AI as a search index eventually seem to just trust the summary instead of checking the references to figure out whether it is regurgitating fact or fiction.
I don't really know if these users always had low quality standards or low diligence, or whether the tool usage degrades them. But I see the correlation among the friends-of-friends network I can observe.
Yes, but it goes both ways. Using AI can be a great way to be productive while purposefully NOT learning how the sausage is made—say, boilerplate code in some devops system that you don't care about—allowing your attention to be focused on the part of the stack you actually care about.
On the contrary, using AI is like outsourcing your DIY to a professional joiner.
Sure, he'll get it done twice as fast and you might notice some tricks as you look over his shoulder. But when you need a second door hung, you'll either have to start learning from scratch or call him again.
I think of AI as just another abstraction layer, somewhat similar to what high-level programming languages provide compared to writing machine code. Deciding how deep to understand the abstraction layers is a choice the user has to make, which could be optional if they don't really need to.
Nevertheless, the responsibility for whatever a human produces with AI still rests on the human.
With that said, knowing how to use AI the way it's right for you can give you a huge advantage. You don't have to though. And there is not a standard way of doing it.
What I recommend to everyone is: give it a try and see if and how it could help you. In the end, you have to make the decision based on your constraints, what you're aiming for, and what you can sacrifice, including but not limited to speed, accuracy, learning, etc.
Just like every single trend that came before, they said you would be left behind:
If you didn't embrace OOP, test-driven development, behavior-driven development, event-driven development, pants-in-head-driven development, SOLID, DRY, cloud first, virtualize everything, microservices, serverless, everything JS, everything TS, everything Microsoft.
This will never stop.
You either let someone stand in the middle, between you and what you want to accomplish, or you will be left behind.
Think about the most mediocre person you know. Now remember that 50% of the people around you are dumber than that.
Friendly reminder that we're still in the hype phase, even if it's the late stages.
To me the idea that a GPU which costs as much as a car must read its entire VRAM just to output a word sounds incredibly wasteful. I'm exaggerating here, but it is literally reading gigabytes of data and processing it to produce relatively little information.
Some data is truly worth the effort, but the majority won't be able to afford this long term - especially when those who capture the market increase prices.
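To make the "reads gigabytes per word" point concrete, here's a back-of-the-envelope sketch in Python. During autoregressive decoding, every weight gets streamed from VRAM for each generated token, so memory bandwidth sets an upper bound on tokens per second. All the numbers below are illustrative assumptions (a 70B-parameter model at 8-bit precision on a GPU with roughly 3.3 TB/s of HBM bandwidth), not figures from the comment above:

    # Rough upper bound for memory-bandwidth-bound LLM decoding:
    # every generated token requires streaming all weights from VRAM.
    model_params = 70e9        # assumed: 70B-parameter model
    bytes_per_param = 1        # assumed: 8-bit quantization
    bandwidth = 3.3e12         # assumed: ~3.3 TB/s of HBM bandwidth

    weight_bytes = model_params * bytes_per_param  # ~70 GB read per token
    tokens_per_second = bandwidth / weight_bytes   # bandwidth-bound ceiling

    print(f"bytes read per token: {weight_bytes / 1e9:.0f} GB")
    print(f"decode ceiling: ~{tokens_per_second:.0f} tokens/s")

Under these assumptions that's ~70 GB of reads per token and a hard ceiling of roughly 47 tokens/s, which is the waste the parent describes; batching requests amortizes those reads across many users, which is largely how providers make the economics work at all.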
> a GPU which costs as much as a car must read its entire VRAM just to output a word sounds incredibly wasteful
Kind of how I feel about Bitcoin at this point. Mining solo takes so incredibly long that you could spend hundreds of dollars on electricity and, months later, still own only a fraction of a coin.
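For illustration, the expected value of solo mining is easy to sketch in Python. Every figure here is an assumption (one ASIC-class miner against a rough guess at the network hashrate, and a typical electricity price), not data from the comment:

    # Expected solo-mining time: the network finds a block roughly every
    # 600 seconds, and your share of blocks equals your share of hashrate.
    my_hashrate = 200e12       # assumed: one ASIC at ~200 TH/s
    network_hashrate = 600e18  # assumed: ~600 EH/s network-wide
    block_interval = 600       # seconds per block, network-wide
    power_kw = 3.5             # assumed ASIC power draw
    usd_per_kwh = 0.12         # assumed electricity price

    expected_seconds = (network_hashrate / my_hashrate) * block_interval
    expected_years = expected_seconds / (3600 * 24 * 365)
    expected_cost = power_kw * (expected_seconds / 3600) * usd_per_kwh

    print(f"expected wait for one block solo: ~{expected_years:.0f} years")
    print(f"expected electricity spend: ~${expected_cost:,.0f}")

With these assumed numbers it comes out to roughly 57 years and about $210,000 of electricity per block, which is exactly why solo miners pool their hashrate and take fractional payouts instead.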
I'm just retired and more than happy to be left behind. I'll carry on writing software the way I love until I die.
If the crowd is running towards a cliff, I'd rather be left behind.
This is the only advice.
It's gonna be painful and many firms will go bust - not because of being left behind, but because they got so deep into pushing LLMs that their competitors came in and offered their customers what they wanted. The customers don't care how you produce it; they want the thing that leaves them in the best economic state. And this LLM mania is gonna cause many firms to forget that and go down paths they will later regret.
The author makes a great point about learning. Learning is what increases your intelligence, and if we substitute AI lookup for learning, we will literally get dumber. That said, AI models hold a lot of information and can assist in learning. It's a tool; how will people use it? My fear is they won't use it to help them learn.
Yes, I will be left behind. Left behind with my copyrights,
https://news.ycombinator.com/item?id=47932937
Left behind with my money,
https://news.ycombinator.com/item?id=47933355
Left behind with my intact data,
https://news.ycombinator.com/item?id=47911524
Oh, the horror. I am being left behind.
Don't forget your critical thinking skills, the unique voice you painstakingly developed over your entire life, and your dignity.
I'm no AI hypeman (nor the opposite, I guess), and I agree that substituting AI for critical thinking and writing will only turn out badly in the long term.
But "your dignity"? You mean like "I feel shameful over that people saw that my writing was actually AI?" or something else?
I meant the indignity of trying to have a conversation with someone who at first seems like a reasonable professional, but who at some point in the conversation insults you with something like "I asked Claude and ..."
> You mean like "I feel ashamed that people saw my writing was actually AI", or something else?
Well, if you don't have dignity in the first place, it's hard to have any shame over losing it.
I suppose. But I still don't understand why users of AI should feel like they've lost their dignity. Does it matter where the AI runs, or is using AI just shameful regardless?
My point was that if you had dignity, you probably wouldn't want an AI to write for you.
and your ability to stop screwing around, sit down, and actually get into something in depth instead of half-assing everything
I sympathise with the author and the argument. I know the text is a rant, and as such, I can understand that the proposed consequences might not make sense. Still, there is a fun game you can play where you replace "AI" with "chess engine", and you get a text that would be fitting for a late-90s chess grandmaster but seem totally anachronistic today:
"Chess players who don't use engines will be left behind", they say. I can't emphasize enough how much I hate it when I hear/read shit like that because I'm pretty sure, in fact, that what will happen is the exact opposite.
People who rely on engines are the ones who will be left behind. They'll forget how to think, how to move the pieces, how to solve a simple straightforward mate in 3, how to tell victory from stalemate... they'll forget how to fucking LEARN. I think that's the part that makes me the saddest. What a beautiful thing it is just to play chess.
If you think Deep Blue can do better than you, why would you just let it? Why wouldn't you aim to be better, to learn how to be or do something that a chess computer would never do?
The problem is AI is being pushed and used as the equivalent of using a chess engine to tell you the best move during a match.
Maybe there's a way AI can be used to make developers better, but it mostly just seems to be the equivalent of grandmasters saying how great vibe playing is because now they can play 1000x more games every day. But don't worry, they're still steering the games.
Sounds like something Magnus Carlsen might say. I hear he's doing quite well out of the game of chess, and pointedly not playing how a computer would play, even though Deep Blue is clearly capable of winning more than he is and from more difficult positions.
Also, the world isn't as trivially solved by computation as a game of chess, so maybe delegating your job or how to be a better human to ChatGPT isn't as much of a winning strategy as getting the computer to suggest chess moves.
> "Chess players who don't use engines will be left behind"
Unfortunately this is absolutely true for classical chess at the professional level, w.r.t. preparation.
Not detracting from your point though, for the other 99.9% of chess players.
That is a really accurate analogy.
Deeper reasoning, longer term planning, and more efficient solutions have always separated amateurs from experts. That experience cannot be applied asynchronously or reduced to supervision. It has to be "in the loop" and there is always a lot of out-of-band information that only an experienced eye would notice and can't be trained into a model.
Both of these can be true.
And I'm sorry to nitpick - but "People who rely on AI are the ones who will be left behind" is NOT the opposite of "People who don't use AI will be left behind".
> "People who don't use AI will be left behind", they say. I can't emphasize enough how much I hate it when I hear/read shit like that because I'm pretty sure, in fact, that what will happen is the exact opposite.
> [...] they'll forget how to fucking LEARN. I think that's the part that makes me the saddest. What a beautiful thing it is just to learn stuff.
I love learning. My life of self-education is so much richer with LLMs to help me.
There are dozens of other arguments for not engaging with AI. If your reason is "I love learning" I recommend at least dipping your toes in before you declare that AI is a hindrance, not a help, to people who love to learn new things.
Seriously, these are an autodidact's dream. I've been having an absolute blast learning about everything from government structures and the different approaches to fusion power, to which types of electrical conduit (and connectors) are used for which applications, to heat pump sizing, etc. It's so ridiculously empowering. All this info that used to take an enormous amount of time to synthesize and study is now available at everyone's fingertips. I think we're going to see an explosion in productivity on all sorts of fronts, not just writing code.
Maybe it's a generational thing, but I'm old enough to remember when personal and office computers were really hitting the mainstream in the late 70s and 80s, and the messaging was a lot friendlier: how they would save you time, help you, and so on. Even though, practically speaking, they eliminated a lot of manual jobs.
This AI/LLM push from leadership is so damn tone deaf, like "you better do this", "ai layoffs", etc. I feel like they are jumping way too hard and fast into the "post-employee" thinking and deserve every bit of scorn from laymen.
I disagree. People who use too much AI will not learn anything and will not contribute significantly to new developments.
You clearly missed that the statement is in quotes, and you didn't actually read the linked post.
"People who use a calculator will forget how to think"
It's always been obvious that LLMs are bullshit. It's blockchain, but far, far worse. The US is investing too much in it, and the collapse has already begun: half of planned data center builds across the country have been delayed or canceled.
I just don't get even the presumed risk here. How can something be so revolutionary in its capacity to increase productivity but still so esoteric or specialized that there is a risk of being "left behind"? Like all these things people talk about are, at the end of the day, products that want you to use them; they aren't gonna make it hard for someone to onboard in the future. Sure if all coding became ecommerce overnight and I'd never "learned" Salesforce, there might be brief friction there, but I could still just, like, learn Salesforce. It's gonna be a lot easier than learning good software engineering in general.
Why spend your life "learning" something whose whole deal is about not needing to learn? Even if you gamble incorrectly, it's not going to be hard to get into!
Like, what, if I don't start practicing now I am not going to be able to... express concepts with natural language as well?
Doing fine so far, thanks!
In the late 1950s, COBOL was introduced with the idea that programs could be written almost as if one were speaking English. But eventually people realized that writing COBOL well, in a style that resembles English conversation, was itself difficult.
Today, we are hearing a similar claim: “If you can describe the program in natural language, programming is basically finished.” But the industry is now discovering that describing the program well is the hard part.
This is also why ideas like harness engineering are appearing: methods for controlling the range of outputs, from poor to excellent, that can emerge from minimal input.
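As a rough illustration of the shape such a harness might take, here is a minimal sketch in Python. Note that generate_patch and run_tests are hypothetical stand-ins (defined here as toy stubs), not any real API; the point is only the loop that narrows the model's range of outputs:

    # Minimal harness sketch: narrow an LLM's range of outputs by
    # generating, checking against an objective gate, and retrying.
    # generate_patch() and run_tests() are assumed, hypothetical stubs
    # standing in for a real model call and a real test runner.

    def generate_patch(task: str, feedback: str) -> str:
        # Stand-in for a model call; a real harness would prompt an LLM here.
        return f"candidate code for {task!r} (feedback used: {bool(feedback)})"

    def run_tests(candidate: str) -> tuple[bool, str]:
        # Stand-in for an objective check (unit tests, linters, type checks).
        return ("feedback used: True" in candidate, "tests failed: needs feedback")

    def harness(task: str, max_attempts: int = 3) -> str | None:
        feedback = ""
        for _ in range(max_attempts):
            candidate = generate_patch(task, feedback)  # generate
            ok, report = run_tests(candidate)           # gate
            if ok:
                return candidate                        # passed: accept
            feedback = report                           # failed: retry with feedback
        return None                                     # refuse rather than ship junk

    print(harness("add retry logic"))

In a sketch like this it is the harness, not the prompt, that controls quality: every candidate must pass an objective gate before it is accepted at all.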
And honestly, I do not think the “vibe coding” phenomenon is entirely bad. The essence of programming is automation. Many people were previously limited because they did not know programming languages. Now, through AI, they can express themselves and turn that expression into working apps. Seeing this, I understand how deeply people have wanted to create.
I write industrial software that runs in large factory environments, and because of the nature of my work, it is difficult for me to use AI directly. These environments are usually closed networks, so AI does not really benefit my own production work. Even so, I still defend AI, because it functions as a new kind of voice that allows more people to express themselves.
Of course, capitalism distorts this. Many people use AI to chase money and capital, and as a result, a lot of low-quality apps are being produced. But on the other hand, what is wrong with the motivation of wanting to make something one wants to make?
I have been studying the history of programming, and I like Dijkstra’s famous line:
> Computer science is no more about computers than astronomy is about telescopes.
To me, this means that computing is fundamentally about automation.
AI has existed as a research topic almost since the birth of computers. We tend to think of it as recent, but it is a field with a history of more than sixty years. Starting from early work such as the Perceptron, there have always been people claiming that AI was a fraud or an illusion.
But now a new seed has germinated. The amount of complexity that a single human can handle has increased. Historically, the techniques for managing that complexity were things like programming patterns and software architecture. And even people who strongly argued for software architecture also warned that if architecture becomes detached from code, then something has gone wrong.
Memes always damage the essence of ideas. As information circulates, it degrades, and eventually the original meaning disappears.
The Dunning-Kruger effect is a good example. The original paper was not simply saying "ignorant people show off, while knowledgeable people do not." It was about how both less competent and more competent people have trouble accurately assessing their own ability: a failure of metacognition. But the idea became distorted as it spread.
The same thing happens to many famous ideas in programming. Knuth’s statement about premature optimization is also constantly distorted as it circulates.
In that situation, can we really say it is always bad to step away from online communities and learn through AI while cross-checking against books?
When I see people making extreme claims about this, I sometimes find it absurd. Of course, many people may flag or downvote my comment. But this is how I see it.
"People who drive cars will forget how to walk".
>"People who drive cars will forget how to walk".
People who drive are in worse shape than people who walk
more like they'll slowly lose muscle and the will to walk if they rely entirely on cars to move themselves around
More like "people who wear shoes will forget how to run"[0]
[0]https://www.youtube.com/watch?v=7jrnj-7YKZE
No, it's more like they'll just get really, really fat and die early of preventable diseases.
If you learn how to walk, then you'll forget how to crawl
this discussion is so stupid. no one who isn't a moron is offloading all work and thought to LLMs. no one who isn't a moron is seriously afraid of their thinking and learning skill "atrophying", whatever tf that means.
it's clear that LLMs are unique in that you actually do have the capability to turn your brain off and blindly trust whatever it does for you. but it should be equally clear that that's a stupid approach. people will still use their minds, and this use gets empowered with proper use of LLMs. it's that simple. ffs, we take the fact that they pass the Turing Test routinely for granted now. let's not forget that this technology is legitimately incredible. it stands to reason that you are seriously handicapping yourself by not trying to use it.
Came here to write something like this. It's so sad that we're still having these stupid conversations, with people thinking in black and white, in 2026.
People not using AI will 100% get left behind, as surely as those who refused to use 'cars' or 'computers'.
There is absolutely no doubt, and it will be as impossible to avoid as 'plastic' or 'electricity'.
The narrow challenges of 'AI-aided development' or 'AI-aided creative work' are legitimate - that part is real and fair - but it'd be an overstatement to contemplate 'not using it'.
The cyclists who keep their muscles strong the 'hard way' ... will win the delivery war vs. cars!?
The carpenter who hammers every nail and saws every plank by hand 'the hard way' ... will win over the guys using power saws and nail guns!?
No - AI is changing the landscape.
What is 'hard and easy' are changing.
We won't need some skills, we will need others.
It may be harder to maintain some critical skills, but the upside is obvious.
What is fundamentally missing from this treatise is that 'there is always a hard way'.
Personally - I have never been more 'cognitively overloaded'. The AI 'amplifies' the depth of complexity one can reach; it's just 1/2 a layer of abstraction above the code.
Driving a 'race car' at the highest speeds is as challenging as - and perhaps more so than - riding a horse.
The 'instinct to push back' is fair and there are innumerable legit criticisms ...
... but AI is just a new part of the stack and it will be as horizontally applied as 'software or the transistor' - it's not reasonable to think one could or should avoid it entirely.
this is definitely not true.
with AI agents, you're obtaining a mildly lossy perspective of the code itself. whereas if you wrote it by hand, you'd have a more concrete understanding.
This is not too different from an engineering manager directing junior developers.
The stereotype of the engineering manager who has forgotten how to write a line of code is not wrong.
That's a fair point, but you're i) radically underrepresenting the broader impact of AI, ii) underestimating the power it will have over the short horizon, and iii) missing the fact that 'abstractions are real'.
i - AI is going to insert itself into so many things and in so many ways beyond 'helping you write some modules', so consider that.
ii - AI 2 years ago was useless for code; you can see how well it works now, and this progress is still very real. By this time next year, the power will be more evident, making the position harder to take.
iii - to your point - the real answer is 'abstractions'. We used to write machine code by hand as well, until someone came up with FORTRAN and C etc. Now, people have 'forgotten' how to do that, largely, because we don't need people to do it.
AI is crudely that abstraction. You don't have to know a lot about some things.
Now - it's very fair to highlight the fact that the abstraction isn't very clean (!!!) but that will come over time.
So yes - for writing software today, we're '1/2 a layer of abstraction up' - and it's 100% essential to keep an eye on the code, the architecture, etc. It's 'not fully there', but it's better to look at this through the lens of growing capabilities, because over the horizon, the argument starts to tilt.
all those words and not one concrete claim made. in other words, FUD. the whole point of the article.
I made a very concrete claim: that AI will be universal and widespread - embedded within all of the technology and systems we use.
It's so completely obvious that anyone denying it has to be living in some kind of rhetorical bubble.
It's truly a feature of 'online rhetoric' like HN/Reddit where people can consider these asymptotic postures and take themselves seriously.
We will use AI like you use plastics, cars, electricity, computers etc..
That's it.
I'm sure there were a few people who thought that 'hand writing machine instructions' was the 'one true way' of writing software, but hey, what would we call them in hindsight?
There are so many legitimate ways to be a curmudgeon about or wary of AI, but this reactionary stuff is anti-reason. It's not an argument, it's gut reaction; we should just ignore it.
"Hand writing" your own thoughts is the only true way, though. If some entity does your thinking, then it's no longer you.
Yes, now that's a reasoned thought on how AI will affect us. But fortunately, the AI is not 'doing our thinking for us' any more than 'calculators did', and that's not going to stop us from using AI.
People not using AI will be about as useful as those refusing to use e-mail or computers.
It's absurd.
Doing our thinking for us is the purpose of AI, isn't it? It's called artificial intelligence for a reason.
AI is a broad term, and ML algorithms for playing chess have fallen under it since as early as the 1920s.
AI may replace some cognitive activity; it also took cognitive effort to use 'slide rules', which have since been replaced, and we have not looked back.
It's not a bad rhetorical question - but it's moot in the face of the question of 'should we use it or not'.
It will do a lot of things for us - that part is inevitable and unavoidable.
We'll have plenty to think about.
This is an abacus-to-calculator situation. Some people still use an abacus. The vast majority do not. It's wild living through one of these technological transitions. People just eschew all common sense and critical thinking as it relates to the adoption of new technologies.
If it's good, lots of people will use it commercially. If it's generationally good, everybody will use it commercially because commercial use is about competition. It either gets banned outright, like steroids, or — if it doesn't get banned — those who use it will have a clear advantage and that will lead to a very small number of people who don't use it (in business).
This is not really something that opinions are required for because if you think LLMs are going away, your opinion is historically incorrect. Things that reduce toil and increase output do not go away.