Then again, we are not that far off from the time when your AI glasses will read the price label and then automatically add up the total for you. Hopefully you then ask each time what that total means in the context of your finances...
The point is not to make the suffering permanent. It is a temporary phase. A lesson. Once you complete it you can go on to do the automated thing without as much concern.
The skills I've lost are no longer valuable. No one with a brain will spend another minute writing HTML/CSS by hand. But I spent a decade of my career doing that all day long, every day. It's time to move on and up. The horizon for software is limitless now that we've been freed of the drudgery.
Research papers are already summarized, at the top there is a section called "Abstract" which includes the summary. Usually the first and last sentences are the relevant abstraction layer for most people.
When automation comes along it gives humans the time to actually think about what they are doing and whether it even makes sense. Is your goal to motivate some research? That likely requires a conversation with one or more authors. Otherwise it's an exercise in narcissism. I am the elite who will bestow this sacred knowledge unto the commoners and cross-disciplinary researchers who cannae understand it without me.
With automation more people are unfit, but some people are better on every metric that exists. What's important is that everybody has the freedom, if they wish, to achieve those top metrics. Insofar as those metrics don't involve direct control over resources, since those will always be gatekept and require the approval of others.
I swear to god, people heard the story about how Socrates was against books and regurgitate it as an argument against any critical view of AI usage. If this is the level of reasoning people have, nothing will be lost when cognitive skills decline through AI usage anyway.
There's an irony to people repeating this claim without even having read the Phaedrus. If they had, they'd understand that the concern with writing was that it was not able to respond as a human in dialogue. One could think that LLMs are an improvement in this regard, but for the fact that LLMs are actually autonomous sophists.
Socrates would have been against LLMs, and for good reason. Writing isn't unequivocally bad, but it is simply not a substitute for real dialogue and thought. We use books as a means to have more things to discuss with humans. LLMs can supplant the desire to even have dialogue with others, which is perhaps the more insidious thing.
>I swear to god, people heard the story about how Socrates was against books and regurgitate it as an argument against any critical view of AI usage.
It's something we all learn in freshman English class. But it comes up over and over again because the general idea is true. You have to temper the unbridled optimism that comes with any new technology with contemplation of what may be lost. Otherwise we're spinning in circles.
I’m not a hater. An LLM on top of search is the best research tool I’ve ever used, because it’s read everything and can find minutiae buried in places it would take me a long time to reach.
But there’s a huge difference between using it to assist focus, or as a study aide, and offloading the whole act of thinking itself.
This is such a lazy argument. Every tool that displaces old tools causes skills to be lost when those skills are no longer needed.
To the extent that people still need to be able to critically assess what AI delivers to achieve their goals, they will still pick up those skills or fail. They will then need to either invest the time to learn, or they'll fail to find employment, or fail in other aspects of life.
When we see people lamenting lost skills like this, it is usually a result of them overestimating the continued necessity of certain skills in the face of new technology.
You won't suddenly have a generation of software developers (for example) who don't know the necessary skills to do their work, but you may get a generation of software developers who don't have the skills you think are necessary to do their work.
Skills no longer needed… as long as you have access to an AI model provided by a handful of companies at an arbitrary rate; with training cost so high that only huge corporations have the funds to pull it off, building an ever-growing moat over time.
This sounds like a great future! Nothing worrying here at all.
"i like money and sex, do you like money and sex too? maybe we can be friends!" - Idiocracy
A car that can self-drive 100% of the time is a new tool that could make driving an obsolete skill. A car that can self-drive successfully 99% of the time is dangerous because it trains people to not be ready to take over for the 1% they need to.
This is only a problem if regulators and/or courts and/or consumers all fail to recognise that said 99% car isn't safe enough.
What actually happens is that the 1% is ignored or outlawed. The shovel doesn't do 100% of human excavating tasks better than hands, but we rightly realized that the space of possibilities involving a shovel was much greater than the 1% of hand powered excavation.
What is your argument actually based on? It seems you're just assuming this to be the case.
All of human history.
Notwithstanding that knowledge of history still doesn't allow you to predict the future: in those cases we automated methods and tools; now we're automating humans. Don't you think that might be a significant departure from what happened in history?
It’s essentially about whether your skills are “Turing complete”. If you know only Java, building an app that requires assembly-tier efficiency may be painful, but you can still do it. With vibe coding you just have to hope and pray. It’s not really a skill. Your skills are not Turing complete.
So vibe coding won't be sufficient to replace a skilled Java developer, and won't obsolete that skill; and if there aren't alternatives that more completely replace a skilled Java developer, then this isn't a relevant comparison.
> To the extent that people still need to be able to critically assess what AI delivers to achieve their goals, they will still pick up those skills or fail.
Or, the people who evaluate them will be suffering from the exact same self-inflicted cognitive limitations, and promote them, or at least not fire them.
The quality of this firm's product suffers perhaps, but it doesn't matter. The consumer will again, in all likelihood, be limited in the same way.
Everyone's happy.
More realistically, a company that fails to properly evaluate this in ways that reflect actual market needs will fail in the marketplace.
It aligns with my experience and what I have seen. Looking at this through the lens of writing software: much of "learning" to write software comes down to experience.
When you see an error like, "error: expected ‘,’ or ‘;’ before ‘include’" you know what happened and where to look because you've seen it a hundred times before.
AI takes that away. That's not inherently bad; it's great that it can solve that sort of thing for you. However, the second-order effects are terrible. You end up never developing that experience. Is this simply evolution of the craft? Is that experience no longer necessary?
I could be wrong, but I believe that experience is necessary and losing it will be a net negative. Furthermore, the reduction of experience will increase dependency on these tools and the companies that provide them.
Why would cognitive overload work better?
AI is a tool to help you see the forest for the trees.
Reading articles the old-fashioned way can be akin to seeing the trees but not the actual forest.
Young minds tend to learn. However they do it, the old-fashioned way or the new AI way, they will learn.
Many blank out in school on different subjects, and the cognitive-overload byproduct follows them all their lives, making them wary of new things.
And finally, maybe you, personally, are reaching a limit in your comprehension of the modern world, and you show it by fighting the wrong battle with the wrong arguments.
Or maybe you are onto something.
> Why would cognitive overload work better?
I don't know where you got the 'cognitive overload' term from (it's not in the article). But in general, cognitive effort is what drives our brains to learn in the first place.
As an organ in an organism, the brain is very averse to using energy, because the organism might need it later to run from or fight some danger. Learning costs energy, and the brain would rather not spend it if it doesn't need to. The only reason the brain will ever learn anything is if you repeatedly expose it to cognitive effort, because in that case the effort of learning will save energy in the long run.
If you use AI so that most things don't require cognitive effort, your brain will not use those learned neural pathways, and they will atrophy over time.
The only thing that the brain learns from using AI is that the most efficient way of doing anything is having the AI do it for you.
I'm going to comment on two subjects.
One, if cognitive overloading is not in the article, then it's good, it means I actually put some thought in the responding effort.
Two, AI expands possibilities for those that want that, and offers shortcuts for those that want that. No different from any other learning process: you could actually learn something, or you could just do it. It makes sense; not all humans seek learning, but most humans look for results and answers.
I don't know what your interpretation of 'expanding possibilities' is, but I suspect that those are shortcuts in some way too. If you only use AI to help you search the internet, you'll become less adept at searching yourself. If AI allows you to do something that you aren't able to do yourself, it is allowing you the shortcut to not have to learn that thing.
AI is expanding my thinking possibilities.
Mastering a certain form of internet search does not mean you are learning, it means you are mastering, to some degree, a tool to search. Shortcuts are OK here, per me. Learning comes when you actually go beyond tool-use skills.
To be more explicit, time spent learning a tool is not time spent learning; it's time spent preparing to learn. AI cuts that out, if you want, and you get straight to actually learning something instead of tripping over tools and infrastructure, becoming too overloaded to see the forest for the trees.
> Why would cognitive overload work better?
You mean the state of affairs humans have enjoyed for the last four millennia? The status quo that led to all of the technology you seem to think we now can't live without?
> Many blank out in school on different subjects, and the cognitive-overload byproduct follows them all their lives, making them wary of new things.
They should try putting their phones down before we double down on solving tech problems with more tech.
https://news.ycombinator.com/item?id=47456153
You are oversimplifying and sending confusing signals.
There was, and is, constant progress that constantly demands more tech. Without more tech, that progress would have stalled.
There is only a loose connection between technological advances and progress. Depending on how you define "progress". Technological advances have held back or reversed progress in many areas, just as it has advanced progress in others.
To talk about "progress" as if it were some sort of simple, objective thing is misleading.
The real issue isn't about progress. It's about what sort of lives we want to be living.
There are multiple ways that people and society can progress, and most of them have nothing to do with tech.
More details please.
Unless you're being willfully obtuse, I'm sure you can come up with your own examples of how some changes in society, culture or politics could massively improve the lives of everyone.
You are willfully evasive, so I'm going to take this as a sign not to waste any more time.
Perhaps some cultural phenomenon convinces people to take washing their hands after using the bathroom very seriously, preventing tens of thousands of deaths every year.
It's a stupid example, but no tech would be needed. There are loads of problems in the world (wars, disease, famine, etc.) that can be massively improved (progress) without any change in tech.
Should we go into how much tech is involved in making soap? Or how much tech is involved in running water facilities? Or in transitioning from outdoor toilets to modern ones? Cultural phenomena help no one without the tech to back them up. In fact, cultural phenomena not backed by tech are the ones making our society a regressive one.
The last four words would have sufficed.
I always am open to learning, even by antithesis.
I’m sorry, what? Look at the world, overrun with slop, and say this again with a straight face.
You are assuming slop is only an AI feature. You know the anecdotal aunt of the old days, confidently hallucinating answers? One had to carry half-truths around all the time, and things are still the same now. Only now, the mitigation can come sooner rather than later. From an AI model near you.
You really should read the linked article if you're going to comment this much.
If you have meaningful commentaries, I'm ready to learn from them.
Skills have a life cycle. That's something you learn as you get older. You are inevitably a part, an expression, of the time you grow up in. We become obsolete by the time we die. We die knowing our exact knowledge can't be replaced; it dies with us.
I think you're right, and I think that's in large parts perfectly fine. As long as the important skills that we need continually keep replenishing themselves in young people.
The problem is identifying which skills those actually are. Without a true answer, I'm going to be prudent and assume that things involving learning and critical thinking are among the important ones.
My skills were forged in the other type of school. I know how to operate a lathe and a milling machine, but I don't think that's a useful thing now, and it's very dangerous too. The times dictate the skills. But an understanding of basic life and physical principles was fired into me by my father, so I don't rely on school for that; it's the parent who is responsible.
i feel like people should be focusing on the damaging things that aren't just "ai" (like, what the hell does that even mean? it's too broad).
dark app patterns, gambling, etc. like seriously, i know we all want to hate on llms or whatever stealing our jobs or making us stupider but has this been any different from the past in that regard?
whether it be radio, tv, computers, the internet, video games, etc., all of these were claimed to be doing something "to the children", but i agree with what another comment said: kids will figure out a way to learn and utilize the tools given to them.
did me "offloading" my thinking to google or some computer instead of cracking open some library book or doing calculations by hand damage my thinking at the time? no... because a sufficiently motivated person will learn regardless and figure out why things work the way they do; if anything, it's better access to said information that helps.
we should be fixing the motivation problem rather than the tools, which is what we've been trying to do for decades: teach people a framework for problem solving and critical thinking. kids nowadays have way more things demanding their attention, and it's been on a decline since far before this AI wave (cough, social media). we literally sound like old farts lol.
Lots of people gaining skills with AI too.
don't you gain skills with ai? it teaches you how to do stuff, you ask it questions, etc., like a tutor?
Perhaps if you're the highly motivated type who would excel even without ai. But it's far too easy to become like maths students who learn only how to use a calculator instead of how to actually add fractions.
Do you think that's how most students are using it? Teachers would quickly disabuse you of that notion [0]:
> In study hall, I watched a kid use Snapchat to take pictures of his computer screen. He was working on IXL skills. His Snap A.I. friend sent an immediate reply. He then clicked the answer on his screen. The next question popped up, he took a picture and got an answer. He swiftly went through the whole session this way. His right hand held the phone, he tapped the camera button, glanced at the reply, and his left hand entered the answers on his laptop. He didn’t know I was watching, but I saw the gold medal of 100 percent mastery bloom on his screen. I told the teacher who assigned the IXL. She didn’t realize Snapchat had an A.I. that would do her homework. It can answer all the questions.
... Now, can you use AI to learn things? Sure. But what the article is talking about is critical thinking:
> Adults using AI mostly just sound generic. But for a child who never formed independent reasoning, "generic" is a major identity problem. The model’s reasoning doesn’t compete with the child’s reasoning but becomes the child’s reasoning. For children still building out the cognitive skills for evaluating the world, the effect will not be temporary but have a foundation impact on their thinking.
Americans' performance on critical thinking is already mixed at best. A new generation with even lower independent-thinking ability, combined with AI painstakingly engineered to suffer from severe bias, is a powerful recipe for (even more) horrors beyond human comprehension. Paid for by our tax dollars.
0 - https://www.nytimes.com/2026/02/26/learning/teachers-on-how-...
Learning and suffering seem to be linked to some degree. It takes a lot of up front pain to get to a point where you can become an effective autodidact. You have to develop an appreciation for the game. AI can accelerate aspects of this, but it often alleviates too much suffering for a novice to develop the fundamentals.
If you go into AI as a way to get your school work done more quickly, you won't experience the friction you need to. AI should be used to make the work longer and deeper. More engaging and adapted to the individual. Not quicker and easier.
The problem is that AI is the most effective dual use technology we have ever created with regard to education and cheating at education. The monkey brain doesn't like to suffer, so on average I think we find most people tend toward the shittier use case.
One could have said the same things when calculators were invented. Is the routine suffering of adding numbers by hand required? Or is it more important to delegate the simpler things and focus on complex problems?
I learned math long after the advent of the calculator and went on to study math-heavy fields (physics, mechanical engineering, and data science).
I wasn't ever able to really develop deep intuition about/understanding of a calculation until I did it by hand once or twice. I often just plugged in new models and algos just to see if performance was above a threshold, but when I wanted to productionize a new winner, I'd have to run through the algo by hand for a few steps to understand and tune it. And through doing it by hand, the complex became the simple.
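To make "run through the algo by hand for a few steps" concrete, here's a toy sketch (entirely my own illustration, not tied to any real model from the comment above): manually stepping gradient descent on f(x) = x², so you can see exactly what each update does before trusting an automated optimizer with it.

```python
# Hand-stepping gradient descent on f(x) = x^2.
# With learning rate lr, each update is x <- x - lr * 2x = (1 - 2*lr) * x,
# so with lr = 0.1 the value shrinks by a factor of 0.8 per step.

def grad(x):
    return 2 * x  # derivative of x^2

x, lr = 1.0, 0.1
trace = []
for step in range(4):
    x = x - lr * grad(x)
    trace.append(round(x, 4))

print(trace)  # each value is 0.8x the previous: [0.8, 0.64, 0.512, 0.4096]
```

Tracing a handful of iterations like this is often enough to see how the learning rate controls convergence, which is the kind of intuition that just calling an optimizer never gives you.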
I am not expert, but I heard brains learn way better when we actually use our hands to write stuff out.
like the logic sticks deeper in your head that way... using computer is fast, but sometimes it just goes in one ear and out the other
Certainly, practising mental arithmetic improves your capability for mental arithmetic. Doing addition by hand probably also improves mental arithmetic.
Then again, we are not that far off from the time when your AI glasses will read the price label and automatically add up the total for you. Hopefully you will then, each time, ask what that total means in the context of your finances...
The point is not to make the suffering permanent. It is a temporary phase. A lesson. Once you complete it you can go on to do the automated thing without as much concern.
Yes agreed
The skills I've lost are no longer valuable. No one with a brain will ever spend another minute writing HTML/CSS by hand anymore. But I spent a decade of my career doing that all day long every day. It's time to move on and up. The horizon for software is limitless now that we've been freed of the drudgery.
Research papers are already summarized: at the top there is a section called "Abstract" which contains the summary. Usually its first and last sentences are the relevant abstraction layer for most people.
When automation comes along it gives humans the time to actually think about what they are doing and whether it even makes sense. Is your goal to motivate some research? That likely requires a conversation with one or more authors. Otherwise it's an exercise in narcissism. I am the elite who will bestow this sacred knowledge unto the commoners and cross-disciplinary researchers who cannae understand it without me.
With automation more people are unfit, but some people are better by every metric that exists. What's important is that everybody has the freedom, if they wish, to achieve those top metrics. Insofar as those metrics don't involve direct control over resources, since those will always be gatekept and require the approval of others.
Books also cause loss of skills.
One effect of widespread books is we don’t have poets like Homer. We don’t develop the memorization skills like they did in the past.
And that’s ok.
We can use the bandwidth for other stuff.
I swear to god, people heard the story about how Socrates was against books and regurgitate it as an argument against any critical view of AI usage. If this is the level of reasoning people have, nothing will be lost when cognitive skills decline through AI usage anyway.
There's an irony to people repeating this claim without even having read the Phaedrus. If they had, they'd understand that the concern with writing was that it was not able to respond as a human in dialogue. One could think that LLMs are an improvement in this regard, but for the fact that LLMs are actually autonomous sophists.
Socrates would have been against LLMs, and for good reason. Writing isn't unequivocally bad, but it is simply not a substitution for real dialogue and thought. We use books as a means by which to have more things to discuss with humans. LLMs can supplant the desire to even have dialogue with others, which is perhaps the more insidious thing.
>I swear to god, people heard the story about how Socrates was against books and regurgitate it as an argument against any critical view of AI usage.
It's something we all learn in freshman English class. But it comes up over and over again because the general idea is true. You have to temper the unbridled optimism that comes with any new technology by contemplating what may be lost. Otherwise we're spinning in circles.
claude told me to say it.
But they don’t. Seriously, do you read?
Books encode skill.
I’m not a hater. LLMs on search is the best research tool I’ve ever used because it’s read everything and can find minutia buried in places it would take me a long time to find.
But there’s a huge difference between using it to assist focus, or as a study aide, and offloading the whole act of thinking itself.
>We can use the bandwidth for other stuff.
Like fighting on social media...
Seriously, what was the other stuff we used our bandwidth for when books caused the loss of skills?
We have lost Homer, but what have we gained? A million social-media warriors?
I think some of them might even value social-media trolls in some ways... At least there is a lot of honest dishonest work there...
Now the future is to replace those with machines. No more human input. Just an endless number of machines fighting with other machines...