I work in a creative field, and we've started to get a lot of clients using AI to generate initial concepts for us to build upon. The problem is, they're not actually thinking about these concepts; they're just generating until they see something they like.
Then, we have meetings where we will ask a basic but specific question about what they want us to make, and we're just met with blank stares. They have no answers, because they've never actually thought about it.
And then everyone else needs to do the thinking for them.
This reminds me of what happened back in the early days of Google Translate. Lots of folks would bring very low-quality automatic translations "for correction" only. For many it was a way to get a lower price, since in their minds it was cheaper to correct something that was "largely done" than to do the work from scratch. Oh how wrong they were, haha.
They're staring at you because they're paying you to figure it out, and you're asking them again.
Precisely. I'm not an artist but have worked with some, and I do so with the basic assumption that the artist knows their shit and knows better than me. This client basically made a draft (or thinks they did) and asked you to fill the gaps, then went blank wondering how it is you're such a noob that you can't even do your job. I'd honestly tell them to piss off and find better people to work with/for.
Going ahead without asking is a sure recipe for having the client tell you "Sorry, that's not at all what I want" and then having to start over again. Your creatives ask questions for a reason. What is it that made you pick this specific draft out of the slop pile as a good match for your brand? The color scheme? The composition? The atmosphere? The line art style? If you expect your creatives to just magically guess, and then get frustrated when the output is not what you had in mind, then it's hardly your creatives' fault.
Yeah, I realized this the first time I used an LLM to code. I've not used them since. No matter how good it gets, it's dangerous to lose touch with my own intelligence.
I concur. I do use it a fair bit for coding and there is a temptation to have it do as much of it as possible, but there is a very clear line between what I wrote and what "it" wrote. The former I am happy to read, improve, understand. The latter, I only skim over, don't want to touch myself, and get very frustrated when it doesn't "just work".
This is really dangerous. Some models, like Grok, are getting worse: Grok-4.2 spews illogical, confident-sounding propaganda. A reader who does not think might believe it.
On soft topics like politics, models say something different depending on the prompt or the latest fine-tuning. As Microslop says in its TOS, AI is for entertainment only.
Software is unfortunately dominated by fakers. Paul Graham said in one of his essays that the C students command the A students. Back then it meant MBA > software engineer. Now it means that the bullshitters in software command the intelligent ones.
You have to resist daily and expose the frauds if this profession is to be saved.
Funny, the author of this piece was one of the two on the byline of the Ars article with the AI-fabricated quotes.
The cognitive surrender is the most predictable outcome. Many here will claim they'll rise above the path of least resistance and use AI responsibly, and even if that is true for many here, think about the most typical worker: the one who just wants to go home at 5 after putting the least amount of effort into their job. Our society is about to be rewritten by them.
In other, older news: some years ago, cognitive surrender led Google Search users to abandon logical thinking, research found.
In other, moderately older news: cognitive surrender led TV viewers to abandon logical thinking, research found.
In other, even older news: cognitive surrender led newspaper readers to abandon logical thinking, research found.
Shall I go on with how cognitive surrender leading to abandoned logical thinking recurs throughout history, AI being nothing special in this regard?
This is exactly the same as people who drive their car into a river because Google Maps told them to.
If you don't listen to Google Maps and drive into a river, you're going to be left behind.
If you were driving on an unmarked, unbarricaded bridge that Google Maps directed you over on a dark and rainy night, are you 100% certain you'd be driving slowly, undistracted, and checking to make sure the bridge hadn't collapsed?
This analogy doesn't work, because you can assume that if a bridge exists and has no traffic cones/barriers around it, it was probably built by humans and is fit for use (i.e., isn't half built). The same doesn't hold for LLM outputs, which are wholly generated by AI. If I were in some simulation where the environment was vibecoded by AI, I'd be very careful too.
That's kind of what I was trying to say, or at least it kind of goes along with it. This meme of "somebody drove into a river just because Google Maps told them to" is a grossly distorted retelling of a fatal accident. One could twist any tragedy into a glib soundbite about how the dead stupidly trusted other people. The street could collapse under my feet as I'm crossing it and I could drown in the sewer, and people on the internet would be laughing about how I dived into the sewer just because a traffic light told me to. There were some cracks in the asphalt, so obviously I should have known it wasn't safe to walk across, but I wasn't thinking for myself.
I suppose part of the reason so many people are so dangerously trustful of LLMs is that they assume that if the LLM was put out there by decently responsible humans (a doubtful but understandable assumption), then the LLM itself should be decently responsible too. The analogy does break down there.
This is just being lazy. I like to use Claude and Gemini to have debates and test ideas. If you do it right you can learn new things with every chat.
Or you were just reading confabulations, without a way to tell, corrupting your knowledge in the process.
In general, I believe the problem of our time (sociopolitical divide, echo chambers, propaganda, people getting pulled to extreme viewpoints ...) isn't so much the difficulty of accessing truthful information (most people would know how to fact-check their beliefs and assumptions given enough time and motivation), but the constant information overload that makes this process impractical.
You are essentially softening your brain into accepting large volumes of information as facts, unchecked, and pretending that it's a good thing. You no longer know the extent of what you know. Worse, you no longer know how you came to know it, because the underlying principles and processes of knowledge (natural laws, models, theories, ...) were not involved in the learning. You could assert that day means light and that night brings darkness because you have seen it repeated extensively and convincingly, but you wouldn't internalise this knowledge through the model of the earth orbiting the sun, and so you wouldn't know how to generalize from it into thinking about seasons or do any abstract reasoning on your own.
That is to say, we should be much, much more cautious about what and how we read and learn about stuff.
You will probably love Network Propaganda.
This is most definitely the issue, and I'd say you can go a step further. There are groups of people who don't know how to stay safe in the information environment, while others understand how to shape that environment.
The latter group is able to shape the content available to the former.
https://news.harvard.edu/gazette/story/2018/10/network-propa...
[dupe] Discussion on source 2 weeks ago: https://news.ycombinator.com/item?id=47467913
How I imagine "wololo" would practically work
Don't know about that research, but I certainly have read many HN comments, made by those who drank the AI Kool-Aid (and I write this as someone using the Claude Code CLI daily), where any semblance of logical thinking was gone.
This sounds like FUD to get people to abandon one of our strongest cognition-enhancing tools of all time.
Nope, its a well-researched article which shows its sources and qualifies its conclusions. You may not like the conclusions, but that doesn't make it FUD.
> This sounds like FUD to get people to abandon one of our strongest cognition-enhancing tools of all time
AI's existence is like the mental equivalent of a heavily weighted barbell that also happens to be edible and tastes delicious. You could use it in a way that gets you in great shape, or you could use it in a way that gets you type 2 diabetes.
It is up to you and your own experiences to decide how that is likely to go for most people.
Exactly... I mean the article is "tautological nonsense". Misuse a hammer and you hit your hand, use it well and you drive nails quicker. That's why I just dismiss these posts as FUD from the rich who want people to turn in their hammers so they can move along quicker with less competition.
It's a report on what looks like a very well-researched study. You may not like the results, but calling it nonsense is ridiculous. Did you even read the article?
I've just gone through 3 separate papers on the cognitive impact of GenAI, and the points being raised are far more nuanced than you're assuming them to be.
I mean, you could read the papers themselves; they aren't inimical to your position by nature.
For example, one of the more salient results is that the more confident you are in AI, the less likely you are to check the output.
When a new invention arrives on the scene, its properties need to be mapped.
So, dear user, how does a non-deterministic black box of bullshit enhance cognition?
The very next entry on the homepage, just below this one: "The danger of military AI isn't killer robots; it's worse human judgement"
https://news.ycombinator.com/item?id=47632016
That sounds correct and straight from The Ironies of Automation.
https://dl.acm.org/doi/10.1145/2448136.2448149