Build a culture of stopping work at 40 hours. Let the work be the work, and drop the deadlines.
Otherwise, people will take every time savings possible. If I'm using AI for anything, it's because it's important enough to someone else for me to do but not important enough to sacrifice my own time.
I don't think it's about people being scared, at least from what I've seen. It's about people being exhausted.
> Otherwise, people will take every time savings possible
And you're saying they won't if you only cap the maximum amount of time they can work?
> It's about people being exhausted
Work can be difficult and exhausting and I don't think that's necessarily a bad thing. When I started my job I was tired often because it was hard. I got better at it over time.
I'm really not into any job where my value is strictly tied to the time I put into it. I much prefer a job that some weeks takes me a lot of time and some weeks takes not as much time.
It’s not always exhaustion, though. I work with several people who blindly spout AI responses out of sheer laziness.
If people are scared to share their thoughts, then that seems like the problem.
Also, how much of this communication is actually necessary? If someone doesn't care about an issue enough to write their own email, then why are they sending an email about it in the first place?
If the AI does spelling and grammar fixups, I'll root for the AI.
When I was in corporate IT, I got way too much internal email that looked like it had been pecked out by an autistic second-grader, with random spelling, capitalization, and punctuation. Which wasn't as much of a problem as the stream-of-consciousness blather of jumbled words that often made no sense at all.
Some of those people were Ph.Ds in charge of multimillion-dollar subsidiaries and dozens or hundreds of employees, but they might as well have been trying to communicate by interpretive farting and tap-dancing.
I feel like this could have done without the ableist quip, but I also see this in my industry, especially among higher-ups. Funnily enough, this was the topic of another discussion yesterday[0].
0: https://news.ycombinator.com/item?id=47038125
Edit: Which I see was just shared by someone else here hours before me, so I guess I'm not all that safe from the brain rot myself haha.
> Some of those people were Ph.Ds
Read any of the Epstein emails? Many are nearly unintelligible, despite the "world-renowned luminaries" who wrote them.
The phenomenon was discussed in a post here yesterday that linked to a post titled "Privilege is bad grammar": https://news.ycombinator.com/item?id=47038125
I tried to write my first blog posts using AI. I created dozens of restrictions and rules so that it would produce human-like text, which I then edited. The text contained only my thoughts, but the AI formatted them. However, no matter how much I tried to prohibit constructions such as "It's not X, it's Y!", it still added them. I had to revise 10 drafts before I had the final version. When I stopped using AI for my texts, my productivity increased, and I can now complete an essay in 1-2 drafts, which is 5 times faster than when using AI.
This is strikingly different from development. In development, AI increases my productivity fivefold, but in texts, it slows me down.
I thought, maybe the problem is simply that I don't know how to write texts, but I do know how to develop? But the thing is, AI development uses standard code, with recognized patterns, techniques, and architecture. It does what (almost) the best programmer in their field would do. And its code can be checked with linters and tests. It's verifiable work.
But AI is not yet capable of writing text the way a living person does. Because text cannot be verified.
Verifiability is part of it, but I think the "semantic ablation" article on the front page really captures my problem with AI-washed writing: https://www.theregister.com/2026/02/16/semantic_ablation_ai_...
I think any use of AI "unrolls" the prompt into a longer but thinner form. This is true of code too I think, but it's still useful because so much of coding is boilerplate and methods that have been written a thousand times before. Great, give me the standard implementation, who cares.
But if you're doing hard algorithmic work and really trying to do novel "computer science", I suspect semantic ablation would take an unacceptable toll.
the important word is "scared".
if the incentive / whiff / hint from-the-top is "those not using AI are out"... there's no stopping that..
Agreed... I'm not at the top.
That's not what was implied by that comment, at least the way I read it. You mention the CEO did the same thing. If the CEO is pushing AI and the employees feel like their job is at risk if they resist change, then they are going to use AI as a means of self-preservation.
A noble but essentially Sisyphean goal; you might as well try to get people to stop playing with their phones.
Fair but I have seen workplaces keep phone use largely curtailed. Surely it's not so impossible with AI...right? Right...? :/
How have you seen workplaces keep phone use curtailed? I spent a month visiting an office we had in India. Phones were not allowed in the work area. There were lockers outside where people were supposed to lock up their phones before going through the gates to get into the work area. I locked up my phone at the start, but then realized almost everyone still had their phones, they were just sly about using them. By the end of the trip I stopped using the locker.
I'm guilty of this, mostly for work, but it's a massive time saver for me and gets me out of my own head. I always proofread and modify them to get what I'm looking for in terms of overall content, tone, and detail. Now that I'm thinking about it, I really don't do this at all for messaging people I actually want to talk to. I guess that says something...
You answered your own question. People are 'too scared' to share their thoughts so they share AIs instead. I suspect if you scared people about the use of AI, there may be an increase in usage.
Did you mean decrease in your last sentence? Or do you simply mean any solution will make the problem worse?
Any solution that isn't based on first principles will make the problem worse. And even then, the first principles might show that you can't fix it. But at least you will know.
The last thing I want to do is have my emails glossed over with AI to make my boss think I'm MORE replaceable haha
Block *.ai at the router, along with all the major AI sites. Someone has probably made a comprehensive blocklist by now.
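If someone actually wanted to try it, a DNS sinkhole on the router is probably the simplest version. A rough dnsmasq sketch follows; the domain list is purely illustrative, nowhere near a comprehensive blocklist:
    # /etc/dnsmasq.d/block-ai.conf -- rough sketch, domains are examples only
    # dnsmasq matches by domain suffix, so the bare "ai" entry sinkholes the whole .ai TLD
    address=/ai/0.0.0.0
    address=/openai.com/0.0.0.0
    address=/chatgpt.com/0.0.0.0
    address=/anthropic.com/0.0.0.0
    address=/claude.ai/0.0.0.0
    address=/gemini.google.com/0.0.0.0
    address=/copilot.microsoft.com/0.0.0.0
    address=/perplexity.ai/0.0.0.0
Anything using DoH or a VPN walks right around it, of course, which is part of why I doubt it really sticks.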
I mean most of us certainly don't have that kind of authority, and that's not really going to stop AI use when it comes embedded in every service these days.
IMHO, what you are asking for is not feasible, so I gave one of the very few possible technical avenues. If you don't like it there's not much to be done. What's left? Perhaps convincing the boss that using LLMs for correspondence is a bad idea.
Sometimes I ask in chats / emails etc. "are there any new proposals that I missed here, all I'm seeing is AI slop?".
I think it's totally legit to ask, and specify that you are looking for new insights, proposals, etc. and not regurgitated AI summaries.
AI has a way of convincing people they're onto something big; they just can't quite express the idea themselves, so here's the output pasted verbatim.
Small nudges to steer company culture regarding AI use:
- signal disclosure as a norm: whenever you use AI, say “BTW I used AI to write this”, when you don’t use AI, say “No AI used in this document”
- add an email footer to your messages that states you do not use AI because [shameful reasons]
- normalize anti-AI language (slop, clanker, hallucination, boiling oceans)
- celebrate human craftsmanship (highlight/compliment well written documentation, reports, memos)
- share AI-fail memes
- gift anti-AI/pro-human stickers
- share news/analysis articles about the AI productivity myth [0], AI-user burnout [1], reverse centaur [2], AI capitalism [3]
[0] https://hbr.org/2025/09/ai-generated-workslop-is-destroying-... [1] https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies... [2] https://pluralistic.net/2025/12/05/pop-that-bubble/ [3] https://80000hours.org/problem-profiles/extreme-power-concen...
With Slack and texts, "Edit Message" exists. People need to get over their fear.
Email, being send-once where what you said persists forever, is a little scarier. It'd be nice to have a messaging protocol used at work where a typo or a wrongly pasted URL isn't so consequential. I've been at this for 14 years now, and I still re-read emails I send to clients 10+ times to make sure I am not making even the most minor of mistakes.
the worst part isn't even the slop itself - it's that it kills the signal. I used to skim slack threads and get the gist in 30 seconds. now half the messages are these perfectly structured five-paragraph responses to a yes/no question and I just... stop reading.
honestly I think the root cause is that most corporate communication shouldn't exist at all. the AI just makes it easier to produce more of nothing. before chatgpt people would write a two-line email that said the same thing, and that was fine.
what worked for our small team was pretty blunt - we just started replying 'tldr?' to anything that felt padded. no accusation about AI, just 'this is too long for what it's saying.' took about two weeks before people started self-editing. not sure that scales to a big org tho
You’re describing a real coordination problem: over-polished, abstraction-heavy “AI voice” increases cognitive load and reduces signal. Since you don’t have positional authority—and leadership models the behavior—you need norm-shaping, not enforcement. Here are practical levers that work without calling anyone out:
1. Introduce a “Clarity Standard” (Not an Anti-AI Rule)
Don’t frame it as anti-AI. Frame it as decision hygiene. Propose lightweight norms in a team doc or retro:
TL;DR (≤3 lines) required
One clear recommendation
Max 5 bullets
State assumptions explicitly
If AI-assisted, edit to your voice
This shifts evaluation from how it was written to how usable it is. Typical next step: Draft a 1-page “Decision Writing Guidelines” and float it as “Can we try this for a sprint?”
2. Seed a Meme That Rewards Brevity
Social proof beats argument. Examples you can casually share in Slack:
“If it can’t fit in a screenshot, it’s not a Slack message.”
“Clarity > Fluency.”
“Strong opinions, lightly held. Weak opinions, heavily padded.”
Side-by-side: AI paragraph → Edited human version (cut by 60%)
You’re normalizing editing down, not calling out AI. Typical next step: Post a before/after edit of your own message and say: “Cut this from 300 → 90 words. Feels better.”
3. Cite Credible Writing Culture References
Frame it as aligning with high-signal orgs:
High Output Management – Emphasizes crisp managerial communication.
The Pyramid Principle – Lead with the answer.
Amazon – Narrative memos, but tightly structured and decision-oriented.
Stripe – Known for clear internal writing culture.
Shopify – Publicly discussed AI use, but with expectations of accountability and ownership.
You’re not arguing against AI; you’re arguing for ownership and clarity. Typical next step: Share one short excerpt on “lead with the answer” and say: “Can we adopt this?”
4. Shift the Evaluation Criteria in Meetings
When someone posts AI-washed text, respond with:
“What’s your recommendation?”
“If you had to bet your reputation, which option?”
“What decision are we making?”
This conditions brevity and personal ownership. Typical next step: Start consistently asking “What do you recommend?” in threads.
5. Propose an “AI Transparency Norm” (Soft)
Not mandatory—just a norm:
“If you used AI, cool. But please edit for voice and add your take.”
This reframes AI as a drafting tool, not an authority. Typical next step: Add a line in your team doc: “AI is fine for drafting; final output should reflect your judgment.”
6. Run a Micro-Experiment
Offer:
“For one sprint, can we try 5-bullet max updates?”
If productivity improves, the behavior self-reinforces.
Strategic Reality
If the CEO models AI-washing, direct confrontation won’t work. Culture shifts via:
Incentives (brevity rewarded)
Norms (recommendations expected)
Modeling (you demonstrate signal-dense writing)
You don’t fight AI. You make verbosity socially expensive.
If helpful, I can draft:
A 1-page clarity guideline
A Slack post to introduce it
A short internal “writing quality” rubric
A meme template you can reuse
Which lever feels safest in your org right now?
Very funny
is this written by AI
No sir