Has anyone in the history of mankind ever clicked "yes" to a website enabling desktop notifications? I feel like browsers should just adopt an "automatically say no to this bs" setting.
The problem is that in any kind of customer-facing scenario this feedback maps to a KPI that an individual employee's or team's performance is often assessed on. That KPI might impact pay reviews, bonuses, promotions, or even ongoing employment at the extreme.
Which sucks.
It's to the point where, when we broke down in a live lane on a dual carriageway the other day (flat tyre; actually shredded a run-flat, and with a newer car there's no spare; all lay-bys were closed so there was nowhere to pull off the road, and we couldn't make it to the next exit), the police came out and cordoned off the lane, and then the AA guy who came and rescued us asked if we could write him a review when the feedback request came through.
Of course, on this occasion I did write him an absolutely glowing review (which he very much deserved, and which I was more than happy to do), because this was an incredibly dangerous situation - potentially life or death. I also sent a thank you to the local police force that helped us out.
But that's the point: it was life or death. It really mattered. So of course I wanted to say thank you, and the feedback mechanism provided a decent way to do that.
But most of these feedback requests are for things that don't matter that much, if at all, and are no better than spam, because of course everybody asks for it for every little interaction nowadays... and it's just endlessly tiresome.
So, yes: please stop.
(Btw, as someone who worked in market research for 7 years I can tell you that CX reviews skew towards the extremes - either very positive or very negative - and that you're much more likely to get a review if someone has a bad experience than if they have a good one. As a result, whilst these reviews can be good for qualitatively highlighting specific problems that might need to be solved, deriving any kind of aggregate score from them and expecting that to be representative of the average customer's experience is a fool's errand. Please don't do it. [Aside: I know, I know - this will stop no-one but I'd feel remiss if I didn't point it out, especially on this site where a lot of you will - I hope - get the point and apply it in your own businesses.])
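The skew described above is easy to see with a toy model, using entirely made-up numbers for how likely each kind of customer is to respond:

```python
# Toy model of survey self-selection: assume (made-up numbers) that every
# rating 1-5 occurs equally often in the real customer base, but unhappy
# and delighted customers are far more likely to bother responding.
true_counts = {1: 100, 2: 100, 3: 100, 4: 100, 5: 100}
response_rate = {1: 0.30, 2: 0.10, 3: 0.05, 4: 0.10, 5: 0.25}

true_mean = sum(r * n for r, n in true_counts.items()) / sum(true_counts.values())

responders = {r: n * response_rate[r] for r, n in true_counts.items()}
observed_mean = sum(r * n for r, n in responders.items()) / sum(responders.values())

print(true_mean)                # -> 3.0
print(round(observed_mean, 2))  # -> 2.88, shifted by who responds, not by quality
```

The observed average moves even though actual experience hasn't, which is the whole problem with treating these aggregates as representative.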
The other day, NewRelic insisted on full screen pop up dialogs prompting me for some form of feedback for I'm not even sure what.
Multiple times within a few minutes
During a damn incident I was trying to deal with
I left critical feedback. I wish someone would see it and feel ashamed, but it is rather clear that there haven't been decision makers in our industry capable of shame in many years.
It would be really funny if it wasn't a microcosm of the problems of our industry.
It's very clear they didn't even take a moment to think through "Why do users access our tool, under what circumstance, and how does our tool treat them in that situation", where it would have been pretty clear that "interrupt and block access to the user until they provide feedback" is not a good UX for an engineer trying to do literally anything.
Did you know there's a new Gantt chart? Did you know there's a new Gantt chart? Did you know there's a new Gantt chart? Did you know there's a new Gantt chart? Did you know there's a new Gantt chart? Did you know there's a new Gantt chart? Did you know there's a new Gantt chart? -Wrike
The opposite: it's mandated that you not do data collection and tracking for purposes other than those essential for the company to provide the product or service, absent informed consent. (And this applies purely to tracking: cookies used for maintaining preferences or other state are fine.)
The banners are a fig leaf for behavior that violates the spirit of the GDPR, creating an aggravation where the simplest way to dismiss them is by agreeing.
Any site that doesn't offer a button to reject the tracking (with no more steps than agreeing) and still function as expected without the tracking is in violation of the law.
Only if you use cookies, and not every site needs them. If you only use cookies for login, then hopefully only the login form needs to mention them. (There are also better ways to do user authentication, such as basic HTTP auth or X.509 client certificates, neither of which requires cookies.)
The banner is required every time there is processing of personal data where consent is required, whether that processing happens via cookies or via any other technical means (1px GIFs, JavaScript fingerprinting, etc.).
Most websites do not need to process personal data (typically for analytics reasons); it's perfectly fine to run without that and only use personal data for transactional reasons, which AIUI doesn't require that sort of consent.
You don't need a cookie consent banner for strictly necessary cookies, such as those used for user authentication. You don't see any cookie banners on HN for example. Cookie banners are only needed for sites that track their users.
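For what it's worth, the strictly-necessary case looks like this in practice; a sketch using Python's stdlib, with a hypothetical session cookie name:

```python
from http.cookies import SimpleCookie

# Hypothetical strictly-necessary session cookie: it only maintains the
# login session, so (as described above) no consent banner is needed.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["httponly"] = True   # not readable from JavaScript
cookie["session_id"]["secure"] = True     # only sent over HTTPS
cookie["session_id"]["samesite"] = "Lax"  # not sent on cross-site requests

header = cookie["session_id"].OutputString()
print(header)  # e.g. session_id=abc123; Secure; HttpOnly; SameSite=Lax
```

The consent question is about purpose (tracking vs. session state), not about the cookie mechanism itself.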
Depending on whether it's a review of the item or the seller, that sounds like a good reason to rate it as one star, "item did not arrive". They did ask for that…
It's rather as if a passive inbox, with no notifications, where deleting a message implies lower prioritization of similar messages, would be quite handy. That's basically email, except email has no concept of "hot".
There's a fundamental difference between just providing/using a mechanism for user feedback, and interrupting someone's workflow with frequent unsolicited nagging requests for feedback.
All this hassle is because you people don't just use the DeepSeek paid API + Cline: a 128k context window, the joy of refactoring gigantic applications while paying only $0.05, etc. An average 128k-context API call costs $0.0008-$0.005.
I've definitely responded with prompts for feedback with "Stop nagging me for feedback all the time. I just want to (do what the app is for) without interruption."
It's kind of shocking how some people just don't get how insanely insulting it is for an application to constantly ask for feedback.
On top of that, they also don't seem to realize that they're one of now many apps also asking you for feedback.
If it was just one app every now and then. But instead it's (nearly) everything you buy, every restaurant you go to, every app you use, every doctor you see, every hotel you stay at, etc.
And it's not just that. They use dark patterns to inflate their ratings.
When I get done with a Teams call, I'm often asked to rate the call. If I pick 2 stars, I'm asked for written feedback on why it's low. If I pick 4 stars, I'm not.
Given how often they ask this, many will select 4-5 stars to dismiss the window.
I've never given them a rating myself. I always just close the window.
That is logical. That's a rating of call quality, like network issues and such. There's no relevant info to pass along with a high rating, because a high rating means everything worked.
The point remains that you get inflated results by the vast majority who just seek the shortest path to close the dialogue.
“All happy families are alike; each unhappy family is unhappy in its own way.”
You can dismiss without a rating.
But I do submit 2 stars, with the feedback window being the reason, because it comes up after every call.
Well, that and the chat is terrible, the calls aren't better than anyone else's, and if you do a small group call it does a chat call rather than a meeting, so it is silent for the recipient.
I mean, who the fuck thought *that* was a good idea? You normally have to call and put "call here" in the chat to make people aware.
It’s not that they don’t realize, they just don’t care. The people making the decision to throw up a feedback request have nothing to gain from not doing it. I’m sure they get bombarded too and they don’t like it either, but the fact that it’s pervasive is all the more reason to keep doing it, since it effectively gives them cover.
Yeah, I know. I was being "generous" in my criticism.
Is this the great filter solution to the Fermi Enshittification Paradox? These game theoretic tragedy of the commons morality plays all end the same way.
On top of that, they also don't seem to realize that they're one of now many apps also asking you for feedback.
It's like when a guy on the street asks you for money. Like you haven't already been asked by everyone else on the block, including the guy standing right next to him.
TBH, at this point, I find the guys on the street less annoying.
"will you sleep with me?"
"will you quit asking that?"
"but many of my dates like being asked!"
You're the human feedback in RLHF (Reinforcement Learning from Human Feedback). You're also their paypig for the $200 Max plan, which you also get limited on. And you're their free advertisement; keep at it. I can't wait for the open-source alternatives to catch up, if ever.
Boris from the Claude Code team here.
We actually don't train on this survey data. It's just for vibes so we can make sure people are having a good experience.
See https://code.claude.com/docs/en/data-usage#session-quality-s...
Now think about this from a user's perspective:
We asked for Claude because we (as devs) really wanted it. Our security and legal people evaluated it and the data usage agreements and such, which took ages, and they stated that we can use it as long as we never, never, ever give any kind of bug report or feedback, because that might retain data. This was all prior to this new feedback prompt being added to Claude Code.
We were happy. We could use Claude Code, finally!
Then suddenly, about once per day we are getting these prompts and there is NO option to disable them. The prompt just randomly appears. I might be typing part of my next prompt, which can very definitely include numbers from 1 upwards (e.g. because I'm making a todo list), and boom, I've given feedback. I've inadvertently done the thing we were never, never, ever supposed to do.
Do you really think our first thought is to double-check whether the data usage agreement has been changed and hopefully says that this particular random new survey thing is somehow not included in "feedback"?
No, we panic and are mad at Claude Code/Anthropic. Heck, if you noticed that you inadvertently gave feedback, you might make a disclosure to the security team, which will then go and evaluate how bad it was. Hopefully they'd then find your updated data usage agreement.
It would have been so easy to just include an opt-out option that sets the equivalent of the env var you now provide in the settings itself. In fact, I see the first comment a few days after the bug was filed already asked for that, and it was thumbs-upped by many people. You ignored it, and all we got was a manual env var or settings entry whose existence we first have to discover.
> We were happy. We could use Claude Code, finally!
So you are happy to use Claude.
> Then suddenly, about once per day we are getting these prompts and there is NO option to disable them.
And now you are not happy. Why are you not happy ?
You're holding it wrong. /s
> It's just for vibes so we can make sure people are having a good experience.
Are you sure that people are having a good experience ?
(after some random time) Are you sure that people are having a good experience ?
(after some random time) Are you sure that people are having a good experience ?
(after some random time) Are you sure that people are having a good experience ?
(after some random time) Are you sure that people are having a good experience ?
I already got that up and running.
VSCodium, plus a Claude replacement. And it's abliterated to boot, so no censorship and garbage.
What model?
huggingface.co/DavidAU/Openai_gpt-oss-20b-CODER-NEO-CODE-DI-MATRIX-GGUF
Hey HN, Boris from the Claude Code team here. Sharing a bit about why we have this survey, how we use it, and how to opt out of it-
1. We use session quality feedback as a signal to make sure Claude Code users are having a good time. It's helpful data for us to more quickly spot & prevent incidents like https://www.anthropic.com/engineering/a-postmortem-of-three-.... There was a bug where we were showing the survey too often, which is now fixed (it was annoying and a misconfiguration on our part).
2. Giving session quality feedback is totally optional. When you provide feedback, we just collect your numerical rating and some metadata (like OS, terminal, etc.). Giving feedback doesn't cause us to log your conversation, code, or anything like that. (docs: https://code.claude.com/docs/en/data-usage#session-quality-s...)
3. We don't train on quality feedback data. This is documented in the link above.
4. If you don't want to give feedback, you can permanently turn it off for yourself by setting CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY=1 in your env or settings.json file. (docs: https://code.claude.com/docs/en/settings)
5. To permanently turn off the feedback survey for your whole company, set CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY=1 in the settings.json checked into your codebase, or in your enterprise-managed settings.json (docs: https://code.claude.com/docs/en/settings)
6. You can also opt out of both telemetry + survey by setting CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1, or you can more granularly opt out with DISABLE_ERROR_REPORTING=1, DISABLE_TELEMETRY=1, etc. (also documented in the settings docs)
Security and privacy are very important, and we spend a lot of time getting these right. You have full control over data usage, telemetry, and training, and these are configurable for yourself, for your codebase, and for all your employees. We offer all of these options out of the box, so you can choose the mechanism that makes the most sense for you.
If there is a setting or control that is missing, or if anything is unclear from the docs, please tell us!
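For reference, points 4-6 can be combined in one place; a minimal settings.json sketch, assuming the `env` block described in the settings docs linked above:

```json
{
  "env": {
    "CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY": "1",
    "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1"
  }
}
```

Checked into a repo's settings.json (or placed in the enterprise-managed settings file), this would apply to everyone using that codebase, per point 5 above.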
> We use session quality feedback as a signal to make sure Claude Code users are having a good time
Your entire answer ignores the fact that this is irritating behavior that ensures users are not having a good time. We don't want to chase down secret config values. We want to click "stop bothering me" and be done with it.
Ah yes the
"Never ask me again"
(asks again the next day)
Point 2 is not optional: by the time the pop-up has appeared, the interruption is already done. And points 4-6 are neither obvious nor easy for almost everyone.
I recommend people always respond with the lowest possible score (1, not 0) when presented with popups like this.
Thank you for the feedback. I honestly believe that Claude Code is trying to be more privacy-aware than other products, but it takes vigilance on both sides to get there. When you say the survey sends back more than the numeric value and add the "etc.", can you be very specific about what the "etc." is? The information you mention seems at odds with the data-usage documentation [1], which says only a numeric value is sent. Is the documentation going to be fixed to be explicit that more than a numeric value is sent? Is the feedback prompt going to ask the user whether it is OK that more than a numeric value is sent back with the survey data?
Again, I think you are making the best product out there. I want to keep using it. Privacy is my #1 feature request to keep using it so transparency is crucial.
[1] https://code.claude.com/docs/en/data-usage
Thank you for taking the time to respond here. Thank you also for sharing your point of view, use case, and where you are coming from. With that said, would you mind sharing a few words on a couple questions?:
1. Does it take as much effort to opt-in to your feedback mechanism as it takes to opt-out? If not, why not?
2. If you want a thing ('feedback', 'a signal', data that is helpful to YOU), but getting it has this negative effect on others, what would happen if you preferenced others over yourself, and did with less of the thing?
> If there is a setting or control that is missing, or if anything is unclear from the docs, please tell us!
The setting is "leave me alone and don't ask again".
I had one of these pop up in Android Auto earlier while I was driving. I'd used the Google voice assistant to change the destination I was navigating to, and a few seconds later it threw a feedback popup over the top of the map, obscuring the name of and distance to the next junction. Very annoying and distracting, and potentially unsafe, since I was going around 70 mph when surprised by this.
It should at least wait until I've finished my journey and parked up. Not that I'm going to bother giving ad hoc feedback then either.
No current employee of a corporation can promise that data collected today will not in the future be used for any particular purpose. It's simply not up to them, and companies change massively over time.
Get it in an air-tight legal agreement with some kind of audit provision, actual enforcement and penalties, or don't give out data you care about.
> No current employee of a corporation can promise that data collected today will not in the future be used for any particular purpose.
Yes they can, because data privacy laws forbid collecting data for one purpose and then using it for new ones without notice or consent.
This means that, possibly, I can sue if my data is misused: it does not prevent its misuse. It doesn’t offer any protection if the law changes, or any real protection against constant TOS changes, etc.
That argument is unreasonably strong. How many other illegal things are you worried about in case they become legal again later?
Quite a lot when it comes to privacy, actually.
Depends on where you are.
In the US, privacy policies change every week. And they don't gather consent - they just say "you're using our thing? Okay you consent". They'll send you an email with their new privacy policy. Which you don't read, because it's awful to read. And even if you do read it, it doesn't matter, because it's completely fake. The policy doesn't tell you how they actually use your data. It just says they can use any data for any purpose.
That would be meaningful if there were ever any enforcement.
https://cppa.ca.gov/announcements/
And of course the EU does nothing but fine tech companies bazillions of euros for GDPR violations.
If they keep having to do it the fines are obviously not high enough.
Having to do it? Why would they want to stop?
The entire point of laws and fines is to get bad actors to NOT DO THE THING IN THE FIRST PLACE. That is literally the entire point of the exercise.
The fines keep increasing for repeat violators, up to 4% of global revenue. The idea is to force companies to change behaviour; the fines getting larger is the mechanism for it. They might not be high enough yet, but keep violating the law and they definitely will be.
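For scale, the GDPR's Art. 83(5) cap is the higher of EUR 20 million or 4% of total worldwide annual turnover; a quick sketch of the arithmetic:

```python
def gdpr_max_fine(annual_global_turnover_eur: float) -> float:
    """Upper bound under GDPR Art. 83(5): whichever is higher of
    EUR 20 million or 4% of total worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * annual_global_turnover_eur)

# Small company (EUR 100M turnover): the flat EUR 20M floor dominates.
print(gdpr_max_fine(100_000_000))      # -> 20000000.0
# Big tech (EUR 100B turnover): the 4% share dominates.
print(gdpr_max_fine(100_000_000_000))  # -> 4000000000.0
```

That cap is per infringement category, and escalating enforcement against repeat offenders is exactly what pushes effective penalties toward it.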
Do you honestly expect a company like Anthropic to take data privacy laws seriously? I do not.
Have you seen GDPR penalties? It's like they fine you 300% of your revenue and then have all your children executed by firing squad.
Anthropic in particular is a PBC founded by weenies who thought the other labs weren't being safe enough. I believe up until a few months ago they never used any user data for training, even if you opted in.
> Have you seen GDPR penalties?
No, I would have loved to see them (hello Microsoft, Google), but no.
And we can be 100% certain your data is safe because nobody ever breaks the law. /s
How would you even know your data is being used in a way you didn’t authorize?
"To permanently turn off the feedback survey for yourself, set `CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY=1`"
Per recent comment from Anthropic at https://github.com/anthropics/claude-code/issues/8036#issuec...
There is already ~/.claude/settings.json
What is going on over there at Anthropic?
Boris from the Claude Code team here.
You can set CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY either in your env, or in your settings.json. Either one works.
Dogfooding
This nag often comes up at the very beginning of a session, before you could realistically evaluate whatever A/B test they’re running with any accuracy. I hope they’re not taking the data too seriously.
This was a bug, should be fixed as of a few days ago.
I have "Participate in surveys" turned off in Teams, and yet I am presented a survey to rate the Teams audio/video performance after every meeting.
Despite what they say in that GH comment about it being reduced by 10x, my anecdata is that it has been hyper-aggressive in the last few days. It popped up twice during a single context-window session. It makes me feel like they're really concerned about model regressions, and in turn their lack of confidence makes me not at all confident about what's going on behind the scenes.
Can you confirm you're on the latest version? You should not be seeing it more than once every few days.
Claude Code v2.0.35 — No exaggeration I've had it popup 4x today already.
Yes but imagine they hadn't applied this "fix." It could have been 40x. :P
Also, and to generalize:
Stop asking me about accepting cookies.
Stop asking me to subscribe to your email list.
Stop asking me to review my last purchase.
Stop telling me I need to subscribe to view this content.
Stop asking for my phone number.
Stop asking for my income.
If I want to do or tell you any of these things I will initiate that myself.
My garbage disposal service sends me a survey once or twice a year asking if I would recommend them to my friends.
Where I live, garbage disposal is a county contract. You get whatever company your county has engaged. Do they think people would move to another county for better garbage disposal?
The endless misapplications of net promoter score are hilarious. My ISP does the same thing despite being the only one available.
The purpose of the tool is to infer customer loyalty. What's the point of that in a captive market? I suppose whatever 3rd party is facilitating the survey gets paid and that's something.
I always answer no in these types of situations, in the slight hope that enough people saying "no" will force the county or city to get bids for the contract, investigate why people don't like the service, and try to do better.
Very occasionally these types of arrangements end up with an enthusiastically high performing company that does the right thing, but usually it's dumpster fires all the way down.
Asking the likelihood I would recommend a service to my friends is a surefire way to get a 0.
You forgot about websites wanting you to enable desktop notifications. That's always a nice one.
Has anyone in the history of mankind ever clicked "yes" to a website enabling desktop notifications? I feel like browsers should just adopt an "automatically say no to this bs" setting.
Yes, for things I care(d) about (eg google mail/calendar, FB Messenger / WhatsApp web, etc).
The problem is I'm pretty sure any one of us has (at most) 2-3 sites we actually want notifications from, and dozens asking.
Your bank or brokerage may be required to ask about your income, every so often. KYC/AML laws.
If it's anyone other than your bank or brokerage, that seems pretty weird and sketchy.
The only time you must provide your income is as part of a credit approval.
Every other time they ask it's voluntary on your part, and you should decline. They just use the information for advertising at that point.
The problem is that in any kind of customer facing scenario this feedback maps to a KPI that, often, an individual employee's or team's performance is assessed on. That KPI might impact pay reviews, bonuses, promotions, or even ongoing employment at the extreme.
Which sucks.
It's to the point where, when we broke down in a live lane on a dual carriageway the other day (flat tyre - actually shredded a run flat, newer car so no spare, all lay-bys closed so nowhere to pull off road and couldn't make it to next exit), the police came out and cordoned off the lane and then the AA guy who came and rescued us asked if we could write him a review when the feedback request came through.
Of course, on this occasion I did write him an absolutely glowing review (which he very much deserved, and which I was more than happy to do), because this was an incredibly dangerous situation - potentially life or death. I also sent a thank you to the local police force that helped us out.
But that's the point: it was life or death. It really mattered. So of course I wanted to say thank you, and the feedback mechanism provided a decent way to do that.
But most of these feedback requests are for things that don't matter that much, if at all, and are no better than spam, because of course everybody asks for it for every little interaction nowadays... and it's just endlessly tiresome.
So, yes: please stop.
(Btw, as someone who worked in market research for 7 years I can tell you that CX reviews skew towards the extremes - either very positive or very negative - and that you're much more likely to get a review if someone has a bad experience than if they have a good one. As a result, whilst these reviews can be good for qualitatively highlighting specific problems that might need to be solved, deriving any kind of aggregate score from them and expecting that to be representative of the average customer's experience is a fool's errand. Please don't do it. [Aside: I know, I know - this will stop no-one but I'd feel remiss if I didn't point it out, especially on this site where a lot of you will - I hope - get the point and apply it in your own businesses.])
The other day, NewRelic insisted on full screen pop up dialogs prompting me for some form of feedback for I'm not even sure what.
Multiple times within a few minutes
During a damn incident I was trying to deal with
I left critical feedback. I wish someone would see it and feel ashamed, but it is rather clear that there haven't been decision makers in our industry capable of shame in many years.
> During a damn incident I was trying to deal with
I feel bad for you but... this is also kind of hilariously absurd/unaware of them.
It would be really funny if it wasn't a microcosm of the problems of our industry.
It's very clear they didn't even take a moment to think through "Why do users access our tool, under what circumstance, and how does our tool treat them in that situation", where it would have been pretty clear that "interrupt and block access to the user until they provide feedback" is not a good UX for an engineer trying to do literally anything.
That wasn't their job I guess.
Yeah. Totally agree about it being an issue in our industry and the rest of your comment.
[dead]
Ok... How was your call quality? -Teams
Did you know there's a new Gantt chart? Did you know there's a new Gantt chart? Did you know there's a new Gantt chart? Did you know there's a new Gantt chart? Did you know there's a new Gantt chart? Did you know there's a new Gantt chart? Did you know there's a new Gantt chart? -Wrike
I believe the first one at least is mandated by law.
No. What's mandated by law is that you either don't track users for purposes that aren't core to delivering your service, or disclose that you're doing so.
The opposite: it's mandated that you not do data collection and tracking for reasons except those essential for the company to provide the product or service, absent informed consent. (And this is purely for tracking: cookies used for maintaining preferences or other state are fine.)
The banners are a fig leaf for behavior that violates the spirit of the GDPR, creating an aggravation where the simplest way to dismiss them is by agreeing.
Any site that doesn't offer a button to reject the tracking (with no more steps than accepting), while still functioning as expected without the tracking, is in violation of the law.
Only if you use cookies, and not everyone needs to use them. If you use cookies to log in, then hopefully only the login form needs to mention them. (However, there are better ways to do user authentication, such as HTTP basic auth or X.509 client certificates; neither requires cookies.)
Common misconception.
The banner is required whenever there is processing of personal data for which consent is required, whether that processing happens via cookies or any other technical means (1px gifs, JavaScript fingerprinting, etc.).
Most websites do not need to process personal data (typically for analytics reasons); it's perfectly fine to run without that and only use personal data for transactional reasons, which AIUI doesn't require that sort of consent.
You don't need a cookie consent banner for strictly necessary cookies, such as those used for user authentication. You don't see any cookie banners on HN for example. Cookie banners are only needed for sites that track their users.
I read their comments as knowing that.
Imagine a world where you don't need to click on anything because cookies are no longer being used for large scale tracking.
you don't have to ask for cookies if you don't use them
I find it interesting how even if I accept cookies many sites still continue to ask
Please rate your delivery
My favorite is when you order something online, and the company asks for feedback and a review before they even ship the item.
Depending on whether it's a review of the item or the seller, that sounds like a good reason to rate it as one star, "item did not arrive". They did ask for that…
It's almost as if a passive inbox, with no notifications, where deleting a message implies lower prioritization of similar messages, would be quite handy. That's basically email, except email has no concept of "hot".
I find it a bit ironic that bcherry is clearly annoyed with the feedback he's getting when he asks for feedback.
Funny.. you're giving some feedback right there
There's a fundamental difference between just providing/using a mechanism for user feedback, and interrupting someone's workflow with frequent unsolicited nagging requests for feedback.
All this hassle is because you people don't just use the DeepSeek paid API + Cline. A 128k context window, the joy of refactoring gigantic applications by paying only $0.05, etc. An average 128k-context API call costs $0.0008-$0.005.
If I'm not saying anything, it's probably OK. I'll give you feedback when that is not the case. Stop fishing for compliments.