This should be a warning to those who feel it's OK to offload their creativity to a subscription service. You always need a local model in some form.
You could judge the costs of the AI products you're using by the standard API pricing, not promotional subscription offers.
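For a back-of-the-envelope comparison (a rough Python sketch; the per-token prices below are placeholders, check the provider's current rate card):

    # Rough cost at list API prices. The rates are placeholders,
    # not current pricing; substitute the provider's actual rate card.
    PRICE_IN = 3.00 / 1_000_000    # $ per input token (assumed)
    PRICE_OUT = 15.00 / 1_000_000  # $ per output token (assumed)

    def session_cost(tokens_in: int, tokens_out: int) -> float:
        return tokens_in * PRICE_IN + tokens_out * PRICE_OUT

    # A heavy agentic day: say 20M tokens in, 2M out.
    print(f"${session_cost(20_000_000, 2_000_000):.2f}")  # -> $90.00

At list prices one heavy day eats a $20 monthly plan several times over, which is exactly the point.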
For me, it's not even cost necessarily. If they decide to change the product they offer, the old one is gone. I refuse to use anything for personal use that's not at least _available_ as model weights.
Not even that way, given that the price is still highly subsidized by investors and circular deals.
Local models are not comparable to the SOTA models at all. I know what I'm talking about because I have four local H100s in my server and can run the very best local models. It's night and day: they are unusable and stupid.
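For context, "running the very best local models" on a box like that means something like this (a minimal vLLM sketch; the model name is illustrative):

    # Shard one large open-weights model across the 4 GPUs with
    # tensor parallelism. The model name here is illustrative.
    from vllm import LLM, SamplingParams

    llm = LLM(
        model="Qwen/Qwen2.5-72B-Instruct",  # any large open-weights model
        tensor_parallel_size=4,             # one shard per H100
    )
    out = llm.generate(
        ["Write a C function that reverses a singly linked list."],
        SamplingParams(temperature=0.2, max_tokens=512),
    )
    print(out[0].outputs[0].text)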
Not all tasks require a frontier model.
For what do you use the 4 local H100s then?
For training our AI model of course. Inference is for the cheaper machines.
I get perfectly acceptable results from a Strix Halo PC the size of a shoebox, man. An APU that draws ~150 W, has no discrete GPU, and a bill of $0/month. What's more, it doesn't go down every week, limit my usage, or change the terms on a whim.
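The setup is boring by design: llama.cpp serving on the box, any OpenAI-compatible client pointed at it (a sketch; the host, port, and prompt are whatever you run):

    # llama.cpp exposes an OpenAI-compatible endpoint, e.g.
    #   llama-server -m model.gguf --port 8080
    # so the usual client works unchanged against the local box.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")
    resp = client.chat.completions.create(
        model="local",  # llama-server serves one model; the name is cosmetic
        messages=[{"role": "user", "content": "Refactor this loop: ..."}],
    )
    print(resp.choices[0].message.content)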
I'll burn/discard 'frontier' tokens (at work) only because they're mandated and $EMPLOYER foots the bill. I'd rather resell them: meet the asinine requirement, provide cover for outsourcing the work to my own equipment, and get a return for the hassle.
TLDR: perhaps you're holding it wrong or haven't tried the latest, as we so often hear. That's a lot of GPU for not much utility.
Well, my Python and TypeScript folks are also happy with the simpler local models. But I'm working on more advanced stuff: C/C++ embedded real-time, vision AI, and compilers.
Fair point. I treat them like the forgetful junior we often hear about. The things I don't care to do, it can.
Easier to spawn another terminal pane/browser tab than hire a contractor.
There is very little vendor lock-in. We can keep using the subsidized model until it's no longer subsidized, then switch to the next subsidized model.
It's like chairs!
I keep telling them, and they still want to spend money on tokens at the Anthropic casino, even though Anthropic is egregiously price gouging and imposing usage caps that push you to spend more on tokens.
Sometimes you can't help gamblers who want to gamble tokens hoping to hit the jackpot on a typical issue that a local model, or even just reading the documentation, could solve.
Are there local models that are anywhere near as good at coding as opus 4.6?
People will insist otherwise, but I haven't seen anything close to sonnet 4.6 that can be run locally.
I don't think anyone can honestly say a huge frontier model is actually going to be matched by something running locally on 64GB.
You don't have to use the most recent bleeding edge model to succeed. A local FOSS coding agent coupled with a reasonably priced LLM could yield the optimal ROI.
I have read many comments claiming that various ~30B models (Qwen3.5, Gemma 4, and now Qwen3.6) are "better than Sonnet".
I don't know how large Sonnet and Opus are, but the rumor is 1T and 5T parameters respectively.
Not really. Qwen 3.5, Gemma, and a couple of others are quite good though, and the quants are _very_ runnable on a good GPU.
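"Very runnable" in practice looks like this (a llama-cpp-python sketch; the GGUF filename and quant level are illustrative, use whichever build you downloaded):

    # Load a ~30B model at 4-bit quantization, fully offloaded to the GPU.
    # The model path is illustrative; any GGUF build of the model works.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./qwen3.5-32b-instruct-q4_k_m.gguf",
        n_gpu_layers=-1,  # offload every layer to the GPU
        n_ctx=16384,      # context window, sized to fit VRAM
    )
    out = llm("Explain tail-call optimization in two sentences.", max_tokens=128)
    print(out["choices"][0]["text"])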
This doesn’t affect existing users.
This is a simple supply and demand curve.
Higher demand means the price goes up; this has been true since before SaaS and before computers.
Thanks for all the logical fallacies in one comment.
The 'local model' is called your brain.
I’m sorry but that’s just dumb. An LLM is a tool. Your brain is not a substitute for an LLM in the same way your fingers are not a substitute for a wrench.
The year is 2026, and if you are using your brain on chore work like one-off scripts, refactoring, and boilerplate test code, then you are wasting time and money, and I don't want to work with you.
Local models are fine for this and can do it in a fraction of the time your brain will take to even get bootstrapped.
The year is 2026, and the average RAM for the most common type of developer machine (web) is 16GB; 8GB is the lower end. Tell me which model one can run locally on that.
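The arithmetic is unforgiving (rough numbers, weights only; this ignores KV cache, activations, and everything the OS and browser already eat):

    # Back-of-the-envelope weight memory: params * bytes per weight.
    # Ignores KV cache, activations, and OS overhead entirely.
    def weight_gb(params_billion: float, bits: int) -> float:
        return params_billion * bits / 8  # 1e9 params * bits/8 bytes = GB

    print(weight_gb(7, 4))   # 7B at 4-bit  -> ~3.5 GB: squeezes into 8 GB
    print(weight_gb(32, 4))  # 32B at 4-bit -> ~16 GB: the whole machine
    print(weight_gb(70, 4))  # 70B at 4-bit -> ~35 GB: not happening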
You can use free models in opencode; MiniMax or whatever is just free. It's more than enough for these tasks.
He mentions Max as another place where they didn't properly predict plan and pricing relative to usage. I'd bet the farm that it's the next to be 'A/B' tested.
They've been folding under government pressure for months; I think they've lost control of their own company. They still have a nice writing voice, but I think Google will be the last man standing when this is all over.
Dup: https://news.ycombinator.com/item?id=47854477
Maybe those already on $20 a month plans won't be nerfed much more?
It's yet another austerity move, pretty much in line with the recent ones.
Maybe you should adjust your expectations for your $20 monthly subscription. That's far below what it's actually worth.
I've already burned through over $100 in a single day paying per token for heavy usage. I don't have any issues with Claude not working or being "nerfed". It just works.
[dupe] https://news.ycombinator.com/item?id=47854477