I like chutes. I think I get about 5K prompts per day for $20/month, though they may have stricter limits for new customers.
This gives you practically unlimited usage of frontier models like Kimi, DeepSeek, and GLM. Their models are always full-size, never quantised except where the lab itself provides a 4-bit or 8-bit model. You can see from the model config exactly which HF model it pulls and the serving configuration used.
Prompts are processed inside a Trusted Execution Environment (TEE), so neither the model host nor a neighbour can view them. That's as close as you can get to local-level privacy in the cloud.
I get Kimi through OpenCode Zen (kind of like OpenRouter for the OpenCode harness), periodically top up $20, and laugh every time I see my balance go down by 3 cents for something I would have happily paid someone $30 for.
Nous Portal or OpenRouter with a harness that uses intelligent multi-provider requests, a local memory system, and pre-submission context compaction on input. If you do similar tasks often, your token usage will drop quite a bit after a while of using a memory subsystem like Hindsight or Honcho, and even more if you use your harness to build relevant skills for the repeated tasks.
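For anyone curious what pre-submission context compaction amounts to in practice, here is a minimal sketch. This is not Hindsight's or Honcho's actual API; the budget, the chars-per-token heuristic, and the drop-the-middle strategy are all placeholder assumptions for illustration:

```python
def compact_context(messages, budget=8000):
    """Keep the system prompt plus the most recent messages that fit
    within a rough token budget; older middle messages are dropped.
    (A real harness might summarise them instead of dropping them.)"""

    def est_tokens(msg):
        # Crude heuristic: roughly 4 characters per token.
        return max(1, len(msg["content"]) // 4)

    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    kept, used = [], sum(est_tokens(m) for m in system)
    for msg in reversed(rest):  # walk newest-first
        cost = est_tokens(msg)
        if used + cost > budget:
            break               # everything older gets compacted away
        kept.append(msg)
        used += cost
    return system + list(reversed(kept))
```

The point is just that the system prompt and recent turns always survive, so the model keeps its instructions and short-term context while the token bill stops growing linearly with conversation length.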
Not good. I've used DeepSeek's plan, Kimi AI, and OpenRouter, and they seemingly consume more tokens than Claude's.
On Claude Max x20, I consume ~30% of the weekly limit per day. With equivalent usage on Kimi AI, I consumed 60% in a single day.
With DeepSeek's latest model at a 95% discount and caching, I was racking up ~$60/day before I stopped.
I don't know how Claude computes its daily limits, but it works out much cheaper.
Which DeepSeek plan did you use? I've been trying to find a DeepSeek plan for a while with no success. I tried Claude's $20 plan before and it burned tokens like air; it's quite hard to believe anything else would burn that fast.
I'm using the deepseek-v4-pro model via OpenRouter, which is currently offered at a 75% discount. My bad, it's a 75% discount, not 95%.
I use the Claude Max 20x ($200) plan. I manage to max it out in 2 weeks. I'm planning to maybe move to multiple accounts.
I use C++ and Claude for a big code-base.
Antigravity?