I just pushed Claude Code Agent Swarm with 20 coding agents on my desktop GPU workstation.
With local AI I don't have the /fast CC switch, but I have /absurdlyfast:
- 100,499 tokens/second read (yeah, 100k, not a typo) | 811 tok/sec generation
- KV cache: 707,200 tokens
- Hardware: 5+ year old GPUs, 4x A6K gen1

It's not the car. It's the driver.
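For scale, here's a rough sketch of what a 707,200-token KV cache costs in VRAM. The model dimensions below are hypothetical placeholders (a generic GQA config), not Qwen3 Coder's actual numbers; plug in the values from the model's config to get a real figure.

```python
# Back-of-envelope KV-cache memory estimate for long-context serving.
# All model dimensions here are HYPOTHETICAL examples, not the real
# Qwen3 Coder config -- substitute values from the model's config.json.

def kv_cache_bytes_per_token(n_layers: int, n_kv_heads: int,
                             head_dim: int, dtype_bytes: int = 2) -> int:
    """Bytes of KV cache per token: one K and one V tensor per layer."""
    return 2 * n_layers * n_kv_heads * head_dim * dtype_bytes

# Hypothetical GQA config: 48 layers, 8 KV heads of dim 128, BF16 (2 bytes).
per_token = kv_cache_bytes_per_token(n_layers=48, n_kv_heads=8, head_dim=128)

context_tokens = 707_200  # the KV-cache size quoted above
total_gib = per_token * context_tokens / 2**30
print(f"{per_token} bytes/token -> {total_gib:.1f} GiB "
      f"for {context_tokens:,} tokens")
```

Under these made-up dimensions that's on the order of 130 GiB, which is why a cache this size only fits when spread across several large-VRAM GPUs.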
Qwen3 Coder Next AWQ with the KV cache at BF16. It scores 82.1% on C# tasks in a codebase that's been in development for 29 years, vs Opus 4.5 at only 57.5%. When your codebase predates Stack Overflow, you don't need the biggest model; you need the one that actually remembers Windows 95.
My current bottleneck is my 27" monitor. Can't fit all 20 Theos on screen without squinting.