Throughput (tokens per second per hardware unit) increases at the cost of output quality, while the price stays the same.
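For context on what's being alleged, here is a minimal sketch of post-training weight quantization (symmetric per-tensor int8), purely illustrative: the matrix size and scheme are assumptions, and none of this reflects what any provider actually runs.

```python
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor int8 quantization: w ~= q * scale."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=(1024, 1024)).astype(np.float32)  # stand-in weight matrix

q, scale = quantize_int8(w)
w_hat = q.astype(np.float32) * scale  # dequantized approximation

# int8 weights are 4x smaller than fp32 and matmul faster on hardware
# with int8 support -- that's the throughput gain. The reconstruction
# error below is the quality being traded away.
print(f"memory: {w.nbytes / q.nbytes:.0f}x smaller")
print(f"relative error: {np.linalg.norm(w - w_hat) / np.linalg.norm(w):.4f}")
```

Real deployments use fancier schemes (per-channel scales, activation quantization, 4-bit formats), but the shape of the tradeoff is the same.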
Both Anthropic and OpenAI quantize their models a few weeks after release. They'd never admit it out loud, but it's more or less common knowledge now. No one has enough compute.
There is no evidence, TMK, that the models' accuracy changes due to release cycles or capacity issues, only latency. Both Anthropic and OpenAI have stated that they don't do any inference-compute shenanigans in response to load, or post-release model optimization.
Tons of conspiracy theories and accusations.
I've never seen any compelling studies (or even raw data) to back any of it up.
But of course, this isn't a written statement by a corporate spokespersyn. I don't think breweries issue such statements when they water down their beer either.
I think the idea is that each action uses more tokens, which means users hit their limit sooner and are consequently unable to burn more compute.
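A back-of-the-envelope version of that theory; every number here is made up, not any provider's actual limit:

```python
# Hypothetical limits -- illustrative only.
BUDGET_TOKENS = 500_000  # tokens allowed per rolling window
WINDOW_HOURS = 5

def actions_before_cutoff(tokens_per_action: int) -> float:
    """How many actions fit inside one window's token budget."""
    return BUDGET_TOKENS / tokens_per_action

for t in (2_000, 4_000):  # e.g. terser vs. chattier responses
    n = actions_before_cutoff(t)
    print(f"{t} tok/action -> {n:.0f} actions per {WINDOW_HOURS}h window")

# Per-user compute is capped at BUDGET_TOKENS either way; verbosity just
# makes the cap bind after fewer useful actions. Whether that "saves"
# compute overall depends on whether rate-limited users stop or simply
# come back next window.
```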
I'm curious: how does using more tokens save compute?