If you have ever read the “thought” process on some of the reasoning models, you can catch them going into loops of circular reasoning, just slowly burning tokens. I’m not even sure this isn’t by design.
This kind of stuff happens on any model you train from scratch, even before training for multi-step reasoning. It seems to happen more when there’s not enough data in the training set, but it’s not an intentional add. Output length is a whole problem of its own.
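If you want to see it yourself: any small model will loop under greedy decoding, and the usual band-aid is a repetition penalty at sampling time. Rough sketch with Hugging Face transformers (model choice and parameter values are just examples, not a recommendation):

```python
# Sketch: small models fall into repetition loops under greedy decoding;
# a repetition penalty at sampling time is the usual band-aid.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # small model, loops easily
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The reason this is true is that", return_tensors="pt")

# Greedy decoding: prone to "X because Y because X because Y..." loops.
looping = model.generate(**inputs, max_new_tokens=60, do_sample=False)

# Same prompt with a repetition penalty: tokens already emitted get their
# logits scaled down, which breaks most short cycles.
penalized = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=False,
    repetition_penalty=1.3,  # illustrative value
)

print(tokenizer.decode(looping[0], skip_special_tokens=True))
print(tokenizer.decode(penalized[0], skip_special_tokens=True))
```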
I dunno, let’s waste some water
They are trying to get rid of us by wasting our resources.
So, it’s Nestlé behind things again.
Why would it be by design? What does that even mean in this context?
You have to pay for tokens on many of the “AI” tools that you do not run on your own computer.
Hmm, interesting theory. However:

1. We know this is an issue with language models; it happens all the time with weaker ones, so there is an alternative explanation.
2. LLMs are running at a loss right now; the company would lose more money than they gain from you, so there is no motive.
It was proposed less as a hypothesis about reality than as virtue signalling (in the original sense).
Of course there’s a technical reason for it, but they have an incentive to try and sell even a shitty product.
I don’t think this really addresses my second point.
Don’t they charge by input tokens? E.g. your prompt, not the output.
I think many of them do, but there are also many “AI” tools that will automatically add a ton of stuff to the prompt to try and make it spit out more intelligent responses, or even re-prompt the tool multiple times to try to make sure it’s not handing back hallucinations.
It really adds up in their attempt to make fancy autocomplete seem “intelligent”.
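The pattern is roughly this (a sketch of the general idea, not any particular tool’s internals; `call_llm`, the padding text, and the vote count are all made up):

```python
# Sketch: pad the user's prompt with extra instructions, then re-ask
# several times and cross-check to catch hallucinations.
SYSTEM_PADDING = (
    "You are a careful expert. Think step by step. "
    "Cite your sources. If unsure, say so."
)

def call_llm(prompt: str) -> str:
    # Stand-in for a real API call (e.g. an HTTP request to some provider).
    # Returns a canned string so the sketch runs without an API key.
    return f"[model reply to {len(prompt)} chars of prompt]"

def answer(user_prompt: str, votes: int = 3) -> str:
    padded = f"{SYSTEM_PADDING}\n\nUser question: {user_prompt}"
    # Re-prompt several times; every call bills input AND output tokens.
    candidates = [call_llm(padded) for _ in range(votes)]
    # Crude consistency check: ask the model to pick the best-supported
    # answer (yet another billed call).
    verify = (
        "Here are several draft answers:\n"
        + "\n---\n".join(candidates)
        + "\nReturn the one that is best supported, or say 'unsure'."
    )
    return call_llm(verify)

print(answer("Why is the sky blue?"))
```

So a one-line question quietly turns into four padded round trips, every one of them billed.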
Yes, reasoning models… but I don’t think they would charge on that… that would be insane, but AI executives are insane, so who the fuck knows.
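If they do bill the hidden “thinking” tokens at the output rate (which is how the big APIs seem to price it), the arithmetic gets ugly fast. Made-up rates, just to show the shape:

```python
# Back-of-the-envelope: what hidden reasoning tokens do to a bill.
# Prices are made-up placeholders; real rates vary by provider and model.
PRICE_IN = 3.00 / 1_000_000    # $ per input token (placeholder)
PRICE_OUT = 15.00 / 1_000_000  # $ per output token (placeholder)

def cost(input_tokens: int, visible_out: int, reasoning_out: int = 0) -> float:
    # Hidden reasoning tokens are typically billed at the output rate,
    # even though you never see them.
    return input_tokens * PRICE_IN + (visible_out + reasoning_out) * PRICE_OUT

plain = cost(input_tokens=200, visible_out=300)
loopy = cost(input_tokens=200, visible_out=300, reasoning_out=8_000)
print(f"plain answer:              ${plain:.4f}")
print(f"with 8k 'thinking' tokens: ${loopy:.4f}")  # ~25x the plain cost
```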
Compute costs?
I’m pretty sure training is purely result-oriented, so anything that works goes.
Exactly why this shit isn’t, and never will be, trustworthy.