There are some technical reasons this is 100% accurate:
Some tokenizers are really bad with numbers (especially some of OpenAI’s). It leads to inconsistent, seemingly arbitrary segmentation of digits into tokens.
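To see what that segmentation problem looks like, here’s a toy sketch (not any real tokenizer’s vocabulary — the vocab below is made up for illustration) of the greedy longest-match behavior BPE-style tokenizers exhibit: only digit chunks that happened to be frequent in training data became single tokens, so different numbers get chopped up differently.

```python
# Toy illustration (hypothetical vocab, NOT a real tokenizer): greedy
# longest-match segmentation, the same basic behavior BPE produces.
# Only some digit chunks were "frequent enough" to be single tokens.
TOY_VOCAB = {"100", "200", "19", "20",
             "0", "1", "2", "3", "4", "5", "6", "7", "8", "9"}

def toy_tokenize(text: str) -> list[str]:
    """Greedily match the longest vocab piece at each position."""
    tokens = []
    i = 0
    while i < len(text):
        for length in range(len(text) - i, 0, -1):
            piece = text[i:i + length]
            if piece in TOY_VOCAB:
                tokens.append(piece)
                i += length
                break
        else:
            tokens.append(text[i])  # unknown char falls back to itself
            i += 1
    return tokens

print(toy_tokenize("1000"))  # ['100', '0']           — split mid-number
print(toy_tokenize("2019"))  # ['20', '19']           — split differently
print(toy_tokenize("1234"))  # ['1', '2', '3', '4']   — digit by digit
```

The point: the model never sees “1234” as one unit of place-value arithmetic, it sees whatever chunks the vocabulary happened to contain, and the chunking changes from number to number.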
99% of LLMs people see are autoregressive, meaning they have one chance to pick the right number token, with no going back once it’s written.
Many models are not trained with math in mind, though some specialized experimental ones can be better.
99% of interfaces people interact with use a fairly high temperature, which deliberately injects randomness into token selection. This is especially bad for math because, frequently, there is no good “synonym” answer if the correct number isn’t randomly picked. That randomness is necessary for some kinds of responses, but it’s also incredibly stupid and user-hostile when those knobs are hidden.
There are ways to improve this dramatically. For instance, tool use (e.g., train it to ask Mathematica programmatically), or different architectures (like diffusion LLMs, which have more of a chance to self-correct). Unfortunately, corporate/AI Bro apps are really shitty, so we don’t get much of that…
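For the tool-use idea, here’s a minimal sketch of the pattern (all names here are hypothetical — real vendor tool-calling APIs differ, and the “tool” is a tiny safe arithmetic evaluator standing in for something like Mathematica): the model emits a structured call instead of guessing digit tokens, the harness computes the exact answer, and the result is fed back for the final reply.

```python
import ast
import operator

# Safe arithmetic evaluator standing in for a real external tool.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr: str) -> float:
    """Evaluate +, -, *, / over numeric literals via the AST (no eval())."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval"))

def answer(question: str, model) -> str:
    """Tool-use loop: if the model emits CALC(...), run the tool and
    hand the exact result back instead of letting it guess digits."""
    reply = model(question)
    if reply.startswith("CALC(") and reply.endswith(")"):
        result = calc(reply[5:-1])
        reply = model(f"{question}\nTOOL RESULT: {result}")
    return reply

# Fake "model" that has been trained to delegate arithmetic to the tool.
def fake_model(prompt: str) -> str:
    if "TOOL RESULT:" in prompt:
        return "The answer is " + prompt.rsplit("TOOL RESULT: ", 1)[1]
    return "CALC(127 * 489)"

print(answer("What is 127 * 489?", fake_model))  # The answer is 62103
```

The exactness comes from the tool, not the sampler: the model only has to produce the *call* correctly, which is a much easier token sequence than the answer’s digits.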
Exactly, a lot of the “AI Panic” is from people using ClosedAI’s dogshit system, non-finetuned model, and Instruct format.