Recently, there has been a lot of buzz around a seemingly simple question that even state-of-the-art large language models (LLMs) fail to answer correctly: "Which is bigger? 9.9 or 9.11"
Despite various attempts and variations of prompting techniques, most frontier models still struggle to compare the two numbers accurately. This highlights a broader issue many of today's models encounter: they have limited mathematical reasoning capabilities[1]. While there are multiple conjectures about why this is the case, including the composition of pretraining data and the model architecture itself[2], we investigate one of the most fundamental processes in LLMs, tokenization, and how it affects a model's ability to do math, specifically arithmetic problems.
In this blog post, we discuss:
- Our detailed approach in comparing different methods of number tokenization
- Why reading from right to left is sometimes better than from left to right
- A clear frontrunner of tokenization methods for arithmetic in LLMs
A Brief History of Number Tokenization
Back in 2019, the GPT-2 paper detailed its use of BPE (byte-pair encoding) as a tokenization method for language models [3]. This approach works by merging frequently occurring subwords into single units until the vocabulary reaches a target size.
Because of how this algorithm operates, the resulting vocabulary depends heavily on the training data fed into the tokenizer. This led to inconsistencies in how numbers are encoded [4]. Commonly seen numbers (e.g. 1-100, years like 1945) are likely represented as single tokens, while less frequently seen numbers are split into multiple tokens, as in the example below:
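You can see this for yourself with the GPT-2 tokenizer from the Hugging Face Hub. The exact splits depend on the BPE merges learned during training, so treat this as a quick illustration rather than a definitive rule:

```python
from transformers import AutoTokenizer

gpt2_tok = AutoTokenizer.from_pretrained("gpt2")

# Frequent numbers tend to survive as single tokens, while rarer ones are split
# into uneven chunks; the exact groupings depend on GPT-2's learned BPE merges.
for number in ["100", "1945", "1234567"]:
    print(number, "->", gpt2_tok.tokenize(number))
```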
Four years later, the herd of llamas began their stampede! Llama and Llama 2 used SentencePiece's BPE implementation with a notable tweak for numbers: they split all numbers into individual digits [5][6]. This meant there were only 10 unique tokens to represent any number, simplifying numerical representation for LLMs. Deepseek released a model much later (DeepSeek-V2) with a similar single-digit tokenizer [7].
Later on, Llama 3 took a different approach for handling numbers, tokenizing them in groups of three digits [8]. As a result, numbers from 1 to 999 each have unique tokens, while numbers from 1000 onward are composed of these tokens.
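As a quick sanity check (assuming access to the gated `meta-llama/Meta-Llama-3-8B` repository), you can inspect this grouping directly:

```python
from transformers import AutoTokenizer

llama3_tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Numbers up to three digits map to a single token; longer numbers are split
# into up-to-three-digit chunks, scanning from the left.
for number in ["7", "999", "1000", "12345"]:
    print(number, "->", llama3_tok.tokenize(number))
```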
A New Paradigm: Right-to-Left Tokenization
So far, the tokenization methods we've seen "processed" text from left to right. For instance, if the three-digit tokenizer encounters the sequence "12345," it will scan from the beginning, breaking it down into segments like "123" and "45".
Right-to-left (R2L) tokenization, on the other hand, processes text from the end to the beginning in groups of three. Using R2L, the sequence "12345" would be tokenized by scanning from the right, first splitting off "345" and then moving to "12". There has also been some recent exploration of forcing this R2L tokenization behaviour in frontier closed-source models, which has been shown to benefit certain arithmetic operations, since the R2L representation prevents misalignment of the operands [9]. It has also been rumored that Claude uses this R2L tokenization method [10].
To better understand what misalignment looks like, let's take 3789 + 8791 as an example:
Three-digit L2R tokenization: 3789 → [378][9] and 8791 → [879][1], while the sum 12580 → [125][80]
Three-digit R2L tokenization: 3789 → [3][789] and 8791 → [8][791], while the sum 12580 → [12][580]
In the three-digit L2R example, 9 + 1 should map to the digit 0 in the result, but it ends up grouped together with 8 to form 80, since the first three digits (125) were already grouped into a single token. This 'shift' in the tokenization boundary adds complexity to the learning process, which has been shown to be detrimental to accuracy.
In the three-digit R2L example, each digit of 580 aligns neatly with its corresponding sub-operands 789 and 791, which is a more intuitive grouping for the model to learn.
This insight suggests that three-digit R2L tokenization could be an improvement over the standard three-digit L2R tokenization used by Llama 3.
To recap, here's an overview of the techniques used to handle number tokenization:
| How numbers are tokenized | Tokenizer (model) |
|---|---|
| pure BPE; no special handling | gpt2 |
| split into single digits | llama, llama2, deepseek |
| unique tokens for 1-999 | llama3 |
| split into groups of three digits (R2L) | Claude (?) |
Creating a fair comparison of different methods
The goal of this investigation is to compare these tokenizers and their different ways of processing numbers in a way that minimizes the influence of external factors such as model architecture, training configurations, and pre-training data in evaluation results.
Thus, one important design decision we made to address this goal was to evaluate models trained from scratch, where each model has the same data mixture, training configs, and a roughly equal compute budget (number of model parameters and training tokens). The only meaningful difference that each model should have with one another is the tokenizer used to tokenize the training data.
Experimental Setup
We picked three of the tokenizers mentioned previously, namely GPT-2's BPE tokenizer, Llama 3's three-digit tokenizer, and DeepSeek's single-digit tokenizer.
To test right-to-left tokenization, we created R2L versions of the Pure-BPE and three-digit tokenizers, where numbers are chunked into groups of three digits from the right before being tokenized. We didn't create an R2L version of the single-digit tokenizer, since the chunking direction makes no difference when every digit is its own token¹. To achieve this, we added an extra preprocessing step which forces the R2L behaviour without producing additional tokens during inference:
```python
from transformers import AutoTokenizer
from tokenizers import pre_tokenizers, Regex

# Load Llama 3's tokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Prepend an extra step to the existing pre-tokenizer steps
tokenizer.backend_tokenizer.pre_tokenizer = pre_tokenizers.Sequence(
    [
        # Added step: split numbers into groups of three digits, right to left
        pre_tokenizers.Split(
            pattern=Regex(r"\d{1,3}(?=(\d{3})*\b)"),
            behavior="isolated",
            invert=False,
        ),
        # Below: existing steps from Llama 3's tokenizer
        pre_tokenizers.Split(
            pattern=Regex(r"(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}{1,3}| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+"),
            behavior="isolated",
            invert=False,
        ),
        pre_tokenizers.ByteLevel(add_prefix_space=False, trim_offsets=True, use_regex=False),
    ]
)

print(tokenizer.tokenize("42069"))  # ['42', '069']
```
Model Configuration
The models we train use the original Llama architecture. To account for the differences in each tokenizer's vocabulary size, we modified the number of hidden layers so that each model has roughly the same number of parameters (~1.45B).
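As a rough sketch of how this balancing works, you can instantiate a Llama-style config and count parameters. The hyperparameter values below are placeholders for illustration, not the ones used in our training runs:

```python
from transformers import LlamaConfig, LlamaForCausalLM

# Placeholder values: adjust num_hidden_layers per tokenizer's vocab_size
# so that the total parameter count stays roughly constant (~1.45B in our setup).
config = LlamaConfig(
    vocab_size=128256,        # e.g. Llama 3's vocabulary; this varies per tokenizer
    hidden_size=2048,
    intermediate_size=5504,
    num_hidden_layers=24,
    num_attention_heads=16,
)
model = LlamaForCausalLM(config)
print(f"{model.num_parameters():,} parameters")
```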
Visualizing embedding matrix size (vocab_size × hidden_dim)
Pretraining Data
We used a mixture of general web data and math & code data for model pretraining:
- 42% Fineweb
- 40% InfiMM-WebMath
- 10% open-web-math
- 8% Proof-Pile-2 (algebraic-stack subset)
The relative weights are based on the token counts of each dataset. The resulting mixture amounted to ~120B tokens. We also prepared R2L-tokenized versions of these datasets for the corresponding R2L tokenizers.
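To make the weighting explicit, here is the relationship between per-dataset token counts and mixture percentages. The counts below are back-calculated from the percentages and the ~120B total, not measured values; in practice they come from tokenizing each dataset:

```python
# Back-calculated from the percentages above and the ~120B-token total.
token_counts_in_billions = {
    "fineweb": 50.4,                        # 42%
    "infimm-webmath": 48.0,                 # 40%
    "open-web-math": 12.0,                  # 10%
    "proof-pile-2 (algebraic-stack)": 9.6,  # 8%
}
total = sum(token_counts_in_billions.values())                   # ~120B tokens
weights = {name: count / total for name, count in token_counts_in_billions.items()}
print(total, weights)
```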
Evaluation
It's important to note that an LLM's training process is non-deterministic. Even after controlling the pretraining data mix, model architecture, and compute budget, different model initialization settings can still produce noise in evaluation results[11].
To account for this, we trained three additional models using the tokenizer that produced the highest variation in model scores, on the same data mixture but with different random seeds. For each task, we then take the standard deviation of the scores and assume this value to be the upper bound of variability across all model architectures and training datasets.
- Select the model with the highest standard deviation across evals
- Train the same model with different random seeds
- For each task, evaluate all models
- Calculate task-specific standard deviations (e.g. for task A, we report σ = 0.034 as the standard deviation for all models)
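A minimal sketch of that last step, with placeholder scores standing in for the real per-seed evaluation results:

```python
import numpy as np

# Placeholder numbers: one entry per random seed for the same model/tokenizer.
scores_by_task = {
    "task_A": [0.50, 0.47, 0.53, 0.51],
    "task_B": [0.32, 0.35, 0.31, 0.33],
}

# The per-task standard deviation is treated as an upper bound on seed-induced
# noise when comparing models trained with different tokenizers.
for task, scores in scores_by_task.items():
    print(f"{task}: sigma = {np.std(scores):.3f}")
```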
We divide our evaluation suite into two main categories:
Word-based problems
For a model of this size, it is quite difficult to obtain any meaningful signal using traditional mathematical reasoning benchmarks, so we opted for easier benchmarks with relatively straightforward solutions that only require a few simple mathematical operations, namely MAWPS and SVAMP. For evaluation, we standardized a question-answer format and parsed a single numerical answer from the model's output to compute the accuracy.
Simple arithmetic problems
We created a custom benchmark containing simple arithmetic problems (+, −, ×, ÷). These problems are split across various axes, which allows us to perform a more detailed assessment of the pitfalls of LLMs when doing math. We create different splits based on the following (a minimal generator sketch is shown after the list):
- The difficulty of the problem (harder problems = more decimals, more digits)
- Whether it was a float / int operation
- Whether or not commas were used to demarcate digits into groups of 3
- The arithmetic operator used in the problem
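As an illustration of these axes, here is a small, hypothetical generator for a single benchmark item; the actual benchmark construction may differ in its details:

```python
import operator
import random

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def make_problem(n_digits: int, use_float: bool, use_commas: bool, op: str, rng: random.Random):
    """Hypothetical generator for one item along the four axes above."""
    def sample():
        x = rng.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)
        return x + round(rng.random(), 2) if use_float else x
    a, b = sample(), sample()
    fmt = (lambda v: f"{v:,}") if use_commas else str   # optional comma demarcation
    return f"{fmt(a)} {op} {fmt(b)} = ", OPS[op](a, b)

rng = random.Random(0)
print(make_problem(n_digits=4, use_float=False, use_commas=True, op="+", rng=rng))
```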
Given the relatively fixed-form answers in the evaluation tasks above, the evaluation metric we chose is quasi-exact match (QEM) between the gold answer and the model generation, with normalizations to account for minor differences in floating-point formatting (127.60 and 127.6 should match) as well as the usage of commas (1234 and 1,234 should match).
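A minimal sketch of this normalization; the actual evaluation code may handle more edge cases:

```python
def quasi_exact_match(prediction: str, gold: str) -> bool:
    """Compare answers after normalizing commas and trailing zeros."""
    def normalize(ans: str) -> str:
        ans = ans.strip().replace(",", "")   # "1,234" -> "1234"
        try:
            return repr(float(ans))          # "127.60" -> "127.6"
        except ValueError:
            return ans                       # fall back to the raw string
    return normalize(prediction) == normalize(gold)

assert quasi_exact_match("127.60", "127.6")
assert quasi_exact_match("1,234", "1234")
```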
Results
Arithmetic Problems
Single-digit tokenization outperforms all other tested tokenizers in arithmetic problems
Average accuracy, evaluated with 5-shot arithmetic questions
| Number of digits | Pure BPE | Three-digit L2R | Three-digit R2L | Single-digit |
|---|---|---|---|---|
| 1 digit | 0.72 | 0.72 | 0.75 | 0.76 |
| 2 digits | 0.37 | 0.43 | 0.46 | 0.44 |
| 3 digits | 0.06 | 0.07 | 0.11 | 0.26 |
| 4 digits | 0.03 | 0.04 | 0.06 | 0.18 |
| 5 digits | 0.02 | 0.03 | 0.05 | 0.14 |
While the differences are less pronounced for easier problems, the gap between the best-performing tokenizer (single-digit) and the rest widens as problem complexity increases. This suggests that single-digit tokenization is more robust to variations in input length and better captures intricate digit-level patterns, leading to improved performance in challenging scenarios where other tokenization methods struggle.
Additionally, while it's intuitive to assume that integer problems will have better performance (which is also confirmed by the results in the grid), we also found that the performance gap between floats and integers is similar across all tokenizers. This shows that there aren't any inherent tradeoffs when choosing tokenizers in these two categories (i.e. the optimal tokenizer for integers is also optimal for floats).
Three-digit R2L tokenization has better performance over standard three-digit L2R tokenization
We found that a model trained on R2L-tokenized data showed significant improvements over training on the default L2R-tokenized data for most operations, with multiplication being the exception. This suggests that R2L is a better setting for arithmetic operations than the typical left-to-right encoding.
Pure-BPE tokenizers show inconsistent performance when numbers are chunked in groups of 3 from right to left
Evidently, pure BPE-based tokenizers without any additional number preprocessing don't benefit much from using R2L tokenization. A possible explanation of why this might be the case is the lack of 'structure' in how digits are grouped together in these tokenizers.
Unlike with the three-digit tokenizer, the peculiarities of pure BPE-based tokenization described above mean that many numbers are still grouped inconsistently even after R2L chunking. This inconsistency undermines the advantages of R2L, which we have shown works best when numbers are uniformly tokenized in three-digit chunks from the least to the most significant digit.
Word-based problems
In addition to analysing arithmetic performance, we also wanted to see if a tokenizer's performance in pure arithmetic also translates to word problems.
While the performance gap between tokenizers is less pronounced in word-based problems, we see that single-digit and three-digit tokenizers generally outperform the BPE-based tokenizers, which indicates that the trend is consistent across both types of problems.
Bonus: R2L inference on Llama 3
One of the other things we wanted to test was how an existing pretrained/instruct model performs when subjected to a tokenization scheme different from the one it was originally trained on, without re-training or fine-tuning it. So, we took Llama 3 8B Instruct and used the same code above to modify its tokenizer to perform R2L tokenization during inference, instead of re-training a new model on R2L data.
One important thing to note when adding two numbers under a three-digit tokenization scheme: the result can sometimes require more tokens than either input number. One such example is adding 999 and 111, which each require only a single token, but whose sum, 1110, requires two tokens (1 and 110). We wanted to explore how much of a difference this makes when performing addition with both L2R and R2L tokenization across varying token lengths.
(Going forward, we will refer to additions that result in an extra token as "carry" additions, and those that do not as "without carry")
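For clarity, here is a small, hypothetical helper that classifies an addition as "carry" or "without carry" under a three-digit tokenization scheme:

```python
import math

def is_carry_addition(a: int, b: int, group_size: int = 3) -> bool:
    """Return True if a + b needs more three-digit tokens than either operand
    (what we call a "carry" addition above). Hypothetical helper for illustration."""
    def n_tokens(x: int) -> int:
        return math.ceil(len(str(x)) / group_size)
    return n_tokens(a + b) > max(n_tokens(a), n_tokens(b))

print(is_carry_addition(999, 111))  # True:  999 + 111 = 1110 -> two tokens
print(is_carry_addition(123, 456))  # False: 123 + 456 = 579  -> one token
```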
We performed few-shot arithmetic tasks of varying digit lengths and carry settings with Llama3 8B Instruct. We did not find any significant performance difference for subtraction, multiplication, or division, so we only show the results for addition.
A few things to note from the results:
- For non-carry additions, digit counts that are multiples of 3 produce exactly the same results, given that numbers like 528491 have the same tokens regardless of tokenization direction²
- We see an interesting cyclic trend every three digits when comparing L2R and R2L carry additions. For digit counts that are multiples of three, L2R slightly outperforms R2L, but at the next digit count, R2L performance shoots up and overtakes L2R, widening the gap further at the digit count after that, before dropping again at the next multiple of three.
- With just a few R2L examples in the prompt, the model's output numbers were perfectly formatted as R2L tokens, despite the model being trained on L2R-formatted numbers. This is super important because it shows that these models aren't just "memorizing" patterns seen in their training data: when given an entirely new distribution of R2L number tokens, the model was still able to reason and even perform better than with the format it was originally trained on.
When averaging across all digit lengths and carry settings, there was a ~10% improvement in accuracy over the standard L2R tokenization used in Llama 3, just by swapping the tokenizer with the few lines of code above.
We tried to perform a similar experiment with single-digit tokenization, but performance dropped off really quickly after a few digits. This makes intuitive sense, since the shift in token distribution between the training data and single-digit tokenized numbers is far larger than the shift between the training data and the R2L data.
So, which tokenization method is best for math?
- 🔄 While Byte-Pair Encoding remains a popular tokenization method, it needs extra preprocessing steps to fix its inconsistent handling of numbers and improve numerical reasoning.
- 📝 If you have to use a tokenizer that has unique tokens for numbers up to 3 digits, make sure you tokenize your data R2L (with the code snippet above).
- 🚀 If you already have a trained model where the data was tokenized L2R, you can get better math performance by running inference with R2L
- 📚 Word-based problems have less apparent performance differences between tokenizers, but we found that single-digit and three-digit tokenizers outperform the BPE-based tokenizers.
- 👑 Most importantly, for arithmetic operations, single-digit tokenization has significantly better performance than other methods
Average Accuracy Evaluated on 5-shot Questions
Our work shows that tokenization significantly impacts arithmetic performance in language models. With careful selection, we can optimize tokenization strategies based on problem type, improving LLM performance on mathematical tasks. We hope to see more research into the ways tokenization affects other aspects of mathematical reasoning 🤗.