Introduction
A recent Mastodon thread by Mark Dominus (@mjd) posed a delightfully provocative claim:
“Computers can’t perform multiplication of numbers. This is mathematical and engineering reality. People who claim they do are confused, deceived, or merely ignorant.”
His evidence? That 4,000,000,000 × 4,000,000,000 returned 1,983,905,792 on a standard machine. The thread that followed is a clean collision between two definitions of what an algorithm even is — and the argument is sharper than it first appears.
It took me down memory lane, to my school days as a cashier at a relative's xerox shop, where I didn't need a calculator to price copies on nix paper at 75 p per side. The shop had a calculator anyway. So, ignorant of the politics of the industrial revolution and hence of R&D, in idle time I would pick it up, punch in any number n, integer or not, and keep taking square roots: sqrt(n), sqrt(sqrt(n)), and so on, until the display settled on 1. It was unproductive, but so was the Game Boy; neither was useless in the long run, depending on the direction I took and what influenced me. And here we are!
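That calculator game was, it turns out, a convergent process: each square root halves the logarithm of the number, so any positive start collapses toward 1. A tiny Python sketch of the ritual (function name mine):

```python
import math

def sqrt_collapse(n, tol=1e-9):
    """Repeatedly take square roots until n is within tol of 1.

    Works for any n > 0, above or below 1: each step halves log(n),
    so log(n) -> 0 and n -> 1. Returns the number of presses needed.
    """
    assert n > 0
    steps = 0
    while abs(n - 1.0) > tol:
        n = math.sqrt(n)
        steps += 1
    return steps
```

`sqrt_collapse(1.0)` takes zero steps; anything else takes a surprisingly consistent few dozen, since the step count is roughly logarithmic in log(n).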
The Theorem Behind the Trolling
Dominus’s argument rests on a genuine theorem from computability theory:
- Addition can be performed by a finite-state machine (FSM). The carry at each digit position depends only on a bounded amount of local state.
- Multiplication cannot. The memory required to compute \(a \times b\) correctly grows with the size of the inputs and cannot be bounded in advance.
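The first bullet is easy to make concrete: digit-serial addition needs only a single carry bit of state between steps, no matter how long the numbers are. A minimal sketch (my illustration, not code from the thread):

```python
def fsm_add(a_digits, b_digits):
    """Add two base-10 numbers given as digit lists, least significant first.

    The only state carried from one position to the next is a single
    carry in {0, 1} -- exactly the bounded local state an FSM can hold.
    """
    n = max(len(a_digits), len(b_digits))
    carry, out = 0, []
    for i in range(n):
        d = (a_digits[i] if i < len(a_digits) else 0) \
            + (b_digits[i] if i < len(b_digits) else 0) + carry
        out.append(d % 10)
        carry = d // 10
    if carry:
        out.append(carry)
    return out
```

`fsm_add([9, 9], [1])` returns `[0, 0, 1]`, i.e. 99 + 1 = 100. No analogous bounded-state loop exists for multiplication; that is the theorem.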
Since any physical computer has a fixed, finite amount of memory — \(2^{16}\) bits, \(2^{64}\) bytes, whatever — it is, by definition, a finite-state machine. Therefore, by this theorem, no physical computer possesses a true multiplication algorithm.
This is the Myhill-Nerode / pumping lemma argument applied to arithmetic transducers. The language of correct multiplication pairs \(\{(a, b, a \cdot b)\}\) over \(\mathbb{N}\) is not regular, so no FSM can compute it for all inputs.
Two Definitions of “Algorithm”
This is the real crux. The thread exposes two competing standards:
The theoretical definition (Dominus’s implicit standard): an algorithm must work for all natural numbers — the full infinite domain \(\mathbb{N}\). Under this view, a multiplication algorithm is a function \(f : \mathbb{N} \times \mathbb{N} \to \mathbb{N}\), and any procedure that fails on even one pair of valid inputs is not an algorithm for multiplication.
The engineering definition (shachaf and others): an algorithm’s memory need not be bounded in advance — it just needs to scale as a function of input size. Hand the schoolbook long-multiplication method two \(n\)-digit numbers and it uses \(O(n^2)\) space. The computer isn’t a pure FSM; it’s closer to a Turing machine with a very large but finite tape.
The Julia counterexample landed squarely in the engineering camp:
```julia
julia> BigInt(4_000_000_000) * BigInt(4_000_000_000)
16000000000000000000
```
The computer did get it right — when given the appropriate type. The wrong answer Dominus observed was a 32-bit integer overflow, not a fundamental failure of computation.
What the Computer Is Actually Computing
This is where the argument gets philosophically precise. When a C program computes uint32_t multiplication and silently returns 1,983,905,792 for 4B × 4B, it is not computing a function over \(\mathbb{N}\). It is correctly computing:
\[ a \times b \pmod{2^{32}} \]
That is, arithmetic in \(\mathbb{Z}/2^{32}\mathbb{Z}\) — the integers modulo \(2^{32}\). As shachaf noted: elements of \(\mathbb{Z}_{2^{32}}\) are numbers; they just aren’t natural numbers. The machine is not broken. It is computing a different mathematical object than most programmers assume.
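This claim is easy to verify in Python, whose built-in integers are arbitrary-precision: reducing the exact product modulo \(2^{32}\) reproduces the overflowed value exactly.

```python
a = b = 4_000_000_000

exact = a * b              # arithmetic over the naturals (Python bignums)
wrapped = exact % 2**32    # arithmetic in Z/2^32Z, i.e. what uint32_t computes

print(exact)    # 16000000000000000000
print(wrapped)  # 1983905792
```

The C program and the BigInt program are both correct; they are correct about different rings.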
The dangerous part is the silence. Overflow in most languages produces no exception, no flag, no warning. The program continues with a subtly wrong value as if nothing happened. Dominus’s real point — “all models are wrong, but some are not useful” — is precisely about this silent divergence.
The Space Complexity Concession
Dominus himself acknowledged the engineering rebuttal cleanly: “The schoolbook method requires only \(O(n^2)\) memory.” This is the standard long multiplication algorithm, which in terms of memory is:
\[ \text{space}(a, b) = O((\log a + \log b)^2) \]
Shachaf’s question — “Presumably it can be bounded by a function of the size of the inputs?” — is exactly the distinction between an FSM and a Turing machine. An FSM’s memory is constant; a TM’s tape grows with input. Real computers programmed with arbitrary-precision integers behave like the latter.
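The schoolbook method makes that distinction tangible: its working storage is a digit array whose length grows with the inputs, which is exactly the resource an FSM is denied. A sketch on base-10 digit lists, least significant digit first:

```python
def schoolbook_multiply(a_digits, b_digits):
    """Long multiplication on base-10 digit lists, least significant first.

    The output buffer has len(a) + len(b) digits: working storage that
    grows with input size, which is why no fixed-state machine can do this.
    """
    out = [0] * (len(a_digits) + len(b_digits))
    for i, da in enumerate(a_digits):
        carry = 0
        for j, db in enumerate(b_digits):
            t = out[i + j] + da * db + carry
            out[i + j] = t % 10
            carry = t // 10
        out[i + len(b_digits)] += carry
    return out
```

`schoolbook_multiply([2, 1], [4, 3])` returns `[8, 0, 4, 0]`, i.e. 12 × 34 = 408.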
And for multiplication specifically, we can do better than schoolbook. The FFT-based algorithm (Schönhage–Strassen, and the more recent Harvey–Hoeven) multiplies two \(n\)-digit numbers in:
\[ O(n \log n) \]
— which is essentially optimal. One participant in the thread (Skewray Research) mentioned encountering FFT-based integer multiplication in a college class in the early 1980s, and spending a year exploring its variations. It is a genuinely beautiful result.
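Schönhage–Strassen proper does its convolutions in a modular ring so every step is exact. The underlying idea can still be sketched with a plain floating-point FFT, which is fine for small inputs but would eventually fail to rounding error; this is a toy, not the real algorithm:

```python
import cmath

def fft(a, invert=False):
    # Recursive radix-2 Cooley-Tukey; len(a) must be a power of two.
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = -1 if invert else 1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def fft_multiply(x, y):
    """Multiply non-negative integers by convolving their base-10 digits."""
    a = [complex(int(d)) for d in reversed(str(x))]
    b = [complex(int(d)) for d in reversed(str(y))]
    n = 1
    while n < len(a) + len(b):
        n *= 2
    fa = fft(a + [0j] * (n - len(a)))
    fb = fft(b + [0j] * (n - len(b)))
    coeffs = fft([u * v for u, v in zip(fa, fb)], invert=True)
    # Each coefficient is a sum of digit products; Python's bignums
    # absorb the carries when we re-assemble the base-10 value.
    return sum(round(c.real / n) * 10 ** i for i, c in enumerate(coeffs))
```

Pointwise multiplication in the frequency domain replaces the \(O(n^2)\) digit-by-digit convolution, which is where the \(O(n \log n)\) comes from.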
The Hierarchy This Exposes
The thread implicitly touches the full computability hierarchy:
| Model | Memory | Can multiply? |
|---|---|---|
| FSM | Fixed, bounded | Only for inputs within state budget |
| Pushdown Automaton | Unbounded stack (LIFO access only) | Still no — multiplication isn’t context-free |
| Turing Machine | Unbounded tape | Yes, for all \(\mathbb{N}\) |
| Physical computer (fixed RAM) | Bounded but large | Yes, up to a hardware limit |
| Physical computer (BigInt/GMP) | Bounded by RAM, grows with input | Yes, practically unbounded |
The primitive recursive definition of multiplication — mult(0, y) = 0; mult(S(x), y) = add(mult(x, y), y) — is exactly what a Turing machine executes. It always terminates, uses memory proportional to input size, and is correct over all of \(\mathbb{N}\). This is the theoretical algorithm Dominus’s FSM argument concedes to.
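That definition translates almost literally into code. A Python sketch, for illustration only (the recursion depth grows with the operands, so nobody should actually multiply this way):

```python
def add(x, y):
    # add(0, y) = y ; add(S(x), y) = S(add(x, y))
    return y if x == 0 else 1 + add(x - 1, y)

def mult(x, y):
    # mult(0, y) = 0 ; mult(S(x), y) = add(mult(x, y), y)
    return 0 if x == 0 else add(mult(x - 1, y), y)
```

`mult(6, 7)` unwinds to six nested additions and returns 42; the call stack it consumes is the "memory proportional to input size" the FSM argument forbids.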
Conclusion
Dominus’s provocation is a valid theorem dressed in rhetoric. The theorem is real: multiplication over \(\mathbb{N}\) cannot be computed by any machine with a fixed memory bound. The rhetoric is that this makes physical computers fraudulent — which elides the fact that computers, when programmed correctly, are not FSMs.
The more useful takeaway is his sharper point: *“all models are wrong, but some are not useful.”* Integer overflow that silently returns wrong answers is a model that is not useful. Arbitrary-precision arithmetic is a model that is. The question of whether a computer “can multiply” is less interesting than whether the programmer knew which mathematical object they were actually computing.
\(\mathbb{Z}/2^{32}\mathbb{Z}\) is a perfectly good ring. It just isn’t \(\mathbb{N}\).