What's the biggest number you've ever seen? Think about it fondly for a moment, because it's going to be blown away. Yes, you can always add one. Yes, you can multiply by ten, or a million. Yet the number you're thinking of is a speck on the knee of the world's smallest flea compared to what's coming.

**Pebbles**

Before we get to the really big numbers, let's start with a look at what passed for big a few thousand years ago. The earliest numbers were collections of pebbles, or fingers held up, or notches on a bone. These methods of representation impose practical limits on number size: a thousand dots, for example, is already unwieldy. As a little warm-up exercise, try to estimate how long it would take to draw a million dots by hand. It's still feasible, but not much fun.

As civilization developed, marks were arranged into groups, usually of five or ten, and each size of group was given its own symbol. The most familiar of these systems is the Roman numeral system, where groups of 1, 5, 10, 50, 100, and 500 were represented by I, V, X, L, C, and D. The symbol for 1000 was ↀ, which evolved into M. Lesser known are the symbols ↂ and ↈ for 10,000 and 100,000. But then the system stops -- there was no symbol for one million. The evidence for this is a commemorative column, the *columna rostrata,* erected in 260 BC to celebrate a naval victory over the Carthaginians. It lists the loot, over 3 million bronzes, by repeating the symbol for 100,000 thirty-two times.

3.2 million is a remarkably large number for a system based on grouping. The Egyptians had a grouping system which used a picture of Heh, the god of space. Originally it represented the infinite, then took on the precise meaning of one million, and finally, as the culture declined, it faded to mean simply "many." The Greek grouping system stopped at 10,000, a *myriad,* which has likewise metamorphosed to mean "many" in modern English.

**Powers of Ten**

The Indians, however, developed a spectacular tower of numbers. Big enough, in fact, that we need a brief digression. From high-school math, 10^{n} means ten to the nth power. For example, the number 10^{6} means ten to the sixth power, which is 10x10x10x10x10x10, or 1,000,000. One key point to remember is that when the base is 10, the exponent (here 6) tells you how many zeros the number will have.
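
If you'd like to check this with a computer, Python's integers make it a one-liner. Here is a quick sketch (Python is just a convenient choice for examples) confirming both the repeated multiplication and the zero-counting rule:

```python
# Ten to the sixth power is six tens multiplied together.
n = 10 ** 6
assert n == 10 * 10 * 10 * 10 * 10 * 10
assert n == 1_000_000

# With base 10, the exponent counts the zeros in the result.
assert str(n).count("0") == 6
```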

In ancient India, enormous numbers took on special religious significance, and among all civilizations (even to this day) they were unique in having names for many higher powers of ten. The earliest written records of Indian culture are the Vedas, from between 1500 BC and 800 BC, and these already contain names for powers beyond a billion.

The names appear in a number of texts and vary considerably, but the best known list is from the Lalitavistara (circa 100 BC), which tells the story of Buddha's life. In this book, Buddha competes with five other suitors for the hand of Gopa, Prince Dandapani's daughter. He defeats them in contests of writing, wrestling, archery, running and swimming, and number skills. Then Buddha is given a final test by the mathematician Arjuna, who asks "O young man, do you know the counting which goes beyond the koti (10^{7}) on the centesimal scale?" Buddha's response begins, "Hundred kotis are called ayuta, hundred ayutas niyuta, hundred niyutas kankara" and continues to tallaksana, which is 10^{53}. But this is only the first count, and Buddha continues with eight more counts of 23 names each, leading to 10^{7+9x46}, which is a one followed by 421 zeros, a truly transcendent number.

Even more than Buddhism, the Jainist religion was interested in gigantic numbers. In the Anuyogadvara-sutra, from the first century BC, the total number of human beings in the world is given as 2^{96}, which has 29 digits. Another huge number in the Jaina works is the Sirsaprahelika, a measurement of time. One purvi is 75,600,000,000,000 years, and it takes (8,400,000)^{28} purvis to make one Sirsaprahelika. That works out to a 194-digit number of purvis, which is an extremely long time by any standard.
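
Digit counts like these are easy to verify with arbitrary-precision arithmetic. A quick Python check (a modern sketch, of course, not part of the ancient sources):

```python
# Check the digit counts of the Jaina numbers directly.
population = 2 ** 96                      # the Anuyogadvara-sutra's world population
assert len(str(population)) == 29

sirsaprahelika_purvis = 8_400_000 ** 28   # purvis in one Sirsaprahelika
assert len(str(sirsaprahelika_purvis)) == 194
```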

From working with such large numbers, the Jains developed a notion of infinity which was quite involved, though not mathematically precise. Numbers were classified as enumerable, unenumerable, and infinite, and to get a feel for these they gave directions:

"Consider a trough whose diameter is of the size of the earth. Fill it up with white mustard seeds counting them one after another. Similarly fill up with mustard seeds other troughs of the sizes of the various lands and seas. Still it is difficult to reach the highest enumerable number."

Indian culture is exceptional in its use of huge numbers, and it is astonishing how long it took western culture to embrace a number system that could exceed even the millions -- not until the end of the dark ages. In Greek and Roman times, bigger numbers were unavailable because they were rarely, if ever, needed. The Colosseum, when full, held 55,000 spectators. We know from the *columna rostrata* that a good pillaging expedition could net millions. But when would you ever see a billion things, much less need to count them?

That wasn't a rhetorical question. When could you see a billion things? If you said "At the beach" and are thinking of sand, you are in good company: Archimedes, legendary for his feats of engineering and considered one of the greatest mathematicians in history, was interested in counting grains of sand. Though the Greek enumeration system stopped at the myriad, Archimedes' treatise *The Sand Reckoner* blows away all previous records and earns him the crown for king of large numbers.

The myriad is 10^{4}, or 10,000. By continuing with "one myriad," "two myriad," etc., Archimedes gets up to "one myriad myriads," which is 10^{8}, or a hundred million. He calls the numbers 10, 10^{2}, 10^{3}, 10^{4}, 10^{5}, 10^{6}, 10^{7}, and 10^{8} the "first arithmon." The "second arithmon" takes him up to 10^{16}, and so on. For example, 10^{26} would be a hundred of the fourth arithmon, since 10^{26} = 100 x 10^{8+8+8}. After a myriad-myriad arithmons, he runs out of numbers again and calls this the end of the "first period." Adding more "periods," he finally stops at the fantastic number 10^{80,000,000,000,000,000}, which is a one followed by 80 quadrillion zeros. Archimedes calls it the "myriakis-myriostas periodu myriakis-myriston arithmon myriai myriades."

Having wowed you with his system, he proceeds to estimate the number of grains of sand that would fill the universe. In contrast to his feats of number naming, this estimate comes out to less than 10^{64}. Archimedes is not, in fact, particularly interested in the number of grains of sand in the universe, but wants to demonstrate that such a number is easy to express and also easy to exceed.

Nevertheless, Archimedes' 10^{64} is astronomically large -- it comes from astronomy! Modern astronomers put the number of particles in the universe at somewhere between 10^{72} and 10^{87}, while the age of the universe is estimated at a mere 10 to 20 billion years.

If you need a little help grasping numbers this big, don't panic. The Hitchhiker's Guide to the Galaxy has this to say about space:

"Space is big. Really big. You just won't believe how vastly hugely mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist, but that's just peanuts to space. Listen. . ."

and then begins to tell you things you really need to know. Here, however, we'll press on to contemplate some numbers that make space look dinky.

**Towers of Powers**

Astronomical numbers are beyond the realm of counting, because there simply aren't 10^{87} things in the universe to count. Yet these numbers are still quite easy to write, even without exponential notation. For example,

10^{100} = 1 0000000000 0000000000 0000000000 0000000000 0000000000 0000000000 0000000000 0000000000 0000000000 0000000000.

This big guy was named a "googol" by the 9-year-old nephew of a mathematician, around 1940. An easy step from there is the "googolplex," a one followed by a googol zeros, and probably the largest number many people have heard of. Despite the similar names, a googol is not at all comparable to a googolplex. We just wrote out a googol in full, but if you put a tiny zero on every particle in the universe you wouldn't be close to writing down a googolplex. If Buddha were to count googols for a Sirsaprahelika, he would make insignificant progress towards a googolplex, and Archimedes could throw in a myriostas periodu for good measure.

Even with the overwhelming jump from googol to googolplex, we can write the googolplex very simply as 10^{10^{100}}. To read a stack of exponents like this, start at the top and work down, first computing 10^{100} (a googol), and then taking 10 to that power. For a little practice with exponents, try this series of problems:

- What's the largest number you can write using only four 1's?
- What's the largest number you can write using only four 2's?
- What's the largest number you can write using only four 3's?
- What's the largest number you can write using only four 4's?
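
No spoilers here, but you can compare candidates for the four 2's with a few lines of Python. Since the biggest contender has over a million digits, this sketch counts digits with logarithms instead of building the numbers (the helper `digits` is our own, not a library function):

```python
import math

def digits(base, exponent):
    """Number of decimal digits in base**exponent, found via logarithms."""
    return math.floor(exponent * math.log10(base)) + 1

# Stacked exponents evaluate from the top down: 2**2**3 means 2**(2**3).
assert 2 ** 2 ** 3 == 256

print(digits(22, 22))      # 30 digits
print(digits(2, 222))      # 67 digits
print(digits(2, 2 ** 22))  # over a million digits
```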

Working with towers of powers shows the way to even bigger numbers. You can follow a one with a googolplex zeros to get 10^{10^{10^{100}}}, and so on, adding another 10 to the tower at each step. At each stage, the number you get is far, far bigger than all that came before -- bigger than the sum of all the previous numbers, bigger even than their product. So we're done, right? All the big numbers we need, ready for the taking.

If you believe that, we should have stopped at the beginning, marking notches on a stick. In theory, you will certainly reach a googol counting by ones. Similarly, if you start naming powers of ten, a googolplex is out there at the horizon. The problem is that you're moving much too slowly, and the horizon never gets significantly closer. There is no reason to expect that towers of 10s do any better -- something really big is out there, big enough to put a giant tower of 10s to shame.

**Busy Beavers**

To get to the next level, we'll need the help of a machine, known as a Turing machine. Alan Turing was a British mathematician, whose short and turbulent life is best remembered for his work cracking the German "Enigma" codes during World War II. But in 1936, he founded the theory of computability with an elegant idea: that any "effective" computation could be performed by a particularly simple machine, which he described. His thesis was that any computing device or algorithm could be replaced by a Turing machine, so that a Turing machine is a "universal" computer.

The machine is a box, with a strip of ticker tape fed through it. Think of the tape as divided into squares, and assume each square starts with a 0 marked inside. We'll assume the tape is infinitely long for simplicity (really!). The box is always focused on one square on the tape, can read the mark on that square, and can mark that square with a 0 or 1. It can also move the tape one square to the left or right.

The machine is always in a particular "state," and with each tick of an imaginary clock the machine changes from state to state. The current state, together with the one visible mark on the ticker tape, tells the machine what to do next. At each tick of the clock, the machine decides:

- whether to write a 0 or 1 on the tape
- whether to move the tape left or right
- what its next state will be
- whether to stop the computation (halt)

The list of states and these behaviors form the machine's program, and allow it to perform different functions.

As a simple example, consider a Turing machine with two states, A and B. State A will always write a 1 to the tape, move right, and change to state B. State B will always write a 0 to the tape, move right, and change to state A. In the picture, these instructions are shown with circles for the states, and arrows for the state changes. The labels on the arrows indicate the new mark (0 or 1) and the direction to move the tape (L or R). As you can see, the machine never halts, and writes 10101010. . . . on the tape forever. This particular Turing machine is very simple, but fails to illustrate a key point: most Turing machines will read the values previously written to the tape, and change their behavior accordingly. We'll see more interesting examples shortly.

Now let's get back to numbers. How many ones can a Turing machine write on tape before halting? It is easy to design a machine that writes 1's forever. Give it just one state A, whose instructions are to write a 1, move right, and stay in state A. The challenge is to make a machine that actually halts. Again there is an easy answer, and it will work to write as many 1's as you like. I'll illustrate with instructions that make the machine count to three. The machine has three states, A, B, and C. Each state will write a 1 and move right. State A will change to state B, state B will change to state C, and state C will halt. Here is the "state diagram" for this simple machine:
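
This counting machine is easy to simulate. Here's a minimal sketch in Python (the dictionary-of-rules encoding is our own convenience, with "H" standing for halt):

```python
# A minimal Turing machine simulator, running the three-state
# "count to three" machine described above.  The program maps
# (state, symbol read) -> (symbol to write, move, next state).

def run(program, start="A", max_steps=1000):
    tape, pos, state = {}, 0, start
    for _ in range(max_steps):
        write, move, state = program[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        if state == "H":
            return tape
    raise RuntimeError("did not halt")

count_to_three = {
    ("A", 0): (1, "R", "B"),   # A: write 1, move right, go to B
    ("B", 0): (1, "R", "C"),   # B: write 1, move right, go to C
    ("C", 0): (1, "R", "H"),   # C: write 1, move right, halt
}

tape = run(count_to_three)
print(sum(tape.values()))   # three 1s on the tape
```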

It should be clear how to make a machine that writes 30 or 50 or a googol 1s on the tape. So to make things interesting, put a limit K on the number of states you can use, and ask the following question: What is the largest number of 1s a Turing machine with K states can write on the tape (and still come to a halt)? The answer is called the Kth "busy beaver number." That is, if K is four the machine has four states and we're talking about the 4th busy beaver number. If K is five, the 5th.

When K is small, the machines are simple and the busy beaver numbers are not hard to calculate. A machine with one state can write at most one 1 before halting (or it will run forever), so the first busy beaver number is one. The second busy beaver number is 4, and the third busy beaver number is 6. Here is the diagram for the three state busy beaver:

This machine cares what is written on the tape, as indicated by the little 0s and 1s. For example, state A will switch to state B if the machine sees a 0 on the tape, and will switch to state C if it sees a 1. Trace through the operation of the three-state busy beaver yourself -- you should end up with six ones in a row, with the machine halted near the middle. Now, as a challenge, design the two-state Turing machine that writes four ones on the tape. Here's the answer, if you're stuck.
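
If you'd rather let a computer do the tracing, here is a simulation of the three-state champion. The transition table below is the standard published one (due to Lin and Rado); the diagram above may label things differently:

```python
# The classic three-state busy beaver champion.  Each entry maps
# (state, symbol read) -> (write, move, next state); "H" halts.

def run(program, max_steps=10_000):
    tape, pos, state, steps = {}, 0, "A", 0
    while state != "H":
        write, move, state = program[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        steps += 1
        if steps > max_steps:
            raise RuntimeError("did not halt")
    return tape, steps

busy_beaver_3 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "R", "H"),
    ("B", 0): (0, "R", "C"), ("B", 1): (1, "R", "B"),
    ("C", 0): (1, "L", "C"), ("C", 1): (1, "L", "A"),
}

tape, steps = run(busy_beaver_3)
print(sum(tape.values()), steps)   # six 1s, after 14 steps
```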

Now the busy beaver numbers start to grow. Here's a table of the first few:

| Number of states | 1 | 2 | 3 | 4 | 5 | 6 |
| --- | --- | --- | --- | --- | --- | --- |
| Busy beaver number | 1 | 4 | 6 | 13 | at least 4098 | at least 10^{865} |

Notice that for machines with 5 or 6 states, the table gives only "at least. . .". This is because nobody knows the actual value. To find these numbers, you need to test all possible Turing machine programs, and there are a lot of programs to try. Worse, testing a machine requires running it until it halts, and for some machines you can't tell whether it will run forever or stop after some unimaginable length of time. For example, the six-state machine that prints 10^{865} ones was discovered only last month, and it runs for more than 10^{1730} steps before halting. The table of busy beaver numbers stops at six because busy beaver seven and beyond are simply too big to compute.
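
For very small machines, the exhaustive search is actually feasible. This sketch finds the second busy beaver number by brute force, trying every two-state machine with a generous step cap (safe here, since every two-state machine that halts at all does so within a handful of steps):

```python
# Exhaustive search for the second busy beaver number: try every
# two-state machine, run each with a step cap, and keep the best
# score among those that halt.

from itertools import product

STATES = ["A", "B"]
# A rule is (write, move, next-state); "H" means halt.
RULES = [(w, m, s) for w in (0, 1) for m in ("L", "R") for s in STATES + ["H"]]

def score(program, max_steps=100):
    """Number of 1s left on the tape, or None if no halt within the cap."""
    tape, pos, state = {}, 0, "A"
    for _ in range(max_steps):
        write, move, state = program[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        if state == "H":
            return sum(tape.values())
    return None

keys = [(s, b) for s in STATES for b in (0, 1)]
best = 0
for choice in product(RULES, repeat=4):   # 12^4 = 20,736 machines
    result = score(dict(zip(keys, choice)))
    if result is not None:
        best = max(best, result)

print(best)   # the second busy beaver number: 4
```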

Being "too big to compute" is not just a practical issue: A major result of theoretical computer science implies that the sequence of busy beaver numbers is uncomputable. No matter how fast our computer, or how clever our software, there is no program that can calculate these numbers!

Contrast this with the towers of powers of 10. Equip yourself with the computer of your dreams, running faster than physics allows, and with more memory than there are atoms in the universe. Then you could easily calculate a huge tower of 10s just by repeated multiplications. So the towers of 10s are theoretically computable. The busy beaver numbers are not.
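
To make "theoretically computable" concrete, here is a googol produced by nothing but repeated multiplication. The same procedure would, in principle, climb any tower of 10s, given absurd amounts of memory and time:

```python
# A power of 10 by repeated multiplication -- the sense in which
# towers of 10s are computable.  A googol fits in memory; a genuine
# tower of 10s would not, but the procedure is identical.

n = 1
for _ in range(100):
    n *= 10

assert n == 10 ** 100       # one googol
assert len(str(n)) == 101   # a one followed by 100 zeros
```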

A remarkable consequence is that the sequence of busy beaver numbers grows faster than any computable sequence of numbers. They grow faster than powers of 10. They grow faster than towers of powers of 10. They grow faster than towers of googolplexes! Essentially any sequence of numbers built on a pattern will be completely overwhelmed by the busy beaver numbers.

We have found a sort of holy grail for large numbers: even busy beaver twelve is bigger than everything else in this article put together. This is the mathematician's equivalent of deep space. It's the arrow to the right off the end of the number line. These numbers are so enormous that they're beyond comprehension, yet there is always an infinite horde waiting to make them look tiny. The busy beaver numbers are far exceeded by an even larger uncomputable sequence called the "super busy beaver" numbers, which are beaten by the "super super busy beavers," and so on.

But big numbers never get boring, because "and so on" isn't good enough -- the path you're on is always too slow. Piles of pebbles are dwarfed by powers of 10, which are dwarfed by towers of exponents, which are dwarfed by busy beavers. Each significant increase required a new idea, and always will. The big number hunt is still on!

Bryan Clair is a professor of mathematics at Saint Louis University. His previous appearance in *Strange Horizons* was "Habitrails and Asteroids: Topology from the Inside."

**References**

For more on number naming and ancient number systems, see Number Words And Number Symbols: A Cultural History Of Numbers by Karl Menninger.

The history of math archive has more fascinating information about Indian mathematics and Jaina mathematics in particular.

The current state of knowledge for busy beaver numbers is on the web here, and if you'd like to make your own busy beaver, here's a java-based Turing machine.
