In Douglas Adams's The Hitchhiker's Guide to the Galaxy, which first appeared in 1978, the supercomputer Deep Thought finally lets the universe in on the secret, after a seven-and-a-half-million-year wait, that the great answer to life, the universe, and everything is 42.
Since computers began to intrude on popular consciousness in the second half of the twentieth century, authors have been quick to latch onto their superhuman calculating capacities as a mechanism for narrative advancement, and as a means of justifying near-future speculative fiction. Indeed, the powers of the computer, particularly as a source of alternative and/or augmented reality, have spawned whole subgenres which have risen, fallen, and been reinvented.
However, contrary to popular conception, modern computers (and even conceivable future ones) are not without limitations. There are some questions out there that are simply too hard to solve, even for a computer. In this article, we'll take a look at some of those limits and explore some speculative ideas about what a universe in which such limitations were relaxed might look like.
A Brief History of Computation
In 1900, the famous German mathematician David Hilbert published his list of 23 important unsolved problems in mathematics, many of which have continued to occupy the attention of mathematicians for decades afterwards. The search for the solution to a closely related question, the Entscheidungsproblem ("decision problem"), which Hilbert posed in 1928, led directly to the creation of the modern theory of computing. The problem, simply stated, is:
Is it always possible, given a complete set of logical rules and a proposition, to determine if the proposition is true in a finite number of steps?
The (negative) answer would not arrive until 1936, when Alan Turing (an Englishman) and Alonzo Church (an American) independently developed equivalent formal models of computation (the Turing machine and the lambda calculus, respectively). They demonstrated that no procedure in such a formal model can determine the truth of arbitrary arithmetic statements in finite time. Turing's model, in particular, lent itself to mechanical implementation and became the launching point for modern computer development. Both men's work is enshrined in one of the central tenets of computer science: the Church-Turing Thesis. An unprovable statement akin to a natural law, the thesis holds that there is no model of computation that can solve in finite time a problem that cannot be solved by a Turing machine (or, equivalently, by the lambda calculus).
The Halting Problem
The Entscheidungsproblem was the first problem that proved to be unsolvable by a computer, and later research gave rise to a whole class of problems, known formally as "undecidable." These problems, then, are the fundamental limits of computation. No matter how clever we are, no matter how fast our computers or how smart our programmers, it is simply not possible to solve one of these problems, in general, in finite time.
The most commonly referenced undecidable problem, and the first one taught to computer science students, is the Halting Problem, first described by Turing:
Given a Turing machine T and an input I, determine in finite time if running T on I will run forever or not.
For those unfamiliar with computer programming, this asks for a Turing machine that can determine whether another Turing machine, run on a particular input, will ever finish. This kind of self-reference (Turing machines reasoning about other Turing machines) is common in undecidable problems.
To understand why this problem is undecidable, imagine that we have a machine (which we'll call H) that could solve the problem. In particular, when run on a program/input pair that halts, H returns "true." Otherwise, it returns "false."
Additionally, we introduce another machine called K, which uses H as a component (which is allowed since Turing machines can be composed). If H returns "true" on an input pair, then K runs forever. If H returns "false," then K halts immediately.
Now, to muddy the waters even further, imagine running K with K itself as its input, and ask H to predict the outcome. If H returns "true" (meaning K halts on K), then by construction K runs forever, so H was wrong. If H returns "false" (meaning K runs forever on K), then K halts immediately, and H was wrong again. No matter what H does, it cannot be correct in all cases. And, since we posited that H correctly solved the Halting Problem, this contradiction shows that H cannot exist.
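For readers who think better in code, the argument can be sketched in a few lines of Python. The names H, K, and make_K are scaffolding for this sketch only; since no correct H can exist, we demonstrate the trap with a deliberately wrong one:

```python
def make_K(H):
    """Build the contrary machine K from a claimed halting decider H.

    H(program, input) is supposed to return True if running program
    on input eventually halts, and False if it runs forever.
    """
    def K(program):
        if H(program, program):   # H predicts "halts" ...
            while True:           # ... so K runs forever instead
                pass
        # H predicts "runs forever", so K halts immediately
    return K

# No correct H exists, so demonstrate with one that is deliberately
# wrong: it always answers "runs forever".
def H_never(program, inp):
    return False

K = make_K(H_never)
K(K)   # halts at once, contradicting H_never's verdict on (K, K)
```

Whatever strategy H adopts, K inverts it, so H is always wrong about the pair (K, K).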
As noted above, this kind of machines-reasoning-about-machines problem is typical of a broad class of undecidable problems. The concept is formalized by Rice's theorem, which states that every non-trivial question about the behavior of a Turing machine (as opposed to its structure) is undecidable.
Not every undecidable problem involves self-reference or formal logic, however. One example is determining whether a set of specially constructed colored tiles (called Wang tiles) can tile the plane. Similarly, the Post Correspondence Problem, named for its inventor, Emil Post, is an undecidable problem expressed through dominoes:

Given a fixed set of dominoes, each bearing one string of symbols on its top and another on its bottom, determine if there is a sequence of dominoes (allowing repetitions) such that the concatenation of the tops spells out the same string as the concatenation of the bottoms.
While the proof of this problem's undecidability is somewhat complex, the idea boils down to using the symbols on the dominoes to record the history of a Turing machine's computation. Eventually, one discovers that finding a working sequence of these dominoes is equivalent to determining whether the encoded Turing machine halts, which we already know to be undecidable.
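A bounded brute-force search makes the dominoes concrete. The instance below is a standard textbook example (it appears in Sipser's Introduction to the Theory of Computation); note that the max_len cutoff is what keeps the search finite, and undecidability is precisely the statement that no cutoff works in general:

```python
from itertools import product

def pcp_search(dominoes, max_len):
    """Try every domino sequence up to max_len long (repetition
    allowed), returning the first whose concatenated tops equal its
    concatenated bottoms, or None if none exists within the bound."""
    for length in range(1, max_len + 1):
        for seq in product(range(len(dominoes)), repeat=length):
            top = "".join(dominoes[i][0] for i in seq)
            bottom = "".join(dominoes[i][1] for i in seq)
            if top == bottom:
                return list(seq)
    return None

# Each pair is (top string, bottom string):
dominoes = [("b", "ca"), ("a", "ab"), ("ca", "a"), ("abc", "c")]
print(pcp_search(dominoes, max_len=5))   # → [1, 0, 2, 1, 3]
```

The winning sequence spells "abcaaabc" along both the tops and the bottoms.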
There are many other known undecidable problems, and infinitely many more that have not yet been described. We know this because, since Turing machines can be encoded as numbers, there are only countably many of them, the "smallest" kind of infinity, while there are uncountably many formal problems. Thus, despite the infinity of Turing machines, there are vastly more problems than machines, and so many, many problems must be undecidable.
A Note on Really Big Numbers
As an aside, the study of undecidable problems gives rise to the fastest-growing sequence of numbers known, the busy beaver numbers, defined as follows:
Consider the set of all Turing machines with N states. BB(N) is the largest number of steps taken by any such machine that eventually halts when started on a blank tape.
BB(N) is itself uncomputable: if we had a machine that could tell us BB(N) in finite time, then we could run each Turing machine of size N for BB(N)+1 steps. Any machine that had not stopped by then must run forever. This would allow us to solve the Halting Problem in finite time, which we know is impossible. In fact, this argument tells us that the busy beaver sequence grows faster than any computable function.
For the curious, the first few busy beaver numbers (to the best of our knowledge) are: 1, 6, 21, 107, >= 47,176,870, and >= 8,690,333,381,690,951. The last two are as yet unknown, and the figures given are simply the best known lower bounds.
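A Turing machine simulator small enough to fit in this article can verify the start of the sequence. The sketch below (the encoding of the transition table is my own, not a standard notation) runs the known two-state champion and confirms that it halts after exactly six steps:

```python
def run_turing_machine(program, max_steps):
    """Run a two-symbol Turing machine on a blank tape.

    program maps (state, symbol) -> (write, move, next_state), with
    move +1 for right, -1 for left, and next_state "H" for halt.
    Returns the step count, or None if the machine is still running
    after max_steps (we can never be sure it would ever stop!).
    """
    tape, head, state, steps = {}, 0, "A", 0
    while state != "H":
        if steps >= max_steps:
            return None
        write, move, state = program[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        steps += 1
    return steps

# The two-state busy beaver champion: no halting two-state machine
# runs longer, so BB(2) = 6.
champion = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "H"),
}
print(run_turing_machine(champion, max_steps=1000))   # → 6
```

The same loop, pointed at larger machines, is exactly the experiment Lin and Rado ran by hand and by computer in the 1960s; the catch is that a machine still running at the step limit tells you nothing.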
In the decades since Church and Turing revolutionized mathematics and spawned the modern computer, many researchers have attempted to disprove their thesis by proposing variants of the two systems in an attempt to discover a formalism with greater computing power than a Turing machine. To date, none have succeeded, and it is generally accepted that the Church-Turing thesis is sound.
That said, this is a speculative fiction publication, and I would be remiss not to provide fodder for those eager minds seeking to pull some inspiration from these notes. A few of the wilder proposals for "hypercomputers" that might inspire you include:
- A Zeno machine (inspired by Zeno's Paradox), which performs its computations at an accelerating rate (1 minute, 1/2 minute, 1/4 minute, etc.) and could therefore complete an infinite number of steps in a total of just two minutes, since 1 + 1/2 + 1/4 + ... = 2.
- A "real" machine, a form of analog computer which uses non-computable real number constants to encode infinite amounts of information in single values. This might require some strange laws of physics, including the ability to measure non-computable physical constants to arbitrary precision regardless of noise.
- A quantum machine that makes use of infinite superposition of states, in contrast to the finite superposition machines that researchers are working on today.
- A neural network machine, like those used in artificial intelligence, whose connection weights are allowed to be arbitrary (even non-computable) real numbers.
Bounds on Tractable Computing
We have established, then, that there are infinitely many formal problems for which no computer can determine the solution in finite time. Fortunately, all of the examples we have seen have been fairly abstract, concerned either with machines reasoning about themselves or with logic puzzles. Are we then safe in assuming that computers can solve everything else?
Unfortunately, this turns out not to be the case. Since the creation of practical computers, programmers have realized that some programs are slower than others. Theoretically, we divide problems into those solvable in a polynomial (e.g., n^2) number of steps, which we term tractable, and those that seem to require an exponential or combinatorial (e.g., 2^n) number of steps, which we term intractable. Two classes in particular matter here: P, the problems that can be solved in polynomial time, and NP, the problems whose proposed solutions can at least be checked in polynomial time. To the chagrin of many programmers, many useful problems lie in NP but appear to lie outside P.
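The asymmetry between finding and checking is easy to see in code. This sketch uses boolean satisfiability; the encoding of formulas as lists of signed integers is my own shorthand. Checking a proposed assignment takes time proportional to the formula's size, while the obvious way to find one tries all 2^n assignments:

```python
from itertools import product

def check(clauses, assignment):
    """Checking a proposed answer is fast (polynomial): this is what
    puts satisfiability in NP."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)

def solve(clauses, n_vars):
    """Finding an answer by brute force tries 2**n_vars candidates:
    the exponential blow-up that, as far as anyone knows, cannot be
    avoided in general."""
    for bits in product([False, True], repeat=n_vars):
        assignment = dict(enumerate(bits, start=1))
        if check(clauses, assignment):
            return assignment
    return None

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3), with variable
# i written as the integer i and negation as a minus sign:
clauses = [[1, -2], [2, 3], [-1, -3]]
print(solve(clauses, n_vars=3))   # → {1: False, 2: False, 3: True}
```

Add a variable and the search space doubles; at a few hundred variables, brute force outlasts the lifetime of the universe, while checking any single candidate remains instantaneous.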
P ?= NP
In the early 1970s, Stephen Cook and Leonid Levin independently proved that there exists a set of problems, now called NP-Complete, with a special property: any NP problem can be efficiently translated into an NP-Complete one. That is, if you could make a single NP-Complete problem tractable, you could make every NP problem tractable. The search since then has been for conclusive proof, one way or the other, of whether an NP-Complete problem can be made tractable. Today, while most researchers believe that P and NP are not equal, P ?= NP stands as the greatest unsolved problem of computer science.
While we might assume that these NP-Complete problems must be abstract or impractical, as the undecidable problems were, that is not the case. A few examples are:
- Given a boolean expression, find a set of inputs that makes it true.
- Given a set of inter-city distances, find the shortest round trip that visits every city.
- Given a set of friend relationships, find the largest clique in which everyone is friends with each other.
- Given a bag of a fixed size and a set of boxes of different sizes, find the optimal way to pack the bag.
- Find the smallest number of colors needed to color a given map or network so that no two neighbors share a color.
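To see the combinatorial wall first-hand, here is the round-trip (traveling salesman) problem from the list above solved by exhaustive search. With the start city fixed, the loop examines (n-1)! orderings: fine for four cities, hopeless for forty. The distance matrix is invented for illustration:

```python
from itertools import permutations

def shortest_tour(dist):
    """Exhaustively search every round trip that starts and ends at
    city 0, returning the best (length, tour) found."""
    best_len, best_tour = float("inf"), None
    for order in permutations(range(1, len(dist))):
        tour = (0,) + order + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# A made-up symmetric distance matrix for four cities:
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(shortest_tour(dist))   # → (18, (0, 1, 3, 2, 0))
```

Verifying that a proposed tour has length at most 18 takes one pass over four numbers; finding it required examining every ordering.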
Returning again to speculative possibilities, the consequences of a proof that P does equal NP would be enormous. Many research problems in operations research, the field that applies mathematics to decision-making and efficiency, are NP-Complete, and a tractable solution would make optimal logistics simple. Similarly, protein structure prediction is NP-Complete, and a tractable solution would revolutionize biology.
Perhaps even more revolutionary: finding a mathematical proof of at most a given length is an NP problem, since a candidate proof is quick to check. A tractable proof-finder (proof-checking programs exist today, but searching for proofs with them is impractically slow) would enable the exhaustive settling of all mathematical questions whose proofs fit under a certain size limit, and that limit would continue to recede as computing hardware advanced.
Some might view these limitations as an attempt to rob speculative authors of some of their best tools. As I see it, however, a grounding of fictional computers in a realistic understanding of their limitations is to near-future authors what knowledge of relativity is to far-futurists. Rather than depriving the author of tools, I hope that this information will allow him or her to choose to abide within the established laws of this universe, or to consciously throw them aside and confront their implications head-on.
Alonzo Church, "An Unsolvable Problem of Elementary Number Theory," American Journal of Mathematics, vol. 58, 1936, pp. 345-363.
Alan Turing, "On Computable Numbers, with an Application to the Entscheidungsproblem," Proceedings of the London Mathematical Society, series 2, vol. 42, 1936, pp. 230-265. Reprinted in Martin Davis (ed.), The Undecidable, Raven Press, 1965.
Tien Kieu, "Quantum Algorithm for Hilbert's Tenth Problem," International Journal of Theoretical Physics, vol. 42, 2003, pp. 1461-1478.
Michael Sipser, "The History and Status of the P versus NP Question," Proceedings of the Twenty-Fourth Annual ACM Symposium on Theory of Computing (STOC '92), ACM, 1992, pp. 603-618.
Michael Sipser, Introduction to the Theory of Computation, 2nd ed., Thomson Course Technology, 2005.
Allan H. Brady, "The Determination of the Value of Rado's Noncomputable Function Sigma(k) for Four-State Turing Machines," Mathematics of Computation, vol. 40, no. 162, April 1983, pp. 647-665.
Shen Lin and Tibor Rado, "Computer Studies of Turing Machine Problems," Journal of the Association for Computing Machinery, vol. 12, no. 2, April 1965, pp. 196-212.
Tibor Rado, "On Non-Computable Functions," Bell System Technical Journal, vol. 41, no. 2, May 1962, pp. 877-884.
Oron Shagrir, "Super-tasks, Accelerating Turing Machines and Uncomputability," Theoretical Computer Science, vol. 317, no. 1-3, June 2004, pp. 105-114.
Arnold Schönhage, "On the Power of Random Access Machines," Proceedings of the International Colloquium on Automata, Languages, and Programming (ICALP), 1979, pp. 520-529.
P. Rodrigues, J. F. Costa, and H. T. Siegelmann, "Verifying Properties of Neural Networks," Artificial and Natural Neural Networks, LNCS 2084, Springer-Verlag, 2001, pp. 158-165.
B. Berger and T. Leighton, "Protein Folding in the Hydrophobic-Hydrophilic (HP) Model Is NP-Complete," Journal of Computational Biology, vol. 5, no. 1, 1998, pp. 27-40.