Friday, September 12, 2025

Hypercomputation and the halting problem

Explain this:

Hypercomputation—the hypothesis that Turing-incomputable objects can be computed through infinitary means—is ineffective, as the unsolvability of the halting problem for Turing machines depends just on the absence of a definite value for some paradoxical construction; nature and quantity of computing resources are immaterial. The assumption that the halting problem is solved by oracles of higher Turing degree amounts just to postulation; infinite-time oracles are not actually solving paradoxes, but simply assigning them conventional values. Special values for non-terminating processes are likewise irrelevant, since diagonalization can cover any amount of value assignments. This should not be construed as a restriction of computing power: Turing’s uncomputability is not a ‘barrier’ to be broken, but simply an effect of the expressive power of consistent programming systems.

The paragraph came from this paper’s abstract.

Here’s a plain-English unpacking of that paragraph.

What it’s saying, in short

“Hypercomputation” (machines that go beyond Turing machines by using infinite time, exotic physics, or magic oracles) doesn’t defeat the kind of impossibility shown by the halting problem. That impossibility isn’t about running out of time or memory; it comes from a self-reference trick (diagonalization) that any consistent, sufficiently expressive computing model is vulnerable to. If you add an oracle that answers some undecidable questions, you’ve merely stipulated answers for those cases; a new, harder halting problem pops up one level higher. So uncomputability isn’t a wall you break with more power—it’s a shadow cast by expressive, consistent systems themselves.

Why the halting problem doesn’t hinge on resources

The classic proof assumes a perfect halting-tester H(M, x) that says whether machine M halts on input x. You then build a diagonal program:

def D(M):
    if H(M, M) == "halts":   # H is the assumed (impossible) total halting tester
        while True:          # disagree with H: loop forever
            pass
    else:
        return               # disagree with H: halt immediately

Ask what D does on its own code D. Either answer for H(D, D) leads to contradiction. The key point: this doesn’t turn on speed, memory, or clever hardware. It’s a liar-paradox style self-reference that any consistent, expressive programming system can encode.
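
To make the self-reference tangible, here's a minimal runnable sketch in Python (my own illustration; make_diagonal and optimistic_h are invented names, not anything from the paper): hand any claimed total halting tester h to make_diagonal and it returns a program that h must misjudge on its own source.

def make_diagonal(h):
    # h(prog, arg) is any claimed *total* halting tester: it must return
    # True ("halts") or False ("loops") for every program/input pair.
    def d(prog):
        if h(prog, prog):    # h predicts that d halts on its own code...
            while True:      # ...so d does the opposite: loop forever
                pass
        else:
            return           # h predicts "loops", so d halts immediately
    return d

# Any concrete tester is refuted the same way; take a toy one:
def optimistic_h(prog, arg):
    return True              # claims every program halts on every input

d = make_diagonal(optimistic_h)
print(optimistic_h(d, d))    # True, i.e. "d(d) halts"...
# ...yet by construction d(d) loops forever, so optimistic_h is wrong on
# (d, d). Nothing here touched time or memory bounds: the refutation is
# purely structural, which is the resource-agnostic point above.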

Why “special values” or more outputs don’t help

Suppose your halting-tester can output three values: “halts”, “loops”, “undefined/special”. Diagonalization just adapts:

def D_star(M):
    v = H(M, M)              # now one of "halts", "loops", "undefined/special"
    if v == "halts":
        while True:          # contradict "halts": loop forever
            pass
    else:
        return               # contradict "loops" or "undefined": halt at once

Whatever bucket H puts M in, D* disagrees: if H says "halts", D* loops forever; if H says "loops" or "undefined/special", D* promptly halts, so the verdict is wrong (or the special label misclassifies a perfectly definite behavior). No matter how you classify the problematic self-referential instances, you can re-diagonalize around those conventions. Assigning a special value isn't solving the paradox; it's choosing a label for it.

Why oracles and “higher degrees” don’t make the problem go away

Give a machine an oracle that answers the ordinary halting problem. Great—you’ve decided that set by fiat. But now consider the halting problem for machines that also have that oracle. That “jumped” halting problem is again undecidable for them. In Turing-degree terms: you moved up a rung, and the diagonal argument climbs with you. Infinite-time models show the same pattern: they settle more questions, but their own halting (or “eventual stabilization”) problem is still out of reach relative to that model.

The philosophical punchline

So, calling Turing’s result a “barrier” suggests it’s about limited horsepower. The author is saying: it’s not. Uncomputability is a structural feature of any consistent, sufficiently expressive programming framework. Boost the power, add oracles, run for transfinite steps—you can always postulate answers to some hard set, but diagonalization will manufacture a new hard case just beyond your new reach.

If you’d like, I can contrast this with specific hypercomputation proposals (infinite-time Turing machines, Malament–Hogarth spacetimes, analog real-number machines) and show exactly where diagonalization resurfaces in each.

Sure, go ahead

Awesome—here’s a tour of the main hypercomputation proposals and how the same diagonal move reappears in each.

A reusable template (why diagonalization keeps winning)

For any programming model $\mathcal{M}$ that’s (i) expressive enough to run a universal interpreter for itself and (ii) consistent about what “halts” means, assume you have a total halting tester $H_{\mathcal{M}}(P,x)$. Build the diagonal program

def D(P):
    if H_M(P, P) == "halts":   # H_M: the assumed total tester for model M
        while True:            # contradict "halts": loop forever
            pass
    else:
        return                 # contradict "doesn't halt": halt

What does D(D) do? Either answer makes H_M wrong. This is resource-agnostic: no time/memory bounds matter. Any time you boost the model, you can solve its lower-level halting sets—but the halting problem for that very boosted model pops up again. That’s the “jump.”

Below, I’ll show how that plays out in four popular hypercomputation stories.

1) Oracle Turing machines (Turing degrees)

Claimed power: Give a TM an oracle $A$ (a black box answering membership in some set). With oracle $0'$ (the classical halting set), the machine decides HALT for ordinary TMs.

Where diagonalization bites back: Define the relative halting problem HALT$^A$: “Does machine $M^A$ (with access to oracle $A$) halt on input $x$?” The Turing jump theorem says HALT$^A$ (denoted $A'$) is not decidable by machines with oracle $A$. Proof sketch = the template above but interpreted in the oracle model. So yes, an oracle lets you postulate answers to some undecidable set; diagonalization immediately manufactures a new, strictly harder halting set one jump higher.
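
As a hedged sketch of how the template relativizes (the names h_rel and oracle are my own), give every program access to the same oracle and run the diagonal construction one level up:

def make_relativized_diagonal(h_rel, oracle):
    # h_rel(prog, arg, oracle) claims to decide halting for programs
    # that may query this oracle; oracle is just a membership test.
    def d(prog):
        # d itself is allowed to call oracle(...), so it lives at the
        # oracle's own level, the level h_rel claims to cover.
        if h_rel(prog, prog, oracle):   # tester says "halts"...
            while True:                 # ...so loop forever
                pass
        else:
            return                      # tester says "loops", so halt
    return d

# h_rel must misjudge d on its own code, exactly as before: that is the
# content of the jump theorem, HALT^A (i.e. A') is not A-decidable.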

Takeaway: More answers don’t collapse uncomputability; they just move you up the degree ladder: $A <_T A' <_T A'' <_T \cdots$.

2) Infinite-Time Turing Machines (ITTMs)

Claimed power: ITTMs run through ordinal time: step $0,1,2,\dots,\omega,\omega+1,\dots$. At limit stages, a fixed convention (e.g., cellwise $\limsup$, head reset) gives a definite configuration. They can solve many sets beyond the arithmetical hierarchy (e.g., the ordinary HALT set, lots of $\Delta^1_2$ questions).
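
To see what the limit convention amounts to, here's a toy finite approximation in Python (my own illustration; a real ITTM takes a genuine $\limsup$ over all earlier ordinal stages): a cell that keeps flickering is 1 at stage $\omega$, while a cell that eventually settles to 0 is 0 there.

def limit_cell(history):
    # Cellwise limsup rule: at a limit stage, a cell holds 1 exactly
    # when it held 1 cofinally often before the limit. With a finite
    # prefix we can only approximate "cofinally often" by "recurs in
    # the tail of the prefix".
    tail = history[len(history) // 2:]
    return 1 if 1 in tail else 0

print(limit_cell([0, 1] * 500))           # flickers forever -> 1 at stage omega
print(limit_cell([1] * 5 + [0] * 995))    # settles to 0     -> 0 at stage omega

Definite, but conventional: the diagonal ITTM in the next paragraph simply disagrees with whatever this convention makes observable.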

Where diagonalization bites back: Consider supertask halting for ITTMs: “Does $M$ ever enter the halt state at some (possibly transfinite) stage?” Suppose there were a total decider $H_\infty$ for this. Build the diagonal ITTM D that consults $H_\infty(M,M)$ and then disagrees (halt if “doesn’t halt,” loop if “halts”). Same contradiction. In fact, ITTM theory explicitly defines a jump operator for ITTMs; their own halting-like sets are not ITTM-decidable relative to themselves.

Takeaway: The transfinite limit rule is just a convention for what to output at limit times. It expands what’s decidable, but the model’s own global halting set evades decision.

3) Malament–Hogarth (MH) spacetimes (relativistic supertasks)

Claimed power: In certain general-relativistic spacetimes, an observer $\mathcal{O}$ can, in finite $\mathcal{O}$-time, receive a signal from a probe traveling a worldline of infinite proper time. Run a normal TM on the probe; if it halts, it sends a signal; if it loops, no signal arrives. Then $\mathcal{O}$ decides the ordinary halting problem in finite time.
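
The shape of the protocol, as a hedged Python sketch (supertask_signal is a made-up placeholder standing in for the physics; it is exactly what the MH spacetime is postulated to provide, not something implementable):

def supertask_signal(M, x):
    # Placeholder for the physical setup: the probe runs M on x along a
    # worldline of infinite proper time and emits a signal iff M halts.
    raise NotImplementedError("this is the postulated physics, not code")

def mh_decide_halt(M, x):
    # The observer waits a finite amount of observer-time and checks
    # whether the probe's signal arrived: a signal means "halts".
    return supertask_signal(M, x)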

Where diagonalization bites back: Define an MH-machine that can make one MH supertask call. Now ask for the MH-halting problem: does an MH-machine (possibly using its MH call) halt? If a total decider $H_{\text{MH}}$ existed, a diagonal MH-program could consult $H_{\text{MH}}(M,M)$ and then flip behavior, contradicting correctness. The physics grants you one shot at a supertask; it doesn’t immunize the model against diagonal self-reference at its own level.

Takeaway: Even if nature permitted MH setups, you’d just climb one rung (solve classical HALT) and immediately face MH-HALT, which is again out of reach for MH-machines.

4) Analog/real-number computation (BSS, GPAC, exact reals)

Claimed power:

  • BSS over $\mathbb{R}$: exact real arithmetic with equality tests; with real constants you can decide rich semialgebraic sets and more.
  • GPAC / differential analyzers: continuous-time systems (ODEs) that compute certain real functions, with results at finite times.

Where diagonalization bites back: These are bona fide programming models: they have universality and can encode self-reference (via indices or circuits). Their own halting/termination (or reachability/zero-testing) problems are undecidable in the model itself. The same short diagonalizer applies: assume a total BSS-halting tester $H_{\mathbb{R}}$, construct $D$ that contradicts it; similarly for GPAC reachability. In fact, Rice-like theorems hold: any nontrivial semantic property of programs/functions in a sufficiently expressive model is undecidable for that model.
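
As a toy of the exact-arithmetic style (a hedged illustration over rationals rather than genuine reals, so only a flavor of BSS): a program whose termination hinges on an exact equality test, not on any rounding or resource bound.

from fractions import Fraction

def doubling_orbit(x: Fraction) -> str:
    # BSS-style ingredients: exact arithmetic plus an exact equality
    # test. Iterate x -> 2x mod 1 and halt when the orbit hits 0; this
    # terminates iff x is a dyadic rational.
    while x != 0:
        x = (2 * x) % 1
    return "halted"

print(doubling_orbit(Fraction(3, 8)))   # dyadic: halts after three steps
# doubling_orbit(Fraction(1, 3))        # cycles 1/3 -> 2/3 -> 1/3 ... forever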

Takeaway: Exact reals don’t nullify diagonalization; they only shift which specific sets become decidable before the jump reappears.

5) Accelerating (Zeno) Turing machines

Claimed power: Do step $n$ in time $2^{-n}$ (for $n \ge 1$), so $\sum_{n \ge 1} 2^{-n} = 1$ and infinitely many steps fit into one second. Simulate a classical machine; if it halts, flip a flag; if it never halts, the flag stays down. Reading the flag at time 1 solves HALT.
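
A finite-cutoff illustration of the schedule in Python (my own toy; an actual Zeno machine would complete all infinitely many steps by time 1):

def zeno_clock(num_steps: int) -> float:
    # Step n (n = 1, 2, ...) costs 2**-n seconds, so every finite
    # prefix of the run finishes strictly before time 1; the full
    # supertask is deemed complete exactly at time 1.
    return sum(2.0 ** -n for n in range(1, num_steps + 1))

print(zeno_clock(10))   # 0.9990234375: ten steps take under a second
print(zeno_clock(50))   # ~1.0, still below it: time never runs out

The flag read at time 1 is then the model's new observable, and as the next paragraph notes, a diagonal program just arranges to falsify whatever a tester predicts about that observable.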

Where diagonalization bites back: People try “special limit conventions” (e.g., output is the $\limsup$ of the flag). But once you declare a precise observable for the whole supertask, define D that observes what the Zeno-halting tester predicts about D(D) at time 1 and then ensures the opposite flag. No choice of special “undefined/limit” value blocks the construction; you just re-diagonalize around it.

Takeaway: Last-step conventions don’t resolve self-reference; they merely name the paradoxical instances.

The unifying moral

  • Giving a model more answers (or time, or physics) can settle strictly more problems, including the classical HALT set.
  • But if the model remains a coherent, programmable system talking about its own programs, diagonalization reappears one level up. Formally, this is the “jump” phenomenon; philosophically, it’s why uncomputability is structural, not a resource shortfall.

If you want, we can zoom into any one model (say, ITTMs) and sketch the exact “jump” definition and a tight proof, or look at concrete examples (like an MH protocol that decides HALT) and then show how to repackage it into the diagonal D.
