Before Turing, there was the Original Sin of Kurt Gödel’s Incompleteness Theorem, which proved that no logical or mathematical system rich enough to express arithmetic can be both all-encompassing and consistent. It almost seems designed that in any intellectual pursuit we will always end up with the unknowable.
If the halting problem was solvable, then anyone with a computer could prove Goldbach’s conjecture in an afternoon.
Uh, nope. The halting problem could have been solvable with an astronomic complexity in the length of the program, and no one would have been able to afford the compute budget to run it on Goldbach.
First, write a bash script P1 that runs a SAT solver on every possible SAT problem and halts if any of them take exponential time.
This does not make any sense to me. If I tell you that SAT is in P but not what complexity it has, how can you possibly tell for any single SAT problem how long to wait to prove me wrong?
The compute budget case is covered in a footnote:
For all this I’m assuming that the halt detector runs in feasible time. It could be a galactic algorithm that provably checks halting but runs in too long a time to be usable. Even in that case, though, the existence of such an algorithm would reveal deep connections between all computable theorems, which would itself be one of the greatest developments in mathematics.
It’s not the same thing being discussed here, but y’all might like Impagliazzo’s discussion of average-case complexity, a departure from the common CS focus on worst-case complexity, where we ask whether a problem is, on average, as hard as another problem or effectively impossible. Most folks think that symmetric cryptography exists–i.e., that you can make a block cipher that doesn’t easily yield the key given known plaintext. I haven’t seen a survey of cryptographers or anything, but most probably think a successful block cipher can even be a pretty simple one. But we haven’t proven much about this at all! Then asymmetric cryptography is a whole other mess.
Where the halting problem is about stuff you can’t compute in principle, this is more about how we think it’s impractical to compute some things that are computable in principle (but haven’t proven much about it).
Quick definition of his “five worlds”: https://blog.computationalcomplexity.org/2004/06/impagliazzos-five-worlds.html
And an interview: https://www.quantamagazine.org/the-researcher-who-explores-computation-by-conjuring-new-worlds-20240327/
Worst case and average case were discussed a lot in my complexity theory courses as an undergrad. Quicksort is the canonical example of an algorithm that has great average-case complexity but much worse worst-case complexity. This also comes up a lot in security, where an attacker will often try to find the worst case for you. For example, there are a bunch of DoS attacks that rely on sending data with a distribution that will cause collisions in the hash function used for a hash table. If the hash table does secondary chaining, an attacker who can craft collisions can make the system go from the constant-time average case to the linear-time worst case.
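To make that concrete, here is a toy sketch in Python (ChainedTable and its deliberately weak hash are invented purely for illustration, not any particular real system): keys crafted to collide push every lookup into a single bucket, turning the constant-time average case into the linear-time worst case.
import itertools
import random
import string

class ChainedTable:
    # Toy hash table with separate chaining and a deliberately weak hash.
    def __init__(self, nbuckets=1024):
        self.buckets = [[] for _ in range(nbuckets)]

    def _bucket(self, key):
        # Weak, predictable hash: sum of character codes, so any anagram collides.
        return sum(map(ord, key)) % len(self.buckets)

    def insert(self, key, value):
        self.buckets[self._bucket(key)].append((key, value))

    def lookup(self, key):
        steps = 0
        for k, v in self.buckets[self._bucket(key)]:
            steps += 1
            if k == key:
                return v, steps
        return None, steps

benign = ChainedTable()
benign_keys = ["".join(random.choices(string.ascii_lowercase, k=8)) for _ in range(5000)]
for i, key in enumerate(benign_keys):
    benign.insert(key, i)

hostile = ChainedTable()
crafted = ["".join(p) for p in itertools.permutations("abcdefg")]  # 5040 anagrams, all collide
for i, key in enumerate(crafted):
    hostile.insert(key, i)

# Ordinary keys spread over many buckets, so a lookup scans only a short chain;
# the crafted keys all share one bucket, so a lookup can scan all 5040 entries.
print(benign.lookup(benign_keys[-1])[1])   # small
print(hostile.lookup(crafted[-1])[1])      # 5040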
This does not make any sense to me. If I tell you that SAT is in P but not what complexity it has, how can you possibly tell for any single SAT problem how long to wait to prove me wrong?
That’s an interesting question! The usual science answer is “measure really really carefully”, but I have no real idea how you’d do that here. Honestly, just counting the instructions consumed by each iteration as you increase input size should be able to tell you pretty quickly, but an average desktop is full of noise sources that can mess with that count. Still, with a carefully written SAT solver and a good profiler I think it’s feasible.
You are pointing out that it’s not easy to map from “how many computation steps should this take” to “how long to wait before timing out”. But I was pointing at something different:
Even if you know the precise asymptotic complexity of a problem, you cannot predict the number of computation steps it will take on any given input, because asymptotic complexity is a property “at the limit”.
If you know that something has polynomial complexity, you don’t know what the polynomial formula is to compute the number of steps (maybe the formula is 10^7 + 10^7 * x^1000), so you cannot predict the actual number of steps it should take, even at the limit.
Basically you cannot, after only a finite number of observations, disprove the claim that something has polynomial or O(n) or O(1) complexity, so I don’t see how you would turn this into a monitor to feed to the halting problem as claimed.
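As a toy illustration of that point (HORIZON and the two step-count functions are invented for the example): two cost functions can agree on every input you will ever measure and still have different asymptotics, so no finite benchmark settles the claim.
HORIZON = 10**6  # pretend this is the largest input any benchmark will ever reach

def steps_constant(n):
    return 42                                   # O(1) everywhere

def steps_eventually_huge(n):
    return 42 if n <= HORIZON else n ** 1000    # not O(1), but only past the horizon

# Every observation you can afford is identical for the two functions,
# so finitely many measurements cannot refute the claim "this is O(1)".
assert all(steps_constant(n) == steps_eventually_huge(n) for n in range(1, 10**4))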
If you can nest calls to the halting oracle then you can do “is f in O(g)” using the discrete definition of asymptotics, which is nice because there are no limits involved, just exists and forall.
Something like:
from turing import halts

def naturals():
    i = 0
    while True:
        i += 1
        yield i

def f_in_order_g(f, g):
    # f in O(g) if:
    #   exists n, k in naturals such that
    #     forall i in naturals,
    #       f(i+k) <= n*g(i+k)
    def n_machine():
        for n in naturals():
            def k_machine():
                for k in naturals():
                    def i_machine():
                        for i in naturals():
                            if f(i + k) > n * g(i + k): return
                    if not halts(i_machine): return
            if halts(k_machine): return
    return halts(n_machine)
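Assuming that impossible turing.halts oracle really existed, the calls below would behave as noted (the example functions are invented for illustration):
# Hypothetical usage, since the oracle cannot exist:
# f_in_order_g(lambda x: 5 * x + 3, lambda x: x * x)   # True:  5x + 3 is in O(x^2)
# f_in_order_g(lambda x: x * x, lambda x: 5 * x + 3)   # False: x^2 is not in O(x)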
Amazing
The halting oracle can make a countably infinite number of observations!
Note that the existence of a halting oracle is a much stronger property than the decidability of the halting problem, as summarized by 0x2ba22e11 above:
If you can nest calls to the halting oracle
(If you only assume that the halting problem is decidable, then you cannot call halt on a function that itself uses halt, only on base programs.)
I think you meant to reply to the comment above mine?
Anyway yes that is a good explanation of the crux of why a halting oracle can solve just about any problem that we can write down. :)
Us poor software people, always yearning for the unattainable realm of eternal truth and perfect forms, inhabited only by mathematicians. Ah well, at least most days we don’t have to descend into the fiery cavern of unprincipled raw pragmatism inhabited only by electrical engineers.
Electrical engineers can at least build their own systems instead of reverse engineering others’, like biologists ;)
When I was a youth, I thought that atoms were some kind of adult conspiracy to make the universe more limited and boring, and I wanted to live in a world where space, time and matter are infinitely divisible. Then you could build machines with arbitrarily small components, which could perform computational steps arbitrarily quickly. In my preferred universe, this would allow you to perform an infinite number of computational steps in a finite amount of time. No need to use a halting oracle to prove or disprove Goldbach’s conjecture: just brute force it by testing all of the countably many integers! I wasn’t a very sophisticated thinker, so I never addressed the issues of power consumption or cooling for a machine that can do infinite work in finite time.
You might like Simon Tatham’s infinity machine
Right, that’s what I originally had in mind: infinitely subdividing time the way Zeno does in his paradox, to perform computations.
But how do you design the laws of physics so it’s possible to build such a machine? That’s why I wanted infinitely divisible matter, space and time – no atoms or quanta. And then, what are the implications for life that evolves in this universe, including intelligent life, if the mechanisms of life are not restricted to what a Turing machine can do?
When I was a youth, I thought that atoms were some kind of adult conspiracy to make the universe more limited and boring, and I wanted to live in a world where space, time and matter are infinitely divisible.
To be fair, you’re kind of right, it’s just that going any further down involves quantum physics and we don’t really have a good understanding of that yet. But I don’t think it’s unreasonable to assume that we’ve barely skimmed the surface of discovering what our reality is made of, or even if our reality exists in the way we think it does in the first place (as you mentioned, it’s not only atoms.)
What really brought it home to me was framing it as “you can’t always prove whether a particular instruction/function in a program will be called”. If you could, you could replace the instruction with halt, and solve the halting problem. But if you’re optimizing a program, verifying a program, any time you have to inspect a program without running it, then there’s always gonna be cases where your clever proof doesn’t work.
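One way to make that reduction concrete, with hypothetical names (reaches and probe are invented here, since no such decider can actually exist): a decider for “will this function ever be called?” would immediately give a decider for halting.
def probe():
    pass

def would_halt(p, x):
    # Pretend reaches(program, fn) could always tell us whether fn gets called
    # when program runs (that is the hypothetical, impossible part).
    def wrapper():
        p(x)      # run the program we are asking about
        probe()   # this call is reached if and only if p(x) halts
    return reaches(wrapper, probe)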