**Key insights:**

- The Halting problem (or something related, anyway) is the "non-time-travel equivalent" of the Grandfather paradox.
- A computer being able to "conceive" of a statement means being able to formulate the statement as a sentence in some language. A computer being able to "believe" it means the statement being provable in some theory.
- Gödel's incompleteness theorems are the natural abstraction of this sort of paradox into model theory. The first theorem tells you that you can't predict your own long-run behaviour; the second tells you what mistake you're making when you think you can.

The essence of the Grandfather paradox, in a universe with time travel, is this:

Suppose I precommit to causing an inconsistency in time -- e.g. if I don't get a note from the future, I promise to send myself one, and vice versa if I do. Well, now what?
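One toy way to see the inconsistency is to enumerate every possible "timeline" and check whether any of them is self-consistent under the precommitment. This is just a sketch (the variable names are mine, purely for illustration):

```python
# The precommitment: send a note to my past self iff I did NOT receive one.
# A timeline is self-consistent iff the note sent equals the note received.

consistent_timelines = []
for note_arrives in (False, True):       # did past-me receive a note?
    note_sent = not note_arrives         # my precommitted behaviour
    if note_sent == note_arrives:        # the note sent IS the note that arrives
        consistent_timelines.append(note_arrives)

print(consistent_timelines)  # -> []: no self-consistent timeline exists
```

The precommitment forces a fixed point `note_sent == note_arrives` that the rule `note_sent = not note_arrives` can never satisfy — that's the paradox in one line.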

Time travel isn't real, but in a sense every phenomenon involving time travel has a perfectly causal equivalent that involves only *predicting* the future, not actually receiving a message from it (e.g. the financial industry is all about achieving the effects of time travel as well as possible in a universe without it). Thinking about time travel helps shed light on such situations, e.g. making it obvious why you should one-box in Newcomb's problem.

So the "non-time-travel" version of the Grandfather paradox looks like this:

I predict my behavior at some future point in time -- then violate the prediction.

This is the **Halting problem** (although the precise formulation of the Halting problem is somewhat different -- in it, we have a god-man who claims he can predict the behaviour of *everyone*, and his skeptic decides to do the opposite of whatever he predicts -- our formulation instead looks at a universal Turing machine that can simulate both these programs).
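The skeptic's move can be sketched in a few lines of Python (the function names are mine, not a standard API): given any claimed halting-predictor, build a program that does the opposite of whatever is predicted about it.

```python
def make_contrarian(predicts_halt):
    """Given a claimed halting-predictor, return a program that defies it."""
    def contrarian():
        if predicts_halt(contrarian):  # predictor says: "you will halt"
            while True:                # ...so loop forever
                pass
        return "halted"                # predictor says: "you will loop" -> halt
    return contrarian

# Any predictor that says "contrarian will loop" is refuted on the spot:
contrarian = make_contrarian(lambda program: False)
print(contrarian())  # -> halted
```

(A predictor that says "halts" can't be refuted by running the program -- the refutation is that `contrarian` then never halts, which no finite observation confirms. That asymmetry is exactly why the Halting problem is about *non*-halting being unverifiable.)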

The *really* philosophically troubling bit is this -- *WE KNOW* what we will do, even if we haven't "derived it from the laws of physics". E.g. suppose we follow the algorithm:

I start humming. I search for a proof that "I will hum forever", and stop humming iff I find one.
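As a Python sketch (the proof search is bounded here so that the sketch terminates -- the real algorithm's search is unbounded -- and `proves_G` is a hypothetical checker for candidate proofs of "I will hum forever"):

```python
def hum(proves_G, max_candidates=10_000):
    """Hum while searching for a proof of G = "I will hum forever";
    stop humming iff such a proof is found."""
    for candidate in range(max_candidates):  # unbounded in the real algorithm
        if proves_G(candidate):
            return "stopped humming"         # found a proof of G -> violate it
    return "still humming"

# A consistent theory never proves G about this very program, so:
print(hum(lambda candidate: False))  # -> still humming
```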

I *know* that I will hum forever, and that I just won't find a proof that I will. Yet somehow this "knowledge" is itself *not* a proof? No axiomatic system can reflect my true understanding of the world? What?

When faced with a statement we "know but cannot prove" -- and especially when faced with a statement that we know, but also know that we cannot prove -- it's important to ask exactly *how* we know it in the first place. Surely there is *some* logic in our heads that lets us say "well of course I hum forever". What is our model of the system above?

If you think about it for a few seconds, our theory $T$ contains (or is at least as strong as) the following axiom:

$G\iff\lnot(T\vdash G)$

(where $G$ is the statement that I hum forever). Note that, obviously, this requires being able to formulate provability ($T\vdash G$) within the framework of $T$, which corresponds to being able to model myself. We "physically" know that this is possible, of course, but Gödel was also able to show it is possible in e.g. Peano Arithmetic.

So our axiom tells us that the statement $G$ *is* the statement "I am unprovable" (again -- such self-reference is possible, as we see empirically, or as Gödel constructed it in arithmetic) -- it basically *defines* $G$ to be the statement "I am unprovable".
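The self-reference step is a standard result, the **diagonal lemma** (an aside, not spelled out in the original): for *any* formula $\varphi(x)$ with one free variable, there is a sentence $G$ such that

$T\vdash\bigl(G\iff\varphi(\ulcorner G\urcorner)\bigr)$

where $\ulcorner G\urcorner$ is the code (Gödel number) of $G$. Taking $\varphi(x):=\lnot\mathrm{Prov}_T(x)$ yields exactly the sentence above: $T\vdash\bigl(G\iff\lnot\mathrm{Prov}_T(\ulcorner G\urcorner)\bigr)$.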

$G$ is called the **Gödel sentence**, and this construction immediately yields **Gödel's (first) incompleteness theorem**: a consistent $T$ cannot prove $G$ (and, with a little more work, cannot disprove it either).

Now -- how do *we* (as external observers, whatever that means) "know" that $G$ is true? What is the reasoning in our brain that tells us that $G$ is true?

Our reasoning is that if the program does halt, then it must have found a proof that it doesn't. And we believe that $T$ is a true model of reality -- if it proves that a program doesn't halt, the program really doesn't halt. In other words, we argue: $\lnot G \implies (T\vdash G) \implies G$, which is a contradiction. So we believe that $G$ must be true.

The *only* additional assumption we made was "if $T$ proves that a program doesn't halt, it really doesn't halt", i.e. $(T\vdash G) \implies G$. This is known as **soundness**, and what we have just proven is that the system $T$ itself **cannot prove its own soundness** (unless it is itself unsound) -- this is **Gödel's second incompleteness theorem**. Because if I *believed* that I was sound, then I would believe that I will never stop humming (because if I stopped humming, I must have believed that I wouldn't stop, and since everything I believe is true, that would mean I won't stop -- a contradiction), and based on that belief I would stop humming, which would make me unsound.
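Written out as a derivation (a sketch; $\mathrm{Sound}_G$ abbreviates the single soundness instance needed, namely $(T\vdash G)\implies G$):

1. Suppose $T\vdash\mathrm{Sound}_G$, i.e. $T\vdash\bigl((T\vdash G)\implies G\bigr)$.
2. By the axiom, $T\vdash\bigl(G\iff\lnot(T\vdash G)\bigr)$, hence $T\vdash\bigl(\lnot G\implies(T\vdash G)\bigr)$.
3. Chaining 2 and 1: $T\vdash(\lnot G\implies G)$, hence $T\vdash G$.
4. But then the proof search finds a proof of $G$, the program stops humming, so $G$ is false while $T$ proves it -- $T$ is unsound.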

(But I *can* believe that someone *else* is sound. So in particular, a theory can believe in the soundness of the "rest of the theory" (excluding the soundness axiom), and this is what "going to a stronger theory" is all about. The stronger theory still can't prove *its own* soundness, but it can prove the weaker theory's soundness. So that's how *I* can consistently believe that someone else will never stop humming.)
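The "tower of stronger theories" this suggests can be written down directly (a standard construction, not spelled out in the original):

$T_0 := T,\qquad T_{n+1} := T_n + \{\mathrm{Prov}_{T_n}(\ulcorner\sigma\urcorner)\implies\sigma \;:\; \sigma\text{ a sentence}\}$

Each $T_{n+1}$ proves the soundness of $T_n$ -- so it *can* prove that the $T_n$-based hummer hums forever -- but still cannot prove its own soundness, and the ladder never closes.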

"But I *do* believe that *I* will never stop humming!" Yes -- but your belief can't quite be *certain*, because your belief in your own soundness can't be certain, and this gives rise to the whole field of **logical uncertainty**.
