Positivism as the rejection of anthropic reasoning

The Sleeping Beauty problem
The following experiment is performed on Sleeping Beauty: on Sunday, she is put to sleep, after which a coin is flipped.

If the coin came up heads, she will be woken up once on Monday. If the coin came up tails, she will be woken up twice, on Monday and Tuesday.

Each time she is woken up (with no indication given of what day it is or which waking this is), she is asked for the odds that the coin came up heads; then her memory of the waking is wiped and she is put back to sleep.

Given that she knows how the experiment works, should she answer odds of 1/2 or 1/3?
The answer is obviously 1/2. That is the prior probability of the coin coming up heads, and being woken up gives her no new information -- it just tells her that she's been woken up at least once, which she already knew would happen. If she expects that her odds upon being woken up will be 1/3, then her odds would already have to be 1/3 before the experiment starts (this is, of course, a standard trick).

$$P(H|W\ge 1)=\frac{P(W\ge 1|H)\cdot P(H)}{P(W\ge 1)}=\frac{1\cdot P(H)}{1}=P(H)=\frac12 $$
A very large number of people, however -- including on LessWrong -- argue that Beauty should answer "1/3", or that "both answers are right, depending on how you formulate the problem", or something of that flavour.

But this is a perfectly well-posed problem -- both answers cannot be right. The LW post is right that this is a case of "I notice that I am confused", but wrong about what the confusion is actually about.

I don't find the problem itself confusing. I find people's minds and people's intuitions confusing -- including my own, because I can certainly see the intuition for 1/3.

One might say that this is uninteresting -- it doesn't matter what you "feel", the truth is the truth.

Well, an important scientific skill is to correct your intuitions to reflect reality. When you learned about relativity, you had to fix your intuition about fixed lengths and time intervals by thinking about four-vectors. When you learned about quantum mechanics, you had to fix your intuition about there being an absolute reality by learning about state vectors and how observation projects the state rather than revealing an underlying reality.

Because human heuristic reasoning -- understanding, skip-aheads -- is almost entirely carried out in terms of such mental models, i.e. intuition, I would go so far as to say that if you don't understand why your intuition goes wrong, you don't really understand the matter in the first place, because it is your intuition that reflects your understanding, your model of the physical world.



Fair bets

So why do you sometimes "feel" that the answer is 1/3, even when Bayes's theorem says it is 1/2?

I would explain my intuition in terms of Monte-Carlo simulations: I think -- "probability (kindasorta) means how many times will something be true if repeated, right?" So if we repeat the experiment 100 times, then out of the 150 times she's woken up, 50 times the answer would be "heads" and 100 times the answer would be "tails".
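
Here is a minimal sketch of that Monte-Carlo picture in Python (the 50/100/150 above are, of course, only expected counts); it also tallies the betting payouts discussed next:

```python
import random

N = 100_000  # number of repetitions of the whole experiment

heads_experiments = 0   # experiments where the coin came up heads
total_awakenings = 0    # total awakenings across all experiments
heads_awakenings = 0    # awakenings at which the coin is in fact heads
pts_always_heads = 0    # total payout of the "always bet heads" strategy
pts_always_tails = 0    # total payout of the "always bet tails" strategy

for _ in range(N):
    heads = random.random() < 0.5
    wakings = 1 if heads else 2        # heads -> one awakening, tails -> two
    total_awakenings += wakings
    if heads:
        heads_experiments += 1
        heads_awakenings += wakings    # the single heads awakening
        pts_always_heads += 1          # one correct heads bet
    else:
        pts_always_tails += 2          # two correct tails bets

print("fraction of experiments with heads:   ", heads_experiments / N)               # ~1/2
print("fraction of awakenings that are heads:", heads_awakenings / total_awakenings)  # ~1/3
print("points per experiment, always heads:  ", pts_always_heads / N)                 # ~0.5
print("points per experiment, always tails:  ", pts_always_tails / N)                 # ~1.0
```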

Or in other words, if Beauty had the option to bet which way the coin had come up (e.g. she gets 1pt if her prediction is correct, 0pt if wrong), then if she consistently bet tails, she'd end up with 100pt, while if she consistently bet heads, she'd end up with 50pt.

And what of her making this bet before the experiment begins? What if she is given a choice, before the experiment starts, to either: (a) bet, each time she wakes up, that the coin came up heads, or (b) bet, each time she wakes up, that the coin came up tails?

Aha! But that's not a fair bet! Choosing heads means her bet pays off at most once (on the single heads awakening), while choosing tails means it pays off twice (once on each tails awakening) -- which is really a bet offering her 2:1 odds on the coin coming up tails -- so of course she should take tails.

But that still doesn't eliminate our entire confusion. Sure, from the perspective of SleepingBeauty-before-the-experiment, this is a bet offering 2:1 odds ... but from the perspective of SleepingBeauty-who-just-woke-up, there's no 2:1 odds, is there?



A prisoner's dilemma against a temporally-displaced copy of yourself that may or may not exist

When Beauty wakes up, she knows that there is a 1/2 probability that the coin came up tails, and so a 1/2 probability that there will be another time she'll be woken up, asked the same question and offered the same bet -- and a 1/2 probability that the coin came up heads, and so a 1/2 probability that there won't be such a time.

So she is really playing Prisoner's Dilemma against an identical copy of herself. If she chooses heads, then her twin -- who may or may not exist -- will also choose heads, because two identical copies cannot act differently. If she chooses tails, then that possibly-existent twin will also choose tails.

So if she (and her possibly-existent twin) choose heads, she wins 1pt in the heads branch, which has probability 1/2; if they choose tails, she wins 2pt in the tails branch, which also has probability 1/2.
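
Spelled out as expectations (just the numbers above, nothing new):

$$\mathbb{E}[\text{always heads}]=\tfrac12\cdot 1\,\text{pt}+\tfrac12\cdot 0\,\text{pt}=0.5\,\text{pt},\qquad \mathbb{E}[\text{always tails}]=\tfrac12\cdot 0\,\text{pt}+\tfrac12\cdot 2\,\text{pt}=1\,\text{pt}$$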

So if Beauty accepts evidential decision theory, she will, in fact, win, while also holding the true belief about the probability of a heads -- 1/2. Of course she will lose if she accepts causal decision theory, but that's fine -- causal decision theorists lose all the time.

(In fact there is a problem that even EDT seems to fail at, which I will discuss in a future post, but this has nothing to do with anthropics and Sleeping Beauties, so I don't believe it to be relevant here.)



Positivism vs. anthropic reasoning

The "halfer" argument can be considered a rejection of anthropic reasoning. Anthropic reasoning can be illustrated with simpler, less unwieldy examples:
  • Bostrom's Simulation Argument: If we are not living in a computer simulation, then it is unlikely that humans will ever make a large number of universe simulations, so we'll probably go extinct very soon or something. Also Boltzmann brains.
  • Celibate Adam: You are the Biblical Adam, and decide, on a whim, to procreate with Eve if and only if a coin toss comes up heads. So an anthropist Adam reasons that the coin toss will almost certainly come up tails, because what are the odds that he would have billions of progeny and just happen to be in this particular body?
The last riddle makes the problem with anthropic reasoning manifestly obvious, since the decision to only count human bodies and not animals, rocks, and random disparate sets of particles is a completely arbitrary one. 

Anthropic reasoning carries an underlying assumption that there is some metaphysical process that randomly allocates "souls" into human bodies. The basic belief of the anthropists is that metaphysical claims can be information -- like "I am conscious" (note that the basic problem here isn't "conscious" as much as it is "I").

Well, I obviously don't have much respect for this sort of fluff -- I simply reject this kind of thing at the level of epistemology.

The fundamental lesson of logical positivism is that your beliefs should not depend on your metaphysical gauge -- in particular, they should not depend on whether you believe in philosophical zombies (or rather, what you consider to be philosophical zombies), universal minds or many worlds.

[Figure: the model of the Celibate Adam problem, according to anthropists.]
And by the way, this also explains the "non-anthropic equivalents" that anthropists provide for anthropic problems -- these non-anthropic equivalents are just scenarios which make these metaphysical models real. For example, the "non-anthropic equivalent" of the Sleeping Beauty problem looks like this (and I encourage you to work it out before reading on):
There are two Awake Beauties. A coin is flipped -- if the coin comes up Heads, then one Awake Beauty is randomly selected for interview; if the coin comes up Tails, then both Awake Beauties are interviewed. You, who are one of the Awake Beauties, are interviewed -- what is your credence for the coin having come up heads?
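
(For reference, the straightforward Bayes computation for this variant -- here the random selection is an ordinary physical fact rather than a metaphysical one:)

$$P(H|\text{interviewed})=\frac{P(\text{interviewed}|H)\cdot P(H)}{P(\text{interviewed})}=\frac{\frac12\cdot\frac12}{\frac12\cdot\frac12+1\cdot\frac12}=\frac13$$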

[Figure: the model of the Sleeping Beauty problem, according to anthropists.]
And these sorts of "metaphysical models" are implicit in the betting-based arguments -- when you say, "people in computer simulations would benefit from betting that they are, and so should you", you are in effect putting yourself in the same category as the simulated people in order to set up the frequentist "experiment". But whether what the simulated people should do is correlated with what you should do is entirely a matter of your Bayesian prior, and there's nothing in the prior that requires anthropic considerations.

By the way, this is why frequentism cannot be a fundamental basis for defining probability. E.g. if you're trying to place odds on an unfair coin toss, and you say "well, my coin-tosses have been 75-25 so far, so that means the probability is 75-25", then you are making the arbitrary decision that the results of your previous coin-tosses predict the next one -- you are arbitrarily putting them in the same category. Your Bayesian prior is what justifies this categorization, this believed correlation, the belief that some weird systematic gust of wind won't affect your 101st toss; your Bayesian prior is what lets you do sample tests. Sample tests are not a definition of probability, and in the absence of sample tests -- i.e. without assuming known correlations between some observed phenomenon and the phenomenon you're trying to predict, as with questions like "what is the probability that we are in a simulation?" -- your prior is all that matters.
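
To make the role of the prior explicit in one standard (and in no way obligatory) model: assume the tosses are exchangeable draws of a coin with a fixed unknown bias $p$ and put a Beta prior on $p$; the prior is then exactly what ties the 101st toss to the previous 100:

$$p\sim\mathrm{Beta}(\alpha,\beta),\quad k\text{ heads in }n\text{ tosses}\ \Rightarrow\ p\,|\,\text{data}\sim\mathrm{Beta}(\alpha+k,\ \beta+n-k),\quad P(\text{heads on toss }n+1\,|\,\text{data})=\frac{\alpha+k}{\alpha+\beta+n}$$

With a uniform prior ($\alpha=\beta=1$) and a 75-25 record, that gives $76/102\approx 0.75$ -- but only because the exchangeability assumption ruled out the "weird systematic gust of wind" in advance.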

And similarly with Doomsday arguments: "the mediocrity assumption" or "the Copernican assumption" is just some basically arbitrary Bayesian prior (and in that case we do have additional evidence that correlates with whether the world will end or not, so we should be updating this belief, in whatever direction, and mediocrity should not be the basis of our decisions -- just as we have information on human life expectancies, so a five-year-old should not reason that he will die at ten).

In the Sleeping Beauty problem, if you replaced the monetary reward with something completely short-term, like a cookie, so that Beauty does not care about whether her other awakening gets one or not, then betting on tails no longer gives her an advantage over betting on heads. You might say "well, but if she always bets on tails, she gets twice as many cookies", but this is irrelevant -- there's no reason to regard that as a relevant frequentist experiment that affects her beliefs about the probability of the coin coming up heads. Her prior is 50-50, and no new (real, non-metaphysical) information has been introduced to her.
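
(Under her 50-50 credence, a single awakening's expected cookie is the same either way:)

$$\mathbb{E}[\text{cookie now}\,|\,\text{bet tails}]=P(T)\cdot 1=\tfrac12,\qquad \mathbb{E}[\text{cookie now}\,|\,\text{bet heads}]=P(H)\cdot 1=\tfrac12$$

The doubling only appears in the cross-awakening bookkeeping that, by construction, she no longer cares about.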
