Friday 25 April 2014

Newcomb's Problem

From time to time, I will use this blog to write about problems in philosophy which interest me. I have no particular objective with writings of this nature, except simply to discuss them. Today's topic is Newcomb's Problem, prolonged occupation with which continues to imperil the sanity of many researchers in philosophy and decision science. I first read about it in Robert Nozick's great book Socratic Puzzles.

Here it is: You are told that an entity of incredible intellectual ability (such as God or a space alien) can predict your actions with perfect accuracy. You are also told that you may enter a room in which there are two boxes marked A and B. Box A contains $1,000. The contents of Box B depend on what the intelligent entity predicted. If he predicted that you would take only the contents of Box B, it contains $1,000,000; if he predicted that you would take the contents of both boxes, Box B contains nothing. You go into the room with the boxes. Which box(es) do you open, and why?

It is irritating that different solutions seem to have some support in logic. On the one hand, since the entity of vast intellectual prowess does not make any mistakes, one should open Box B only: whoever is predicted to do so finds $1,000,000 there, whereas whoever takes both boxes finds Box B empty and walks away with a mere $1,000. On the other hand, since the prediction has already been made, one should open both boxes and take all that one can; if there is a million dollars in Box B, one's opening both boxes will not change that fact, and one pockets an extra $1,000 either way. As I say, these two arguments both appear to withstand logical scrutiny. This is a problem, because two valid analyses of the same problem should not lead to two different answers.
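To make the tension concrete, here is a small sketch (my own illustration, in Python, using only the payoffs of $1,000 and $1,000,000 from the statement above) that spells out the arithmetic behind each argument.

```python
# Payoffs from the problem statement.
A, B_FULL = 1_000, 1_000_000

# Argument 1: the prediction is never wrong, so one's choice and the
# prediction always agree.
one_box = B_FULL       # predicted to take only B -> B holds $1,000,000
two_box = A + 0        # predicted to take both   -> B is empty
print("If the prediction is always right:")
print(f"  open only Box B : ${one_box:,}")
print(f"  open both boxes : ${two_box:,}")

# Argument 2 (dominance): the contents are already fixed on entering the room,
# and in either fixed state taking both boxes yields exactly $1,000 more.
for b_contents in (B_FULL, 0):
    print(f"If Box B already holds ${b_contents:,}: "
          f"both boxes give ${A + b_contents:,}, Box B alone gives ${b_contents:,}")
```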

The way I think of this problem (and this is hardly original with me) is that it needs to be amended slightly so that the prediction is only almost certainly correct. This is because perfect prediction by outsiders and the free will required to make a choice of which box(es) to open are incompatible. Could one intend to open only Box B right up to the moment one steps into the room, and then take both? With the intelligent entity's predictive powers, such a change of mind would have been predicted. But if this is true, then the 'choice' of boxes is not really a choice at all, since if all one's future actions can be completely predicted by an outsider, it cannot be that one has free will. This would contradict the premise of the problem that one may choose which box(es) to open.

The premises 'choice' and 'prediction' can logically coexist only if the latter is made imperfect. That is, the intelligent entity must be wrong at least on occasion. If so, a really strong intention to open only Box B, abandoned immediately upon entering the room in favour of opening both boxes, could yield maximal payoff. Still, if the intelligent entity is almost perfect, one should be really wary of abandoning even a very strong intention.
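The arithmetic behind that wariness is easy to make explicit. The sketch below (again my own, not part of the original problem) computes the expected payoff of each choice when the prediction matches one's actual action with some probability p rather than with certainty.

```python
# Expected payoffs under an imperfect predictor.
# p is the assumed probability that the prediction matches one's actual choice.

A, B_FULL = 1_000, 1_000_000

def expected_value(take_both: bool, p: float) -> float:
    if take_both:
        # A correct prediction means an empty Box B; a miss means B was filled.
        return p * A + (1 - p) * (A + B_FULL)
    # A correct prediction means a full Box B; a miss means B is empty.
    return p * B_FULL

for p in (0.5, 0.5005, 0.6, 0.99):
    print(f"p = {p:<6}: only Box B = ${expected_value(False, p):>11,.0f}, "
          f"both boxes = ${expected_value(True, p):>11,.0f}")

# Only Box B comes out ahead whenever p > (A + B_FULL) / (2 * B_FULL) = 0.5005.
```

On these assumptions, taking only Box B already has the higher expected value once p exceeds 0.5005, that is, once the predictor is even slightly better than a coin toss.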

If one intends to open only Box B, even while knowing that, by the time one enters the room, whatever it contains will not be changed by one's actions, how is it possible to stay committed to opening only Box B? For the intend-then-switch scheme to pay off, one's realization that Box B's contents are already settled would have to take the intelligent entity by surprise. But even if one is slow of mind and does not realize until one enters the room that present actions cannot affect the past, this epiphany should have been predicted by an entity as smart as the one of Newcomb's Problem. Or, more precisely, the epiphany, followed by the decision to act upon it, should have been predicted.

It is good to be precise. The intelligent entity could reason that one will realize that the past cannot be affected and yet force oneself to choose only Box B. Is such a commitment credible? Again, by the 'choice' premise, it is clear that one can choose only Box B. But again, the past is unalterable, and once one has entered the room, opening both boxes will be the more profitable move. Do those claiming they would open only Box B think they can lull the intelligent entity into believing they won't actually open both boxes? Or perhaps they really are committed; or perhaps they really can.

Nozick likens the problem to that of a genetic predisposition to a certain illness. Suppose you may carry a gene that significantly shortens your life. Whether or not you carry it cannot be changed. You find out that a fondness for fishing is associated with a lower incidence of the unlucky gene. You hate fishing, but take it up anyway on the strength of this statistical fact. This obviously would not be rational, so why should one open only Box B in the analogous case? Perhaps the analogy breaks down if one imagines that, in the Gedankenexperiment with the boxes, one's thoughts and actions can influence the prediction right up until one gets to the room; but then does that not make the intelligent entity somewhat poorer a predictor, if he has to keep updating his forecasts?

It is hard to get away from the idea that one should try to commit to opening only Box B. But it is at least equally difficult to get away from the fact that both boxes should be opened. No wonder, then, that everyone fails to crack this nut.
