In 1993, when I was a graduate student, fellow student Josh Dever introduced me to a simple puzzle in decision theory called the "exchange problem" or the "two envelope paradox". It got under my skin.

You are presented with the choice between two envelopes, Envelope A and Envelope B. You know that one envelope has half as much money as the other, but you don't know which has more. Arbitrarily, you choose Envelope A. Then you think to yourself, should I switch to Envelope B instead? There's a 50-50 chance it has twice as much money and a 50-50 chance it has half as much money. And since double or nothing is a fair bet, double or half should be more than fair! Using the tools of formal decision theory, you might call "X" the amount of money in Envelope A and then calculate the expectation of switching as (.5)*.5(X) + (.5)*2X = 5/4 X. So you switch.

Of course that's an absurd result. You have no reason to expect more from Envelope B. Parity of reasoning -- calling "Y" the amount in Envelope B -- would yield the result that you should expect more from Envelope A. Something has gone wrong.
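Before diagnosing the fallacy, it may help to confirm the symmetry argument numerically. Here is a quick Monte Carlo sketch (mine, not from the post; the uniform prior on the smaller amount is an arbitrary assumption) showing that switching yields the same average payoff as keeping -- nothing like the 5/4 advantage the formula promises.

```python
import random

def trial(rng):
    """One round: draw the smaller amount, stuff the envelopes,
    pick one at random; return (payoff if we keep, payoff if we switch)."""
    x = rng.uniform(1, 100)        # assumed prior on the smaller amount
    envelopes = [x, 2 * x]
    pick = rng.randrange(2)
    return envelopes[pick], envelopes[1 - pick]

rng = random.Random(0)
n = 200_000
results = [trial(rng) for _ in range(n)]
keep_avg = sum(k for k, s in results) / n
swap_avg = sum(s for k, s in results) / n

# Both averages approach 1.5 * E[smaller amount]; switching gains nothing.
print(round(keep_avg, 2), round(swap_avg, 2))
```

With this prior, both averages come out near 75.75 (= 1.5 × 50.5), differing only by Monte Carlo noise.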

But *what* exactly has gone wrong? I've never seen a satisfying answer to this question. Various authors, like Frank Jackson and Richard Jeffrey, have proposed constraints on the use of variables in the expectation formula, constraints that would prevent the fallacious reasoning above. However, such constraints are impractically strong, since they would also forbid intuitively valid forms of reasoning such as: If I have to choose between (i) a gift from Person A and (ii) a coinflip determining whether I get a gift from Person B or Person C, and I believe that Person A would, on average, give me about twice as much money as Person B and half as much as Person C, I should take option (ii).

Terry Horgan and Charles Chihara have proposed less formal constraints on the use of variables in such cases, constraints that I find difficult to interpret and which I'm not sure would consistently forbid fallacious calculations (for example, in non-linear cases).

Many mathematicians and decision theorists have written interestingly on what happens *after* you open the envelope and see an amount. For example, could there be a probability distribution according to which no matter what amount you see, you should switch? That's a fun question, but I'm interested in the closed-envelope case, in diagnosing what is wrong in the simple reasoning above. No one, I think, has got the diagnosis right.

For Josh Dever's and my stab at a solution, see here (simplified version) or here (more detailed version).

For a list of on-line essays on this topic, see this Wikipedia entry. (This entry gives Josh and me credit for "the most common solution" -- does this mean that our unorthodoxy has become the new orthodoxy? -- and then shifts focus to the open envelope version.)

## Wednesday, May 16, 2007

### The Two Envelope Paradox

Posted by Eric Schwitzgebel at 11:45 AM


## 28 comments:

In my modest opinion I think I found something wrong in the reasoning. Let's say X (the amount in Envelope A) can be "a" or "2a". But in the formula for the expectation of the amount in Envelope B, you are using .5X and 2X. That means that if X were 2a, you would be saying that a possible value for Y is 4a, when that is not possible.

I think the formula cannot be as simple as that and has to take into account the conditional probabilities.

I am not sure if I have explained myself properly. And anyway, very nice post!

The post says: "Using the tools of formal decision theory, you might call 'X' the amount of money in Envelope A and then calculate the expectation of switching as (.5)*.5(X) + (.5)*2X = 5/4 X. So you switch. ... But *what* exactly has gone wrong?"

Let me first explain how I think about this problem.

There are 4 possibilities, depending on whether we pick the envelope with less or more money, and then for each of those possibilities, whether we keep our original choice or switch to the other envelope:

case 1: we have X and we keep it; pays out X

case 2: we have X and we switch; pays out 2X

case 3: we have 2X and we keep it; pays out 2X

case 4: we have 2X and we switch; pays out X

In all 4 cases, I've kept the value of X constant, which requires the 'before' scenario to be expressed as X when I have the lower-valued envelope and 2X when I have the higher-valued envelope.

The problem with the quoted formula is that it tries to squash these 4 cases into the 2 cases mentioned in the formula, and as a result, the X does not have the same value in its first and second instances in the equation.

Unpacking the left side of the formula ("(.5)*.5(X) + (.5)*2X"), I interpret that as saying:

(.5)*.5(X) ~= There is a 50% chance of ending up with one half X (X must therefore be defined as the value of the envelope with more money in this case, otherwise it wouldn't be possible to end up with one half).

(.5)*2X ~= There is a 50% chance of ending up with double X (X must therefore be defined as the value of the envelope with less money in this case, otherwise it wouldn't be possible to end up with double).

Obviously, the two values of X are not equal, since one instance of X has the value of the lesser-valued envelope, and the other has the value of the higher-valued envelope; thus we are not justified in using the same variable for both.

Note that the cases that pay out a higher value in the cases enumerated above are cases 2 and 3; for the 'before' valuation, case 2 has value X, and case 3 has value 2X. And yet the statement says it has value X before. If we wish to say that it has a definite value in the before valuation, then it would have to have the value 1.5X (since it has X in case 2 and 2X in case 3, and they are equally likely), or if we insist on calling the definite value X, we would have to change the values of the payouts.

Note also that the possibilities for paying out the lower value in the cases above are cases 1 and 4; likewise, the before valuations are X and 2X, so if we wish to state the before valuation as a single number, we must take the average and express it as 1.5X again (or call it X but change the payouts).

If we define X as the intermediate value, and we create a formula using that value, then we can set the initial X equal to 1.5 times its previous value -- so X for envelopes containing $1 and $2 would be $1.50 -- and construct the formula as follows:

.5 * (2/3) X ~= there is a 50% chance of ending up with 2/3 X (since X = $1.50 and $1 is 2/3 of $1.50).

.5 * (4/3) X ~= there is a 50% chance of ending up with 4/3 X (since X = $1.50 and $2 is 4/3 of $1.50).

This gives us the formula: .5 * (2/3) X + .5 * (4/3) X = .5 * 6/3 X = X

It seems, then, that there are 2 ways of thinking about what has gone wrong. We can say that the problem is that the value of X changes between its first and second appearances in the formula, and thus that the formula makes no sense; or we can say X does not change but that the payouts are incorrect, since we could interpret X as meaning "the intermediate value between high and low", and the payouts would then be (4/3)X and (2/3)X, instead of 2X and .5X.
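For what it's worth, both formulations above can be checked with exact arithmetic. A small sketch (mine, using the $1/$2 envelopes from the example; `fractions` avoids floating-point rounding):

```python
from fractions import Fraction

# Envelopes of $1 and $2, exact arithmetic so the equalities hold precisely.
low, high = Fraction(1), Fraction(2)

# Formulation 1: X is the smaller amount. The four cases give:
X = low
e_keep   = Fraction(1, 2) * X + Fraction(1, 2) * (2 * X)    # cases 1 and 3
e_switch = Fraction(1, 2) * (2 * X) + Fraction(1, 2) * X    # cases 2 and 4
print(e_keep == e_switch)   # True -- both are 3/2, so no gain from switching

# Formulation 2: X is the intermediate value 1.5, payouts (2/3)X and (4/3)X.
X = (low + high) / 2
e_switch2 = Fraction(1, 2) * Fraction(2, 3) * X + Fraction(1, 2) * Fraction(4, 3) * X
print(e_switch2 == X)       # True -- the expectation is just X again
```

Either way of keeping X consistent across the two terms dissolves the apparent 5/4 advantage.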

Anyway, I hope I'm not confusing the matter, but that's my analysis.

Fun and weird!

Here is what I am thinking...

If we look at what happens to the people who choose (say the envelopes contain $50 and $100), it is that they either gain or lose $50.

So, the correct analysis should end up with the conclusion that the result of our action is gaining or losing the same amount of money.

I will leave the easy part of producing such a correct analysis, and also the explanation of why the other one is not correct, as an exercise.

...

Just kidding :)

That paradoxical case between competing alternatives, probabilities known and unknown, rational models... resonates in my ears as "avant la lettre" neuroeconomics performed by Schwitzgebel and Dever.

As it is, an interesting reading (and progress in understanding rational choice) of the case would be to scan (neuroimage) the subjects while they choose their options, and characterize their personal aversion to loss and willingness to gain. That could inform us philosophically as well. Go ahead, Eric and Josh, and tell us what you find.

Hi Eric,

In the spirit of the puzzle I'm not going to look at any of the proposed solutions until I send this off. I put this paradox in the same category as those of Zeno. They are generated by blending the existential and the mathematical. You have the opening of the envelope/decision, the firing of an arrow, the winning of a race and the mathematical divisibility of that event by discrete moments or probability. In other words real time and mathematical time so to speak.

Apply Oscar rules. There is no decision until the envelope is opened. This is a single event. It doesn't have a beginning and an end that can be 'divided', except notionally. Well, you might say, is that not to re-write the script? No, it is to point out the obvious: that mathematics and the real world are different and that you have created infinite divisibility where there is none. Not that that's not a useful thing to be able to do.

Did you ever come across Mark Haddon's book on the adventures of the Asperger's (maths savant) boy, called 'The Curious Incident of the Dog in the Night-Time'? There is a discussion of the Monty Hall Problem in it. Is it a veridical paradox? No, I believe, along with the irate professors, that it is just a plain old paradox.

I agree, this is a fun and vexing puzzle.

By coincidence, I have been presenting it in my webcomic, Lump of Clay, this week.

Thanks for all the posts, folks!

Xurxo: Your suggested solution is very much like Frank Jackson's. Though appealing, I think it is an unnecessarily stringent requirement on the use of variables in expectations. For example, depending on one's priors, it might be violated in my gift case without (I'd suggest) impugning the merit of that calculation.

Joseph: I find your analysis interesting. There's no doubt that your "X" is a well-functioning variable and the original "X" is not. I'm quite sympathetic with the suggestion that the problem with the original X is that it is "not the same" between the two terms. But what does it mean to be "not the same" in the first and second terms of the equation? That, I think, is the key!

Tanasije: I agree with your proof that the fallacious reasoning *is* fallacious. Of course (as you point out) the trick is to explain clearly *why* it is so!

Anibal: If you can get me an fMRI machine, I'll gladly do it. Just ship to: Philosophy Dept, UC Riverside, Riverside CA 92521! ;)

Michael: I think the question, though, is why the envelope case is problematic and the gift case isn't (or doesn't seem to be). Both have the same abstraction-reality structure, no? The Monty Hall problem, I'm inclined to think (with most mathematicians and decision theorists), is just surprising, not paradoxical; and (despite superficial similarities) it is very different in logical structure.

Neat strip, Jonathan. I confess I hadn't seen it before!

I think that the key to understanding this is to recognize that the results are determined based on your first choice of envelope - any subsequent switching of envelopes results in 100% probable results depending on the outcome of the first choice. This is very different from halving or doubling a set amount.

That seems right. And yet, once you've made that choice, why can't you take X = the amount in my envelope as a new starting point?

To me the problem lies with Bayes's theorem whenever we attempt to apply it to the possibility of earning really large sums of money.

Assume we cast the problem so that we have a valid probability distribution in which Bayes's theorem says it's always rational to switch -- i.e., the expected value of switching is always positive.

The paradox arises because we assume that the value of money is linear, but this is just a simplification that works with normal amounts of money. It breaks down when we consider very large amounts of money.

This has NOTHING to do with the diminishing marginal utility of money to an individual. That is irrelevant in an efficient market, which we implicitly assume exists, because we assume we can (for example) securitise the game such that everyone is in the linear marginal-utility region.

Rather it arises because money merely represents real wealth which is finite in quantity. We use money in decision theory as a proxy for wealth or utility. This works perfectly if the amount of money is also fixed. However when considering really large amounts of money in our paradox, we must assume that, if required, the envelope-stuffer can simply issue more currency to satisfy the required probability distribution. If we don't make this assumption there is no paradox since the distribution is then bounded on the upside.

But at some point issuing the required trillions of dollars will be inflationary -- i.e., the value of the money will decline as we issue more of it.

If we replace the amount of money in the decision-theoretic framework with the value of the world's underlying wealth which it represents, it's easy to show the paradox disappears.

There is nothing wrong with the mathematics, and really nothing wrong with the decision theory once we realise we have made a small simplifying assumption that money is proportional to value. This assumption is fine, so long as the quantity of money is fixed or the amounts involved are small.

If we substitute the % of wealth of the whole world for the quantity of money in the decision theory then, with small amounts of money, you get the same results as with the standard decision theory. As the amounts of money become a meaningful % of the world's total wealth, the modified framework starts to produce different results which are intuitively more correct.

Even so there are some slight wrinkles with this approach to the two-envelope paradox, largely around whether you are allowed to burn the unopened envelope or not. But I think it provides a route to a mathematically rigorous solution to the paradox.

Such a solution requires no complex math and doesn't challenge our rational decision making approach in any important ways. It merely shows the difficulty we sometimes have in uncovering implicit simplifying assumptions.

Thanks, Kim, for the thoughtful comment. What if we do the same problem, but not with money? Suppose, for example, the envelopes are simply supposed to contain the numeric expressions, in some standard format, for two rational numbers, one of which is exactly half of the other? Wouldn't the same problem arise in calculating the expectation -- the expected numerical value -- of one envelope relative to the other? But now there's no issue about money.

Thanks Eric.

You've actually hit the nail on the head. When you change the rules to eliminate money, and just make it a game about numbers, the paradox disappears.

This is because the paradox contains a normative element from rational decision theory. You "should" always switch envelopes, because in doing so you increase your expected wealth. In fact symmetry tells us that you can't make money by switching, and common sense tells us that constantly switching and re-switching can't keep making you more money. It must be a zero sum game.

We cure the paradox in the money version of the game by replacing money in the rational decision theory framework with percentage of the total finite wealth of the world. This fairly obvious refinement makes the value of the distribution finite again which cures the problem.

With this refinement, while you will USUALLY swap envelopes once you have opened yours, sometimes you won't swap because the number in yours is so huge that you can already command such a large part of the wealth in existence that there's more downside than upside from swapping.

Since you don't always swap after opening your envelope you can't make the assertion that you don't need to open your envelope to know whether to swap, hence there is no paradox.

Assuming we all agree that is right, why doesn't the paradox re-emerge once we eliminate money and just make it a game about numbers?

As a preliminary point I note the paradox is not usually expressed in this way. It's usually viewed as a challenge to rational decision theory, which inherently involves money or some equivalent store of value. Without money (or equivalent) the paradox can't challenge rational decision theory.

But is it still a paradox at all?

Let's say we create the right sort of probability distribution -- one which sums to one but has an expected value of infinity.

We take two adjacent terms from that distribution and randomly write them, each one inside one of two envelopes.

Now we need a rule of the game which says "switch envelopes if the expected size of the number in the other envelope is larger than the expected size of the number in your envelope". There is no reason for that rule other than we made it up.

If you open your envelope, the expected size of the number in the other envelope will indeed be larger. So you will always switch if you open your envelope. But since you will always switch, you don't need to open the envelope.

So what?

So you can just go ahead and switch anyway, and re-switch again and again, thus endlessly increasing the expected value of your envelope?

Again, so what?

This does admittedly sound a bit paradoxical. Unless of course the expected value of each envelope is infinite to start with, in which case it's completely unexceptional.

It's just another one of those slightly weird-sounding results of dealing with infinity. It's meaningless to increase infinity. Infinity plus anything is still just infinity. So you can go ahead and increase it all day without changing it.

This may challenge our minds to grasp the concept of infinity, but it doesn't challenge any useful real world decision making process.

To reiterate, the paradox is only a paradox because it appears to challenge modern rational decision theory. It's a great paradox, because it does indeed show that a small, obvious refinement to the theory is required when dealing with the possibility of really huge numbers.

As a simple numbers game it's useful for showing how infinity creates some counterintuitive mathematical results, but it's hardly one of the world's great paradoxes.

I think one can get the paradox, still, with finite expectations and without money -- so I guess I disagree with you!

Suppose it's just a rational number in the envelopes (with one envelope twice the other), but your expected value for the numbers is still finite. Switching still doesn't increase the expected value.

It's a problem, I argue, with the use of variables within the expectation formula. I have an essay on this (with Josh Dever) on my homepage.

Thanks!

Thanks Eric

That's an amazing proposition. It should be fairly easy for someone to come up with a formal proof one way or another.

My contention is that if the probability distribution has a finite expected value then:

1. There are some situations where, upon opening your envelope, you will decide not to switch (hence there is no paradox that you should switch unopened envelopes).

2. More generally, in this case switching unopened envelopes always has an expected value of zero.

Only where the probability distribution has an infinite expected value can you apparently always increase the expected value by switching.

Anyway if that's wrong it should be fairly easy to show formally.

Yes, it is really sad to leave comments after such a long time, but I only discovered the site recently, plus I really like this paradox and it's one I've not met before.

Also, sorry for the 'loser length' post.

I'm taking it as read that other correct ways to address the question of whether to swap or not are easy to find but that the issue here is to show why the approach taken goes wrong.

Well...

The paradoxical formula is correct! Good one, eh? The thing is, it doesn't tell you what you think it is telling you.

You make the statement "you might call X the amount of money in Envelope A..." and indeed you might. But from now on the expectation formula only works for situations in which Envelope A contains (the fixed) X. And indeed, for all the situations in which Envelope A contains X, it *is* better to swap. Unexpected but true.

To make this obvious, let's do a couple of things. Firstly, never mind this namby-pamby doubling; we will put 100 times more in one envelope than the other. Secondly, we will actually open Envelope A and find out what's inside. Oooh look, X is $1 (one dollar).

So, you now have the choice of swapping, where you would get $100 with prob 0.5 or 1 cent with prob 0.5. Sticking will leave you with $1. Even without the math it is obvious that over multiple trials (where Envelope A contains $1) the best gain is by swapping. This might be counterintuitive, but it is true. The moment X is fixed at any one named value, it really is better to swap in the long run.

If X is $10 it is better to swap, because overall, in $10 situations we come out ahead. If X is $50 it is better to swap, because overall, in $50 situations we come out ahead. If X is any named amount, it is better to swap.

The formula is correctly telling us that for all situations where Envelope A contains X we should swap. In the original problem we don't know what X is. So when we find that the expected gain of swapping is 5X/4, we can't actually put a dollar value on that number, but whatever X is, the formula is true.

Now, we feel instinctively that in the original situation it cannot actually make any mathematical difference if we swap or not -- and this is correct too.

To marry the two parts together (formula: swap -> gain, versus common sense: swap -> no gain), I think we have to understand this:

The expectation formula is based on an idea of multiple trials. Since we have mentally fixed X at the start of this one trial, it only covers the trials where X is the same as in our current trial. It doesn't apply to all the other trials. But to get the 'common sense' answer, what we need is a formula based on this trial, the next trial, the trial after that and so on forever, without missing any trials out (or another method entirely).

If we want to get the common sense answer we just can't use the formula as it stands because X is (potentially) different in every trial but the formula keeps it always the same.
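The claim above -- that swapping wins once X is fixed at a named value -- can be illustrated with a simulation, though it only makes sense relative to some prior over envelope pairs. A sketch (mine; the 50-50 prior over the pairs ($0.50, $1) and ($1, $2) is an assumption, not part of the puzzle):

```python
import random

# Toy prior (my assumption): the smaller amount is $0.50 or $1.00 with equal
# chance, so the envelope pairs are ($0.50, $1) or ($1, $2). We then condition
# on our envelope holding exactly $1 and compare keeping vs. switching.
rng = random.Random(1)
kept, switched, n = 0.0, 0.0, 0
while n < 50_000:
    small = rng.choice([0.5, 1.0])
    pair = [small, 2 * small]
    pick = rng.randrange(2)
    if pair[pick] != 1.0:
        continue                     # only keep trials where we see $1
    n += 1
    kept += pair[pick]
    switched += pair[1 - pick]

print(kept / n)                  # exactly 1.0 by construction
print(round(switched / n, 2))    # near 1.25: with X fixed at $1, switching wins
```

Under this prior the conditional expectation of switching really is 1.25X; the unconditional symmetry is restored only when all the different values of X are averaged together.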

Hey, Chris, thanks for the comment! I don't disagree as much as you think. *If* you open the envelope and see an amount, then depending on the amount it may be the case that you think the probability of the other envelope having less is equal to the probability of the other envelope having more -- and then you should switch. But only very weird prior probability distributions allow that you'll grant those probabilities no matter what you see.

That's why Josh and I focus on the "closed envelope" case, where the issue is in some sense simpler (and thus in another sense more difficult).

Let's call the amounts in the envelopes {'x', '2x'}, and the amount in the envelope we pick 'a'.

If you ask me whether to switch or not in order to end up with the higher amount of the two envelopes, I reason in a Bayesian way:

50% chance I picked 'x': switching would result in '2x', so I gain 'x'. 50% chance I picked '2x': switching would result in 'x', so I lose 'x'. There is no expected gain in switching, so my answer would be: whether I switch or not, and how many times I switch, doesn't influence the result.

But you didn't ask me the above question. You asked me whether I should switch or not in order to go home with as much profit as possible. Again I go Bayesian on this:

100% sure I pick 'a' (given); 50% chance 'a' = 'x'; 50% chance 'a' = '2x'. This influences my a priori distribution. Where I know the amounts in the envelopes are {x, 2x}, I now also know the amounts in the envelopes are either {a/2, a} or {a, 2a}, with each situation equally likely. This changes my a priori distribution from well known and indicative to subjunctive. That is to say, the amount in the other envelope is either 'a/2' or '2a', depending on the a priori distribution, but in reality one of the 2 amounts is fictional and the other is real.

If the a priori distribution is not entirely indicative, we do not have enough facts to solve any puzzle in a Bayesian way. The flaw we make is calculating the expected gain over 2 subjunctive a priori distributions, where we know only 1 would in retrospect turn out to be indicative. My answer to the above question would be:

I do not have enough factual information to make a decision.

Let me elaborate a bit on this problem by making some statements:

1. if we were to gain, we would gain more than if we were to lose.

2. if we were to lose, we would lose less than if we were to gain.

3. if we were to gain, we would gain less than if the other person were to gain.

4. if we were to lose, we would lose more than if the other person were to lose.

5. if we were to gain, we would gain exactly what the other person would lose.

6. if we were to lose, we would lose exactly what the other person would gain.

Statements 1, 2, 5, and 6 are commonly understood. Statements 1 and 2 make us think we should switch, while statements 5 and 6 make us think switching doesn't matter.

But statements 3 and 4 are overlooked! I believe these last statements are the other side of the coin that represents the first two statements. Where those lead us to switching, these lead us to staying. In fact, I'm not sure whether statements 1, 2, 3, and 4 complement each other or should all be regarded as counterfactuals, which do not influence our expected utility.

My previous comment accidentally got published too soon, for which I apologize.

I'll provide some clarification about statement 3 and 4. Remember the value in my envelope is 'a'.

3. So I gained: this means I gained 'a' and now have '2a'. For my opponent this means he lost 'a', where if he had won, he would have gained '2a', bringing him from '2a' to '4a'. My gain is thus smaller than his would-have-been gain (a < 2a).

4. So I lost: this means I lost 'a/2' and end up with 'a/2'. For my opponent this means he gained 'a/2', where if he had lost, he would have lost 'a/4', bringing him from 'a/2' to 'a/4'. My loss is thus bigger than his would-have-been loss (a/2 > a/4).

Interesting observation about 3 and 4, Benjamin.

On your first comment, I'm not sure why you want to say either a/2 or 2a in the other envelope is "fictional". Of course we're dealing with uncertainties, so in general anything but the actual amount is in some sense fictional....

It's not always easy to distinguish a condition that is not met from an uncertainty.

If I throw a coin and heads falls, then tails is a condition that is not met. Before the throw you are uncertain whether heads or tails will fall, but this is your a priori distribution rather than uncertainty.

If you throw a coin, keep the outcome secret from me, and then present me a subjective problem where I need the outcome of your throw as part of my a priori distribution, this is an uncertainty for me. I have 2 possible counterfactuals: either you threw heads, or tails, but only one of these counterfactuals is the real situation.

In this envelope problem you write down x and 2x in some envelopes and present me an envelope at random. It is uncertain to me what you have written down. So I'm faced with 2 counterfactuals, (a/2, a) and (a, 2a), of which I know 1 is reality.

I hope it is clearer now what uncertainty really is, and how it affects subjective probabilities -- how it can even keep you from making any rational decision.

We have been disregarding the value of money in respect of the credit it corresponds to. If there is a finite amount of credit represented by an infinite amount of money (which is the case in this paradox), the value of money is the reciprocal of infinity, or plain zero. This means that whether we lose any finite amount of money or gain it, there is no loss or gain in credit. It is the expected utility in credit, though, that makes us richer or poorer. In most mathematical problems we ignore this fact, because we assume the value of money is a good indicator of the value of credit. This paradox manifests itself in a way where there is a world of difference between money and credit. Taking this into regard, one should not even be playing this game, because the money is worthless anyway.

Benjamin, you're losing me. How did we end up with "money is worthless" at the end of that? Feel free to mail any spare cash to me! ;)

Eric,

I hope to clarify my previous post with an example:

Let's call the unit of credit a goldpiece and the unit of money a dollar. Let's also assume there are 5000 goldpieces in the world, represented by 1000 dollars. This means each dollar is worth 5 goldpieces. Let's also say a car costs 100 goldpieces.

Suppose I've got 100 dollars, worth 500 goldpieces. I can buy 5 cars with this amount of money.

The national banks now start to print more money. We now have 2000 dollars representing our 5000 goldpieces. The credit represented per dollar is now 2.5 goldpieces.

Suppose I've got 100 dollars, worth 250 goldpieces. I can buy 2.5 cars with this amount of money.

Where in the latter case I can buy 2.5 cars, in the former case I can buy 5 cars with the SAME amount of money (100 dollars). This insight is necessary to realize that not money, but credit, is what we should examine in order to state how rich we are.

In the 2 envelopes problem any amount of money could be in the envelopes. This means there is an infinite amount of money, representing a finite amount of credit. The credit value per monetary unit is zero. This means that the value of one dollar, like the value of one million dollars, is zero. So whether we lose an amount of money or gain double that amount, both are zero-profit events.

In reality, though, we will never be in the situation of having an infinite amount of money in the world; this makes the problem purely theoretical.

However, from time to time we have witnessed massive printings of money, causing a very high devaluation. For instance, in Germany after the first world war, an egg cost a few hundred thousand marks.

When dealing with money in decision making problems, we must always question the economic background of the situation, in order to determine the value of money.

Expected utility should be based on gain or loss in credit, not in money.

Ah, I get it, Benjamin -- sorry for being dense. I'm inclined to agree, although some utility measurement may be even better than "credit" as you describe it.

However: I do not think that this (closed envelope) version of the two envelope paradox assumes infinite money or credit. The open-envelope version does, if you assume a function on which whatever value you see you think there's a 50% likelihood that the other envelope has more. Mathematically, I think the open envelope and closed-envelope versions are very different.

I'd like to know if this is "satisfactory" ...

One solution is this: Switching when you find a very low value is obviously a good plan. But if you see a value so large that it surprises you, then it might be a bad idea to switch. If the amount X in the envelope surprises you because it is very large... that's because you guess that the average envelope contains much less. The fact that you saw X is consistent with one of the following:

> The envelopes are better than I thought, and the prizes are X and X/2.

> The envelopes are much better than I thought, and the prizes are X and 2X.

If your "surprise curve" says that the odds of those two conditions are, say, 9:4 in favor of the first case, then you should not switch, as the costs of switching down exceed the benefits of switching up. Now, suppose you are surprised to see X, but you figure "I would be more surprised to see twice as much, but not *twice* as surprised" -- then you are assuming that the distribution of "what the value in an envelope could be" is very flat: the chance of "between 2X and 3X" exceeds half the chance of "between X and 1.5X". But then we can add in the missing intervals, and "the chance of being somewhere" diverges. I think that people do, psychologically, cling to their ignorance, and this is part of the fun of the paradox: in order to behave rationally, a person must guess a distribution of "what the value could be", and that distribution has to be tight -- gathering most of its support in a small area -- in order to be integrable. If at this moment you rebel and say "I'd rather keep my distribution wide than keep it integrable," let me ask you whether the probability that the prize is between 1 and a million coins is greater than 0.00001. If you believe that, or any similar statement, with the parameters "1", "a million" and "0.00001" changed to any values at all, then you believe in integrability. To not believe in integrability is tantamount to concentrating your distribution at infinity -- for any finite number, you really think that the prize is more probably bigger than that number.
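The integrability point can be made concrete with the standard Broome-style construction (my example, not from the comment): a genuine, proper distribution over envelope pairs under which, whatever amount you see, the conditional expectation of the other envelope is higher -- possible only because the distribution's overall expected value is infinite.

```python
from fractions import Fraction

# Put the pair (2^n, 2^(n+1)) in the envelopes with probability
# 2^n / 3^(n+1), for n = 0, 1, 2, ...
def p_pair(n):
    return Fraction(2**n, 3**(n + 1))

# The probabilities sum to 1, so this is a genuine distribution...
partial = sum(p_pair(n) for n in range(200))
print(float(partial))   # effectively 1.0

# ...yet, having seen 2^k (k >= 1) in your envelope, the other envelope holds
# 2^(k+1) with probability 2/5 and 2^(k-1) with probability 3/5, so:
def expected_other(k):
    up, down = p_pair(k), p_pair(k - 1)     # which pair are we in?
    p_up = up / (up + down)                 # = 2/5 for every k
    return p_up * 2**(k + 1) + (1 - p_up) * 2**(k - 1)

print(all(expected_other(k) == Fraction(11, 10) * 2**k for k in range(1, 10)))
# Whatever you see, the other envelope is "worth" 10% more -- but only
# because the distribution's expected value diverges.
```

This is the open-envelope phenomenon the post alludes to; the closed-envelope fallacy needs no such exotic prior.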

Another reason the paradox is compelling is because you should definitely switch from a median value. This is balanced in the expected-value calculation by the great sums you would lose by switching from a high value.

Odatafan: Nicely put. I think that makes a lot of sense, which is why I find the "closed envelope" version more interesting and puzzling than the more traditional "open envelope" version. In the closed envelope version, you don't get to see what is in Envelope A. Clearly, there's no point in switching. And yet there seems to be a compelling argument that you should switch. Figuring out what is wrong with that argument is the challenge!
