Wednesday, January 23, 2013

Oh That Darn Rationality, There It Goes Making Me Greedy Again!

Or something like that?

In a series of studies, David G. Rand and collaborators found that participants in behavioral economics games tended to act more selfishly when they reached decisions more slowly. In one study, participants were paid 40 cents and then given the opportunity to contribute some of that money into a common pool with three other participants. Contributed money would be doubled and then split evenly among the group members. The longer participants took to reach a decision, the less they chose to contribute on average. Other studies were similar, some in physical laboratories, some conducted on the internet, some with half the participants forced into hurried decisions and the other half forced to delay a bit, some using prisoner's dilemma games or altruistic punishment games instead of public goods games. In all cases, participants who chose quickly shared more.
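To make the incentives concrete, here is a minimal sketch of the payoff structure of the public goods game as described above (the function and parameter names are mine, not Rand et al.'s):

```python
# Public goods game as described: each of 4 players starts with 40
# cents; contributions go into a pool, the pool is doubled, and the
# result is split evenly among all group members.

def payoff(my_contribution, others_contributions,
           endowment=40, multiplier=2, group_size=4):
    pool = my_contribution + sum(others_contributions)
    share = multiplier * pool / group_size
    return endowment - my_contribution + share

# Free-riding while the other three contribute everything:
print(payoff(0, [40, 40, 40]))    # 40 + (2 * 120) / 4 = 100 cents

# Contributing everything along with everyone else:
print(payoff(40, [40, 40, 40]))   # 0 + (2 * 160) / 4 = 80 cents

# Universal free-riding:
print(payoff(0, [0, 0, 0]))       # 40 cents
```

Each cent contributed returns only multiplier/group_size = 0.5 cents to the contributor, so withholding is individually more profitable whatever the others do, even though universal contribution (80 cents each) beats universal free-riding (40 cents each).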

I find the results interesting and suggestive. It's a fun study. (And that's good.) But I'm also struck by how high the authors aim in their introduction and conclusion. They seek to address the question: "are we intuitively self-interested, and is it only through reflection that we reject our selfish impulses and force ourselves to cooperate? Or are we intuitively cooperative, with reflection upon the logic of self-interest causing us to rein in our cooperative urges and instead act selfishly?" (p. 427). Ten experiments later, we have what the authors seem to regard as pretty compelling general evidence in favor of intuition over rationality as the ground of cooperation. The authors' concluding sentence is ambitious: "Exploring the implications of our findings, both for scientific understanding and public policy, is an important direction for future study: although the cold logic of self-interest is seductive, our first impulse is to cooperate" (p. 429).

Now it might seem a minor point, but here's one thing that bothers me about most of these types of behavioral economics games on self-interest and cooperation: It's only cooperation with other participants that is considered to be cooperation. What about a participant's potential concern for the financial welfare of the experimenter? If a participant makes the "cooperative" choice in the common goods game, tossing her money into the pool to be doubled and then split back among the four participants, what she has really done is paid to transfer money from the pockets of the experimenter into the pockets of the other participants. Is it clear that that's really the more cooperative choice? Or is she just taking from Peter to give to Paul? Has Paul done something to be more deserving?

Maybe all that matters is that most people would (presumably) judge it more cooperative for participants to milk all they can from the experimenters in this way, regardless of whether in some sense that is a more objectively cooperative choice? Or maybe it's objectively more cooperative because the experimenters have communicated to participants, through their experimental design, that they are unconcerned about such sums of money? Or maybe participants know or think they know that the experimenters have plenty of funding, and consequently (?) they are advancing social justice when they pay to transfer money from the experimenters to other participants? Or...?

These quibbles feed a larger and perhaps more obvious point. There's a particular psychology of participating in an experiment, and there's a particular psychology of playing a small-stakes economic game with money, explicitly conceptualized as such. And it is a leap -- a huge leap, really -- from such laboratory results, as elegant and well-controlled as they might be, to the messy world outside the laboratory with large stakes, usually non-monetary, and not conceptualized as a game.

Consider an ordinary German sent to Poland and instructed to kill Jewish children in 1942. Or consider someone tempted to cheat on her spouse. Consider me sitting on the couch while my wife does the dishes, or a student tempted to copy another's answers, or someone using a hurtful slur to be funny. It's by no means clear that Rand's study should be thought to cast much light at all on cases such as these.

Is our first impulse cooperative, and does reflection make us selfish? Or is explicit reflection, as many philosophers have historically thought, the best and most secure path to moral improvement? It's a fascinating question. We should resist, I think, being satisfied too quickly with a simple answer based on laboratory studies, even as a first approximation.

13 comments:

dietl said...

My reasoning, given enough time, would go like this: If I give nothing I have nothing to lose but nothing to gain. But if I give everything I would only lose half of my money and could double up. Furthermore if I give all and every other person gives at least 10, which seems like to me, then I'm on the winning side.
So my reasoning would suggest giving everything.

dietl said...

"... which seems like to me,..."
like -> likely

Callan S. said...

If a participant makes the "cooperative" choice in the common goods game, tossing her money into the pool to be doubled and then split back among the four participants, what she has really done is paid to transfer money from the pockets of the experimenter into the pockets of the other participants.

Good point - does it show how selfish the people who put more money in without thinking were, in how little thought they gave to the fact that they were draining the experimenters' funds?

I guess you might want to try a fake 'natural' situation, where people are put in some situation where it's ostensibly a natural resource they get and there's some way of multiplying it. Though it strikes me that that'll be hard to replicate in a way that seems natural, as nature doesn't really work that way - that's another thing about the test, in how it measures 'selfishness' based on the abstractions of capitalist gain - there seems a moral conundrum in that! Using capitalism to measure if someone's selfish? Or if it's not capitalism, exactly why do you double your money? It's a pretty alien notion. Or is it somehow perfectly natural?

Also, why is it assumed it's selfishness that has people not put in? So when a conman offers a deal too good to be true, I'm selfish when I don't put my money into his ponzi scheme? I'm sure he'd like me to think that.

The notion of morality here, in regard to selfishness, seems childishly simple in that regard - as if you should just believe and give, rather than estimate whether other people are likely to fulfil their mutual social obligations (particularly as such obligations are essentially voluntary). It's the sort of thing that has people vote for politicians because of promises, but when it comes to the voters' own lives, they have to sign contracts (like, for a mortgage) rather than just make promises. A kind of notion that makes people put themselves into chains, while they let others off on promises (or even 'aspirations', as some politicians are wont to slip in).

Okay, spreading out wide a bit there! But the test seems juvenile in the morality it attempts to test.

I'll grant that most likely the person uses their own reaction as the measure of how other people react - the longer they think about it/the longer they say no, the more they project their own saying no onto others as their measure of how others would react.

Aldo Antonelli said...

The game you mention seems similar to the Nash bargaining game, with multiple equilibria, some of which are Pareto sub-optimal. It seems implausible to think that rationality has anything to say in such cases, especially untutored rationality of experiment subjects.

Eric Schwitzgebel said...

Aldo: You might be right about that. "Reflection" or "reasoning" would probably be somewhat more neutral terms here than "rationality". In fact, Rand et al. seem to be careful to stay away from the word "rational" except in their more gestural intro and conclusion.

Eric Schwitzgebel said...

Dietl: A lot of economists would reason differently, since on flat-footed money-maximization, keeping your money would be the more profitable choice regardless of what the other participants do.

Eric Schwitzgebel said...

Callan: I agree that the whole thing quickly becomes a murk of complexity in the ways you say, and other ways besides. For me, that's part of what makes it fun to think about.

dietl said...

Maybe that's the difference between a poker player's and an economist's reasoning.

Callan S. said...

Eric, I guess the test just triggered that 'Argh, how could you just judge someone as selfish over this - as if someone who doesn't just think the best of other people automatically == selfish' response in me. It gets to that quavering ground where it seems sound reasoning to question others' commitment, but at the same time it feels like being locked on the outside of what's good, in regards to the test makers. Hanging on the reasoning of right, for redemption! Jeez!

dan haybron said...

Nice post. Another large issue is whether the result applies across cultures. By many standards our culture applauds selfish behavior, and you could well get the opposite effect in other cultures.

Eric Schwitzgebel said...

Dan: Yes, that seems possible. Is there a good way to research this?

Anonymous said...

Joseph Henrich, Robert Boyd, Samuel Bowles and others wrote an article that looks at the standard model of economic self-interest in different cultures. It is entitled "'Economic Man' in Cross-cultural Perspective: Behavioral Experiments in 15 Small-Scale Societies." They find that the standard model fails in these other societies.

Samuel Bowles links to it through his website:
http://tuvalu.santafe.edu/~bowles/bbs_final.pdf

Eric Schwitzgebel said...

Thanks for the suggestion and link, Anon!