In a series of studies, David G. Rand and collaborators found that participants in behavioral economics games tended to act more selfishly when they reached decisions more slowly. In one study, participants were paid 40 cents and then given the opportunity to contribute some of that money to a common pool shared with three other participants. Contributed money would be doubled and then split evenly among the group members. The longer participants took to reach a decision, the less they chose to contribute on average. Other studies were similar: some took place in physical laboratories and some on the internet; in some, half the participants were forced into hurried decisions while the other half were forced to delay a bit; some used prisoner's dilemma games or altruistic punishment games instead of public goods games. In all cases, participants who chose quickly shared more.
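To make the incentive structure concrete, here is a minimal sketch of the payoff arithmetic in a public goods game of this kind. The group size (four), endowment (40 cents), and doubling follow the study as described above; the sample contributions are hypothetical.

```python
def payoffs(contributions, endowment=40, multiplier=2):
    """Return each player's final payoff in cents: what they kept
    of their endowment plus their equal share of the doubled pool."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

# Hypothetical round: one player keeps everything, three contribute fully.
print(payoffs([0, 40, 40, 40]))  # -> [100.0, 60.0, 60.0, 60.0]
```

The free rider ends up with the most money, which is what makes contributing the "cooperative" choice and withholding the "selfish" one in these designs.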
I find the results interesting and suggestive. It's a fun study. (And that's good.) But I'm also struck by how high the authors aim in their introduction and conclusion. They seek to address the question: "are we intuitively self-interested, and is it only through reflection that we reject our selfish impulses and force ourselves to cooperate? Or are we intuitively cooperative, with reflection upon the logic of self-interest causing us to rein in our cooperative urges and instead act selfishly?" (p. 427). Ten experiments later, we have what the authors seem to regard as pretty compelling general evidence in favor of intuition over rationality as the ground of cooperation. The authors' concluding sentence is ambitious: "Exploring the implications of our findings, both for scientific understanding and public policy, is an important direction for future study: although the cold logic of self-interest is seductive, our first impulse is to cooperate" (p. 429).
Now it might seem a minor point, but here's one thing that bothers me about most behavioral economics games of this kind on self-interest and cooperation: only cooperation with other participants counts as cooperation. What about a participant's potential concern for the financial welfare of the experimenter? If a participant makes the "cooperative" choice in the public goods game, tossing her money into the pool to be doubled and then split back among the four participants, what she has really done is paid to transfer money from the pockets of the experimenter into the pockets of the other participants. Is it clear that that's really the more cooperative choice? Or is she just taking from Peter to give to Paul? Has Paul done something to be more deserving?
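To put numbers on that worry, here is a hedged sketch of the net flows when a single participant contributes, on the assumption that the experimenter funds the doubling (which follows from the design as described, though the study does not frame it this way). The 40-cent contribution is a hypothetical example.

```python
def net_flows(c, group_size=4, multiplier=2):
    """Net change in cents, relative to not contributing, for the
    contributor, each other player, and the experimenter, when a
    single player contributes c cents."""
    pool = c * multiplier
    share = pool / group_size
    contributor = share - c        # her own share back, minus what she put in
    each_other = share             # a free share for each of the others
    experimenter = -(pool - c)     # the experimenter pays for the doubling
    return contributor, each_other, experimenter

print(net_flows(40))  # -> (-20.0, 20.0, -40.0)
```

On these assumptions, a full 40-cent contribution costs the contributor 20 cents net, hands each of the three other players 20 cents, and costs the experimenter 40 cents -- a paid transfer from Peter to Paul, as above.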
Maybe all that matters is that most people would (presumably) judge it more cooperative for participants to milk all they can from the experimenters in this way, regardless of whether in some sense that is a more objectively cooperative choice? Or maybe it's objectively more cooperative because the experimenters have communicated to participants, through their experimental design, that they are unconcerned about such sums of money? Or maybe participants know or think they know that the experimenters have plenty of funding, and consequently (?) they are advancing social justice when they pay to transfer money from the experimenters to other participants? Or...?
These quibbles feed a larger and perhaps more obvious point. There's a particular psychology of participating in an experiment, and there's a particular psychology of playing a small-stakes economic game with money, explicitly conceptualized as such. And it is a leap -- a huge leap, really -- from such laboratory results, as elegant and well-controlled as they might be, to the messy world outside the laboratory with large stakes, usually non-monetary, and not conceptualized as a game.
Consider an ordinary German sent to Poland in 1942 and instructed to kill Jewish children. Or consider someone tempted to cheat on her spouse. Consider me sitting on the couch while my wife does the dishes, or a student tempted to copy another's answers, or someone using a hurtful slur to be funny. It's by no means clear that Rand's studies should be thought to cast much light at all on cases such as these.
Is our first impulse cooperative, and does reflection make us selfish? Or is explicit reflection, as many philosophers have historically thought, the best and most secure path to moral improvement? It's a fascinating question. We should resist, I think, being satisfied too quickly with a simple answer based on laboratory studies, even as a first approximation.