In Milgram's famous experiments on the moral psychology of evil, he finds that obedience to commands to deliver extreme electric shocks to another person decreases as the person issuing the commands gets farther away from the subject and as the victim of the shocks gets closer. On the basis of his research, one might expect very low rates of obedience when the victim is in close physical proximity and the authority is issuing commands over the telephone.
This video is therefore doubly interesting. A man calls a McDonald's in Kentucky, purporting to be a police officer, and orders the assistant manager to strip search a teenage girl. Eventually, the assistant manager's fiance is brought in to replace the assistant manager, and at the command of the man on the phone he performs corporal punishment on the naked girl and has her perform sexual acts. This goes on for several hours. I recommend watching the video (which consists of security camera clips and interviews with the victim and assistant manager) to get a vivid sense of the events.
Several features of the situation may be working to increase compliance, despite the proximity of the victim and distance of the authority: the slow, stepwise progression (for the assistant manager), the existence of an apparently more knowledgeable person whose interpretation of the situation would frame his own (for the fiance), the obedience generally accorded police, and maybe (esp. for the fiance) an appealingly erotic aspect of the activity.
Only when a sufficiently skeptical outsider was brought in, with a different perspective on the situation, was the assistant manager able to reframe the situation and consider the possibility that the commands of the "authority" on the phone were not legitimate.
We shouldn't be too quick, I think, to assume that we would have seen through the ploy and resisted the man's commands....
Friday, April 13, 2007
Obedience and Evil in McDonald's
Posted by Eric Schwitzgebel at 5:53 AM
Labels: moral psychology
24 comments:
The link isn't working, at least for me. And you got me all interested to see it! After all, my view is that the sort of research in social psychology represented by Milgram's work (and presumably this situation) offers a potential challenge to free will and moral responsibility more interesting and significant than the sorts of threats philosophers typically discuss (e.g., determinism).
Still - I think some people are much more vulnerable to suggestion and less likely to question things.
Speaking of which...
try putting the ID (UFXeXK3szOk) into the video ID box.
I'm pretty dubious: there's probably either a lot more to this story, or a lot less.
GNZ
And some more research.... sorry to clog your comments with my developing thoughts...
they say (www.prisonexp.org/pdf/HBE-transcript.pdf) he tricked 70-100 people with similar calls, presumably none of which went this far, and 4 others resulted in lawsuits (http://www.msnbc.msn.com/id/13165455/) -- i.e., which obviously went too far, even if not this far.
I'm comfortable with an argument that says maybe 5-10% of people are dangerously suggestible via phone calls. (It makes me feel better, anyway!)
5-10% of people probably have a very low barrier to criminal activity anyway (Wes being a classic example).
GNZ
Thanks Eric. This was a very helpful “experiment” we have on camera – although I am not sure why the public has easy access to this. This footage and later interviews would seem to offer serious situational problems for judicial decision.
I like your analysis. But one more thought about these sorts of scenarios: I wonder if we are susceptible to err in our interpretation if we consider these scenarios to be strange abnormalities instead of simply a revelatory level of vividness we need to see what actually permeates our social world. Just because a more typical situation has not called for this sort of wild behavior does not mean that the force of authority, immediacy, fear of non-compliance, and the rest, will be absent; the social teeth are still there on my view. For example, the same forces are in operation before and after a nuclear bomb explodes. This is why I appreciate Zimbardo’s somewhat dispositionalist solution to a situationist problem; in extreme cases, it is easy for an outsider to see that there is something ‘wrong’ going on. But it is not clear that this outsider therefore possesses an understanding of why it is going on, which gives this outsider the same sort of susceptibility when the situation is viewed from the inside. Certain personalities and certain skill sets seem necessary to resist natural social forces of authority, group conformity, and other potentially violent mechanisms – even when they do just feel so good and so right on one level. Zimbardo writes:
“That means developing “discontinuity detectors,” a sense of awareness of things that don’t fit, are out of place in this setting, that don’t make sense to you. It means asking questions, getting the information you need to take responsible action. It is important also not to fear interpersonal conflict, and to develop the personal hardiness necessary to stand firm for cherished principles. Think not of getting into conflicts, but rather challenging others to support their means, their ends, and their ideology. Take nothing for granted, be a hard-headed behavioral accountant. Finally, those who resist unwanted situational influences are willing to admit mistakes, of being wrong, of cutting bait rather than sticking with prior bad decisions; thus, they do not have to rationalize away earlier commitments made without full awareness of their consequences. Our hardy band of resistors insist also on retaining their personal sense of identity and self-worth, of not allowing others to de-individuate them or to de-humanize them. . . .”
http://thesituationist.wordpress.com/2007/03/19/from-heavens-to-hells-to-heroes-%e2%80%93-part-ii/
Thanks for the comments, folks!
Eddy, I hope you can get it to work. The link is working for me on both my office and home computers.
Anon: Those are interesting data, but I'm not sure I'm comfortable with the 5-10% estimate. Many people who were wronged may prefer not to bring lawsuits or press charges (cf. the low rate of charges in rape cases). Also, the difference in rate may have to do with changes in the caller's methods (he might have tried some things that didn't work at first) or increased awareness that this sort of thing is going on (after the news broke), or other situational differences, rather than difference in the suggestibility of the subject.
(I'm more tempted to see a dispositional factor in Wes's behavior compared to the others on the videotape.)
Nice comment, Michael. I agree with you, but I also think there are some shortcomings in Zimbardo's advice. I've got a post cooking on that, but the short version is this: Evil can come as easily (in a different way) from non-conformity, from refusing to respect others' values and standing instead on one's own strange judgment, as from conformity. Zimbardo's advice seems to me to reduce the risk of evil by conformity by increasing the risk of some other sources of evil.
It seems to me important to distinguish between two things that are going on here:
1) The fact that some individuals were poor enough critical thinkers to believe that the person on the telephone really was a police officer giving legal orders, and
2) The fact that those same individuals believed that the mere circumstance that a police officer ordered them to do something like this was sufficient justification for them to do it.
Certainly, it is appalling that individuals such as the woman who detained her employee and ordered her to strip could be taken in by such an (apparently obvious) swindle. One also wonders how a young woman who was raised to follow the maxim of obeying all commands given by an adult managed to reach the age of 19 without encountering more serious problems than this.
But what I found especially disturbing was that the woman who callously humiliated her employee in this way felt it to be a sufficient defense of her actions that she believed she was following police orders. That shows (unless, as is possible, her lawyer advised her to keep playing dumb) that, in the event that an actual police officer came her way today and gave her good evidence of his official position on the force, she would still -- even after reflecting on this experience -- think it was right of her to follow these instructions all over again.
Her gullibility is horrid enough, but at least she recognizes that error now. The fact that she apparently doesn't recognize _even now_ that it would be morally wrong to follow such commands in any event is, I think, most troubling of all.
Justin,
I think most troubling is the fact that this footage represents who we all are and how our social world works, and how we are all capable of extreme evil given the right circumstances.
Sorry to post so many comments here, but I forgot to note my agreement with Eddy:
" . . the sort of research in social psychology represented by . . . this situation . . . offers a potential challenge to free will and moral responsibility more interesting and significant than the sorts of threats philosophers typically discuss (e.g., determinism)."
I think this is a great point to make here; it seems to me, just from this clip, that at least two parties have a significant narrative illusion about the kind of rational, deliberative control they had during the episode of evil.
Very interesting discussion, folks! I do agree with you, Michael and Eddy, that there's a potential challenge here to our ideas of free will and responsibility.
You raise some excellent points, Justin -- but I wonder also to what extent you are being taken in by the hindsight effect. It's obvious to us now, in retrospect, that she shouldn't have believed the man on the phone was a police officer -- but I've been suckered in my life before in ways that in retrospect seemed pretty stupid, and I don't think I'm a dispositionally stupid sucker!
I do wonder if the assistant manager would have thought that if she had acted on excellent evidence that the man was indeed a police officer, her obedience would have been appropriate -- that would have been an interesting question to pose to her!
Yes, Eric, I may be taken in by the illusions that come with hindsight and my belief that I am a fairly competent critical thinker (reminds me of the opening of Descartes 'Discourse': "Good sense is the most evenly distributed thing in the world, for all people suppose themselves so well provided with it that even those who are the most difficult to satisfy in every other respect never seem to desire more than they have"). That was why I cautiously said "apparently obvious" instead of 'obvious' -- although, admittedly, I feel I have some reason to believe I would not be taken in just because I have developed a suspicious turn of mind. But, correct or not, that wasn't my main point.
I'm still keen to know what people find most interesting here: the gullibility of the people involved (leaving aside the question whether we ourselves might be similarly gullible)? Or their obedience to authority? Do others feel those things are somehow connected? If so, in what ways?
I take it that we already have fairly good evidence that the assistant manager still feels it was her incorrect appraisal of the non-moral facts, rather than her obedience to authority, that was to blame here: as I recollect, when the interviewer asked (incredulously) why she acted as she did, the assistant manager's response was just to re-assert that she had been completely taken in by the caller's claim to be a police officer (as though that were morally sufficient).
Interestingly, when I taught introductory ethics courses at a community college in the States, I once had a student who closely resembled (in all kinds of ways) the assistant manager. I told my students about Milgram's experiments in some detail. Then, after giving the students some time to think about the implications of this, I asked them to consider the fact that they themselves were statistically quite likely to have shocked someone to death if they had been brought into a situation like what the Milgram experiment simulates. I pointed out to them that learning this startling fact about human tendencies in the face of authority gives them an opportunity to consider carefully their own tendencies toward such unethical obedience, and to try to put in place some sort of mental system of checks and balances so that they would be less likely to go along with such things. I then asked them how they thought one could go about doing this.
At that point, the woman in question said that she would certainly have gone to the end of the panel of switches and shocked the person to death. Impressed by this frank admission, I asked her how she reacted to this discovery about herself. She didn't seem to understand the question, or even to realize that this was morally problematic.
So I asked her why she wasn't troubled by this discovery. She explained that she had been raised to follow orders, and that if someone gives you a job to do, you've got to "get your ass in gear" and do that job. If that job involves torturing and killing people, so be it.
Unless she was a sort of 'classroom Borat', and I really don't think she was, this is (I think) very revealing. I think this mentality is quite common, and that for many people it is the weeding-out of that mentality that frees people from being potential Milgram-type killers.
So perhaps it isn't impossible to work out whether one would be likely to co-operate in that kind of situation -- perhaps tests to measure one's ability to separate what is ethical from what is urged by an authority, _and_ to separate what is _reasonable_ from what is urged by an authority, have good predictive value here.
And perhaps the two things are related, and the ability and _tendency_ to separate both these pairs of things is the first (and maybe greatest) gift philosophy has to offer.
And perhaps, also, the comment by the employee that she was raised to obey the commands of all adults -- which, interestingly, she didn't clearly call into question after this experience -- makes the victim and perpetrator in this case flip-sides of one another.
I continue to think, Justin, that it's hazardous to think of those who obey as different from oneself -- while at the same time I agree that there's something to be said for thinking about how one might cultivate in oneself a disposition to resist unethical commands. *Maybe* the McDonald's manager is different, but it is too easy and comforting to quickly come to that conclusion. And surely most of Milgram's obedient subjects would not have agreed with your student (as one can see from the interviews Milgram splices into his book).
I think of myself as relatively independent-minded, but I recall being struck at how cowed I was by the judge as I sat in a jury selection box. Among other things, I remember the judge reading through a list of questions, asking us to raise our hands if we would say "no" to any of them. Among the questions was something like this, "Do you agree to apply the law as instructed, regardless of your personal opinions?" I was almost shaking as I, alone in the courtroom, raised my hand....
To answer your first question, I find the obedience and gullibility both interesting -- and I think they are connected. Quite often, it seems to me, epistemic and moral vices are intermixed, and support each other, in acts of evil....
1) First, a side-note (not the main point of this post): perhaps I am mistaken in that brief comment I made (and then reiterated in the next response), but my belief -- however uncertain -- that I would be _less than likely_ to have been taken in by the caller is not something I have formed "quickly" or uncritically. I'm leery of jumping, from the premise that people can often be deceived about their own tendencies, to the conclusion that a) there is never any basis, even after careful testing, etc., for making any assessment as to how oneself (or someone else) may react in the face of authority, or that b) any efforts one might make to safeguard oneself against indiscriminately following authority should be deemed no more effective than doing nothing. It seems to me that you are, in a sense, implying both these things. However, again, that was not my point in either post.
2) I find it difficult to agree with your comment that "surely most of Milgram's obedient subjects would not have agreed with your student (as one can see from the interviews Milgram splices into his book)."
The post-experiment interviews are discussed in Chapters 5 and 7. Bruno Batta, Jack Washington, and Morris Braverman are subjects who didn't break off the experiment in Chapter 5, and in Chapter 7 it's Karen Dontz, Elinor Rosenblum, and Pasqual Gino.
Of these six subjects, it seems that five still had not worked out some time after the experiment that they had acted wrongly in obeying authority in this way. Only one (Morris Braverman) seems to have acknowledged this to some degree.
I find the views of the other five to be in keeping with the comments of the former student of mine I mentioned.
Oops... sorry, I realize now you are (apparently) assuming a) but not b) as mentioned in my comment.
I didn't mean the comments personally, Justin; I hope you're not offended! I'm quite ready to believe that in your case your comments arose from considerable deliberation. I apologize if I unthinkingly implied otherwise! I do have the concern at the *general* level, though.
On your second point: Well, shoot, I think you're right about those interviews! I should have gone back and looked more carefully at the text before saying what I did. But here's a thought, anyway: Your student may have been unusual in saying and thinking that in *prospect*. The fact that these subjects justified and excused their obedience in retrospect may reflect defense mechanisms or cognitive dissonance effects more than a general societal attitude about obedience. The *general* reaction, it seems to me, of my students and others, is not in line with the attitude conveyed by your student. I'm very reluctant to believe that your student was simply boldly expressing what the majority implicitly think.
No, I'm not offended, Eric -- don't worry! My concern wasn't with my own powers of autonomous reasoning in the face of authoritative pronouncements (whatever those powers might be). It was that your tone seemed too pessimistic.
I understand (and take) your point: most of us are poor judges of what our own reactions would be in most situations like this, and I daresay each of us is a poor judge of how he/she would react in at least some situations. But my concern is that, in adjusting our beliefs to accord better with the findings of psychological research, we not over-compensate.
In this regard, I still think it is important to distinguish between two separate things that are going on in the McDonald's case:
1) the obedience to authority, and
2) the gullibility.
In particular, Milgram's experiments suggest that some 2/3 of the population are willing to kill merely to obey authority (though as you say, the number decreases with proximity to the victim and absence from the authority). Also, 'anonymous' said that only 5-10% of the population are liable to be fooled by such phone calls.
Since being fooled and immorally obeying authority are two different things, I don't see why there is any tension between those two claims. I'm also curious whether it's merely your feeling that most people would follow such orders as the assistant manager received _if_ they believed those orders came from a genuine authority, or whether you also believe that most people would be fooled by the telephone call.
I think there are some interesting (but independent) discussions to be had on that second point, but I'm not sure whether it's part of the point you're making and that thread (about gullibility) probably belongs elsewhere.
Also, you wrote: "I'm very reluctant to believe that your student was simply boldly expressing what the majority implicitly think."
Can we talk about this? What are your reasons for doubting it? Keeping in mind that the people we meet professionally and socially are far from a fair cross-section of society, and keeping in mind also the post-experiment discussions Milgram reports, I'm not sure we really are justified in those doubts.
That raises another interesting question: Milgram reports that many people polled before the experiments predicted that the subjects would behave vastly more ethically than the actual subjects did. Is this because, as I assumed when I read Milgram, all the people these polled individuals took to be moral (upon careful acquaintance) had a hidden immoral side?
Or is it because, as I begin to suspect now, the people with whom the polled individuals had had serious moral discussions were just not a good cross-section of society (or because they hadn't had serious discussions with their acquaintances about ethics)?
I have encountered many people from all walks of life -- including some philosophers -- who quite explicitly (however unintentionally) invoke poor moral reasons for doing things in casual conversation. Non-philosophers, who are less guarded, do this even more. Every time, for instance, I hear someone say "That would be wrong -- you could go to jail for that!" or "Let's talk morals here: your country has to enter the modern world and understand that if it doesn't improve its human rights record, it will suffer from reduced trade", I get a sort of shudder and start to consider that the speaker might be a Milgram-killer in waiting.
Could I be right? Might there not be _somewhat_ reliable, and rather straightforward, ways of determining which individuals would throw the last (or the first) switch? I think we should consider this.
I think our fall-back position should be that our intuitions are broadly correct; and one strong intuition is that the ordinary moral justifications people offer bear _some_ connection with how they would act in Milgram situations (the post-incident comments of the assistant manager offer a case in point). I don't yet see a reason to reject that intuition.
Thanks, Justin, for your interesting comments -- and especially for your correction on the Milgram interviews.
I agree with much of what you say, though I'm not sure the gullibility and obedience issues are as separable as all that, especially if one thinks of sadistic or destructive desires as a common contributing factor to both the gullibility and the inclination to obey.
The reason I am reluctant to think your student's expressed view reflects most people's views is the failure of people to predict the Milgram results in advance, as well as my sense of their shock at the results. I don't buy that it's just that Milgram polled a select, especially moral group. I don't even think ethicists are especially moral (as you know).
I wouldn't say moral beliefs aren't motivating at all, but I suspect that the broadest ones ("be kind", "abide by the categorical imperative") are pretty close to ineffectual.
Interesting discussion!
Eric, you said:
"... I'm not sure the gullibility and obedience issues are as separable as all that, especially if one thinks of sadistic or destructive desires as a common contributing factor to both the gullibility and the inclination to obey."
Would you mind elaborating on this, Eric? I ask because I haven't heard anything about this connection (perhaps all I need is to be referred to some experiments, etc.), and so naturally I'm a bit skeptical. I have two concerns:
(a) Is it clear that sadistic and destructive desires really are linked to gullibility (perhaps separate from obedience and from destructive outcomes of the gullibility)?
(b) Is it clear that these desires _contribute_ to obedience to authority? Or could it be instead that those who (for whatever reason) follow the commands of an authority figure are simply often ordered to carry out destructive and sadistic acts?
Finally, and perhaps more related to the discussion, if it _is_ true that destructive and sadistic desires can be linked _causally_ with gullibility and obedience to authority, then wouldn't this support the position that we can predict, if crudely, the sort of people who would fall for or be controlled by Milgram-type situations?
Thanks for pushing me on this, laro. Consider this example:
Some German (circa 1940) hates Jews. An authority tells him (1.) that Jews are plotting the overthrow of the Reich, and (2.) to shoot a Jewish girl in the head.
Plausibly, his willingness to accept both (1) and (2) -- that is to be both gullible and obedient -- is enhanced by his hatred of the Jews. A Judenfreund (Jew friend) would plausibly be at least somewhat less likely to buy either of these things.
One excuse that people sometimes offer for ordinary Germans' compliance with genocidal commands was that they sincerely believed such-and-such about Jews. But this defense leaves out (as Nomi Arpaly has nicely pointed out) the hate-motivated epistemic irresponsibility plausibly involved in having such beliefs about Jews.
If you accept that, then the behavior of the German (analogously that of the McDonald's assistant manager) is not entirely driven by local features of the situation; but neither need such gullibility and obedience reflect a broadly gullible and obedient character.
It's a false dichotomy often invited in discussions of situationism (including perhaps my own?) to suppose that behavior must be motivated either by narrowly defined situational factors or broad personality traits. There's lots of stuff between or to one side (such as narrowly defined but stable attitudes).
In the McDonald's case, I think it's hard to know about the assistant manager whether something relatively stable made her behave differently than others would or whether most people in exactly the same situation would do what she did. For some reason, I'm more inclined to buy that something stable (though not necessarily a broadly defined character trait) was instrumental in the behavior of the fiance.
Thank you Eric.
You made a very good point. I can now see the connection between malicious desires and obedience to an authority and gullibility to suggestions that would support/fulfill these desires.
And it might be important to separate the case of the assistant manager from that of the fiancé. It does seem likely that he had sexual desires that played an important role in his obedience to the man on the phone.
I still tend to think, though, that if we bring these extra desires or motives into play in the McDonald's case, then we have to admit that we are moving farther away from Milgram’s experiments. (And, consequently, admit that the predictability of our involvement in similar situations will change as well.) It's hard for me to accept that _every_ participant in the first Milgram experiment had some sort of hidden desire to do harm (for no participant stopped before administering 300 volts). (And I’m not suggesting that this is your view either.) I tend to think that the Milgram experiments have more to do with straight-up obedience to authority and less to do with hidden desires.
So, if we add in desires to the McDonald’s case to account for the distance of the man on the phone, etc., as you suggest in the original post, then I also think it becomes necessary to change the question from “would I do the same thing in that circumstance, given the results of the Milgram studies?” to something more like: “in which situations would I be most likely to be gullible, obedient, and to do wrong, given that these are the beliefs and desires I hold?” and “how can I reduce that likelihood?”
And I suppose this is where your skepticism about our ability to conduct reliable introspection comes in. (And, I think, for good reason.) However, two things still seem possible:
1. We can avoid much introspection if we focus instead on the external evidence available and make some conclusions about the sort of individuals who are more likely to commit these acts. (Perhaps those with the relevant desires and beliefs already in place, who’ve been taught to obey authority without question, who’ve had little practice trying to be objective when listening to accusations, etc.)
2. When it does come down to judging our own likelihood to fall for these situations, we can at least look at the sorts of characteristics that would make us more and less likely to be gullible and follow orders without question, and, since we can learn, work at developing some stable character traits that are more resilient.
I apologize if I’ve repeated things that have already been said in this discussion. I do realize that in the end it seems to come down to how much trust we can put in introspection, how much we can learn and improve our introspection, etc.
Actually, Milgram's Experiments 12, 13, 13a and 14 (from Chapter 8) seem to rule out the possibility that most people have a hidden desire to fatally shock others and are just looking for a justification for doing so. To quote from Milgram, pp.167-168:
"In several of these experiments, subjects were given opportunities to shock the victim but did not do so unless the social structure of the situation was appropriately arranged.
"The key to the behavior of subjects lies not in pent-up anger or aggression but in the nature of their relationship to authority."
If Milgram's conclusion from these other experiments is correct, and I think he presents good evidence that it is, then it seems the sorts of 'obedience to authority' issues he discusses should be distinguished from the tendency for those who are hateful of certain people to be motivated by these characteristics to believe bad things about those people and to hurt them.
What is going on in the McDonald's scenario? It seems there are at least two distinct explanations:
1) The assistant manager had a secret desire to humiliate and harm the worker (or her underlings in general, or people in general); for that reason, she accepted the professed identity of the caller without employing the critical faculties she normally uses, and took advantage of the opportunity this afforded to act out her hidden (perhaps even to herself) wish.
2) The assistant manager suffers from both weak abilities at critical thinking _and_ a disturbing (however common) tendency to obey authority even to the point of doing things she would find unethical in other circumstances.
I find little evidence to support option 1). I did not get the impression in her interview that this was a normally intelligent and thoughtful person who just had a lapse in thinking. Everything I saw suggested to me that she could have been just as easily fooled if the caller had claimed to be the CEO of McDonalds and had insisted that the employee should be made manager for the day. Of course, I acknowledge that that is an empirical question...
It also seemed clear from the security footage that the assistant manager tried to console the employee at certain times, as though the assistant manager didn't wish to do those things but recognized that she had to.
For those reasons, option 2) -- the straight Milgram explanation -- looks much more plausible to me.
If I'm right, is there still a connection between the gullibility and the unquestioning obedience? I think so. I think both these things are characteristics of a person who has never learned a healthy disrespect for authority.
Wow, laro, great comment! I think you're right on the money.
I agree with your analysis of the Milgram interviews, Justin; and I take it you're largely agreeing with laro. To the extent the McDonald's scenario taps into pre-existing desires (especially in the fiance), it differs from the Milgram scenario. (Or plausibly so: I do wonder whether there might be some sadistic joy in the shocks, for some, but they need a direct command to shock at those levels to release their inhibitions. That said, I trust Milgram that their body language generally suggests otherwise.)
Regarding options (1) and (2), I'm more tempted by (1) for the fiance and (2) for the assistant manager, but I don't think we have the evidence really to know. And in thinking about (2), I want to be clear that it may or may not be a matter of *unusually* weak critical thinking or ability to resist authority. That too, I don't think we're in a position to know.
Thanks again for your interesting and thoughtful comments!
Although I am coming a bit late into this very interesting discussion, I felt the need to post a comment. I'm a social psychologist and was teaching my intro psych class about the Milgram experiment last spring when a very telling incident occurred. Specifically, I went through the whole experiment and talked about the need to question authority when it did not appear to be legitimate, etc. At the very end of class, a police officer walked in and asked all of the students to put their bags and purses up on their desks, saying he was going to search all of them. He didn't give a reason, didn't tell them they were allowed to refuse, etc. After each student was searched, they were allowed to leave.
At the beginning of the following class I asked why they had allowed the police officer to search their bags and why no one had questioned him. They seemed very shocked that they should have questioned him. Many said, "I didn't have anything to hide." Some didn't believe he was a real police officer and thought that I had just arranged for them to be tricked as some sort of psychology experiment. When I explained that it was a real police officer who seemed to be implicating them in some sort of wrongdoing, and that I didn't have anything to do with it, they seemed even more shocked. They also didn't seem to know that by law the police cannot search inside of things without probable cause or a warrant (I guess all of those Law and Order type shows aren't having an effect). All in all, they seemed surprised that questioning of authority would even have been called for in that situation.
So, with regard to the question of gullibility vs. obedience, I think they are often both at work. In this case the obedience did not hurt another person, but it did result in the students willingly giving up their civil rights. They saw the police uniform, which triggered automatic compliance with the request. Turns out a faculty member's wallet was stolen out of a nearby office and they thought they'd just search all of the students in nearby classes in case one was the thief.
Also, my opinion on where our attention is best focused in predicting this type of behavior: individual differences or situational variables? Of course I am a social psychologist, so I'm going to say situational variables. Although I would agree that all situations involve a person x situation interaction, I would say that in the majority of social influence studies, situational variables account for a larger share of the variance in explaining the behavior. Thus, if we are looking for the best predictors of individual behavior, we should look to situational variables.
I think an interesting avenue for future research would be reducing gullibility in these types of situations, which should reduce unquestioning obedience. I think gullibility really stems from people's lack of experience. That is, I approach these things from a very behavioral point of view: it all comes down to learning history. If it is a novel situation, they are going to look to cues such as authority, conformity, etc. However, if the situation is not novel and they have experience (either true experience, role playing, etc.) with appropriate behaviors in these types of situations, they are more likely to engage in those behaviors when confronted with a similar situation in the future.
Anyhow, these are some of the thoughts that crossed my mind as I read this post.
Wow! Thanks for the interesting comment, Knitty!