Tuesday, June 19, 2012

Chinese Room Persona

I just came across this passage in Karl Schroeder's science fiction novel Sun of Suns (from a conversation between high-tech engineer Aubri Mallahan and low-tech Admiral Chaison Fanning):
"Max is a Chinese Room persona, which makes him as real as you or I." She saw his uncomprehending stare, and said, "There are many game-churches where the members of the congregation each take on the role of one component of a theoretical person's nervous system -- I might be the vagus nerve, or some tiny neuron buried in the amygdala. My responsibility during my shift is to tap out my assigned rhythm on a networked finger-drum, depending on what rhythms and sounds are transmitted to me by my neural neighbors, who could be on the other side of the planet for all I--" She saw that his expression hadn't changed. "Anyway, all of the actions of all the congregation make a one-to-one model of a complete nervous system... a human brain, usually, though there are dog and cat churches, and even attempts at constructing trans-human godly beings. The signals all converge and are integrated in an artificial body.  Max's body looks odd to you because his church is a manga church, not a human one, but there's people walking around on the street you'd never know were church-made."
Chaison shook his head. "So this Thrace is... a fake person?"
Aubri looked horrified. "Listen, Admiral, you must never say such a thing! He's real. Of course he's real. And you have to understand, the game-churches are an incredibly important part of our culture. They're an attempt to answer the ultimate questions: what is a person? Where does the soul lie? What is our responsibility to other people? You're not just tapping on a drum, you're helping to give rise to the moment-by-moment consciousness of a real person.... To let down that responsibility could literally be murder."
Although John Searle would likely disagree with Aubri's perspective on the Chinese room, I'm inclined to think that on general materialist principles there's no good reason to regard such details of implementation as sufficient to differentiate beings who really have consciousness from those who don't. We don't want to be neurochauvinists after all, do we?

10 comments:

  1. Chinese gym?

    Just want to get the thought experiments straight :)

  2. Karl would be thrilled!

    For me, of course, the million-dollar question is why these scenarios jar against our intuitions so.

    To BE a Chinese Room is to be unable to conceive of oneself as a Chinese Room... perhaps because consciousness arises at the wrong end of a game of Chinese Whispers, or perhaps because it finds itself trapped in the wrong part of some neural Chinese Bureaucracy.

  3. "We don't want to be neurochauvinists after all, do we?"

    Of course we do! The overwhelming weight of empirical evidence tells us that we must be neurochauvinists.

  4. Hmmm, do I understand correctly that each church member here represents one neuron?

    The average human central nervous system consists of about 85 billion neurons, so if they were doing this in eight-hour shifts (eight hours for drum-beating, eight hours for sleep, and the remaining eight to produce food and do everything else that keeps a planet's economy going), they would need a population of 255 billion people just to create a single human analogue; see the quick sketch at the end of this comment. Of course that assumes that the billions of glial cells have no function in creating consciousness, which is by no means certain.

    More interestingly, how do you integrate all those signals? Presumably there is some sort of computerised system so that each drumbeater can receive the correct signals from another performer across the planet and send them to the appropriate "neuron" at yet another location. With those kinds of numbers, any system able to do that in real time would be in severe danger of developing consciousness itself!

    Be that as it may, would the resulting entity be a conscious human? Yes, but in a static, predictable sort of way. What is missing here is the ability of a real neuron to decide that it really needs to connect with that other neuron over there, a connection it hadn't made before. Now if there were an element of mutability, a loose organisation of drumbeaters who changed the rhythm at unpredictable intervals, or who sent the signal off to an unexpected destination ...
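
    A quick back-of-envelope version of the staffing arithmetic above, as a sketch (the 85-billion neuron count and the three eight-hour shifts are from this comment; the Python is purely illustrative):

    ```python
    # Population needed to staff one human-analogue "church" around
    # the clock, one congregation member per neuron.
    NEURONS = 85_000_000_000        # ~85 billion neurons in a human CNS
    SHIFTS_PER_DAY = 24 // 8        # three eight-hour shifts per day

    population_needed = NEURONS * SHIFTS_PER_DAY
    print(f"{population_needed:,} people")   # 255,000,000,000

    # This still ignores glial cells, which number in the tens of
    # billions themselves and whose role, if any, is an open question.
    ```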

  5. Good points all, clasqm. It would have been nice to see Schroeder think all this through. Maybe he does elsewhere?

  6. The more I think about examples like these, the more I think that we should not say there is always a fact of the matter, out there, as to whether a given entity is conscious. Rather, we should ask whether it makes sense for us (or whichever rational beings happen to be having the discussion) to regard the entity as conscious. Whether it makes sense will depend on the nature of our social interactions with the entity, our views on moral obligation to the entity, whether we see the entity as made up of smaller entities that we see as individually conscious, and lots of other things.

    We must regard other human beings, and a fair number of animals high up the evolutionary scale, as conscious. But human beings can disagree as to how far down the scale to go. Martians can also disagree with human beings about the consciousness of at least some of the entities that human beings must regard as conscious, and we can disagree with Martians about the consciousness of at least some of the entities that they must regard as conscious.

    (This assumes that Martians have a concept that corresponds to our concept of consciousness. They may not have. If my general approach is right, a possible reason for them not to have would be that they might well not have concepts that corresponded to our concepts of social interaction or of moral responsibility.)

    Then a dispute about whether the China brain is conscious, or whether the United States is conscious, can be seen as a dispute about the relative significance of the members of two groups of indicators of consciousness. The first group, on which such entities score highly, includes the sophistication of processing and the existence of a generally consistent, yet gently mutable, character of conduct that differs, but not radically, from the characters exhibited by other, comparable entities. The second group, on which such entities score badly, includes the personal nature of our interaction with the entities, the existence of feelings of moral responsibility towards them that are very similar to our feelings towards other human beings, and a sense that the entities have qualia of experience. (I do not mean to claim that qualia are real, only that most of us, in our everyday lives, think that they are real.)

    The example from Sun of Suns presents a new challenge. A China brain arrangement is in the background, and we are presented with a single body, which looks like a person, and with which we can interact as we would with a person. So the entity scores highly on personal interaction, and might easily come to score highly on being regarded as an object of moral responsibility. The one thing about which we would still worry would be the qualia (or whatever our views on the human mind allow along the lines of qualia).

    Another interesting example is the David character in the film A.I. This is an artificial child, the capacity of which to display love towards the human being who acts as its mother can be switched on, but cannot then be switched off. Once this capacity had been switched on, and the "love" had developed, could the mother argue that the creature was just a machine, to which she had no moral responsibility? I rather think that it would depend on how the programming was done. If the intelligent processing of data from the child's environment went on deep inside, but it was only near the surface, in a separate module, that appropriate behaviour was generated, then the mother would have less of a moral obligation than if the intelligent processing and the generation of behaviour were fully integrated. I have not worked this out properly, but if there is something in this idea, and if considerations of moral responsibility are relevant to the attribution of consciousness, then the details of implementation of processing could matter to the attribution of consciousness.

    Cross-posted, with amendments to allow for the lack of the context of Eric's post, at http://analysisandsynthesis.blogspot.co.uk/

  7. Thanks for the thoughtful and interesting comment, Richard!

  8. All that would happen with that silly Chinese Brain is the production of meaningless nerve impulses. Playing the role of a single neuron encompasses far more than sending drum beats to distant recipients. Short-range interactions with neighbors are as informationally important as the neural impulses, but they appear nowhere in this game. The organic unity of a brain directing a motile multisensory body is completely missing, so there is no possibility of any consciousness. Ditto for the United States.

    In fact, I would posit that consciousness is meaningless outside of biology and will never be produced by non-life.

  9. Besides what clasqm said, I wonder about propagation speed, which would include human reaction/decision time per synapse, so the Chinese Room Persona would be a slow thinker, perhaps glacially slow (a rough sketch of the numbers is at the end of this comment). For a better fantastic treatment of the theme, see Neal Stephenson's Diamond Age, although the point there is not a simulation. Also sexier.

    I have never understood how the Chinese Room proves what John thinks it does... if you can't tell the difference, what difference does it make? Anyway, as an engineer, I assert that what can be done at all can be done in various ways. (Although I don't think the CR can be made to work any more than the CRP.)
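
    To put rough numbers on the propagation-speed worry above, a minimal sketch, assuming a ~1 ms synaptic delay and a ~250 ms simple reaction time per drummer (both round figures of my own choosing, not from this thread):

    ```python
    # How much slower a drum-beating brain would run if every synaptic
    # hop were replaced by a human hearing a rhythm and drumming a reply.
    SYNAPTIC_DELAY_S = 0.001   # ~1 ms per biological synaptic hop (assumed)
    HUMAN_REACTION_S = 0.250   # ~250 ms simple reaction time (assumed)
    # Planet-scale network latency would add tens of ms per hop; ignored here.

    slowdown = HUMAN_REACTION_S / SYNAPTIC_DELAY_S
    print(f"slowdown factor: ~{slowdown:.0f}x")    # ~250x

    # A percept crossing ~100 synapses in series (~0.1 s biologically):
    hops = 100
    print(f"{hops * SYNAPTIC_DELAY_S:.1f} s of thought -> "
          f"{hops * HUMAN_REACTION_S:.0f} s for the congregation")
    # => a tenth of a second of thought takes the church ~25 seconds;
    #    a minute of thought would take over four hours. Glacial indeed.
    ```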
