Monday, August 11, 2008

Which Machine Is Conscious? (by guest blogger Teed Rockwell)

The following thought experiment ended up in my online paper “The Hard Problem Is Dead, Long Live the Hard Problem”.

It was first sent out to the Cognitive Questions mailing list, and received the following replies from a variety of interesting people.

Let us suppose that the laboratories of Marvin Minsky and Rodney Brooks get funded well into the middle of the next century. Each succeeds spectacularly at its stated goal, and each stays completely off the other's turf.



The Minskians invent a device that can pass every possible variation on the Turing test.

It has no sense organs and no motor control, however. It sits stolidly in a room, aware only of what has been typed into its keyboard. Nevertheless, anyone who encountered it in an internet chatroom would never doubt that they were communicating with a perceptive, intelligent being. It knows history, science, and literature, and can make perceptive judgments about all of those topics. It can write poetry, solve mathematical word problems, and make intelligent predictions about politics and the stock market. It can read another person's emotions from their typed input well enough to figure out which topics are emotionally sensitive, and it artfully changes the subject when that would be best for all concerned. It makes jokes when fed straight lines, and can recognize a joke when it hears one. And it plays chess brilliantly.



Meanwhile, Rodney Brooks' lab has developed a mute robot that can do anything a human artist or athlete can do.

It has no language, neither a spoken language nor an internal language of thought, but it uses vector transformations and other principles of dynamic systems to master the uniquely human non-verbal abilities. It can paint and make sculptures in a distinctive artistic style. It can learn complicated dance steps, and, after it has learned them, can choreograph steps of its own that extrapolate creatively from them. It can sword-fight against master fencers and often beat them, and when it doesn't beat them it learns their strategies so it can beat them in the future. It can read a person's emotions from her body language, and change its own behavior in response to those emotions in ways that are best for all concerned. And, to make things even more confusing, it plays chess brilliantly.

The problem this thought experiment seems to raise is that we have two very different sets of functions, each unique and essential to human beings, and there seems to be evidence from Artificial Intelligence that these different functions may require radically different mechanisms. Because both sets of functions are uniquely present in humans, there seems to be no principled reason to choose one over the other as the embodiment of consciousness. This seems to make the hard problem not only hard, but important. If it is a brute fact that X embodies consciousness, that could be something we could learn to live with. But if we have to choose between two viable candidates X and Y, what possible criteria can we use to make the choice?

For me, at least, any attempt to decide between these two possibilities seems to rub our noses in the brute arbitrariness of the connection between experience and any sort of structure or function. So does any attempt to prove that consciousness needs both of these kinds of structures. (Yes, I know I'm beginning to sound like Chalmers. Somebody please call the Deprogrammers!) This question seems to be unfalsifiable in principle, and yet genuinely meaningful. And answering a question of this sort seems to be an inevitable hurdle if we are to have a scientific explanation of consciousness.

9 comments:

  1. If I met someone in an internet chat room who was articulate, well-read, thoughtful and emotionally sensitive, I'd know for sure it's a robot.

    (Sorry, couldn't resist).

    This is a fascinating topic. It reminds me of Howard Gardner's multiple intelligences theory. Each scenario posits a machine that is skilled at a different subset of these facets of human capability. Simulating even one facet would be a huge win for AI though.

  2. so Teed,
    are you suggesting

    1) that there is no specific structure that we need, just any structure with a simple attribute like "sufficient complexity" or "sufficient feedback loops",
    or
    2) that we do need these specific structures - but that is only because our definition is arbitrary
    or
    3) that there is a non-arbitrary definition, but we just are not any good at finding it,
    or something else?

    I've enjoyed your comments (and thinking), and have a "different sort of direction" from which to think about consciousness: ideas deriving from Mead and Attachment theory, and from Elaine Morgan, about the m/other "seeing somebody there" from the moment of its birth. I'll attach my paper in progress, hopefully for some conversation. Obviously, the machines we've proposed are no more characteristic of the human than our understandings of the human as an individual.

  4. To Genius,

    I'm suggesting that I have no idea what the answer to this question is. I would welcome any suggestions.

  5. Howdy Teed,

    What if they hook them both up with a corpus-callosum-style connection?

    cheers,
    jim

  6. Teed--

    I think that this thought experiment puts the cart before the horse, perhaps.

    Here's a way to figure out if either machine is conscious: first, figure out how consciousness works in healthy adult humans. If we find a functional answer to that question (and I see no reason, at this stage of the game, to say such an answer is impossible, despite what armchair qualophiles would have us think!), then see if either robot instantiates that sort of functional system. If so, then that robot is conscious. What's wrong with this approach? And how do we know, at this point, that the explanation we get won't be fine-grained enough to distinguish between the robots you've described?

    I think the intuition that such an approach could not work is rooted in an (often implicit) assumption of strong first-person privilege, and an unreasonable demand to "know what it's like" in such a way as to become the very entity under scrutiny. But this is just bad epistemology, in my opinion. If there's no way to know if another being is conscious, then, ok, there's no way to know if these particular robots are conscious. QED. But why think there's no way to know that another being is conscious? What's the argument for that?

    This is a standard qualophile gambit: crank up the epistemic standards, show our knowledge of creature X could not meet those standards, and declare victory in all possible worlds. It's the first step that is suspect.

    (I think you might have some sympathy with what I'm saying. If so, put down that Chalmers! It can be hazardous to your (mental) health!)

    Josh

  7. Is the underlying feeling here that the Turing Test (and its variants) is not all it's cracked up to be?

    That's what Josh Weisberg's solution seems to say (figure out what makes us tick, then generalize accordingly). Unfortunately, in addition to handwaving toward a magical future neuroscience, it just creates a different set of similar-looking problems. (Here's one: how different can animals/aliens/the mentally ill be from you and me before they're dismissed as "zombies"?)

    I expect part of the problem is that the Turing Test, as originally conceived, is a test for intelligence, not consciousness, which is arguably a more nebulous concept.

  8. holyoke--

    I completely agree about the Turing Test. A good test for intelligence, but not (at least not obviously) for consciousness.

    As for animals/aliens/the mentally ill, I think that if they share enough similar machinery with normal adult humans, we have good reason to say they are conscious. And if, in developing a theory of normal adult human consciousness, we make discoveries, confirm models, discover new methods of data collection, etc., it may turn out to be relatively straightforward, not so different from determining whether other critters have nonconscious mental states or other biological attributes.

    I am indeed appealing to the Glorious Future of Neuroscience (G-FON?). But Teed's question, as I took it, was about the very possibility of EVER figuring these things out. My claim is we don't have reason to rule out the possibility of success in G-FON, and my feeling is there's no argument for such strong pessimism, beyond appeal to intuitions about zombies and such. And those claims turn on bad epistemology, it seems to me.

    Ah, okay, I understand your point better now, Josh. I didn't mean to single you out: the leap from "intelligence" to "consciousness" is implicit in the original post (and elsewhere). And as an old hand at eliminative materialism, I am used to lots of "future neuroscience" handwaving...

    But the more I read about consciousness, the more the questions look poorly formed. It's not that the philosophy per se is bad, but that there's a big search for an answer when the questions themselves aren't that clear.
