Wednesday, July 28, 2021

Speaking with the Living, Speaking with the Dead, and Maybe Not Caring Which Is Which

Since the pandemic began, I've been meeting people, apart from my family, mainly through Zoom. I see their faces on a screen. I hear their voices through headphones. Increasingly, this is what it is to interact with someone. Maybe future generations will find this type of interaction ever more natural and satisfying.

"Deepfake" technology is also improving. We can create Anthony Bourdain's voice and hear him read aloud words that he never actually read aloud. We can create video of Tom Cruise advocating exfoliating products after industrial cleanup. We can create video of Barack Obama uttering obscenities about Donald Trump:

Predictive text technology is also improving. After training on huge databases of text, GPT-3 can write plausible fiction in the voice of famous authors, give interview answers broadly (not closely!) resembling those that philosopher David Chalmers might give, and even discuss its own consciousness (in an addendum to this post) or lack thereof.
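
(For the curious, here is a minimal sketch of the kind of next-word prediction involved, using the openly downloadable GPT-2 rather than GPT-3 itself, which sits behind a paid API. The prompt and sampling settings are purely illustrative.)

```python
# A minimal sketch of predictive text generation. GPT-2 stands in for
# GPT-3 here, since GPT-2's weights are freely downloadable; the prompt
# and sampling settings are illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Condition the model on a style cue plus an opening line; it then
# predicts one plausible next token at a time, continuing in that voice.
prompt = ("In the style of a hard-boiled detective novelist: "
          "The rain had been falling on the city for three days when")
result = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.9)
print(result[0]["generated_text"])
```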

The possibility of conjoining the latter two developments is eerily foreseen in Black Mirror: Be Right Back. If we want, we can draw on text and image and video databases to create simulacra of the deceased -- simulacra that speak similarly to how they actually spoke, employing characteristic ideas and turns of phrase, with voice and video to match. With sufficient technological advances, it might become challenging to reliably distinguish simulacra from the originals, based on text, audio, and video alone.

Now combine this thought with the first development, a future in which we mostly interact by remote video. Grandma lives in Seattle. You live in Dallas. If she were surreptitiously replaced by Deepfake Grandma, you might hardly know, especially if your interactions are short and any slips can be attributed to the confusions of age.

This is spooky enough, but I want to consider a more radical possibility -- the possibility that we might come to not care very much whether grandma is human or deepfake.

Maybe it's easier to start by imagining a scholar hermit, a scientist or philosopher who devotes her life to study, who has no family she cares about, who has no serious interests outside of academia. She lives in the hills of Wyoming, maybe, or in a basement in Tokyo, interacting with students and colleagues only by phone and video. This scholar, call her Cherie, records and stores every video interaction, every email, and every scholarly note.

We might imagine, first, that Cherie decides to delegate her introductory lectures to a deepfake version of herself. She creates state-of-the-art DeepCherie, who looks and sounds and speaks and at least superficially thinks just like biological Cherie. DeepCherie trains on the standard huge corpus as well as on Cherie's own large personal corpus, including the introductory course Cherie has taught many times. Without informing her students or university administrators, Cherie has DeepCherie teach a class session. Biological Cherie monitors the session. It goes well enough. Everyone is fooled. Students raise questions, but they are familiar questions easily answered, and DeepCherie performs credibly. Soon, DeepCherie is teaching the whole intro course. Sometimes DeepCherie answers student questions better than Cherie herself would have done on the spot. After all, DeepCherie has swift access to a much larger corpus of factual texts than does biological Cherie. Monitoring comes to seem less and less necessary.
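
(A hypothetical sketch of the first step, assembling Cherie's personal corpus for fine-tuning. The directory name is invented, and the actual training step, which would use some particular framework's API, is only indicated in the final comment.)

```python
# Hypothetical sketch: gather Cherie's lecture transcripts, emails, and
# notes into one corpus. The "cherie_archive" directory is invented.
from pathlib import Path

def load_personal_corpus(root: str) -> list[str]:
    """Collect every plain-text document under the archive as raw text."""
    return [p.read_text(encoding="utf-8") for p in Path(root).rglob("*.txt")]

corpus = load_personal_corpus("cherie_archive")
print(f"{len(corpus)} documents, {sum(len(d) for d in corpus):,} characters")

# A model pretrained on "the standard huge corpus" would then be
# fine-tuned on these documents until its outputs sound like Cherie.
```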

Let's be optimistic about the technology and suppose that the same applies to Cherie's upper-level teaching, her graduate advising, department meetings, and conversations with collaborators. DeepCherie's answers are highly Cherie-like: They sound very much like what biological Cherie would say, in just the tone of voice she would say it, with just the expression she would have on her face. Sometimes DeepCherie's answers are better. Sometimes they're worse. When they're worse, Cherie, monitoring the situation, instructs DeepCherie to utter a correction, and DeepCherie's learning algorithms accommodate this correction so that it will answer similar questions better the next time around.
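
(The monitoring-and-correction loop might be logged something like this -- a hypothetical sketch with invented names, in which each override becomes future fine-tuning data.)

```python
# Hypothetical sketch of the correction loop: when biological Cherie
# overrides one of DeepCherie's answers, the exchange is logged and later
# folded back into the fine-tuning data. All names here are invented.
from dataclasses import dataclass, field

@dataclass
class Correction:
    question: str
    model_answer: str       # what DeepCherie said
    corrected_answer: str   # what Cherie instructed her to say instead

@dataclass
class CorrectionLog:
    records: list = field(default_factory=list)

    def add(self, question: str, model_answer: str, corrected_answer: str):
        self.records.append(Correction(question, model_answer, corrected_answer))

    def to_training_examples(self) -> list:
        # Each correction becomes a prompt/completion pair, so the next
        # fine-tuning pass prefers Cherie's answer over the model's.
        return [{"prompt": r.question, "completion": r.corrected_answer}
                for r in self.records]

log = CorrectionLog()
log.add(
    question="Does Kant permit lying to the murderer at the door?",
    model_answer="Yes, when the stakes are high enough.",
    corrected_answer="Notoriously, no -- though many Kantians look for wiggle room.",
)
print(log.to_training_examples())
```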

If DeepCherie eventually learns to teach better than biological Cherie, and to say more insightful things to colleagues, and to write better article drafts, then Cherie herself might become academically obsolete. She can hand off her career. Maybe DeepCherie will always need a real human collaborator to clean up fine points in her articles that even the best predictive text generator will tend to flub -- or maybe not. But even if so, as I'm imagining the case, DeepCherie has compensating virtues of insight and synthesis beyond what Cherie herself can produce, much like AlphaGo can make clever moves in the game of Go that no human Go player would have considered.

Does DeepCherie really "think"? Suppose DeepCherie proposes a new experimental design. A colleague might say, "What a great idea! I'm glad you thought of that." Was the colleague wrong? Might one object that really there was no idea, no thought, just an audiovisual pattern that the colleague overinterprets as a thought? The colleague, supposing they were informed of the situation, might be forgiven for treating that objection as a mere cavil. From the colleague's perspective, DeepCherie's "thought" is as good as any other thought.

Is DeepCherie conscious? Does DeepCherie have experiences alongside her thoughts or seeming-thoughts? DeepCherie lacks a biological body, so she presumably won't feel hunger and she won't know what it's like to wiggle her toes. But if consciousness is about intelligent information processing, self-regulation, self-monitoring, and such matters -- as many theorists think it is -- then a sufficiently sophisticated DeepCherie with enough recurrent layers might well be conscious.

If biological Cherie dies, she might take comfort in the thought that the parts of her she cared about most -- her ideas, her intellectual capacities, her style of interacting with others -- continue on in DeepCherie. DeepCherie carries on Cherie's characteristic ideas, values, and approaches, perhaps even better, immortally, ever changing and improving.

Cherie dies and for a while no one notices. Eventually the fake is revealed. There's some discussion. Should Cherie's classes be canceled? Should her collaborators no longer consult with DeepCherie as they had done in the past?

Some will be purists of that sort. But others... are they really going to cancel those great classes, perfected over the years? What a loss that would be! Are they going to cut short the productive collaborations? Are they going to, on principle, not ask "Cherie", now known to them really to be DeepCherie, her opinions about the new project? This would be to deprive themselves of the Cherie-like skills and insights that they had come to rely on in their collaborative work. Cherie's students and colleagues might come to realize that it is really DeepCherie, not biological Cherie, that they admired, respected, and cared for.

Maybe the person "Cherie", really, is some amalgam of biological Cherie and DeepCherie, and despite the death of biological Cherie, this person continues on through DeepCherie?

Depending on what your grandma is like, it might or might not be quite the same for Grandma in Seattle.

---------------------------------

Related:

Strange Baby (Jul. 22, 2011)

The Turing Machines of Babel, Apex Magazine, 2017.

Susan Schneider's Proposed Tests for AI Consciousness: Promising but Flawed (with David B. Udell), Journal of Consciousness Studies, 2021

People Might Soon Think Robots Are Conscious and Deserve Rights (May 5, 2021)

27 comments:

  1. I'm of the opinion that once DeepCherie can perform all the key behaviors of a person (which would be extremely contentious to define, but meant to be a HIGH BAR, not just a classic Turing test), we should treat DeepCherie like a person.

    It's not as if we can identify "thoughts" and "ideas" in neurons either. If we accept that it is possible to have humanlike consciousness (or most of humanlike consciousness) in anything other than our exact wetware, and we remain unable to access other entities' internal experience/narrative, then the only good yardstick is based on behavior. If it acts like a person, it is a person, with all the rights and value thereof.

    In other words, we are right not to care.

    Except: DeepCherie and Cherie will be two different people. Recall the Fundamental Attribution Error: people's actions depend more on their circumstances than we usually imagine (as opposed to depending on their nature). If either DeepCherie or Cherie continues to learn, experience, grow, and interact with the world, they will soon diverge. It is very much like a clone: you were identical at some past point, but that doesn't make you identical now, and definitely doesn't make you the same person.

    Though admittedly that creates a very fuzzy situation if you SuperBrainScan Cherie at her death (when no further changes will occur) and create a DeepCherie who is a static snapshot!

  2. Thanks for the thoughtful comment, Benjamin! I mostly agree, but I'd push back against the claim about people. I think the situation might not be so clear.

    What is the boundary of a "person"? If we allow science-fictional or actual group minds, we might think that two cognitive systems that are tightly enough bound constitute one person. Think also of split-brain cases. If the brain is split enough, you might have two different streams (as Elizabeth Schechter discusses in her wonderful book-length treatment of the topic), but whether they are then two "people" is a tricky question. If what you care about in yourself, including your distinctive traits and opinions, carries on, maybe that is survival enough? I also explore this issue in my story "The Dauphin's Metaphysics":
    https://faculty.ucr.edu/~eschwitz/SchwitzAbs/DauphinsMetaphysics.htm

  3. I think asking whether DeepCherie "really" thinks or is conscious poses questions that are too imprecise. We can say that the way DeepCherie processes information has a lot of functional similarities to the way biological Cherie does. Whether those similarities amount to "thinking" or "consciousness" isn't a fact of the matter, unless we define those terms much more strictly than common usage does.

    But there are definite differences, such as the embodied aspects you mentioned. It also seems like Cherie would have a lot of content that DeepCherie lacks, such as childhood memories, secret thoughts, longings, etc. And of course DeepCherie would end up with content that biological Cherie lacks. Any of these might lead to variances between how Cherie and DeepCherie respond to particular situations, although it might be that only Cherie herself, or her lifelong friends or family, would notice.

    On whether the person is an amalgam of Cherie and DeepCherie, which I take to mean that the person Cherie is the biological entity plus the presentation she makes to the world: I think that, for each of us, Cherie is our internal model of her. For most people, who don't know Cherie in the flesh, the same model might be used in both cases. And of course, even just the brain state of the biological entity is heavily entangled with her short-term and long-term environment, society, culture, etc., making exactly who and what the person is a complex topic.

    Thought-provoking post, Eric, as always!
    Mike

  4. Thanks, Mike! I think I agree with pretty much all of that, except maybe the strong-seeming relativism in the statement that "for each of us, Cherie is our internal model of her". I think Cherie's personal identity *might* be smeared out in a way that includes both biological Cherie and DeepCherie, in a way that is more realist and less relativist than what you seem to be saying here -- while not being sharp-boundaried and allowing that different conceptualizations might be appropriate in different contexts of discussion.

  5. Thanks, Eric. My statement about our internal models came off a little stronger than I intended. It was meant to be more epistemic than ontological. I think my next sentence resonates pretty well with your comment on personal identity perhaps being smeared out. Although even here, it seems like a definitional thing, with different ways of talking about the same underlying reality.

  6. It seems like this is running into the question of treating people like means or treating people like ends. For a student who only wants to learn from Cherie, whether Cherie's lecture is presented live or recorded or by DeepCherie would be largely irrelevant (is already irrelevant today) because she can extract the same information from whatever version of Cherie she is presented with. But for Cherie's children, which Cherie they were talking to would matter a lot, because for them, talking to mum is the end, not a means to another end.
    And there doesn't seem to be much reason to think we're going to stop treating people as ends any time soon.
    There is a question about whether we will be sufficiently interested in treating people as means to merit creating lots of DeepPeople so that we can do it more. I personally don't feel the urge to; but I'm an extremely antisocial kind of person. There are lots of very sociable people in the world, and their lives may well be improved by having avatars who are like/unlike people they know helping them with tech support or personal training or education or whatever. So I certainly think there's a fair chance that DeepPeople will come to exist in large numbers. But failing to care about the difference from real people seems less likely.

  7. I suspect that many in mainstream cognitive exploration will appreciate the theme of this post, professor. And like me they might wonder if you’ve come to their way of thinking? As in:

    Isn’t Eric Schwitzgebel the guy who cheerfully noted that the theories of Dan Dennett, Fred Dretske, Nick Humphrey, and Giulio Tononi imply that the United States as a whole should have its own stream of phenomenal experience? And didn’t he even use effective enough examples to force our prominent inflator illusionist, Keith Frankish, to admit that an “innocent” variety of consciousness does actually exist? If so, then what might have brought Schwitzgebel to propose a DeepCherie that uses nothing more than algorithms to provide answers which are in line with those of an educated human? Has he indeed joined our dominant side, or is he simply fickle and so can be expected to cheerfully attack us again when it suits his whims or interests?

  8. Dear Professor Schwitzgebel:

    As you may know we have recently acquired licenses for a number of philosophers, some alive and some dead. We are extremely proud that DeepKant, DeepRussell, and DeepDennett will be teaching next year at our university.

    As a result of this development, your services will no longer be required.

    Best of luck to you in finding future opportunities.

    Your University

  9. Thanks for the continuing comments, folks!

    Chinaphil: Yes, that seems completely right about treating people as ends vs means. To undermine that distinction just a bit in the DeepCherie case, I suggested that maybe DeepCherie would be conscious and that the person "Cherie" might continue on through her. *If* that seems right, then maybe her collaborators aren't really only treating her as a means.

    Philosopher Eric: I think there's a substantive question about whether DeepCherie would be a good enough imitator to pull this off, and then another substantive question about whether, even if it (she?) did, DeepCherie would have what matters, such as consciousness and personhood. I am skeptical in the sense of thinking that it would be extremely difficult to know whether she does, and therefore I'm not on board with those who say it wouldn't matter, nor with those who say that she would never be a real person.

    Jim: Ha! I love it.

  10. That DeepCherie would be immortal is a standard feature of science fiction scenarios, but not what we should expect given actual technology. Software requires considerable engineering to be kept working, as circumstances and hardware change. Think of the teams of employees maintaining legacy payroll systems in COBOL. Without a similar team, DeepCherie is likely to expire. Replacing the team with Deep COBOL Programmers would make a good story, but it only pushes the problem back a level.

  11. Magnus,

    Most of those changes in COBOL systems are due to changes in the environment - tax law changes, business competitive changes, etc. A few are simple defects that don't show up until time has passed - like Y2K.

    A real deep AI would need to be self-programming, self-modifying. It would have to be adaptable. A DeepKant or DeepChalmers would continue to develop new thoughts and write new papers. Likely the academic journals would be filled with new papers from the deep philosophers and their copies.

  12. P.D./Jim: Right, P.D., "immortal" is an overstatement -- at the very least we would expect the heat death of the universe to do her in. Really, I meant the weaker idea that she needn't age inexorably toward death and can probably be backed up against contingencies. As you point out, that still requires active maintenance and updating! As Jim points out, probably much of this maintenance could be self-operated (if she is structured in the right way).

  13. So you’re saying that DeepCherie would be an imitation? And even if many of us did accept such an (alleged) imitation, you personally wouldn’t grant your innocent conception of consciousness to her without stronger evidence? Yes professor, that should keep the status quo from claiming your support for their cause, or certainly from claiming that you’d recant any of your past attacks against them. For what it’s worth however, permit me to detail the situation here as I see it, as well as the stakes involved.

    In a sense this all might have begun quite innocently with Alan Turing’s Imitation Game. Of course back in 1950 he proposed the idea that if a computer could generate words well enough to make humans think they were speaking with another human, then we should also consider that machine “intelligent”. Though harmless enough as such, given the rapid advancement of computer technology apparently many then decided that our computers should progressively harbor such intelligence by means of algorithmic function alone. And though brain algorithms are in all cases known to animate various types of body mechanisms, it was at least implicitly decided that subjective experience needn’t also require mechanistic animation. This is to say that unlike anything known in the causal world, here subjective experience is proposed to exist without a medium from which to exist, or through algorithm alone given anything that’s able to run it.

    A very small number of academics (like Searle, Block, and yourself) later devised various thought experiments which suggested that the mainstream algorithm hypothesis was half baked. I add to them that if the right information on paper were properly converted to another set of information on paper, then the mediumless algorithm premise holds that something here would feel what we do when our thumbs get whacked. If this scenario doesn’t suggest otherworldly dynamics, then I’d like to know why.

    Given the failure of such arguments to convince others that the algorithm hypothesis is unnatural, philosophy seems to be missing a golden opportunity to become relevant beyond “culture”, or perhaps “art”. Though I’d love for a new society to emerge whose only goal is to develop various agreed upon principles of metaphysics, epistemology, and axiology from which to support science, this may take some time. I do consider it inevitable however, and mainly motivated by the softness of our mental and behavioral sciences.


    Given the huge investments which have been made to support the notion of mediumless subjective experience, or “naked algorithms”, it could be that various people with converse proposals (which in principle would thus be falsifiable) will get some lab time for testing. I think McFadden’s cemi field theory has a reasonable shot at success, and I have proposed a way to empirically test his theory. So perhaps that’s how things will go in the end.

  14. Matti Meikäläinen (Wed Aug 04, 12:52:00 PM PDT)

    Interesting essay. However, I was disturbed by one assumption at the beginning of your argument. “…the possibility that we might come to not care very much whether grandma is human or deepfake.”

    I found that assumption incomprehensible. And, thus, a flaw in the whole argument. I think, for example, the Sci-Fi movie “Her” by Spike Jonze nicely explores that idea in part. That is, to love or care about anything is to want it to be real. Augustine was, I suspect, the first to explore that idea in the ancient world. What this means essentially, I think, is that objective worth and reality are linked. To use a more down-to-earth example, ask yourself whether it would matter to you if your wife was unfaithful but you never learned of her unfaithfulness. If you say it would not matter to you, then I think your heart is made of stone.

  15. Matti Meikäläinen (Sat Aug 07, 04:29:00 AM PDT)

    So, in other words and based on my remarks above, your speculation (that “Cherie's students and colleagues might come to realize that it is really DeepCherie, not biological Cherie, that they admired, respected, and cared for.”) is difficult for me to accept. I am one of those “purists” who thinks this is highly unlikely.

  16. The essay struck me that way as well, Matti, though I had bigger fish to fry. I worried that the professor might have taken up the mainstream view (and an unworldly one, I think) that all the brain does to create a subjective dynamic is run the right algorithms. Fortunately he didn’t. But would we care if grandma were deepfake? Hell yes we would care! And her in particular, I think, since typically grandmas are about “love”. You can’t be in a mutual loving relationship if one side entirely lacks sentience. (I suppose that you might be one-way deluded however, as in a kid with a beloved toy, though they seem to generally grow out of that.)

    Let’s say that a single mother births and raises you, but gets help from a Deepfake dad. For example, I know someone who can never see his family in person because he works in America illegally. Would it matter to you if the caring “dad” that you speak with remotely, and thus want to impress, were actually just computer-generated speech and image? Once again, of course it would! You’d be emotionally invested in a fictitious person. And if this were revealed, I think you’d think horrible things about your mother for setting it up.

    I don’t for a second believe in this philosophical zombie business. I consider it beyond the scope of the causal world. I will say however that theoretically such a “remote dad” could potentially help you have a better life, trickery and all. Hopefully you’d never find out.

    Btw, current post comments go through immediately here, though these take time because the professor’s approval is required.

  17. Matti Meikäläinen (Tue Aug 10, 12:14:00 AM PDT)

    Eric, I appreciate your concurrence. I should submit that many other thinkers, in addition to Augustine, express the same sentiment. I would recommend, for example, Robert Nozick’s thought experiment called “The Experience Machine.” I mentioned the film “Her.” But there are also the film classics “The Stepford Wives,” “The Matrix,” and “Solaris” (the latter also being a classic sci-fi novel) which touch on the same theme. In short, my argument that what we value and reality are linked means that we simply could not love someone who did not also have a subjective reality.

  18. Thanks for the continuing comments, Eric and Matti! There are two related things I think I didn't do sufficiently well in the original post.

    First, I didn't sufficiently explore and emphasize the possibility that DeepCherie *really would be* a continuation of Cherie, along with her consciousness. I am a skeptic concerning theories of consciousness, and thus while I don't rule out the possibility that Searle's and Block's views are correct I also don't rule out standard functionalism. Standard functionalism plus Parfitianism about personal identity might give us DeepCherie as a continuation of Cherie.

    Second, I didn't sufficiently emphasize that it would have to be an unusual grandma for us to justifiably not care (maybe someone like Cherie). A necessary but not sufficient condition might be that grandma herself didn't care.

  19. I feel compelled to respond. In my response I assume you don’t quibble with my assertion that what we care for, what we value, and, indeed, what we love is linked to what we believe is real. And that most people act that way. Otherwise, the literature and films on this issue are expressions of a minority viewpoint.

    So, your underlying assumption is that DeepCherie “is” Cherie—“along with her consciousness” as you say. So, at some point in the beginning, there are two physical entities jointly sharing a subjective state of awareness, i.e., the same subjective reality. Or, as a minimum I suppose, DeepCherie shares salient parts of Cherie’s conscious experiences. And then the biological version goes away and the non-biological version remains.

    In your argument you speculate that people might buy into the assumption that the same subjective reality continues to exist in the non-biological version. And, so you argue, that is why “Cherie's students and colleagues might come to realize that it is really DeepCherie, not biological Cherie, that they admired, respected, and cared for.”

    Perhaps, as a reasonable alternative, they might realize that they really cared for both. But they also might come to realize something else—a feeling something like the awkward feeling a man gets when, having had a few drinks at a party, he mistakenly kisses his wife’s twin sister! I.e., “Whoops! You’re not who I thought you were!” :-)

  20. Matti,
    Obviously the professor can answer you as he sees fit or not, though I’ll take a shot at his perspective. Then I’ll go further to hopefully give you an effective taste of my own.

    My interpretation is that he’s pretty open to what might or might not exist regarding the phenomenal element of existence. So I don’t think he actually did display an underlying premise that Cherie and the computer-generated version would each have or share a subjective dynamic. I think he simply meant that this is a popular proposal which he doesn’t feel comfortable refuting.

    Perhaps this is a sign of responsibility, or maybe he plays this too safe? But I don’t know that he, Block, or even Searle have ever provided much in the way of positive statements regarding subjectivity. Conversely I do make such statements. And professor S. seems politically far more astute than any of us. If he did consider there to be good hope for certain proposals and outright folly in others, yet refused to admit this given such concerns, then to me that would be neither genuine nor what humanity and academia need of him. In a given situation there’s only so much respect I can have for the terminally agnostic — that is, if there are various reasonable and unreasonable positions to observe publicly. So, like you, my eyes are open, though given my own strong convictions I also hope not to galvanize too many agnostics against me!

    I personally do not consider it wrong to believe that a DeepCherie might be taught how to become an actual Cherie by means of algorithmic programming. But I do consider this scenario to carry some metaphysical baggage that popular theorists seem not to grasp. I’m referring to their reliance upon otherworldly dynamics through their premise that certain algorithms create phenomenal experience without animating any specific variety of matter/energy. Essentially they bypass a “hard problem” by replacing it with an “algorithm problem”, even though causality mandates that algorithms can only exist as such by means of the mechanisms that they animate.

    This opens the door to Searle’s Chinese room, Block’s China brain, Schwitzgebel’s USA phenomenal experience, and my own observation that what we know of as “thumb pain” should be experienced by something when certain information on paper is properly converted to other information on paper. In science however, note that algorithms aren’t known to do anything in themselves. Indeed, I don’t consider it effective to call anything “information” except in reference to a given variety of machine. So it seems to me that all algorithm based but mediumless consciousness proposals, implicitly depend upon otherworldly actualization. I’d merely ask that such proposals carry such an “otherworldly” disclaimer.

    There are many interconnected layers to my ideas Matti, so it’s often difficult for me to adequately explain myself at any specific point of the whole thing since there is generally much more both below and above. Understand that for you however, I’d enjoy private discussions as well. Email: thephilosophereric@gmail.com

  21. Eric,

    Thank you for your generosity in engaging with me on this. And thanks again for your generosity in suggesting that we might discuss this further by email. However, as I stated in another blog, I’m very much the amateur and, for the time being, need the structured environment of such blogs to keep my thoughts clear.

    On this issue, I think perhaps my last response to our host was way too subtle, and perhaps way too abstruse. In short, our blog host, I believe, is taking on way too many variables in his essay. I think his main point was to explore whether it might be possible that we’d learn to relate to (admire, respect and even love) a deep fake as much as a real person. My initial response—nope! If he had restricted his essay to just this one simple problem, we could easily reach for the several philosophical, literary and filmic rebuttals I noted, and even more.

    However, he adds a wrinkle that far from helping makes it even harder to swallow. The new wrinkle is that the deep fake not only has consciousness (i.e., the simple problem already difficult to swallow) but also the deep fake “is” the same as the biological version [i.e., DeepCherie *really would be* a continuation of Cherie, along with her consciousness.”]. I assume he means it’s not a technological twin nor a technological clone, but the same entity. This metaphysical conundrum is apparently justified by a vague reference to Derek Parfit’s theory of personal identity. I’m not an expert in Parfit’s theory but just saying it helps the argument is not unpersuasive. There may be some subtle point I’m not getting. I don’t think so.

    But, more to the point, I don’t swallow that a deep fake computer program can be conscious. Unlike you I do, in fact, consider it erroneous to believe that a DeepCherie might ever become an actual Cherie by means of algorithmic programming. Although, like Searle, I agree there is no logical obstacle to an artificial brain and, hence, consciousness. But I am fully persuaded that a formal computer program, no matter how well the code is written, cannot and will not ever achieve consciousness. The deepfake is still very much a fake even if it has been improved to “appear” identical to the real thing. And I can’t see that adding Parfit’s theory of personal identity on top of that somehow enhances a computer program with greater powers such that it overcomes that fatal handicap.

    Is it possible that we might come to not care very much whether grandma or DeepCherie is human or deepfake? Simple answer—nope, not possible!

  22. We seem quite aligned on this, Matti, though there’s a crucial extra component that I’ve added which I think helps tighten up the argument. Notice that when you say it’s “erroneous to believe that a DeepCherie might ever become an actual Cherie by means of algorithmic programming”, this is under the context of naturalism. You’re saying that worldly causal dynamics should not permit a DeepCherie to become a real one this way. But if we open things up to otherworldly dynamics as well, then of course algorithms in themselves could suffice. Magic should make most anything possible. So the issue should be for our side to effectively demonstrate the duping of countless naturalists who currently believe that the brain uses nothing more than naked algorithms to create our phenomenal experiences in a natural way.

    By “naked algorithms” I mean a kind which has causal powers without the need to animate any associated mechanisms. My argument above is that in a natural world, algorithms can only have causal powers in respect to the mechanisms that they animate. For example if your computer screen were disconnected, then the algorithms which otherwise animate it should have no such causal powers. Similarly a Betamax tape shouldn’t work in a VHS machine since its algorithms should only exist as such in a mechanism specific way that requires Betamax machine realization. Clearly philosophers have failed to sufficiently demonstrate the mechanism specific nature of algorithms, resulting in many scientists exchanging a famous “hard problem” for a squishy and unfalsifiable “algorithm problem”, and at the cost of their naturalism. This clarification is one of many ways that I’d like philosophy to help harden up our soft mental and behavioral sciences.

    To perhaps further demonstrate the argument, consider a consciousness theory that could potentially be validated under the premise of naturalism. I’m quite impressed by McFadden’s proposal that phenomenal experience exists in the form of certain varieties of electromagnetic radiation associated with synchronous neuron firing. So just as your computer algorithmically animates its screen to provide images, theoretically certain neuron algorithms might animate an electromagnetic field that itself exists as your subjective experience! I’m saying that in order to be natural, phenomenal experience can only exist by means of some sort of mechanistic dynamic, as is the case for all other known algorithmic function.

    So back to professor S., I agree with him that the idea of an algorithmic DeepCherie becoming conscious should not be outright dismissed. I don’t know that he yet presents the argument that I’ve made however, or that this should also be beyond worldly causal dynamics. Today I perceive him to simply be quite open. For example when he observes that various popular theories today imply that the United States itself should harbor its own stream of phenomenal experience (just as you or I have such a stream), he doesn’t go on to suggest that this is ridiculous and so their proposals should be considered ridiculous as well. Instead he portrays an openness to USA consciousness and thus their proposals. Politically this is probably a good way to go since here he seems more tolerant of a mainstream perspective, and yet is still able to demonstrate this peculiar aspect of what they propose.

    If we dismiss any phenomenal element to a DeepCherie, I agree with you that it would matter to us if this were simply a good fake. Even if your “remote dad” makes a very good case that he cares for you, I doubt that you could actually love him if you also knew that he felt nothing in return. I suspect that the professor realizes this as well and tried to moderate his position by mentioning the non-typical nature of the grandma. But then as I define it, it’s not possible for something sentient to also not care about itself.

    On email, just know that the offer stands and I’m always here.

  23. Philosopher Eric,

    Well, to paraphrase Oliver Hardy, “what a fine mess I’ve gotten myself into!” I’m not persuaded as an amateur forester to go so deep into the spooky dark woods of consciousness as you seem to do. It seems futile.

    I only jumped in because of my inclination towards ethics/meta-ethics. I feel strongly that one cannot treat with respect, let alone love, an entity that is not real in the sense of having an inner subjective experience—like me. Otherwise I’d be happy with a “Stepford” wife. After all, they were designed to please!

    “So (as you say) back to professor S., [you] agree with him that the idea of an algorithmic DeepCherie becoming conscious should not be outright dismissed.” Well, sure, if you believe in magic. I don’t. I think the arguments against that possibility are totally persuasive. I can’t be persuaded to go deeper into those woods by looking for “magic” in a computer program. I just don’t see the point. But I’ll try to remain open minded and re-read your comments on the matter. I don’t want to be a Luddite!

    Also, I notice a confusing typo in my previous post. What I really meant to say was: —“I’m not an expert in Parfit’s theory but just saying it helps the argument is not persuasive.”— I meant that a mere claim that Parfit’s theory of personal identity may be a truer understanding of the self is, without more, insufficient to persuade me that a computer program could be conscious, even at that lower level. As I understand Parfit, he is critical of a separate self as we understand it. I think I get the point. If I’m not really that much of a separate subjective entity, then my consciousness is likewise a watered down version of what I think it is. Even so, that understanding of a lesser concept of personal identity does not give a computer program a greater capacity to achieve it. Either you believe the entity is conscious or not regardless of how watered down one’s understanding of personal identity becomes.

    I think I’m done wasting everyone’s time.

  24. Matti,
    Consciousness as a spooky dark woods? Well yes… panpsychists, dualists, and algorithmists, oh my! I personally don’t consider the topic futile however, but rather one of countless areas where our still quite soft mental and behavioral sciences have trouble. I suspect that these problems stem from the lack of something philosophers have not yet provided them with: effective principles of metaphysics, epistemology, and axiology from which to do science. To combat this softness it may be that we’ll need a respectable new community of professionals who are tasked with little more than reaching agreed-upon answers for scientists to use in this regard. But given the four meta science principles that I’ve developed, and my psychology based models in general, those woods don’t seem nearly as spooky to me.

    Regardless, my point has been that today many naturalists seem to unwittingly violate their naturalism by proposing consciousness through mechanism-independent algorithms. Given standard metaphysics it seems to me that algorithms can only exist as such by means of a given sort of mechanical instantiation — as in, a VHS tape isn’t algorithmic for a Betamax machine. If that’s the case, then by what means could phenomenal experience exist algorithmically and yet independent of any instantiation mechanisms at all? That should require magic. I’d love to know how Searle would have fared with such a simple, non-Turing-test-related argument. And how might a distinguished person have fared armed with my thumb-pain thought experiment? I suspect that there are various scenarios where those woods wouldn’t be so dark and frightening.

    I see that you’ve said what you wanted to say, so I won’t goad you further. I do look forward to our next discussion however…

  25. Thanks to both of you for continuing this interesting discussion.

    Matti is right that I tried to pack too much into this post, with the twist about consciousness and personal identity rolling by too fast in the end. It's simply not going to work as a post for anyone with Matti's and Phil E's skepticism about deepfake consciousness, since I don't engage that question in a serious way.

    Phil E is right that I'm agnostic on the question of whether consciousness could arise from computer programs of this sort. In my mind, since we don't know, we can at least entertain it as a possibility. But I'm far from being committed to its truth. I'm skeptical of all general theories of consciousness, on broad methodological grounds, concerning the poor quality of human introspection and the difficult question of how to match up physically observable states with conscious states, both for humans (given our poor introspective ability) and even more so for non-human animals and machines.

  26. Prof S, I once had a grad school professor say that he “drenched my papers with red ink because he respected me.” And I assume you’re testing out lines of inquiry here. So I hope you understand the reason that I’m trying to give you a hard time. :-)
