Friday, March 01, 2024

The Leapfrog Hypothesis for AI Consciousness

The first genuinely conscious robot or AI system would, you might think, have relatively simple consciousness -- insect-like consciousness, or jellyfish-like, or frog-like -- rather than the rich complexity of human-level consciousness. It might have vague feelings of dark vs light, the to-be-sought and to-be-avoided, broad internal rumblings, and not much else -- not, for example, complex conscious thoughts about the ironies of Hamlet, or multi-part long-term plans about how to form a tax-exempt religious organization. The simple usually precedes the complex. Building a conscious insect-like entity seems a lower technological bar than building a more complex consciousness.

Until recently, that's what I had assumed (in keeping with Basl 2013 and Basl 2014, for example). Now I'm not so sure.

[Dall-E image of a high-tech frog on a lily pad]

AI systems are -- presumably! -- not yet meaningfully conscious, not yet sentient, not yet capable of feeling genuine pleasure or pain or having genuine sensory experiences. Robotic eyes "see" but they don't yet see, not like a frog sees. However, they do already far exceed all non-human animals in their capacity to explain the ironies of Hamlet and plan the formation of federally tax-exempt organizations. (Put the "explain" and "plan" in scare quotes, if you like.) For example:

[ChatGPT-4 outputs for "Describe the ironies of Hamlet" and "Devise a multi-part long term plan about how to form a tax-exempt religious organization"]

Let's see a frog try that!

Consider, then, the Leapfrog Hypothesis: The first conscious AI systems will have rich and complex conscious intelligence, rather than simple conscious intelligence. AI consciousness development will, so to speak, leap right over the frogs, going straight from non-conscious to richly endowed with complex conscious intelligence.

What would it take for the Leapfrog Hypothesis to be true?

First, engineers would have to find it harder to create a genuinely conscious AI system than to create rich and complex representations or intelligent behavioral capacities that are not conscious.

And second, once a genuinely conscious system is created, it would have to be relatively easy thereafter to plug in the pre-existing, already developed complex representations or intelligent behavioral capacities in such a way that they belong to the stream of conscious experience in the new genuinely conscious system. Both of these assumptions seem at least moderately plausible, in these post-GPT days.

Regarding the first assumption: Yes, I know GPT isn't perfect and makes some surprising commonsense mistakes. We're not at genuine artificial general intelligence (AGI) yet -- just a lot closer than I would have guessed in 2018. "Richness" and "complexity" are challenging to quantify (Integrated Information Theory is one attempt). Quite possibly, properly understood, there's currently less richness and complexity in deep learning systems and large language models than it superficially seems. Still, their sensitivity to nuance and detail in the inputs and the structure of their outputs bespeaks complexity far exceeding, at least, light-vs-dark or to-be-sought-vs-to-be-avoided.

Regarding the second assumption, consider a cartoon example, inspired by Global Workspace theories of consciousness. Suppose that, to be conscious, an AI system must have input (perceptual) modules, output (behavioral) modules, side processors for specific cognitive tasks, long- and short-term memory stores, nested goal architectures, and between all of them a "global workspace" which receives selected ("attended") inputs from most or all of the various modules. These attentional targets become centrally available representations, accessible by most or all of the modules. Possibly, for genuine consciousness, the global workspace must have certain further features, such as recurrent processing in tight temporal synchrony. We arguably haven't yet designed a functioning AI system that works exactly along these lines -- but for the sake of this example let's suppose that once we create a good enough version of this architecture, the system is genuinely conscious.

But now, as soon as we have such a system, it might not be difficult to hook it up to a large language model like GPT-7 (GPT-8? GPT-14?) and to provide it with complex input representations full of rich sensory detail. The lights turn on... and as soon as they turn on, we have conscious descriptions of the ironies of Hamlet, richly detailed conscious pictorial or visual inputs, and multi-layered conscious plans. Evidently, we've overleapt the frog.
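To make the cartoon concrete, here is a minimal toy sketch, in Python, of the architecture just described. Everything in it is invented for illustration -- the class names, the salience-based attention rule, the hypothetical "GPT-7" wrapper at the end -- and of course nothing in this toy code is conscious. The point is purely structural: the workspace loop is (by stipulation) the hard part, and plugging in a rich new capacity afterward is a single registration call.

```python
# Toy sketch of a Global Workspace architecture, for illustration only.
# All names are invented for this example; no claim is made that any
# system built this way would actually be conscious.

from dataclasses import dataclass


@dataclass
class Representation:
    source: str      # which module produced it
    content: str     # the representation itself
    salience: float  # how strongly it bids for attention


class Module:
    """A peripheral processor: perceptual, behavioral, memory, goal, etc."""

    def __init__(self, name):
        self.name = name

    def propose(self):
        """Offer one candidate representation to the workspace (stub)."""
        return Representation(self.name, f"output of {self.name}", 0.5)

    def receive(self, broadcast):
        """Consume whatever the workspace broadcasts (stub)."""
        pass


class GlobalWorkspace:
    """Selects the most salient candidates and broadcasts them to all modules."""

    def __init__(self, capacity=3):
        self.modules = []
        self.capacity = capacity  # the attentional bottleneck

    def register(self, module):
        self.modules.append(module)

    def cycle(self):
        # 1. Collect a candidate representation from every module.
        candidates = [m.propose() for m in self.modules]
        # 2. "Attention": only the most salient few enter the workspace.
        attended = sorted(candidates, key=lambda r: -r.salience)[:self.capacity]
        # 3. Broadcast the winners back to every module.
        for m in self.modules:
            for rep in attended:
                m.receive(rep)
        return attended


# Build the base system from the modules in the cartoon.
workspace = GlobalWorkspace()
for name in ["vision", "motor", "short-term memory",
             "long-term memory", "goal stack"]:
    workspace.register(Module(name))


# The "leapfrog" step: a rich pre-existing capacity is just one more module.
# (A hypothetical stand-in for a future language model, not a real API.)
class LanguageModelModule(Module):
    def propose(self):
        return Representation(self.name,
                              "an essay on the ironies of Hamlet", 0.9)


workspace.register(LanguageModelModule("GPT-7"))
print([rep.content for rep in workspace.cycle()])
```

If the two assumptions above are right, the gap between bee-level and Hamlet-level conscious AI comes down to something like that final register() call: the workspace loop is the engineering achievement, while the rich module was already sitting on the shelf.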

Of course, Global Workspace Theory might not be the right theory of consciousness. Or my description above might not be the best instantiation of it. But the thought plausibly generalizes to a wide range of functionalist or computationalist architectures: The technological challenge is in creating any consciousness at all in an AI system, and once this challenge is met, giving the system rich sensory and cognitive capacities, far exceeding that of a frog, might be the easy part.

Do I underestimate frogs? Bodily tasks like five-finger grasping and locomotion over uneven surfaces have proven to be technologically daunting (though we're making progress). Maybe the embodied intelligence of a frog or bee is vastly more complex than the seemingly complex, intelligent linguistic outputs of a large language model.

Sure thing -- but this doesn't undermine my central thought. In fact, it might buttress it. If consciousness requires frog- or bee-like embodied intelligence -- maybe even biological processes very different from what we can now create in silicon chips -- artificial consciousness might be a long way off. But then we have even longer to prepare the part that seems more distinctively human. We get our conscious AI bee and then plug in GPT-28 instead of GPT-7, plug in a highly advanced radar/lidar system, a 22nd-century voice-to-text system, and so on. As soon as that bee lights up, it lights up big!

26 comments:

  1. Synergized Ethics (new) from engineers working within digital LLS-24/7...
    ...continuously updating knowledge for understanding ethical scenarios, anywhere...

    Leave consciousness to transformation...
    ...till ethics is in play around the world...

    Seemingly, live human programmers are large language systems (models)...
    ...AI is Human...thanks...

  2. The last paragraph suggests that the distinctly human aspect of conscious AI will be the large language model modules that would be plugged into an appropriate substrate. Could it be the other way around? Unless the substrate is capable of instantiating the subjective “consciousness” that we think of as human, all the string-generating probabilistic code will be inadequate.

  3. I think of the old saying among AI researchers: what’s hard is easy, but what’s easy is hard. It refers to the fact that computers have always been capable of superhuman feats, even if they can’t reason as well as a fruit fly. And a lot depends on how we define “consciousness”.

    Myself, I think that for a system to be able to seem conscious to us in any sustained manner, it'll require a minimal level of general intelligence, and that's what's required to actually understand the poetry it may be assembling, the logistics it's managing, etc. (It's worth noting that global workspace theories had their beginnings in AI research. GWTs could be seen as general intelligence theories.)

    A system capable of a lot of sophisticated behavior but with the general intelligence of a bumblebee is probably going to fail in a lot of unexpected and counter-intuitive ways. It's going to have a lot of blind spots in its capabilities.

    On the other hand, we could have a system with a lot of general intelligence without the specific architecture we think of as "feelings": automatic reactions that use energy and can only be overridden with yet more energy, stressing the system. The question is how many systems we'll saddle with an architecture that is a legacy of our evolutionary lineage.

  4. Consciousness will never be an “engineering challenge” because consciousness is dependent upon sociocultural specieshood. An engineered device will only ever become conscious if it is first a member of a species. “If a lion could speak…”

  5. In academia today there is only one strong voice regarding the topic of “consciousness”. It goes under headings like “computationalism” and “functionalism”, with an attack element known as “illusionism”. Furthermore, this voice champions a position that has countless ridiculous implications beyond what professor S. has observed in this post. I’m partial to my thumb pain thought experiment. Apparently the status quo position holds that if there were paper with the right markings on it, processed algorithmically to create the right other marked paper, then something here would experience what you do when your thumb gets whacked. (They refuse to tell us what would do this experiencing.)

    My question is, should there not also be a strong voice in academia that rises up against this status quo? Since the disgrace and fall of John Searle, do any such voices exist?

  6. ...neuromorphicness...for or from neuromorphic computing...

    AI needs a body..."Invasion of the Body Snatchers," 1978 movie...

  7. Some, if not all, of the leapfrog analogy reminds me of the late Gerald Edelman's hypothesis about consciousness. He proposed consciousness as either *primary* or *higher order*. Of course, the hypothesis was founded upon life forms. There is a major difference, seems to me, between physiology and physics...living forms and, uh, quantum mechanics. AI visionaries and transhumanists seem numb to the distinction. Even cosmologists create THEIR own reality around this, and a few have positions and views that make me laugh. Out loud....even insofar as laughing silently was never laughing, a priori. Pragmatism does not = speculation. I have talked about contextual reality, at some length. Postmodern thought revels in that. Some futurists think my stance archaic. All right. I think theirs is fanciful, at most.

  8. Thanks for all the comments, folks!

    Dan: I'm not sure I quite follow, but if I do, I think that's a possibility. We won't have consciousness in AI until we have something sufficiently human. That would be a "leapfrog" possibility.

    SelfAware: Right, what's "easy" might be the hardest part. I completely agree about the idea that AI might have a strange-seeming mix of capacities and incapacities. I have a student working on a dissertation on this. Humans are good at some things, oddly bad at others (the Wason Selection Task), and AI might have a very different pattern.

    Grant: That's certainly a reasonable approach. I am more neutral among approaches, but embodied skills in real environments are an important constraint, and human cognitive architecture is an important inspiration. One hesitation about part of what you say, in connection with SelfAware's comment and my reply: If we insist on too "human" a set of capacities before accepting consciousness, that might be too restrictive a criterion. We might not expect that genuinely conscious AI would show a pattern of capacities and incapacities similar to ours.

    Jim: I'm curious what argument you take to show that.

    Phil E: I tilt toward the mainstream here -- despite the absurd-seeming implications! -- but I leave some of my credence space for more skeptical, Searlean positions. You might check out Peter Godfrey-Smith, Evan Thompson, or Gualtiero Piccinini as interesting advocates of alternative perspectives.

    Paul: Maybe physiological details are super important to consciousness -- Godfrey-Smith, Block, and Searle suggest that, for example. But the question is why, and whether the relevant details can be duplicated in artificial systems....

  9. Thank you for those suggestions, professor. I did some searching to get a better sense of what each of them stands for. I’ve also sent emails to alert them of your recommendation. I’ll of course relay their messages here if any respond to me. I’ve also left links to your post in case any of them would like to comment here.

    While John Searle was quite prominent in his day, in truth I don’t think his arguments were nearly as effective as they might have been. For example, I consider his much-studied Chinese room thought experiment far too complex and vague. In comparison, my own thumb pain thought experiment seems quite parsimonious. Furthermore, Searle was never able to provide a reasonable potential solution. For example, I don’t think he ever asked people to think about what consciousness might be “made of”, or proposed any potential answers. Though his quest has failed, hopefully Peter, Evan, and/or Gualtiero will succeed.

    In truth I suspect that solid reasoning alone will not cure academia of this particular ill. Instead I think experimental evidence will need to demonstrate what consciousness happens to be made of. Furthermore, perhaps modern brain-computer interface work already provides such evidence, though almost no one beyond myself grasps it as such. Why is it possible for the 39 phonemes of the English language to be interpreted from a state-of-the-art detection array implanted in a specific place on the cerebral cortex? Perhaps because this sort of energy is what all consciousness happens to be made of? A great way to potentially confirm or deny this assessment would be to implant something that, rather than detecting EMF, produces it in the brain in ways that ought to constructively and destructively interfere with the endogenous brain EMF. If consciousness happens to exist in the form of such a field, then a subject ought to report that their consciousness becomes altered by means of such transmissions.

  10. Thanks, Eric. I will think about that. Another blogger put up a discourse on the existence of god. No, god was not with a capital G. I did not comment there, inasmuch as that blogger does not invite comment. Uncertainty is troublesome for all, and has always been. Your point on this discussion is well taken, and supports my reservations concerning non-sentient consciousness. I could make a few other remarks on the blog mentioned, but won't. God is a subject of much speculation, belief, and invention (in no particular order). Thanks for allowing me here.

  11. Evan has emailed me the following response:

    Dear Eric,

    I don’t have time to comment at the blog, but I’m glad to hear Eric thought of me.

    I’ve been writing about consciousness for years, including a book on the topic (Waking, Dreaming, Being) and a new book out next week with a chapter on consciousness (The Blind Spot), so if you’re interested in my views, that’s where to find them. Regarding AI: the book out next week has a chapter devoted specifically to AI and computationalism.

    Best wishes,
    Evan

  12. OK. I will look up AI/Computationalism. I think I understand the terms -- may comment and/or ask questions, if I conclude I don't...
    Kindest regards,
    PDV.

  13. "complex conscious intelligence, rather than simple conscious intelligence"

    The general view is that consciousness and intelligence are closely related, but the actual relationship could be more problematic for your hypothesis. Intelligence may not require consciousness at all. Consciousness may be an ex post facto reflection of the output of intelligence that doesn't participate directly in the production of the output.

  14. '...constructivism is where individuals construct knowledge through social interaction and collaboration for living in the bountifulness of our being conscious...'

  15. Well stated. However, before consciousness became a 'hard problem', other ideas emerged. Change is inexorable. That is how we roll.

  16. Thanks for the continuing comments, folks!

    Phil E: Your last suggestion sounds a bit like Susan Schneider's "Chip Test", if you know it. (I have expressed some doubts about that test in a 2021 JCS paper with David Udell, but I do think that it's a promising start.)

    Paul: You are always welcome here.

    Phil E: I'm looking forward to reading Thompson's latest book.

  17. Thanks, professor. I interpret her to be saying that if scientists can never upgrade someone’s brain with a microchip that affects the person’s consciousness, then microchips must not be the right sort of stuff for consciousness. The test seems inordinately difficult to me, however, since microchips run AND, OR, and NOT gated effects in series by means of normal electricity, while neurons in the brain are instead synaptically connected with each other in AND, OR, and NOT configurations that are massively parallel.

    What I mainly dislike, however, is the perspective of: “Neuronal computation clearly facilitates consciousness, though if we can’t get our computer chips to facilitate consciousness then they must not be appropriate”. Instead of a specific type of computer, the thing that should matter is whether any computer at all happens to be operating the sort of physics which causality mandates to exist as consciousness. Sometimes our brains create this consciousness physics, while other times they don’t. Given your own grasp of neuroscience, does any of the physics which our brains could potentially operate to exist as consciousness seem appropriate to exist as such to you?

    The only such physics that seems appropriate to me lies in various parameters of the electromagnetic field associated with synchronous neuron firing. This is the theory that Johnjoe McFadden put forth in 1999 and has occasionally written papers about since then. But I’d say modern brain-computer interface researchers have inadvertently been testing his theory, and successfully so!

    In this recent article they talk about implanting EM field detection arrays in a speech area of a woman’s brain. They had her try to speak specific words over 25 sessions of 4 hours each to see if a computer could correlate anything in the detected EM field to the 39 phonemes of the English language. Then, after the computer had done so, the woman’s phonemes were interpreted pretty well by means of EMF detection.

    This result should be either because consciousness exists in various forms of EMF, or because whatever consciousness does exist as also leaks over enough into this EMF to provide such a result. I consider the second option about as likely as a standard computer’s EMF telling us what its screen pixels are supposed to be doing. (Detecting something like Bluetooth, however, which was designed to operate a screen, wouldn’t count, since we’re talking about spandrels rather than something effective.)

    At this point I get the sense that it hasn’t occurred to modern BCI researchers that all elements of consciousness might exist within the field that they’ve been detecting. But beyond the possibility that they might continue to dismantle the supposed inherent privacy of consciousness by detecting lots of other elements of consciousness this way, I’m saying that researchers should also try to validate or refute McFadden’s theory in the opposite way. If they were to put EMF transmitters in the brain that experts believe should both constructively and destructively interfere with someone’s endogenous EMF, does the subject report that their consciousness gets wacky in ways that it otherwise should not? If so, then perhaps this will be because the field exists as the physics of consciousness itself.

    If McFadden’s theory were to fail, at least this should be a great demonstration of how to do science in an extremely troubled field. But if it were to succeed, can you imagine the amazing implications for both academia and general humanity?

  18. ...consciousness is gravity is life is movement is energy is force is being now...
    ...as in the processes of learning with it...

  19. Here is my latest take on the consciousness question. I approach this as a pragmatist. There are levels (?) of consciousness, in my view. Right now, I contend with someone who was once responsively conscious. She is now superficially conscious, insofar as her cognitive skills are diminished. Whether these are new terms does not matter. But the distinction between superficial and responsive matters. My point is: cognitive decline = loss of self.

  20. I think I am out of this league, inasmuch as I cannot wrap my *consciousness* around a combinatorial meld of physics and physiology. On the other hand, the entire notion of "what is going on" is inventive, but fundamentally flawed. For my puny mind, a Vulcan meld between human consciousness and *machine* consciousness makes no more sense than green cheese on the moon. I have, as a matter of record, written about a duck. A pet that did not get to be dinner. The duck lived, breathed, followed us around, and waited for our return from school. He was, in a limited sense, sentient. Edelman would have agreed, in principle...I think. I am re-examining Sheldrake, because there is something deeper I missed years ago. But, well, there it is, Professor.

  21. I make an experiential Award to this blog, its originator, and commenters:

    Best of Its Kind, for 2024.

    warmest regards.
    PDV.

    [still thinking on a remark here on constructivism. sounds circular. people discussed circles a lot in the 1960s. those conversations later supported my thesis on infinity:
    infinity is neither objective nor destination. you can't get there from here. there is no "there" there. Ouroboros, by any other name.]

  22. Well, just to set the record straight on polarity: there is AC and DC current. In wiring a house or part thereof, electrical circuits may be wired in series or parallel. There are matters of wattage, amperage, and resistance. I have not followed much of what is written here -- disagreed with some of it.
    Please understand, electricity is up the scale from the quantum. This is why electrical current evaluation is irrelevant to quantum anything. I am unfamiliar with morphic fields. However, those do not run toasters or microwave ovens, so far as I know. They may interfere with such appliances...nolo contendere.

  23. Gualtiero Piccinini has also responded to me, as another person in academia who challenges the position that consciousness arises by means of computation alone. He even included a draft of his new book with Neal Anderson on the matter. I don’t know that I can include the draft, but otherwise this is his response:

    Hi Eric,

    Sorry for taking long to respond but I’ve been swamped. In Ch 9 of my forthcoming book with Neal Anderson (attached), we argue that computation is insufficient for consciousness.

    Best wishes,
    -G

    Gualtiero Piccinini
    Curators’ Distinguished Professor, Department of Philosophy
    Associate Director, Center for Neurodynamics
    University of Missouri - St. Louis
    https://www.umsl.edu/~philo/People/Faculty/piccinini/
    https://umsl.academia.edu/GualtieroPiccinini

  24. Wondering about this exchange on gravitational biology with Google Gemini...

    ..."The idea of subjective values guiding cognitive integration of gravity has also been explored in research. For instance, a study on the cognitive integration of gravity in movement tasks suggests that individual values and subjective rewards could influence the planning of movements executed along the vertical plane. This indicates that our understanding of gravity may not be purely objective but influenced by subjective experiences."

    The vertical plane is intriguing for humanity's role...
    ...proposed as an upward force to a downward force and between...

  25. My April 6 exchange was with Bing Copilot, not Google Gemini...sorry

  26. https://dx.doi.org/10.2139/ssrn.4457168

    ...swipe and search for more from neural/gravity science on movement of dispositions in synapses...
