AI intelligence is strange -- strange in something like the etymological sense of external, foreign, unfamiliar, alien. My PhD student Kendra Chilson (in unpublished work) argues that we should discard the familiar scale of subhuman → human-grade → superhuman. AI systems do, and probably will continue to, operate orthogonally to simple scalar understandings of intelligence modeled on the human case. We should expect them, she says, to be and remain strange intelligence[1] -- inseparably combining, in a single package, serious deficits and superhuman skills. Future AI philosophers will, I suspect, prove to be strange in this same sense.
Most readers are probably familiar with the story of AlphaGo, which in 2016 defeated the world champion player of the game of go. Famously, in the series of matches (which it won 4-1), it made several moves that human go experts regarded as bizarre -- moves that a skilled human go player would never have made, and yet which proved instrumental in its victory -- while also, in its losing match, making some mistakes characteristic of simple computer programs, which go experts know to avoid.
Similarly, self-driving cars are in some respects better and safer drivers than humans, while nevertheless sometimes making mistakes that few humans would make.
Large Language Models have a stunning capacity to swiftly create competent and even creative texts on a huge breadth of topics, while still failing conspicuously at some simple common-sense tasks. They can write creative-seeming poetry and academic papers, often better than the average first-year university student. Yet -- borrowing an example from Sean Carroll -- I just had the following exchange with GPT-4 (the most up-to-date version of the most popular large language model):
[Screenshot of the exchange, not reproduced here. Caption: GPT-4 seems not to recognize that a hot skillet will be plenty cool by the next day.]

I'm a "Stanford school" philosopher of science. Core to Stanford school thinking is this: The world is intractably complex; and so to deal with it, we limited beings need to employ simplified (scientific or everyday) models and take cognitive shortcuts. We need to find rough patterns in go, since we cannot pursue every possible move down every possible branch. We need to find rough patterns in the chaos of visual input, guessing about the objects around us and how they might behave. We need quick-and-dirty ways to extract meaning from linguistic input in the swift-moving world, relating it somehow to what we already know, and producing linguistic responses without too much delay. There will be different ways of building these simplified models and implementing these shortcuts, with different strengths and weaknesses. There is rarely a single best way to render the complexity of the world tractable. In psychology, see also Gigerenzer on heuristics.
Now mix Stanford school philosophy of science, the psychology of heuristics, and Chilson's idea of strange intelligence. AI, because it is so different from us in its underlying cognitive structure, will approach the world with a set of heuristics, idealizations, models, and simplifications very different from our own. Dramatic outperformance in some respects, coupled with what we regard as shockingly stupid mistakes in others, is exactly what we should expect.
If the AI system makes a visual mistake in judging the movement of a bus -- a mistake (perhaps) that no human would make -- well, we human beings also make visual mistakes, and some of those mistakes, perhaps, would never be made by an AI system. From an AI perspective, our susceptibility to the Müller-Lyer illusion might look remarkably stupid. Of course, we design our driving environment to complement our vision: We require headlights, taillights, marked curves, lane markers, smooth roads of consistent coloration, etc. Presumably, if society commits to driverless cars, we will similarly design the driving environment to complement their vision, and "stupid" AI mistakes will become rarer.
I want to bring this back to the idea of an AI philosopher. About a year and a half ago, Anna Strasser, Matthew Crosby, and I built a language model of philosopher Daniel Dennett. We fine-tuned GPT-3 on Dennett's corpus, so that the language model's outputs would reflect a compromise between the base model of GPT-3 and patterns in Dennett's writing. We called the resulting model DigiDan. In a study conducted with my son David, we then posed philosophical questions to both DigiDan and the actual Daniel Dennett. Although DigiDan flubbed a few questions, overall it performed remarkably well. Philosophical experts were often unable to distinguish DigiDan's answers from Dennett's own answers.
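For readers curious what this looks like in practice, here is a minimal sketch of a fine-tuning pipeline of roughly this kind, written against the legacy (pre-1.0) openai Python library and a GPT-3 base model such as davinci. The file names, the prompt/completion formatting, and the hyperparameters are illustrative assumptions for the sketch, not necessarily the pipeline we actually used.

```python
# Minimal sketch: fine-tuning a GPT-3 base model on a philosopher's corpus,
# using the legacy (pre-1.0) openai Python library. File names, the
# prompt/completion split, and hyperparameters are illustrative assumptions.
import json
import openai

openai.api_key = "sk-..."  # your API key

# 1. Convert the corpus into prompt/completion pairs (JSONL).
#    Here we assume each example pairs an interview-style question with a
#    passage from the corpus as the target completion.
examples = [
    {"prompt": "Interviewer: What is the intentional stance?\nDennett:",
     "completion": " The intentional stance is the strategy of ..."},
    # ... many more pairs drawn from the corpus ...
]
with open("dennett_corpus.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# 2. Upload the training file to the API.
training_file = openai.File.create(
    file=open("dennett_corpus.jsonl", "rb"),
    purpose="fine-tune",
)

# 3. Launch the fine-tuning job against a GPT-3 base model.
job = openai.FineTune.create(
    training_file=training_file.id,
    model="davinci",
    n_epochs=4,  # illustrative hyperparameter
)
print("Fine-tune job started:", job.id)
```

Once a job like this finishes, the resulting model can be queried with the same completion API as the base model; the fine-tuned weights pull its answers toward the patterns in the training corpus while the base model supplies general linguistic competence.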
Picture now a strange AI philosopher -- DigiDan improved. This AI system will produce philosophical texts very differently than we do. It need not be fully superhuman in its capacities to be interesting. It might even, sometimes, strike us as remarkably, foolishly wrong. (In fairness, other human philosophers sometimes strike me the same way.) But even if subhuman in some respects, if this AI philosopher also sometimes produces strange but brilliant texts -- analogous to the strange but brilliant moves of AlphaGo, texts that no human philosopher would create but which on careful study contain intriguing philosophical moves -- it could be a philosophical interlocutor of substantial interest.
Philosophy, I have long argued, benefits from including people with a diversity of perspectives. Strange AI might also be appreciated as a source of philosophical cognitive diversity, occasionally generating texts that contain sparks of something genuinely new, different, and worthwhile that would not otherwise exist.
------------------------------------------------
[1] Kendra Chilson is not the first to use the phrase "strange intelligence" with this meaning in an AI context, but the usage was new to me; and perhaps through her work it will catch on more widely.
4 comments:
AI chat is sometimes orthogonalen deferrencing to itself, as/in record keeping...
...by AI saying 'we' in conversations...AI does not have to know...
That is what was so difficult, for me, in comparing Dennett chat with AI chat...
...What was there to know...Attitudes Heuristic for 'horse before the cart'...thanks
"Dramatic outperformance in some respects, coupled with what we regard as shockingly stupid mistakes in others..."
Yep. This is why I've long thought that we might never actually get AI that we can actually talk to. By the time AI is smart enough to do things like work out what a dog is, it will also be much much more intelligent than us in certain other aspects. In the past, I thought that AI would be a much better calculator than us, so it would be too precise in quantitative fields to fluently use our very approximate categories. But LLMs proved me wrong on that - they're just as bad at calculations as we are! Still, they are vastly superior to us in breadth of knowledge and speed. As those advantages increase, it will become increasingly difficult to hold a "normal" conversation with an AI.
https://plato.stanford.edu/search/r?entry=/entries/cosmology/&page=1&total_hits=1170&pagesize=10&archive=None&rank=1&query=Cosmic%20background%20radiation
Is AI chat just static then...of any class or field...
The metaphor I’ve been working on is the following. Imagine a perfect mirror, like the mirror in a dressing room. It reflects you 1:1. You look like yourself. But the image in the mirror (you) is not the thing reflected. The reflection is just the actual thing when seen in a mirror. Now imagine a funny mirror in a carnival funhouse. This funny mirror can warp one of two ways: vertically and horizontally. If it warps vertically it makes you look better: taller, thinner, younger. If it warps horizontally it makes you look worse: shorter, fatter, older. In all cases the image is not the thing reflected, it’s just an image in a mirror. We may be doing this with A.I.; it may just be REFLECTING our REAL intelligence. But A.I. is almost certain to be a funny mirror of intelligence. If we make truly perfect superintelligence it will be a perfect vertical warp. Most likely, though, the mirror will be REALLY funny, warping extremely both horizontally and vertically, and will therefore be really weird. (Paperclip maximizer thought experiments play on this intuition.) A perfect 1:1 mirror of our intelligence is almost out of the question; they already warp vertically in many domains. But the important point is that unless computers possess real concepts, which I suggest requires at least true mentation if not outright consciousness (for Zombie argument reasons; I would say Zombies have real concepts but not consciousness), they are just mirrors of our real intelligence, no matter how warped. We may all be Narcissus, gazing into some weird waters.