Our main idea, condensed to 1000 words:
On a linear model of intelligence, entities can be roughly linearly ordered in overall intelligence: frogs are smarter than nematodes, cats smarter than frogs, apes smarter than cats, and humans smarter than apes. This same linear model is often assumed when discussing AI systems: "Narrow AI" systems (like chess machines and autonomous vehicles) are assumed to be subhuman in intelligence; at some point -- maybe soon -- AI systems will reach approximately human-level intelligence; and in the future we might expect superintelligent AI that exceeds our intellectual capacity in virtually all domains of interest.
Building on the work of Susan Schneider, we challenge this linear model of intelligence. Central to our project is the concept of general intelligence as the ability to use information to achieve a wide range of goals in a wide variety of environments.
Of course even the simplest entity capable of using information to achieve goals can succeed in some environments, and no finite entity could achieve all possible goals in all possible environments. "General intelligence" is therefore a matter of degree. Moreover, general intelligence is a massively multidimensional matter of degree: There are many, many possible goals and many, many possible environments, and there is no non-arbitrary way to taxonomize and weight all these goals and environments into a single linear scale or definitive threshold.
Every entity is in important respects narrow: Humans, too, can achieve their goals in only a very limited range of environments. Interstellar space, the deep sea, the Earth's crust, the middle of the sky, the center of a star -- transposition to any of these places will quickly defeat almost all our plans. We depend for our successful functioning on a very specific context. So, of course, do all animals and all AI systems.
Similarly, although humans are good at a certain range of tasks, we cannot detect electrical fields in the water, dodge softballs while hovering in place, communicate with dolphins by echolocation, or calculate a hundred digits of pi in our heads. If we put a server running a language model in the desert without a power source, or place an autonomous vehicle in a chess tournament, and then interpret their incompetence as a lack of general intelligence, we risk being as unfair to them as a dolphin would be if it blamed us for our poor skills in its environment. Yes, there's a perfectly reasonable sense in which chess machines and autonomous vehicles have much more limited capacities than humans do. They are narrow in their abilities compared to us by almost any plausible metric of narrowness. But it is anthropocentric to insist that general intelligence requires generally successful performance on the tasks and in the environments that we humans tend to favor, given that those tasks and environments are such a small subset of the possible tasks and environments an entity could face. And any attempt to escape anthropocentrism by creating an unbiased and properly weighted taxonomy of task types and environments is either hopeless or liable to generate a variety of very different but equally plausible arbitrary composites.
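To make the worry about "equally plausible arbitrary composites" concrete, here is a minimal toy sketch (not from the full draft): it scores a few entities on a handful of invented task dimensions and then collapses each profile into a single "overall intelligence" number under two different but equally defensible weighting schemes. All dimensions, scores, and weights are made up for illustration; the point is only that the ranking flips depending on which weighting one happens to prefer.

```python
# Toy sketch: capability profiles on five invented task dimensions
# (chess, driving, echolocation, everyday scene parsing, rapid arithmetic),
# scored 0-1. All numbers are made up purely for illustration.
profiles = {
    "human":        [0.4, 0.9, 0.0, 0.95, 0.3],
    "chess engine": [1.0, 0.0, 0.0, 0.0,  0.9],
    "dolphin":      [0.0, 0.0, 1.0, 0.5,  0.0],
}

# Two weighting schemes over the same dimensions. Neither is "the right one";
# each just reflects whose tasks and environments we decide to count most.
weightings = {
    "human-centric": [0.1, 0.3, 0.0, 0.4, 0.2],
    "sonar-centric": [0.0, 0.0, 0.6, 0.3, 0.1],
}

for name, weights in weightings.items():
    # Collapse each multidimensional profile into a single scalar score.
    scores = {
        entity: round(sum(w * s for w, s in zip(weights, skills)), 2)
        for entity, skills in profiles.items()
    }
    ranking = sorted(scores, key=scores.get, reverse=True)
    print(f"{name}: ranking = {ranking}, scores = {scores}")

# The human-centric weighting ranks human > chess engine > dolphin,
# while the sonar-centric weighting ranks dolphin > human > chess engine.
```

Nothing in the arithmetic privileges one weighting over the other; the choice of weights is exactly where anthropocentrism (or dolphin-centrism) sneaks in.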
AI systems, like nonhuman animals and neuroatypical people, can combine skills and deficits in patterns that are unfamiliar to those who have attended mostly to typical human cases. AI systems are highly unlikely to replicate every human capacity, due to limits in data and optimization, as well as a fundamentally different underlying architecture. They struggle to do many things that ordinary humans do effortlessly, such as reliably interpreting everyday visual scenes and performing feats of manual dexterity. But the reverse is also true: Humans cannot perform some feats that machines perform in a fraction of a second. If we think of intelligence as irreducibly multidimensional instead of linear -- as always relativized to the immense number of possible goals and environments -- we can avoid the temptation to try to reach a scalar judgment about which type of entity is actually smarter and by how much.
We might think of typical human intelligence as "familiar intelligence" -- familiar to us, that is -- and artificial intelligence as "strange intelligence". This terminology wears its anthropocentrism on its sleeve, rather than masking it under false objectivity. Something possesses familiar intelligence to the degree it thinks like us. It is a similarity relation. How familiar an intelligence is depends on several factors. Some are architectural: What forms does the basic cognitive processing take? What shortcuts and heuristics does it rely on? How serial or parallel is it? How fast? With what sorts of redundancy, modularity, and self-monitoring for errors? Others are learned and cultural: learned habits, particular cultural practices, acquired skills, chosen effort based on perceived costs and benefits. An intelligence is outwardly familiar if it acts like us in intelligence-based tasks. And it is inwardly familiar if it does so by the same underlying cognitive mechanisms.
Familiarity is also a matter of degree: The intelligence of dogs is more familiar to us (in most respects) than that of octopuses. Although we share some common features with octopuses, they evolved in a very different environment and have very dissimilar cognitive architecture as a result. It's hard for us even to understand their goals, because their existence is so different. Still, as distant as our minds are from those of octopuses, we share with them the broadly familiar lifeways of embodied animals who need to navigate the natural world, find food, and mate.
AI constitutes an even stranger form of intelligence. With architectures, environments, and goals so fundamentally unlike ours, AI is the strangest intelligence we have yet encountered. AI is not a biological organism; it was not shaped by the evolutionary pressures shared by every living being on Earth, and it does not have the same underlying needs. It is based on an inorganic substrate totally unlike all biological neurophysiology. Its goals are imposed by its makers rather than being autopoietic. Such intelligence should be expected to behave in ways radically different from familiar minds. This raises an epistemic challenge: Understanding and measuring strange intelligence may be extremely difficult for us. Plausibly, the stranger an intelligence is from our perspective, the more likely we are to fail to appreciate what it is up to. Strange intelligences rely on methods alien to our cognition.
If intelligence were linear and one-dimensional, then a single example of an egregious mistake by an AI -- a mistake a human would never make, like confusing a strawberry with a toy poodle -- would be enough to show that such systems are nowhere near our level of intelligence. However, since intelligence is massively multidimensional, all such cases show on their own is that these systems have certain lacunae or blind spots. Of course, we humans also have lacunae and blind spots -- just consider optical illusions. Our susceptibility to optical illusions is not used as evidence of our lack of general intelligence, however ridiculous our mistakes might seem to any entity not subject to those same illusions.
Full draft here.
