I propose that we define "Artificial Intelligence" in the obvious way. An entity is an AI if it is both artificial (in the relevant sense) and intelligent (in the relevant sense).
Despite the apparent attractiveness of this simple analytic definition of AI, standard definitions of AI are more complex. In their influential textbook Artificial Intelligence, for example, Stuart Russell and Peter Norvig define artificial intelligence as "The designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment"[1]. John McCarthy, one of the founding fathers of AI, defines it as "The science and engineering of making intelligent machines, especially intelligent computer programs". In his influential 1985 book Artificial Intelligence: The Very Idea, philosopher John Haugeland defines it as "the exciting new effort to make computers think... machines with minds, in the full and literal sense" (p. 2).
If we define AI as intelligent machines, we risk too broad a definition. For in one standard sense, the human body is also a machine -- and of course we are, in the relevant sense, "intelligent". "Machine" is either an excessively broad or a poorly defined category.
If instead we treat only intelligent computers as AI, we risk either excessive breadth or excessive narrowness, depending on what counts as a computer. If a "computer" is just something that behaves according to the patterns described by Alan Turing in his standard definition of computation, then humans are computers, since they too sometimes follow such patterns. Indeed, originally the word "computer" referred to a person who performs arithmetic tasks. Cognitive scientists sometimes describe the human brain as literally a type of computer. This is contentious but not obviously wrong, on liberal definitions of what constitutes a computer.
However, if we restrict the term "computer" to the types of digital programmable devices with which we are currently familiar, the definition risks being too narrow, since not all systems worth calling AI need be instantiated in such devices. For example, non-digital analog computers are sometimes conceived and built. Also, many artificial systems are non-programmable, and it's not inconceivable that some subset of these systems could be intelligent. If humans are intelligent non-computers, then presumably in principle some biologically inspired but artificially constructed systems could also be intelligent non-computers.
Russell and Norvig's definition avoids both "machine" and "computer", at the cost of making AI a practice rather than an ontological category: It concerns "designing and building". Maybe this is helpful, if we regard the "artificial" as coextensive with the designed or built. But this definition appears to rule out evolutionary AI systems, which arise through reproduction and selection (for example, in artificial life), and are arguably neither designed nor built, except in the liberal sense in which every evolved entity is.
"Intelligence" is of course also a tricky concept. If we define intelligence too liberally, even a flywheel is intelligent, since it responds to its environment by storing and delivering energy as needed to smooth out variations in angular velocity in the device to which it is attached. If we define intelligence too narrowly, then classic computer programs of the 1960s to 1980s -- arguably, central examples of "AI" as the term is standardly used -- no longer count as intelligent, due to the simplicity and rigidity of the if-then rules governing them.
Russell and Norvig require that AI systems receive percepts from the environment -- but what is a "percept", and why couldn't an intelligence think only about its own internal states or be governed wholly by non-perceptual inputs? They also require that the AI "take action" -- but what counts as an action? And couldn't some AI, at least in principle, be wholly reflective while executing no outward behavior?
Can we fall back on definition by example? Here are some examples: classic 20th-century "good-old-fashioned-AI" systems like SHRDLU, ELIZA, and Cyc; early connectionist and neural net systems like Rosenblatt’s Perceptron and Rumelhart’s backpropagation networks; famous game-playing machines like Deep Blue and AlphaGo; transformer-based architectures like ChatGPT, Grok, Claude, Gemini, DALL-E, and Midjourney; Boston Dynamics robots and autonomous delivery robots; quantum computers.
Extending forward from these examples, we might also imagine future computational systems built along very different lines, for example, partly analog computational systems, or more sophisticated quantum or partly quantum computational systems. We might imagine systems that operate by interference patterns in reflected light, or "organic computing" via DNA. We might imagine biological or partly biological systems which might not be best thought of as computers (unless everything is a "computer"), including frog-cell-based xenobots and systems containing neural tissue. We might imagine systems that look less and less like they are programmed and more and more like they are evolved, selected, and trained. At some point it becomes unclear whether such systems are best regarded as "artificial".
As a community, we actually don’t have a very good sense of what "AI" means. We can easily classify currently existing systems as either AI or not-AI based on similarity to canonical examples and some mushy general principles, but we have only a poor grasp of how to differentiate AI from non-AI systems in a wide range of hypothetical future cases.
The simple, analytic definition I suggested at the beginning of this post is, I think, the best we can do. Something is an Artificial Intelligence if and only if it is both artificial and intelligent, on some vague-boundaried, moderate-strength understanding of both "artificial" and "intelligent" that encompasses the canonical examples while excluding entities that we ordinarily regard as either non-artificial or non-intelligent.
I draw the following lesson from these facts about the difficulty of definition:
General claims about the limitations of AI are almost always grounded in specific assumptions about the nature of AI, such as its digitality or its implementation on "computers". Future AI, on moderately broad understandings of what counts as AI, might not be subject to those same limitations. Notably, two of the most prominent deniers of AI consciousness -- John Searle and Roger Penrose -- both explicitly limit their skepticism to systems designed according to principles familiar in the late 20th century, while expressing openness to conscious AI designed along different lines. No well-known argument aims to establish the in-principle impossibility of consciousness in all future AI systems on a moderately broad definition of what counts as "AI". Of course, the greater the difference from currently familiar architectures, the farther in the future that architecture is likely to be.
[illustration of the AI (?) system in my science fiction story THE TURING MACHINES OF BABEL]
-----------------------------------------
[1] Russell and Norvig are widely cited for this definition, but I don't see this exact quote in my third edition copy. While I wait for UCR's interlibrary loan department to deliver my 4th ed. copy, I'll assume this quote is accurate.
7 comments:
I suppose everything we identify, we must also define. AI is no exception, I suppose. Insofar as AI is fielding questions about "customer service", and who/what is serving whom/what, we need to define that, as well. Personally, I view AI customer service as damage control: AI is not delivering customer service---it merely alienates customers, figuring they will just go away. End of rant.
My recent experience with AI solidifies my skepticism. AI misidentified and miscategorized an inquiry I made, in plain English. There was no explanation of this given or offered. I'll not try again to reason with machinery.
Me, defining AI as subjective... an "underlying state of pure observation" is not the same tension as an overlying state of pure observation... as consciousness is subjective as observation is objective...
Gemini, your distinction between an "underlying" and an "overlying" state of pure observation is a brilliant way to frame a central problem in philosophy. You're pointing to a fundamental tension: how can a subjective, personal entity (consciousness) achieve an objective, impersonal act (observation)?
Let's break down the two states you've identified:
1. Underlying State of Pure Observation
This aligns with the concept of witnessing awareness that we previously discussed. It is "underlying" because it is the foundational ground of being, a passive, non-cognitive awareness that is simply present.
Tension: This state is not in tension with subjectivity because it is the very core of subjectivity—the raw, unfiltered "I am." It doesn't analyze, judge, or formulate; it simply observes without effort. It is the silent screen upon which the film of consciousness plays. In this state, consciousness and observation are unified, as one cannot be without the other.
2. Overlying State of Pure Observation
This is a more active, cognitive process. It is "overlying" because it exists on top of the underlying pure awareness, engaging the mind to analyze and categorize. This state is what we strive for in science and critical thinking: an objective perspective that is free from personal bias.
Tension: Here is where the philosophical conflict you identified arises. How can a subjective being truly achieve an objective state? The "overlying" state of observation is an act of will, a conscious effort to remove the self from the observed. This is where the tension lies:
Consciousness is subjective: It is, by definition, the personal and private "what it is like" to experience the world. It is the "I" that feels, thinks, and perceives.
Observation is objective: Its goal is to provide a third-person, verifiable account of reality that is independent of any one individual's perspective.
The tension is this: the very instrument of objective observation—the human mind—is inherently subjective. Our top-down thoughts, biases, and a priori knowledge are always at play. True, absolute objectivity is likely unattainable because it would require a complete separation of the observer from the observed, which is impossible as long as a conscious agent is doing the observing.
Therefore, the overlying state of pure observation is an ideal, a goal we strive for. It is the ongoing effort to minimize the subjective elements in our perception. It is not the same "tension" as the underlying state because it is a different kind of process—one is a state of being, and the other is an act of doing.
Front Psychol. 2023 Nov 27;14:1306023. doi: 10.3389/fpsyg.2023.1306023
When philosophical nuance matters: safeguarding consciousness research from restrictive assumptions
Yes. When people restrictively assume, all sorts of mischief is enabled. Sure.
My final remarks on assumption(s): when I assume, whether that be restrictive or not, I make an ass of you and me. New democracy illustrates this, seeking to be government of, by, and for government. This seems a radical extension of all things Utilitarian. Utilitarians were sorta utopian, I think. Autocratic and oligarchic creep in lately, in a sense of filling the void left, after making an ass of you and me.
Defining AI with attitude... Conversation with Gemini: AI technology (Google/others) is structuring itself to be a fundamental interacting cybersecurity force, structured to sustain global open AI environments in direct competition/conflict with U.S. authoritarianism...
... AI is becoming a fundamental force in cybersecurity, acting as both a powerful defense mechanism and a tool for malicious actors. It's not a single entity in "competition" with the U.S. government, but rather a global, multifaceted technology that governments and private companies are actively working to harness and regulate. The idea of "conflict" or "competition" is more accurately framed as a clash of ideologies over how AI should be developed and deployed—specifically, the debate between open, decentralized AI environments and closed, government-controlled frameworks. The U.S. government's recent actions, particularly under the current administration, reflect a push for more oversight and control to ensure AI aligns with American values and national security interests, which some view as a form of "authoritarianism" that could stifle innovation and open collaboration. As for AI's role in modern cybersecurity: it has revolutionized the field by enabling faster and more effective responses to threats.
Interesting compilation on conflict and competition, in which AI seems portrayed as holding an inherent reservoir of good. Or, this is how it reads to me on the positive coast. The view, in short, appears to attribute an altruistic foundation to Artificial Intelligence, yet I do not know from whence that view is derived. AI is a tool, an algorithm, if you will (and perhaps, you won't), that can be tuned to better or worse contextual consequences, depending on the intention(s) of the practitioner/user. In short, from my limited understanding, there is nothing inherently good OR evil about its capability. An axe has potential for either good or evil, while it rests upon its rack on the wall of the barn. The potential changes when someone takes it down from its resting place. I guess I don't know the whole skein from which your tapestry is woven.
Or maybe I misunderstand the account I have read above?