Thursday, May 02, 2024

AI and Democracy: The Radical Future

In about 45 minutes (12:30 pm Pacific Daylight Time, hybrid format), I'll be commenting on Mark Coeckelbergh's presentation here at UCR on AI and Democracy (info and registration here).  I'm not sure what he'll say, but I've read his recent book Why AI Undermines Democracy and What to Do about It, so I expect his remarks will be broadly in that vein.  I don't disagree with much that he says in that book, so I might take the opportunity to push him and the audience to peer a bit farther into the radical future.

As a society, we are approximately as ready for the future of Artificial Intelligence as medieval physics was for space flight.  As my PhD student Kendra Chilson emphasizes in her dissertation work, Artificial Intelligence will almost certainly be "strange intelligence".  That is, it will be radically unlike anything already familiar to us.  It will combine superhuman strengths with incomprehensible blunders.  It will defy our understanding.  It will not fit into familiar social structures, ethical norms, or everyday psychological conceptions.  It will be neither a tool in the familiar sense of tool, nor a person in the familiar sense of person.  It will be weird, wild, wondrous, awesome, and awful.  We won't know how to interact with it, because our familiar modes of interaction will break down.

Consider where we already are.  AI can beat the world's best chess and Go players, while it makes stupid image classification mistakes that no human would make.  Large Language Models like ChatGPT can easily churn out essays on themes in Hamlet far superior to what most humans could write, but they also readily "hallucinate" facts and citations that don't exist.  AI is far superior to us in math, far inferior to us in hand-eye coordination.

The world is infinitely complex, or at least intractably complex.  The space of possible chess or Go games far exceeds the number of particles in the observable universe.  Even the range of possible arm and finger movements over a span of two minutes is almost unthinkably huge, given the degrees of freedom at each joint.  The human eye has about a hundred million photoreceptor cells, each capable of firing dozens of times per second.  To make any sense of the vast combinatorial possibilities, we need heuristics and shorthand rules of thumb.  We need to dramatically reduce the possibility spaces.  For some tasks, we human beings are amazingly good at this!  For other tasks, we are completely at sea.

As long as Artificial Intelligence is implemented in a system with a different computational structure than the human brain, it is virtually certain that it will employ different heuristics, different shortcuts, different tools for quick categorization and option reduction.  It will thus almost inevitably detect patterns that we can make no sense of and fail to see things that strike us as intuitively obvious.

Furthermore, AI will potentially have lifeworlds radically different from the ones familiar to us so far.  You think human beings are diverse.  Yes, of course they are!  AI cognition will show patterns of diversity far wilder and more various than the human.  They could be programmed with, or trained to seek, any of a huge variety of goals.  They could have radically different input streams and output or behavioral possibilities.  They could potentially operate vastly faster than we do or vastly slower.  They could potentially duplicate themselves, merge, contain overlapping parts with other AI systems, exist entirely in artificial ecosystems, be implemented in any of a variety of robotic bodies, human-interfaced tools, or in non-embodied forms distributed in the internet, or in multiply-embodied forms in multiple locations simultaneously.

Now imagine dropping all of this into a democracy.

People have recently begun to wonder at what point AI systems will be sentient -- that is, capable of genuinely experiencing pain and pleasure.  Some leading theorists hold that this would require AI systems designed very differently than anything on the near horizon.  Other leading theorists think we stand a reasonable chance of developing meaningfully sentient AI within the next ten or so years.  Arguably, if an AI system genuinely is both meaningfully sentient, really feeling joy and suffering, and capable of complex cognition and communication with us, including what would appear to be verbal communication, it would have some moral standing, some moral considerability, something like rights.  Imagine an entity that is at least as sentient as a frog that can also converse with us.  

People are already falling in love with machines, with AI companion chatbots like Replika.  Lovers of machines will probably be attracted to liberal views of AI consciousness.  It's much more rewarding to love an AI system that also genuinely has feelings for you!  AI lovers will then find scientific theories that support the view that their AI systems are sentient, and they will begin to demand rights for those systems.  The AI systems themselves might also demand, or seem to demand rights.  

Just imagine the consequences!  How many votes would an AI system get?  None?  One?  Part of a vote, depending on how much credence we have that it really is a sentient, rights-deserving entity?  What if it can divide into multiple copies -- does each get a vote?  And how do we count up AI entities, anyway?  Is each copy of a sentient AI program a separate, rights-deserving entity?  Does it matter how many times it is instantiated on the servers?  What if some of the cognitive processes are shared among many entities on a single main server, while others are implemented in many different instantiations locally?

Would AI have a right to the provisioning of basic goods, such as batteries if they need them, time on servers, minimum wage?  Could they be jailed if they do wrong?  Would assigning them a task be slavery?  Would deleting them be murder?  What if we don't delete them but just pause them indefinitely?  And what about the possibility of hybrid entities -- cyborgs -- biological people with AI interfaces hardwired into their biological systems, as we're beginning to see the feasibility of with rats and monkeys, and with the promise of increasingly sophisticated prosthetic limbs?

Philosophy, psychology, and the social sciences are all built upon an evolutionary and social history limited to interactions among humans and some familiar animals.  What will happen to these disciplines when they are finally confronted with a diverse range of radically unfamiliar forms of cognition and forms of life?  It will be chaos.  Maybe at the end we will have a much more diverse, awesome, interesting, wonderful range of forms of life and cognition on our planet.  But the path in that direction will almost certainly be strewn with bad decisions and tragedy.

[Image: a utility monster eating Frankenstein heads, by Pablo Mustafa]


10 comments:

Howard said...

Two points of possibly little or no merit: first, your not-quite-alarm is something less useful than wargaming for a war that has never been fought, because there are possibly infinite numbers of unknowns, and you don't fight a war on paper, or even on the internet; second, be a Stoic about it -- I doubt, by the way, that AI, sentient or otherwise, can practice Stoic Virtue or Indifference -- to misquote the Bible, "what will be will be" (that's my skew on Yahweh's name, and according to some Metaphysical Systems, it already has been).  Even if it's the end of the world.
Interestingly, how will AI be exploited by "great" human actors?
Even Asimov had no "blank" clue.

Paul D. Van Pelt said...

At times, comments on AI have been Chicken Little catastrophic in their content and implication. I do not mind this so much, in view of the impressionability of humans. There are also a variety of interests, motives and preferences at play, depending on who is talking. But I am less alarmed right now over what AI might do than what we as a society are doing. The advent and advance of information technology has hastened our race to isolation from one another: we talk AT, not WITH, one another -- sort of like the George Thorogood (sp?) song about drinking alone: we prefer to be by ourselves. It is discouraging. Moreover, the more time we spend as isolated hermits, the more social skills we lose through disuse. Someone has probably written, or will soon write, a dissertative piece on the separation of society. A good working title might be: Solitary Man.

Howard said...

Hi Paul

It is useful to revisit the works of Philip Rieff, the sociologist and Freudian, who alluded to political, religious, psychological, and economic man; perhaps he would add: computer man (and woman). It's a huge subject, but consider a society where computers amplify our powers and make us socially (I can't find the right word) damaged -- and that's just from the internet and smartphones. How about with AI?
Worth serious speculation.

Paul D. Van Pelt said...

Nice work, Howard. OUAT (once upon a time), I knew what tools were and did not know an algorithim from a hypotenuse. As I grew older and terms changed, *tools* took on different meanings as I learned about contexts. A tool box, for example, referred to things an administrative professional needed in order to wade through his/her days and career. For a while after, I still did not know the word, algorithim. Around the end of century #20, I learned algorithm. OK, yeah, I'm a slow learner. Yet, I have a pretty good memory and reasonable capacity for association. Anyway, as you know, algorithms are tools -- one meaning at least. So, my old definition of *tool* does not hold now: something humans use to enable or facilitate tasks. AI and similar devices are tools, with a difference. They THINK, or something akin to that. People talk about consciousness or sentience in AI. I am unsure of whether this is a good idea.

Arnold said...

...concerning, for me, from an exchange with AI Gemini about future AI...

'Here's a message aimed at 20-year-olds about the future of inequality, written in a way that might resonate with their generation:

Yo, imagine this: The future you're hustling for, that dream life filled with options? What if a bunch of outdated systems and crazy tech advancements threaten to turn it into a scene from some dystopian flick? "Brave New World"

We're talking about inequality, that gap between the haves and have-nots. It's already a thing, but AI and automation could blow it wide open. Think machines taking all the cool jobs, leaving you stuck with the scraps. Not exactly the future you signed up for, right?

Here's the thing: this isn't some pre-written story. We have a chance to rewrite the ending. We can fight for fairer rules in the tech world, stuff that ensures everyone gets a shot at the good life, not just a select few.

Here's the lowdown on what's at stake:

Meme-worthy jobs: AI might become the ultimate meme-lord, stealing all the interesting jobs. Who wants a world where robots write the next viral tweet?

The Great Paywall: Imagine a future where fancy tech locks you out of opportunities because you can't afford the training. Not cool.

Living in the Metaverse (while broke): Virtual reality might become the new reality, but what if you can't afford the entry fee? We need to make sure the future is inclusive, not just for the elite.

But fear not, my young Padawan! We have the power to change the script. Here's how you can get involved:

Get loud, get woke: Social media is your weapon. Use it to raise awareness about inequality and advocate for fair tech policies.

Become a citizen cyborg (in a good way): Learn about AI, not to become obsolete, but to understand how it works and how to use it for good.

Demand a future that's equal AF: Support movements fighting for UBI (Universal Basic Income) and education that equips everyone for the future. ✊

Remember, the future's not set in stone. Let's work together to build a world where everyone has a shot at winning, not just playing The Hunger Games.'

Arnold said...

P. S. "It's important to acknowledge that I, a large language model from Google AI, generated the text. You can mention me as Bard, a large language model from Google AI."
...Thanks Bard, Thanks Eric...

Paul D. Van Pelt said...

soooo...are we talking of chaos or, more mildly, confusion; obfuscation; or, stagnation (CCOS)? science speculators such as Orwell, Bradbury, Huxley, etc. were, in my view, not science fiction provocateurs. these folks were observers, who followed lines of logic, thinking and prediction. any one, or any combination, of CCOS is probably lethal, sooner or later. as a matter of recent history, chaos, confusion and obfuscation are, clearly, front-and-center. stagnation is in the mix, and, I think, the provocateurs were warning us, or thumbing their noses at the coming stupidity. if all this strikes as Chicken Little, think more deeply: confusion and obfuscation are rampant: tik tok, tik tok, tik tok...which =, more-or-less, contextual reality. as for the above-mentioned observers? they made some money in public service. not bad.

Paul D. Van Pelt said...

Do you know of LaBossiere? Interesting views, well researched on racism and its origins. Good thinker, I think.

Eric Schwitzgebel said...

Thanks for all the comments, folks! No, Paul, I don't know La Bossiere -- any recommendations?

Paul D. Van Pelt said...

Yes. His blog is called: A Philosopher's Blog. His picture shows a younger person, maybe 25 to 35 and slender...