Thursday, March 21, 2024

Mind-Bending Science and AI Rights

Today I'm leaving the Toronto area (where I gave a series of lectures at Trent University) for the Southern Society for Philosophy and Psychology meeting in Cincinnati. A couple of popular op-eds I've been working on were both released today.

The longer of the two (on how to react to weird scientific theories) is behind a paywall at New Scientist (but if you email me I'd be happy to share the final manuscript for personal use). The other (on AI rights) is open access at Time.com.

------------------------------------------

How to wrap your head around the most mind-bending theories of reality

From the many worlds interpretation to panpsychism, theories of reality often sound absurd. Here’s how you can figure out which ones to take seriously

By Eric Schwitzgebel

20 March 2024

ARE there vastly many near-duplicates of you reading vastly many near-duplicates of this article in vastly many parallel universes? Is consciousness a fundamental property of all matter? Could reality be a computer simulation? Reader, I can hear your groans from here in California.

We are inclined to reject ideas like these on the grounds that they sound preposterous. And yet some of the world’s leading scientists and philosophers advocate for them. Why? And how should you, assuming you aren’t an expert, react to these sorts of hypotheses?

When we confront fundamental questions about the nature of reality, things quickly get weird. As a philosopher specialising in metaphysics, I submit that weirdness is inevitable, and that something radically bizarre will turn out to be true.

Which isn’t to say that every odd hypothesis is created equal. On the contrary, some weird possibilities are worth taking more seriously than others. Positing Zorg the Destroyer, hidden at the galactic core and pulling on protons with invisible strings, would rightly be laughed away as an explanation for anything. But we can mindfully evaluate the various preposterous-seeming ideas that deserve serious consideration, even in the absence of straightforward empirical tests.

The key is to become comfortable weighing competing implausibilities, something that we can all try – so long as we don’t expect to all arrive at the same conclusions.

Let us start by clarifying that we are talking here about questions monstrously large and formidable: the foundations of reality and the basis of our understanding of those foundations. What is the underlying structure…

[continued here]

-------------------------------------------------

Do AI Systems Deserve Rights?

BY ERIC SCHWITZGEBEL

MARCH 21, 2024 7:00 AM EDT

Schwitzgebel is a professor of philosophy at the University of California, Riverside, and author of The Weirdness of the World

“Do you think people will ever fall in love with machines?” I asked the 12-year-old son of one of my friends.

“Yes!” he said, instantly and with conviction. He and his sister had recently visited the Las Vegas Sphere and its newly installed Aura robot—an AI system with an expressive face, advanced linguistic capacities similar to ChatGPT, and the ability to remember visitors’ names.

“I think of Aura as my friend,” added his 15-year-old sister.

My friend’s son was right. People are falling in love with machines—increasingly so, and deliberately. Recent advances in computer language models have spawned dozens, maybe hundreds, of “AI companion” and “AI lover” applications. You can chat with these apps like you chat with friends. They will tease you, flirt with you, express sympathy for your troubles, recommend books and movies, give virtual smiles and hugs, and even engage in erotic role-play. The most popular of them, Replika, has an active Reddit page, where users regularly confess their love and often view that love as no less real than their love for human beings.

Can these AI friends love you back? Real love, presumably, requires sentience, understanding, and genuine conscious emotion—joy, suffering, sympathy, anger. For now, AI love remains science fiction.

[read the rest open access here]

6 comments:

Paul D. Van Pelt said...

Toronto will always hold a warm place in my heart. I lived there, off and on, from 1969. Married to a pretty red-haired girl, though that was ill-advised. Heard lots of really good music including Johnny Winter @ Massey Hall and Jim Hall @ Bourbon Street. Vibrant city. Transformative years, for meself. And, Sonny Rollins, who had vanished in 1959. Saw him play at the Colonial Tavern on Yonge Street. And, yes, the Blues. Torontonians loved the Blues. Thanks for the memories.

Paul D. Van Pelt said...

On Falling in Love:
The idea or notion or fascination around AI highlights a deeper concern... one which has emerged from the shadows of IT, information proliferation, isolation and reliance upon *personal devices*. I watch younger men and women as they struggle with person-to-person contacts, trying to connect with associates, friends and strangers. I have a step-grandson who is seventeen. He can do many things with AI, IT and the rest. But I don't know what he reads or what fires his imagination. I know he can read, write, spell and reason, but at last account he could barely write his name in cursive. Did not have to. It is not a requirement. He may 'love' playing music---has played woodwind, some guitar and percussion. Interested? Yes, he is interested. But perhaps mostly in the camaraderie of band activity? Be choosy about 'love' interests. They are always fickle. I can't get a tablet to write a coherent paragraph without editing scrupulously. It makes for a slow creative process. So, yes, be choosy about what you call love...

Eric Schwitzgebel said...

Paul: Yes, every generation is different — and I agree it’s worth being choosy about love!

Paul D. Van Pelt said...

I began thinking broadly on this post and others appearing recently on this blog. I am not sure we can talk convincingly about a physics of consciousness... too much room for philosophical doubt, different IMPs and speculations. Perhaps there is a better cause for considering consciousness as metaphysical? Insofar as it is a hard problem, likely to be insoluble in a near-future sense, that seems a possible reason why thinkers walk away from prognostications and preposterous claims. When they have gotten toes chewed, the inclination is to avoid that pond. There have been many sore feet over this.

Well, these are musings. Neuroscience is young and there is probably yet more to learn, beyond the impossible spectrum of the unknowable. Let's hope so, anyway.

Paul D. Van Pelt said...

Something dour from another blogger today, around AI killing us. Don't know how that sits with *mind-bending*. Such admitted, we might reflect on science fiction writers who warned us about getting too confident. Arguably, we could begin with Mary Shelley and her tale of tampering with life itself. Bradbury wrote 451, in which firemen burned things. Orwell added pounds to the load... hopefully, my point is taken.

Paul D. Van Pelt said...

An intrepid researcher has had a thesis posted, suggesting a *phenomenology of ChatBotGPT*. "Interesting" does not quite capture the significance of this. A portion of the paper speaks to passivity in the bot. Yet, the inference suggests more to me. The 'more' circles 'round what the bot is thinking... I omitted some quotationals here, against my better judgment. The innuendo from the writer was to confer judgmental aptitude on AI. That seems shallow to this elder philosopher. So opined, a further inference APPEARED to confer capacity for something called passive aggression... We are not even out of the woods on these issues, and weed fields lie beyond. Disclaimer: words don't always convey what is meant.

The researcher mentioned Husserl's methodology(ies). Not sorry. Don't buy it. Those methodologies are sound, I think. But AI is after Husserl's time. I don't think them applicable to an evaluation of AI, good, bad, or indifferent.