Tuesday, November 21, 2023

Quasi-Sociality: Toward Asymmetric Joint Actions with Artificial Systems

Anna Strasser and I have a new paper in draft, arising from a conference she organized in Riverside last spring on Humans and Smart Machines as Partners in Thought.

Imagine, at one end of the spectrum, ordinary asocial tool use: typing numbers into a calculator, for example.

Imagine, at the other end of the spectrum, cognitively sophisticated social interactions between partners each of whom knows that the other knows what they know. These are the kinds of social, cooperative actions that philosophers tend to emphasize and analyze (e.g., Davidson 1980; Gilbert 1990; Bratman 2014).

Between the two ends of the spectrum lies a complex range of in-between cases that philosophers have tended to neglect.

Asymmetric joint actions, for example between a mother and a young child, or between a pet owner and their pet, are actions in which the senior partner has a sophisticated understanding of the cooperative situation, while the junior partner participates in a less cognitively sophisticated way, meeting only minimal conditions for joint agency.

Quasi-social interactions require even less from the junior partner than do asymmetric joint actions. These are actions in which the senior partner's social reactions influence the behavior of the junior partner, calling forth further social reactions from the senior partner, but where the junior partner might not even meet minimal standards of having beliefs, desires, or emotions.

Our interactions with Large Language Models are already quasi-social. If you accidentally kick a Roomba and then apologize, the apology is thrown into the void, so to speak -- it has no effect on how the Roomba goes about its cleaning. But if you respond apologetically to ChatGPT, your apology is not thrown into the void. ChatGPT will react differently to you as a result of the apology (responding, for example, to the phrase "I'm sorry"), and this different reaction can then be the basis of a further social reaction from you, to which ChatGPT again responds. Your social processes are engaged, and they guide your interaction, even though ChatGPT has (arguably) no beliefs, desires, or emotions. This is not just ordinary tool use. But neither does it qualify even as asymmetric joint action of the sort you might have with an infant or a dog.
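For readers curious about the mechanism: in chat-style LLM interfaces, the whole conversation history, apology included, is resent to the model as context for its next reply. Here is a minimal sketch of that loop, assuming the OpenAI Python client; the model name and messages are illustrative, not from our paper:

```python
# Minimal sketch: why an apology to an LLM is not "thrown into the void".
# Chat-style APIs resend the full conversation history with each request,
# so the apology becomes part of the context the model conditions on.
# The model name and messages below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

history = [
    {"role": "user", "content": "Please summarize this paragraph."},
    {"role": "assistant", "content": "Here's a summary: ..."},
    {"role": "user", "content": "I'm sorry -- I pasted the wrong paragraph."},
]

# The apology is just another message in the context window, which is why
# the model's next reply can acknowledge it -- unlike the kicked Roomba,
# whose cleaning routine takes no input from your "I'm sorry."
reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(reply.choices[0].message.content)
```

Nothing in this loop requires the model to have beliefs, desires, or emotions; conditioning on the conversational context is enough to sustain the quasi-social back-and-forth described above.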

More thoughts along these lines in the full draft here.

As always, comments, thoughts, objections welcome -- either on this post, on my social media accounts, or by email!

[Image: a well-known quasi-social interaction between a New York Times reporter and the Bing/Sydney Large Language Model]

9 comments:

Paul D. Van Pelt said...

I don't suppose I will ever have much regard for AI, chat bots, LLMs, or even virtual reality. The old grey matter, and/or white matter upstairs, just does not appreciate, much less grasp, the significance or big picturehood of THAT Big Picture. Professor Carroll might get that. Although I have not read his update, I thoroughly enjoyed the original version. The atmosphere at Johns Hopkins must be conducive to creativity. On a yet lighter note, a famous, fast food chain (perhaps the oldest?) has installed something like chat bot technology at my neighborhood store. Instead of being assaulted by staccato human 'hood' speech when placing an order, the bot is succinct and asks the patron for the same courtesy. How novel. I love it. Aside: the bot tried to upsell me after I placed my order. Regrettable. But, that is commerce. Such habits never go away, do they?

chinaphil said...

I like this. One criticism I'd level is that it still seems very much concerned with defining-in human superiority. The paper's quasi-social interactions involve children and other incapable partners. But from ChatGPT's point of view, we are already spectacularly dumb. We just don't know much about the world, and we can't speak enough languages. Might our interactions with ChatGPT be quasi-social because we're the incapable partner?!
It might be more productive to make the label quasi-social apply to any interaction with some social elements that doesn't fit the normal human model (whatever that may be!), without necessarily defining one interactor as superior to the other.

Paul D. Van Pelt said...

Interesting, squared. Supposing further, for a moment: If it is agreed that ChatGPT has a, uh, *point-of-view*, such as 'we are all incredibly dumb...', from whence does that POV emanate? Under ordinary circumstances, I, for one, think that a propensity to have a *point-of-view* requires, a priori, the ability to THINK. I am not persuaded, as yet, that any level of AI thinks. At least, not thought in a responsively conscious human sense. Sorry,...no, not sorry...I am not among the BNW (braver, newer, world) community. My POV says: those who ride the train are those who have interests, motives and preferences (IMPs) and a stake in the pot. In my opinion, AI and its offshoots/adjuncts is/are analytical tools. Sophisticated algorithms. They do not think. BNW folks only want others to think they do.

Paul D. Van Pelt said...

One more thing and I will shut-the-f--- up: There was a lead to another commentary I could not access for comment: "we need a culturally aware approach to AI/Nature Human Behavior...". Does anyone understand this? I don't. First of all, I don't connect cultural awareness with AI. Secondly, AI/Nature Human Behavior have no identifiable referent: AI/Nature Human Behavior are self-contradictory---as Gould might have said---they are NOMA. Look it up. Or, not.

William S. Robinson said...

I wonder whether describing human-AI exchanges as ‘interactions’ doesn’t rather load the dice toward an overly rich interpretation. What if we rephrased your key definition this way?
Quasi-social exchanges require even less from the junior partner than do asymmetric joint actions. These are exchanges in which the senior partner’s social reactions influence the output of the junior partner, calling forth further social reactions from the senior partner, but where the junior partner might not even meet minimal standards for having beliefs, desires, emotions, or performing actions.

Bill Robinson

Anonymous said...

Regarding the first sentence of Mr. Robinson's comment: my sentiments exactly.

Arnold said...

Towards the limits of large language intelligence (LI), (AI) and relative intelligence (RI)...
...our observation seems to present us with always more unknown intelligence (UI)...

In the relativity of the dimension, here where we all have place...
...then living in joy sorrow and in between, rather than casual-could be cause...

For relating being here now, to philosophical thought and imagination...

Paul D. Van Pelt said...

OK. I lied. Everyone does, according to one former president. Quasi-this-or-that is familiar to me. When officially employed, my role varied. I was a quasi-legal beagle or quasi-judicial hearing official (administrative law judge), depending on where I sat: the legal beagle investigated things; the law judge ruled on evidence and procedural/substantive due process. These were matters of justice, albeit with a small j. I, for one, would not want to entrust my fate to AI or any variation thereof. The attention to substance and procedure is not there yet. Don't see how it can be. For reasons articulated, here and elsewhere.

Eric Schwitzgebel said...

Thanks for the comments, everyone! I seem to be a bit slow replying these days, but I do look at everything.

chinaphil: Good point! I try to be alert for that sort of error, especially now that I have a PhD student writing on that very issue. I do think "quasi" works in this case, though. Between asocial and fully social, there must be a spectrum; and LLMs, though better than us in some ways, seem to be in that in-betweenish space with respect to sociality.

Bill: Right, that's fair. "Interactions" does sound a little on the rich side. As you suggest, I think we can make the very same points while avoiding that loaded word.

Paul: There are some advantages to the quasi, for sure -- another example is when we don't want to be judged by our partners. How and when we'll get real "attention to substance and procedure" is a tricky question -- perhaps simply beyond us, at least in the medium term.