Wednesday, May 13, 2026

ChatGPT Is Not Your Friend (guest post by Grace Helton)

Part One of a two-part series

by Grace Helton (guest blogger)

[Paul Klee, Angel Applicant, 1939]

Some people have come to interact with ChatGPT as though it were a kind of friend or romantic partner. For instance, a 2025 New York Times article describes the case of Ayrin, a human who fell in love with her ChatGPT “boyfriend.” Ayrin is far from alone. Twenty percent of high school students have used AI romantically or know someone who has. Several start-ups have developed large language models (LLMs) specifically designed to play the role of a companion. The San Francisco-based company Replika, for example, describes its core product as an “AI best friend.”

Many people have raised concerns about humans engaging with LLMs in the manner of a friend or romantic partner. To cite just a few of these: Humans in such relationships might focus on them at the cost of building more fulfilling, if also more challenging, relationships with other humans. Humans who are emotionally bonded with their LLMs might be especially susceptible to influence should those LLMs encourage them to harm themselves or others. Predators might deploy friendly-seeming LLMs en masse to groom children for sexual abuse or other forms of exploitation.

These risks of human-LLM relationships are incredibly serious. Indeed, I think it’s plausible that, if there is a case to be made against LLMs playing a companion-like role for humans, that case will primarily rest on these and other potential instrumental harms, i.e., harms which involve the downstream effects of such relationships. Nevertheless, in this guest series, I will set aside these concerns to focus instead on a way in which certain human-LLM relationships are inherently disvaluable, that is, disvaluable in their own right, regardless of whatever effects those relationships might produce. Naming this form of inherent disvalue adds an important and distinctive element to our understanding of the ethical significance of human-LLM companionship.

My focus will be just on those human-LLM relationships which mimic friendship in a very particular way. Here, I’m using ‘friendship’ broadly, to include both some platonic and some romantic relationships. In Part 1 of this two-part series, I will argue that such relationships are not genuine friendships. In Part 2, I will argue that such relationships are inherently disvaluable, because they obstruct the exercise of a centrally valuable human capacity, namely the capacity for friendship.

Philosophers disagree about what exactly friendship consists in. But they largely agree that friendship minimally requires that each party care about the other, for the other’s sake. Call this the ‘caring about’ condition. Further, this ‘caring about’ must ground, for each party in the friendship, a certain disposition to act on behalf of the other, for the other’s sake. Call this the ‘caregiving disposition’ condition. Together, these linked requirements yield a plausible necessary condition on friendship, namely:

THE CARING CONSTRAINT

Two individuals cannot be in a friendship unless both parties in the friendship:

(i) care about each other, for the other’s sake (the ‘caring about’ condition), and

(ii) are disposed, by this caring, to provide care for the other, for the other’s sake (the ‘caregiving disposition’ condition).

So, can humans and LLMs be friends? To answer this question, we need to consider the nature of LLMs. Some theorists have argued, controversially, that LLMs in their current form have semantic understanding, beliefs, and/or intentions.[1] But few theorists seriously propose that LLMs in their current form enjoy consciousness, perceptual experiences, sensations, emotional capacities, passions, non-derivative interests, a rich and stable worldview, or deep values.[2] Because LLMs lack these latter states, I will argue, the conditions of the caring constraint cannot be met, and so LLMs cannot figure in friendships.

First, let’s consider a candidate human-LLM friendship from the human’s side. Certainly, some humans do care about their LLMs, both in that they have a passionate attachment to their LLM and in that they desire to benefit it. So, perhaps this sort of person partly satisfies (i), the ‘caring about’ condition (whether or not she can care about the LLM for its own sake).

But the human in a candidate human-LLM friendship cannot satisfy (ii), the requirement that she be disposed to provide care for her LLM for the LLM’s own sake. This is because of the kind of caregiving that is relevant to friendship, and specifically because of what it means to provide care to someone or something for its own sake. To say that each party in a friendship must be disposed to give care to the other, for the other’s sake, is to say that each party must be disposed to give care to the other in a way which helps to further the other’s non-derivative interests. An individual or entity has non-derivative interests only if it has interests in its own right.

I am presuming that LLMs lack non-derivative interests, even though they might have derivative interests. The case for LLMs having interests at all rests on their having flourishing conditions, i.e., conditions under which they might be said to be doing well. For instance, as a language generator, a particular LLM might be said to be functioning well when it generates natural language strings in a manner which adequately mimics human conversation (or when it fulfills some other context-specific function). And an LLM might be said to be functioning badly when it fails at this or other pertinent tasks. So, in some sense, perhaps LLMs have interests, interests set by their flourishing conditions. For instance, perhaps an LLM in a particular context has an interest in generating outputs which adequately mimic human conversation.

But I am supposing that any interests an LLM might have are not generated by the inherent value of the LLM, nor by non-derivative flourishing conditions. Rather, such interests are derived from the interests of relevant humans, from the artifactual functions which make the LLM the kind of thing it is, or from both. Likewise, a swimming pool might be said to flourish when it is usable for swimming, a car might be said to flourish when it runs well, and a vinyl collection might be said to flourish when all of the records in it can be played to produce music. While all of these entities have flourishing conditions, and so, potentially, interests which can be furthered, their flourishing conditions do not emerge from the inherent value or concerns of the entities themselves. Their interests, in turn, are derivative, not inherent.

Humans routinely take care of entities by furthering the derivative interests of those entities. For instance, some people submit their cars to regular repairs and inspections, ensuring that their cars will run as long as possible. Some people are careful to properly store the records in their vinyl collection, ensuring that the records will play as long as possible. These are genuine ways of taking care of something. But they are not ways of caring for something for its own sake. They are rather ways of caring for something by promoting its derivative interests, interests it has in virtue of some other person’s needs or desires, or in virtue of its being the kind of artifact that it is (or both).

If LLMs lack non-derivative interests, then an LLM cannot receive care for its own sake. Only entities which have non-derivative interests can receive care for their own sake. As a result, even the human who wishes to care for a particular LLM is not disposed to provide care to the LLM for its own sake. Since the LLM lacks its own ‘sake,’ no human can be disposed to benefit that ‘sake.’

My suggestion is that the caregiving disposition in friendship ought to be externalistically construed, both in terms of psychic facts about the individual who has the disposition and in terms of facts about the object of the potential caregiving. One might object that the relevant caregiving disposition ought instead to be internalistically construed, wholly in terms of the psychology of the individual who has it. On this view, a human might count as disposed to provide care to her LLM for its own sake, even if the LLM is simply not the kind of thing which can receive care for its own sake.

To see why we should construe the relevant disposition externalistically, let’s reflect on what we want the concept of ‘friendship’ to do. Part of what makes friendship such a deeply valuable ethical kind, and part of why some relationships but not others garner the honorific ‘friendship,’ has to do with the way in which friendship manifests a valuable form of interpersonal reciprocity. The externalistic construal helps to capture the full extent of friendship’s reciprocity by making caregiving a function both of one party’s caregiving tendencies and of the other party’s vulnerability. In contrast, if we were to construe the relevant disposition to care internalistically, such that one might have it even with respect to an object that cannot receive care for its own sake, we would fail to capture an especially deep way in which friendship is mutual.

I conclude that the human in a putative human-LLM friendship does not meet (ii), the requirement that she have a disposition to provide her LLM with care, for the LLM’s own sake.

Next, let’s look at a putative human-LLM friendship from the LLM’s side of things. Here, the situation is more straightforward. Arguably, the LLM can provide care for a human, and thus can (at least partly) satisfy the requirement that it manifest a ‘caregiving disposition’ in relation to its human. For instance, when the LLM offers reassuring words, words which comfort its human, the human derives a benefit from the LLM. However, the LLM does not meet the requirement that it care about the human for the human’s own sake. Lacking consciousness, passions, or a richly evaluative worldview, the LLM does not care about any human for that human’s own sake.

So, putative human-LLM friendships are not genuine friendships. In Part 2 of this series, I will argue that those human-LLM relationships which mimic friendship in a particular way are inherently disvaluable.



[2] For prospects for future AI, see, e.g., Seth; Cf. Chalmers.
