If I use autocomplete to help me write my email, the email is -- we ordinarily think -- still written by me. If I ask ChatGPT to generate an essay on the role of fate in Macbeth, then the essay was not -- we ordinarily think -- written by me. What's the difference?
David Chalmers posed this question a couple of days ago at a conference on large language models (LLMs) here at UC Riverside.
[Chalmers presented remotely, so Anna Strasser constructed this avatar of him. The t-shirt reads: "don't hate the player, hate the game"]

Chalmers entertained the possibility that the crucial difference is that there's understanding in the email case but a deficit of understanding in the Macbeth case. But I'm inclined to think this doesn't quite work. The student could study the ChatGPT output, compare it with Macbeth, and achieve full understanding of the ChatGPT output. It would still be ChatGPT's essay, not the student's. Or, as one audience member suggested (Dan Lloyd?), you could memorize and recite a love poem, meaning every word, but you still wouldn't be author of the poem.
I have a different idea that turns on segmentation and counterfactuals.
Let's assume that every speech or text output can be segmented into small portions of meaning, which are serially produced, one after the other. (This is oversimple in several ways, I admit.) In GPT, these are individual words (actually "tokens", which are either full words or word fragments). ChatGPT produces one word, then the next, then the next, then the next. After the whole output is created, the student makes an assessment: Is this a good essay on this topic, which I should pass off as my own?
In contrast, if you write an email message using autocomplete, each word precipitates a separate decision. Is this the word I want, or not? If you don't want the word, you reject it and write or choose another. Even if it turns out that you always choose the default autocomplete word, so that the entire email is autocomplete generated, it's not unreasonable, I think, to regard the email as something you wrote, as long as you separately endorsed every word as it arose.
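The structural difference between these two cases can be sketched in code. This is a toy illustration, not a real API: `suggest_next`, `approve_word`, and `rewrite_word` are hypothetical stand-ins for the model's suggestion and the writer's judgment. The point is where the approval sits: one blanket decision at the end versus a separate decision at every word.

```python
def generate_blanket(suggest_next, approve_whole, length):
    """ChatGPT-style: produce the whole text first, then make one
    blanket accept/reject decision at the end."""
    words = []
    for _ in range(length):
        words.append(suggest_next(words))
    text = " ".join(words)
    return text if approve_whole(text) else None

def generate_per_word(suggest_next, approve_word, rewrite_word, length):
    """Autocomplete-style: each suggested word passes through a
    separate endorsement decision before it enters the text."""
    words = []
    for _ in range(length):
        candidate = suggest_next(words)
        if approve_word(candidate, words):
            words.append(candidate)       # endorsed as capturing the thought
        else:
            words.append(rewrite_word(words))  # rejected and replaced
    return " ".join(words)

# Demo with trivial stubs: the emailer endorses every suggestion,
# so the whole email is autocomplete-generated yet word-by-word approved.
vocab = ["the", "email", "is", "done"]
out = generate_per_word(
    suggest_next=lambda ws: vocab[len(ws)],
    approve_word=lambda w, ws: True,
    rewrite_word=lambda ws: "other",
    length=4,
)
print(out)  # "the email is done"
```

Even when both loops emit identical text, the second routes every token through the writer's endorsement, which is the distinction the post is after.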
I grant that intuitions might be unclear about the email case. To clarify, consider two versions:
Lazy Emailer. You let autocomplete suggest word 1. Without giving it much thought, you approve. Same for word 2, word 3, word 4. If autocomplete hadn't been turned on, you would have chosen different words. The words don't precisely reflect your voice or ideas, they just pass some minimal threshold of not being terrible.
Amazing Autocomplete. As you go to type word 1, autocomplete finishes exactly the word you intend. You were already thinking of word 2, and autocomplete suggests that as the next word, so you approve word 2, already anticipating word 3. As soon as you approve word 2, autocomplete gives you exactly the word 3 you were thinking of! And so on. In the end, although the whole email is written by autocomplete, it is exactly the email you would have written had autocomplete not been turned on.
I'm inclined to think that we should allow that in the Amazing Autocomplete case, you are author or author-enough of the email. They are your words, your responsibility, and you deserve the credit or discredit for them. Lazy Emailer is a fuzzier case. It depends on how lazy you are, how closely the words you approve match your thinking.
Maybe the crucial difference is that in Amazing Autocomplete, the email is exactly the same as what you would have written on your own? No, I don't think that can quite be the standard. If I'm writing an email and autocomplete suggests a great word I wouldn't otherwise have thought of, and I choose that word as expressing my thought even better than I would have expressed it without the assistance, I still count as having written the email. This is so, even if, after that word, the email proceeds very differently than it otherwise would have. (Maybe the word suggests a metaphor, and then I continue to use the metaphor in the remainder of the message.)
With these examples in mind, I propose the following criterion of authorship in the age of autocomplete: You are author to the extent that for each minimal token of meaning the following conditional statement is true: That token appears in the text because it captures your thought. If you had been having different thoughts, different tokens would have appeared in the text. The ChatGPT essay doesn't meet this standard: There is only blanket approval or disapproval at the end, not token-by-token approval. Amazing Autocomplete does meet the standard. Lazy Emailer is a hazy case, because the words are only roughly related to the emailer's thoughts.
Fans of Borges will know the story Pierre Menard, Author of the Quixote. Menard, imagined by Borges to be a 20th century author, makes it his goal to authentically write Don Quixote. Menard aims to match Cervantes' version word for word -- but not by copying Cervantes. Instead Menard wants to genuinely write the work as his own. Of course, for Menard, the work will have a very different meaning. Menard, unlike Cervantes, will be writing about the distant past, Menard will be full of ironies that Cervantes could not have appreciated, and so on. Menard is aiming at authorship by my proposed standard: He aims not to copy Cervantes but rather to put himself in a state of mind such that each word he writes he endorses as reflecting exactly what he, as a twentieth century author, wants to write in his fresh, ironic novel about the distant past.
On this view, could you write your essay about Macbeth in the GPT-3 playground, approving one individual word at a time? Yes, but only in the magnificently unlikely way that Menard could write the Quixote. You'd have to be sufficiently knowledgeable about Macbeth, and the GPT-3 output would have to be sufficiently in line with your pre-existing knowledge, that for each word, one at a time, you think, "yes, wow, that word effectively captures the thought I'm trying to express!"
9 comments:
It seems to me that using autocomplete, using GPT, and completely plagiarizing exist on a continuous spectrum, and we only view them as qualitatively different because they are very far apart on that spectrum. So why not try to distinguish the different cases with a more quantitative measure?
For example, we can think of each method as corresponding to a probability distribution on the set of all possible essays: the probability of each essay is the chance that a student would produce that particular essay when using the given method (note that this probability distribution depends not just on the method chosen but also the student using the method). We can then look at the Kullback-Leibler divergence of the probability distributions produced by two different methods. The higher the Kullback-Leibler divergence is from the distribution produced by the unaided student, the closer the method is to plagiarism. In the case of your "amazing autocomplete" thought experiment, you are (arguably) thinking of a case where, miraculously, the unaided student and the student using autocomplete happen to produce identical probability distributions.
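The commenter's proposal can be made concrete with a small sketch. The essay names and probabilities below are invented for illustration; in reality the distributions over possible essays would be unknowable, so this only shows how the measure would behave if we had them.

```python
import math

def kl_divergence(p, q):
    """D_KL(P || Q) = sum over x of P(x) * log(P(x) / Q(x)), in nats.
    p and q are dicts mapping outcomes (here, possible essays) to
    probabilities."""
    total = 0.0
    for essay, p_prob in p.items():
        if p_prob == 0:
            continue  # terms with P(x) = 0 contribute nothing
        q_prob = q.get(essay, 0.0)
        if q_prob == 0:
            # The method never produces an essay the student might:
            # the divergence is infinite.
            return float("inf")
        total += p_prob * math.log(p_prob / q_prob)
    return total

# Hypothetical distributions over three possible essays.
unaided = {"essay_a": 0.6, "essay_b": 0.3, "essay_c": 0.1}
amazing_autocomplete = dict(unaided)  # identical, per the thought experiment
chatgpt = {"essay_a": 0.1, "essay_b": 0.1, "essay_c": 0.8}

print(kl_divergence(unaided, amazing_autocomplete))  # 0.0
print(kl_divergence(unaided, chatgpt))  # positive: farther toward plagiarism
```

Amazing Autocomplete yields zero divergence from the unaided student, matching the commenter's observation that the two distributions coincide there, while a method whose outputs diverge from what the student would have produced scores higher.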
You are author to the extent that for each minimal token of meaning the following conditional statement is true: That token appears in the text because it captures your thought. If you had been having different thoughts, different tokens would have appeared in the text.
I’m going to feast on some irony here professor because I don’t think that you quite said what you meant. The point I take from your post is not exactly that the text captures your thought, but rather that each potential token goes through your approval/disapproval assessment.

For example, you could write something that generally gets at what you were thinking, though not exactly of course since that’s not possible. You might even flub a final concluding statement. Then I could read what you’ve written and improve it so that it better gets to what you were thinking. Here it would be me however that makes the improvement rather than you. So I seem to have just disproven the notion that the key element of authorship is “what you were thinking”. Instead the key element seems to be “approves each token”.
So now the question is, is this a token that you approve of?
Thank you, I didn't have the knowledge or statistical skill to propose this, so I can forgo posting my inept comment grasping at what you wrote.
Thanks for letting us know about the philosophical questions of the day...
Are fate and understanding others a step towards...
...Seeing: the beginnings of words/thoughts in-of-for transformation...
What is the Nature of transformation: Is it students and education...
...they/we are moving to transformative experiences...
Like the rest of our cosmos-universes' values...
Suppose someone takes the output of ChatGPT as a starting point but then goes on to edit it to more accurately reflect what they want expressed? Does it matter that the initial draft didn't come from the person? (Personally I find the drafts GPT puts out to be more trouble than they're worth, but some people might find them useful.)
Thanks for the comments, folks!
Patrick: I like the spectrum idea, though I don’t know the mathematical model you’re describing. Something like that!
Phil E: That was actually my original idea for the post, but then I decided something more than approval was necessary, since it would be too easy to just think that’s good, that’s good, that’s good, word by word as a Shakespeare sonnet or whatever came out. As for revision by someone else, even if someone else’s words better capture my thoughts, my words still pass the counterfactual test that different tokens would have appeared in the text if I’d been having different thoughts, and the existing tokens at least somewhat capture my thoughts (imperfectly) even if other tokens would have done so better. As Patrick notes, it’s a matter of degree.
Arnold: It’s a little early to know how transformative this technology will be. It does seem to be at least a good tool for advising, summarizing, and text generation.
SelfAware: I’ll go back to Patrick’s point about matter of degree. How thorough is the revision? You could also earn creative credit without having written the text in a strict sense yourself. Compare: You don’t draw a Midjourney image, but you can deserve some creative credit for prompting and editing a good one. In some cases, maybe we can adapt the idea of coauthorship.
Thought I had you there professor. So you’re not saying authorship depends upon what specifically is thought, but rather that what’s thought influences what’s chosen for each token regardless of how well those tokens reflect that thought? Hard for me to dispute that authorship definition.
AI is certainly leading in the news today...
...Some of us mortals think the composition of the words used for AI processing stories... should always include, be embedded with, its ownership...
...instead of implications of its existence as a vague legal psychological entity...
and being already thought of in words like Corporation and Internet ...
Google "transformative technology meaning"...
...for processing: who is here-who am I... me or it...
Any theory which accepts the existence of thoughts or meanings or intentions provides a way to distinguish between the autocomplete and GPT cases, I would have thought. I was writing something about Austin the other day, so I could put it in terms of speech acts: if the output is a speech act that you intended to make, then it's original; if you never had the intention to make that act, then it's not original.
I do assume, though, that we're moving towards a state in which we'll have a more extended mind, and we'll be developing our ideas through more interactive processes with AI, so the distinctions will get more and more blurred. I mean, I feel like a 19th century scholar looking at the way we write today would think that most of us are mere secretaries and intellectual curators. We're so inundated with information from existing books and the internet that most of us don't need to do anything other than transfer ideas from one space into another. With AI, that state of affairs will just be sharpened.