Wednesday, April 28, 2010
A philosophical discussion arc (as discussed in Tuesday's post) is the trend, over time, in the use of a term or name as a keyword in philosophy articles and books. As I mentioned Tuesday, discussion of prominent philosophers' work tends to peak around age 55-70. However, not all philosophers follow this pattern.
Given that the dataset starts in 1940, the only age cohort suitable for examining breakaways -- philosophers whose discussion arcs continue to rise after age seventy -- is the cohort born around the year 1900. Only for that cohort do we have both peak-career discussion rates and later discussion rates in the dataset.
I examined the discussion arcs of eight leading philosophers born between 1885 and 1915: Wittgenstein, Heidegger, Carnap, Ryle, Popper, Sartre, Merleau-Ponty, and Quine. On the x-axis is date, in five-year slices. On the y-axis is a ratio: The number of times the philosopher's name appears as a keyword, divided by a broad index of philosophical keywords in the database, times 100. Five of these philosophers show the usual career arc:
Heidegger and Merleau-Ponty, however, and maybe Wittgenstein, seem to show a different pattern:
To see more clearly how Heidegger and Wittgenstein broke away from the pack, let's remove the clutter by taking averages:
From about 1940 to 1954, Heidegger, Wittgenstein, Carnap, Ryle, Popper, Sartre, and Quine were all receiving about an equal amount of discussion. But only Heidegger and Wittgenstein increased and sustained that level of discussion in the ensuing decades.
In the previous generation of philosophers, we seem to see the tail end of a similar story, with Frege and Husserl breaking away while Royce, Bergson, Dewey, and Whitehead decline (though Dewey in decline is still discussed about as much as Frege after his breakaway).
Tuesday, April 27, 2010
Discussion Arcs
A philosophical discussion arc, as I'll use the term, is a curve displaying how often a topic or author is used as a "keyword" in a philosophical journal article or book abstract (i.e., in the article's or book's title, abstract, or list of keywords). By looking at discussion arcs we can see what topics have been hot and which philosophers have been influential.
Let's begin with topical discussion arcs. On the x-axis is publication year, in five-year slices. The y-axis is a ratio: It's the number of articles in the Philosopher's Index containing the keyword, divided by a representative universe of articles, multiplied by 100. The data begin in 1940.
(A ratio is a much more accurate indicator of influence than a raw count, since the number of philosophy articles has increased about twenty-fold since the 1940s. I generated the representative universes, which serve as the denominators, by broad keyword searches, as indicated with each graph.)
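To make the computation concrete, here is a minimal sketch of how such a ratio series could be computed, assuming each bibliographic record has been reduced to a (year, keywords) pair; the function and the universe terms below are my illustration, not anything the Philosopher's Index itself provides:

from collections import Counter

def discussion_arc(records, keyword, universe_terms,
                   start=1940, end=2010, slice_years=5):
    """Per five-year slice: 100 * (records matching the keyword) /
    (records matching any universe term). A trailing '*' is a
    truncation symbol (prefix match). Each record is assumed to be
    a (year, keywords) pair with lowercased keywords."""
    def matches(term, keywords):
        if term.endswith('*'):
            return any(kw.startswith(term[:-1]) for kw in keywords)
        return term in keywords

    hits, universe = Counter(), Counter()
    for year, keywords in records:
        if not (start <= year < end):
            continue
        # Bucket the year into its five-year slice.
        s = start + ((year - start) // slice_years) * slice_years
        if any(matches(t, keywords) for t in universe_terms):
            universe[s] += 1
            if matches(keyword, keywords):
                hits[s] += 1
    return {s: 100.0 * hits[s] / universe[s] for s in sorted(universe)}

# E.g., the 'dualis*' arc against the Lemmings universe (terms illustrative):
# discussion_arc(records, 'dualis*', ['language', 'epistemolog*', 'mind', 'metaphysic*'])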
Some topics have generated consistent interest over the decades. Dualism is one, as you can see below. The * is a truncation symbol, so this chart tracks any keyword starting with "dualis".
[Representative universe: language + epistemology + mind + metaphysics -- Lemmings for short.]
Interest in the leading 17th and 18th century philosophers is also steady across the period (with perhaps Kant gaining discussion and Locke losing discussion):
[Representative universe: Lemmings + ethic* + moral* + polit*, or EMPLemmings for short]
Voguish topics, in contrast, are arc-shaped.
Here, for example, is "twin earth" (a thought experiment about what the word "water" would mean in a world virtually identical to ours but with a different chemical formula for water):
[Representative universe: Lemmings]
And here is "ordinary language" (a way of thinking about philosophical issues popular in the middle of the twentieth century):
[Representative universe: Lemmings]
Here's a chart that displays the rise of Nietzsche from the second tier of historical figures into the first tier. (Note Nietzsche's final y-axis numbers are higher than those of Descartes, Locke, or Hume in the chart above.)
[Representative universe: EMPLemmings]
Twentieth century philosophers also have arcs. Here are five influential philosophers born in the 1910s. Note that discussion of their work tended to peak at about age sixty, with the exception of Donald Davidson:
[Representative universe: Lemmings]
In fact, there's a fairly consistent pattern for the influence of 20th century analytic philosophers, as measured by discussion arc, to peak around age 55-70. The following chart shows average discussion-arc data from 26 prominent 20th century philosophers, with age on the x-axis. I normalized each philosopher's peak influence to 1. I did not truncate the philosophers' discussion arcs at death.
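The normalization itself is simple. Here's a minimal sketch, assuming (hypothetically) that each philosopher's arc has already been reduced to a mapping from age to the discussion ratio described above:

def average_normalized_arcs(arcs):
    """arcs: {philosopher: {age: discussion ratio}}. Scale each
    philosopher's arc so its peak is 1, then average across the
    philosophers represented at each age."""
    scaled = {name: {age: v / max(arc.values()) for age, v in arc.items()}
              for name, arc in arcs.items()}
    ages = sorted({age for arc in scaled.values() for age in arc})
    return {age: sum(arc[age] for arc in scaled.values() if age in arc) /
                 sum(1 for arc in scaled.values() if age in arc)
            for age in ages}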
[Representative universe: Lemmings; all included philosophers are Lemmings specialists]
I find it interesting that influence tends to peak at age 55-70, while the age at which philosophers tend to do their most influential work is about 35-40. (Here's a preliminary discussion of that last point; I hope to have fuller data on the matter soon.) I guess it takes time for word to get around!
Update, May 12:
In this more recent post, I (mostly) retract my claim about the age at which the most influential philosophical work tends to be done. Check out this cool chart!
Thursday, April 22, 2010
Chalmers's Fading/Dancing Qualia and Self-Knowledge
David Chalmers defends what he calls a principle of organizational invariance, according to which if a system has conscious experiences, then any other system with the same fine-grained functional organization will have qualitatively identical experiences. His main arguments for this principle are his "Fading Qualia" and "Dancing Qualia" arguments.
Both arguments are reductios. Let's start with Fading Qualia. Suppose, contra the principle of organizational invariance, that there could be a fine-grained functional isomorph of you without conscious experience -- perhaps a robot (call him Stu) with a brain made of silicon chips instead of neurons. If this is possible, then it should also be possible to create a series of intermediate beings between You and Stu -- perhaps, for example, beings in which different proportions of the neurons are replaced by silicon chips. If You have a hundred billion neurons in your brain, then maybe we can imagine a hundred billion minus one intermediate cases, each with one less neuron and one more silicon chip. The question is: What kind of consciousness do these intermediate beings have? Chalmers argues that there is no satisfactory answer.
There seem to be two ways to go: Consciousness might suddenly disappear somewhere in the progression, say between the being with fifty billion and one neurons and the being with fifty billion. But that seems bizarre. How could the replacement of a single neuron make the difference between consciousness and its absence? You and Fifty-Billion-and-One are having vivid visual experience of a basketball game, say, while poor Fifty-Billion is a complete experiential blank. Surely we don't want to accept that.
Seemingly more plausible is a second option: Consciousness slowly fades out between You and Stu. But then what does Fifty-Billion experience? Half of a visual field? An entire visual field, but hazy or in unsaturated color? Note that since You, Stu, and Fifty-Billion are all identical at the level of functional organization, you will all exhibit exactly the same outward behavior. You will all, when asked to introspect, presumably say something like "I am having vivid visual experience of a basketball game". Stu is wrong about this, of course, if it makes sense to attribute assertions to him at all; but he is just a silicon robot without consciousness, so maybe that's okay. But Fifty-Billion is not just a silicon robot. He has some consciousness. But he seems to be badly wrong about it. His visual experience is not, as he says, vivid and sharp, but rather indistinct, or incomplete, or unsaturated. And Chalmers suggests that it's absurd to attribute that kind of radical error to him. Thus Chalmers completes the reductio: There's an absurdity in assuming the denial of the principle of organizational invariance. You, Stu, and Fifty-Billion all have qualitatively identical conscious experience.
I object to the last move in this argument, to the idea that it is absurd that Fifty-Billion could make that kind of mistake. My reason is this: Many of us make exactly the same mistake in ordinary instances of introspection. Some people, for example, when asked how detailed their conscious experience is at any one moment, say that it is extremely rich -- full of precise detail through a wide visual field, and simultaneously full of auditory detail, tactile detail, and detail in other modalities. Others say that their experience is very sparse -- they only experience one or a few things at a time. On the sparse view, when one is attending to the visual environment, one has no experience of the feet in one's shoes; when one is attending to one part of the visual field, one has no experience of the areas outside of attention; etc. I have argued that this dispute does not turn merely on a disagreement about terminology, and does not reflect radical differences in different people's experiences, but rather is a real substantive, phenomenological dispute. One or both parties must therefore be radically wrong about their experience. This is at least, I think, not an absurd view, given the potential sources of error about the richness of experience, such as the refrigerator light illusion (the possibility that thinking about experience in some modality or region creates experience in that modality or region where none was before, causing us to mistakenly think it was there all along). And if it's not absurd to suppose that ordinary people could be mistaken about how rich and detailed their experience is, it's not absurd to suppose that Fifty-Billion could be mistaken.
Dancing Qualia is a variation of Fading Qualia. It requires two visual processing systems with the same functional organization but different associated visual phenomenology, and it requires the capacity for you to switch swiftly between these systems. Since the functional organization of the systems is the same, you won't report any difference in experience when you switch from one to the other, thereby implying that some of your reports will be mistaken -- implausibly mistaken, in Chalmers's view. Therefore, by reductio, the systems cannot really differ in their associated visual phenomenology.
But in cases of "change blindness" -- for example here -- people will fail to notice substantial changes in their visual experience. (Or at least this is true if experience is relatively rich.) Such failures aren't perhaps as severe as what might be created by a visual system switch, and, as Chalmers notes, many of them require that your attention not be on the object of change. However, not all change blindness cases seem to require lack of attention to the changed stimulus -- like when the person you are talking to changes after brief interruption without your noticing (though determining what exactly qualifies as a target of attention may be a difficult matter in such scenarios); and in any case consideration of such cases should, I think, loosen our commitment to the seeming absurdity of failing, especially in weird scenarios, to notice radical changes in experience.
Furthermore, the Dancing Qualia case seems problematically pre-built to frustrate our ability to notice differences, much like radically skeptical brain-in-a-vat scenarios are pre-built to frustrate the sensory abilities on which we depend by giving the same sensory input despite a large change in the far-side objects. The following model is too simplistic, but conveys the idea I have in mind here: Imagine that introspection works by means of an introspection module located near the front of the brain, which receives input from the visual cortex in the back of the brain. The back of the brain has been changed so that experience is radically different (on the assumption of the reductio), but changed only in such a way that the input from the back to the front of the brain is exactly the same. In such a case, it seems not at all absurd to suppose that introspection would fail to notice a difference, despite a real difference in experience. Thus, the Dancing Qualia reductio fails.
Thursday, April 15, 2010
The Moral Behavior of Super-Duper Artificial Intelligences
David Chalmers gave a talk today (at the Toward a Science of Consciousness conference in Tucson) arguing that it is fairly likely that sometime in the next few centuries we will create artificial intelligence (perhaps silicon, perhaps biological) considerably more intelligent than ourselves -- and then those intelligent creatures will create even more intelligent successors, and so on, until there exist creatures that are vastly more intelligent than we are.
The question then arises, what will such hyperintelligent creatures do with us? Maybe they will be us, and we needn't worry. But what if human beings in something like the current form still exist alongside these hyperintelligent artificial creatures? If the hyperintelligent creatures don't care about our welfare, that seems like a pretty serious danger that we should plan ahead for.
Perhaps, Chalmers suggests, we should build only intelligences that value human flourishing or have benign desires. He also advises creating hyperintelligent creatures only in simulations that they can't escape. But, as he points out, these strategies for dealing with the risk might be tricky to execute successfully (as numerous science fiction works attest).
More optimistically, Chalmers notes that on certain philosophical views (e.g., Kant's; I'd add Socrates's) immorality is irrational. And if so, then maybe we needn't worry. Hyperintelligent creatures might necessarily be hypermoral creatures. Presumably such creatures would treat us well and allow us to flourish.
One thing Chalmers didn't discuss, though, was the shape of the moral trajectory: Even if super-duper hyperintelligent artificial intelligences would be hypermoral, unless intermediate stages en route are also very moral (probably more moral than actual human beings are), we might still be in great danger. It seems like we want sharply rising, monotonic improvement in morality, and not just hypermorality at the endpoint.
So the question arises: Is there good empirical evidence that bears on this question, evidence concerning the relationship between morality and intelligence? By "intelligence" let's mean something like the capacity to learn facts or reason logically or design complicated plans, especially plans to make more intelligent creatures. Leading engineers, scientists, and philosophers would tend to be highly intelligent by this definition. Is there any reason to think that morality rises sharply and monotonically with intelligence in this sense?
There is some evidence for a negative relationship between IQ and criminality (though it's tangled in complicated ways with socioeconomic status and other factors). However, I can't say that my personal and hearsay knowledge of the moral behavior of university professors (perhaps especially ethicists?) makes me optimistic about a sharply increasing monotonic relationship between intelligence and morality.
In which case, look out, great-great-great-great-great-great-grandchildren!
Friday, April 09, 2010
What's in People's Stream of Experience During Philosophy Talks?
As you may know, Russ Hurlburt and I recently published a book centering on a woman's reports about her experience as she went about her normal day wearing a random beeper. When the beep sounded, her job was to try to recall her "last undisturbed moment of inner experience" just before the beep. Russ and I then interviewed her about these experiences, trying to get both at the truth about them and at methodological issues about the value of this sort of approach in studying consciousness.
Russ and I have presented our joint work in a number of venues now (including at an author-meets-critics session at the APA last week), and normally when we do so, we "beep" the audience. That is, we set up a random beeper to sound when Russ or I or a critic is presenting material. When the beep sounds, each audience member is to think about what was going on in her last undisturbed moment of inner experience before the beep. We then use a random number generator to select an audience member to report on her experience. We interview her right there, discussing her experience and the method with the audience and each other. We'll do this maybe three times in a three-hour session.
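For those curious about the mechanics, the whole procedure amounts to a couple of random draws. Here's a minimal sketch; the timing parameters are illustrative, not our exact settings:

import random

def beep_session(session_minutes=180, n_beeps=3, audience_size=60):
    """Draw random beep times within the session; at each beep,
    pick a random audience member (by seat number) to interview."""
    for t in sorted(random.uniform(0, session_minutes) for _ in range(n_beeps)):
        seat = random.randint(1, audience_size)
        print(f"Beep at minute {t:.0f}: interview the person in seat {seat}")

beep_session()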
As a result, we now have a couple dozen samples of reported inner experience during our academic talks, and the most striking thing we've found is that people rarely report thinking about the talk. The most recent six samples are representative (three from a presentation by me at Claremont Wednesday, three from the APA).
(1.) Thinking that he should put his cell phone away (probably not formulated either in words or imagery); visual experience of cell phone and whiteboard.
(2.) Scratching an itch, noticing how it feels; having a visual experience of a book.
(3.) Feeling like he's about to fade into a sweet daydream but no sense of its content yet; "fading" visual experience of the speaker.
(4.) Feeling confused; listening to speaker and reading along on handout, taking in the meaning. [I'm counting this as an instance of thinking about the talk.]
(5.) Visual imagery of the "macaroni orange" of a recently seen flyer; skanky taste of coffee; fantasizing about biting an apple instead of tasting coffee; feeling need to go to bathroom; hearing the speaker's sentence. The macaroni orange was the most prominent part of her experience.
(6.) Reading abstract for next talk; hearing an "echo" of the speaker's last sentence; fighting a feeling of tiredness; maybe feeling tingling on tooth from permanent retainer.
Where is the cooking up of objections, the thinking through of consequences, the feeling of understanding the meaning of what is being said, the finding of connections to other people's work? In only one of these samples was taking in the meaning of the talk the foremost part of the experience.
It could just be that Russ and I and our critics are unusually deadening speakers, but I don't think so. My guess is that most audience members, listening to most academic talks, spend most of their time with some distraction or other at the forefront of their stream of experience. They may not remember this fact because when they think back on their experience of a talk, what is salient to them are those rare occasions when they did make a novel connection or think up an interesting objection. (I think the same is true of sex thoughts. People often say they spend a lot of time thinking about sex, but when you beep them they very rarely report it. It's probably that our sex thoughts, though rare, are much more frequently remembered than other thoughts and so are dramatically overrepresented in retrospective memory.)
Here are two hypotheses about understanding academic talks that harmonize with these observational data:
(1.) Our understanding of academic talks comes mostly from our ability to take them in while other things are at the forefront of consciousness. The information gets in there, despite the near-constant layer of distraction, and that information then shapes skilled regurgitations of the content of the talks.
(2.) Our understanding of academic talks comes mostly from those few salient moments when we are actually not distracted. Maybe this happens three or twelve or thirty times, for very brief stretches, during the course of the talk. The understanding we walk away with at the end is a reconstruction of what must plausibly have been the author's view based on our recollection of those few instances when we were actually paying attention to what she was saying.
Any bets on (1) vs. (2)? Or candidates for a (3)? If (2) is closer to the truth, then it may be possible to discover strategies to get much more out of talks by discovering ways to better focus our attention on the content.