Wednesday, January 21, 2009

The Dust Hypothesis

Consider the following argument:

(1.) It doesn't matter what your mind is made of, as long as the functional relationships between your mental states and the inputs and outputs are right. A conscious person could be made of carbon-based molecules with an organic brain, or of silicon chips in a robot body, or of suitably complex magnetic iron structures. If a being dependably acts like a sophisticated, conscious, intelligent being, it is a sophisticated, conscious, intelligent being. (Searle would disagree with this, but it is the majority view in philosophy of mind and standard in fictional portrayals of android and alien intelligences.) Let's call each temporal slice of such a being a "cognitive state".

(2.) The cognitive states (or temporal slices) of people can be temporally or spatially distributed. If a being of the sort in (1) exists for only one second out of every ten, it is still a conscious, intelligent being, just one with temporal gaps in it -- gaps the being itself may not notice. Likewise, if the being is partly instantiated in Paris and partly in Rio, with the two parts in constant communication, reacting in a co-ordinated way to produce the right sort of behavior, that also does not deprive it of consciousness and intelligence. (Something like this is suggested by Dennett.)

(3.) Furthermore, the objective temporal order of cognitive states is irrelevant. If input 1 ("How are you?") is followed by cognitive states 2, 3, and 4, then by output 5 ("Better, now that you've stopped kicking me!"), it shouldn't matter if as measured by the objective time of the outside world, state 3 comes before state 2, as long as in terms of subjective time and cognitive sequence state 2 comes first. (Dennett, again, is useful here. It's a little tricky to figure out what subjective time and cognitive sequence are independent of objective temporal order; but the conclusion of the argument can be weakened to dispense with this premise if necessary.)

(4.) Also, actual connection to the outside world is irrelevant. You could still have the intelligence and consciousness you now in fact have if your cognitive states were instantiated in a brain in a vat. (Here we may be departing from Dennett; but even if, as Dennett's student Noe and others have argued, some environmental connections are essential to mindedness, we can probably still run the argument I'm interested in. We just turn it into brain-and-relevant-bit-of-the-environment in a vat. We could also dispense with certain "externally defined" mental states and still have an interesting version of the conclusion if necessary.)

(5.) If all this is true, it appears to invite the following conclusion: As long as somewhere in the universe, in some temporal order there exists a functional equivalent of each of your cognitive states, no matter where, in what material, or how grossly distributed over time and space, then there is a mental duplicate of you in existence.

(6.) Then, finally: In all the spatial and temporal vastness of the universe, each of your cognitive states will be instantiated somewhere other than your own brain, in vastly different times and locations.

(7.) So, there is a mental duplicate of you spread out across space and time.

(8.) And this generalizes: There are many, many such people; the universe (at least the complex bits of it) is permeated with them; they include many possible alternative versions of you; etc.

Call this the Dust Hypothesis, after science fiction writer Greg Egan's similar Dust Hypothesis in his book Permutation City.

Assuming the conclusion is absurd, the question is where to put on the brakes. My own inclination is either (1), following Searle, or (5), or (6). On (6): Perhaps the functional relationships necessary for sophisticated, conscious thought are so complex that even in the vast universe they would not be instantiated except in coherent, brain-like packages. But maybe that underestimates the vastness and complexity of the universe?

On (5): Perhaps actual causation between cognitive states is necessary to mentality and consciousness, not just the instantiation of those states with the right counterfactual and dispositional relationships. But I worry. Couldn't there be a mental being causally truncated on one end (brought suddenly into being by freak quantum accident, like Swampman), or on the other (destroyed suddenly by lightning), or both (thus existing for only a moment)? Or what if you have an idea due to stroke or quantum accident (and then maybe the idea vanishes for similar reasons)? Or suppose that you are destroyed and merely by chance a duplicate of you is simultaneously created elsewhere -- wouldn't there be a stream of mentality that transitioned from one to the other? (Could you tell? Would it matter deeply to you whether the duplicate came about by chance or design?) Then generalize. It's a complex issue, but for reasons like those, I'm inclined to think that the actual instantiation of dispositional and functional structures, even if they're not actually causally connected, is enough for interesting and subjectively continuous mentality (even if some externally defined states like genuine [as opposed to apparent] memory require actual causation). But then if we grant (1) and (6) and the others, we seem to be back to the Dust Hypothesis.

71 comments:

  1. This reminds me of Arnold Zuboff, e.g., 'One Self: The Logic of Experience', Inquiry, 33 (1990)

    Gil

    ReplyDelete
  2. Where to put on the brakes, indeed! It seems that there is a weird jump in here from the criteria for the existence of "a mind" to the existence of "your specific mind." I'm not hugely familiar with philosophy of mind, so I may be missing something in the argument, but why is it that some form of spatio-temporal criterion cannot be applied as far as the uniqueness of a mind? I always thought that was a pretty standard tactic/criterion in event theory and identity theories more broadly.

    Could we, potentially, still view a mind as a temporally persisting entity, albeit with active and inactive states? Or, even if we don't want a flat persistence criterion, what about "mapping" active functionality onto a set of [s, t] coordinates, and then using the pattern to differentiate (though, I don't know what criterion would be used to connect the [s, t] slices)?

    If there's something I'm missing in the argument, I'd love to hear it!

    ReplyDelete
  3. I have issues with (5). There is an assumption here that the universe is infinite, which I feel to be questionable. Wikipedia, at least, suggests that the observable universe has a diameter of 93 billion light-years (which is huge, but not infinite).

    If the amount of stuff in the universe is finite (the volume of the universe is another story), then the argument fails, because it is based on the assumption that, in an infinite universe, all (physically possible) things are not only possible, but indeed likely.

    Thought-stuff - whether computer-based or carbon-based - is likely to be physically incredibly complicated in whatever instantiation. The human brain, with its 100 billion neurons, each with connections to thousands of other neurons, is as complicated in its way as the universe is large. Something at least approaching that level of complexity is likely to be necessary for a human cognitive state to be replicated. In a finite universe, there is not much likelihood of such a thing just randomly coming into existence.

    That's my feeling, anyway.

    ReplyDelete
  4. Well, I'm not sure I'm getting this, but:

    I just don't see why we have reason to think that a functional duplicate of any of my cognitive states exists, much less all of them. The temporally and spatially distributed states in (2) require "constant communication" so they can react "in a coordinated way to produce the right sort of behavior." How do distributed dust states do this? Why think that distributed dust states have any functional dispositions at all?

    I'm missing the jump from multiple-realizability to dust clouds counting as realizations, even of your so-called "cognitive states."

    I'm also not sure what to make of this sort of state notion. How thick a slice is it? Can such states really occur in any order and still be the same states? Isn't the ordering part of their identity conditions--i.e., they follow from the right combination of inputs and other states?

    With the short-lived swampman, if it only exists for .000000001 second, why think it experiences anything or thinks anything? If it exists for a few seconds, the causation between the states it does have may be enough to count it as thinking or experiencing. Depends on the thickness of the slices, I guess.

    Or maybe short-lived swamp experiences are just like Lewis's mad pain: wrong inputs and outputs, but relevantly similar internal constitution to count as experience. This puts some pressure on 1, but if it avoids radical liberalism, that might be a reason to follow Lewis here.

    (Or maybe I just missed it! Oh, well...)

    ReplyDelete
  5. An example of subjective experience of time:

    http://blogs.discovermagazine.com/cosmicvariance/2008/12/29/richard-feynman-on-boltzmann-brains/#comment-56111

    Also:

    http://richarddawkins.net/forum/viewtopic.php?f=18&t=67749&p=1627853&hilit=fubaris#p1621121

    ReplyDelete
  6. Another description of Dust Theory:

    http://richarddawkins.net/forum/viewtopic.php?f=18&t=44296&p=889927#p887556

    ReplyDelete
  7. And further thoughts on "states" and consciousness:

    http://richarddawkins.net/forum/viewtopic.php?f=18&t=44297&p=899934#p887569

    ReplyDelete
  8. Tim Byron:

    >> In a finite universe, there is not much likelihood of such a thing just randomly coming into existence.

    If you have a finite universe but infinite time, you have the exact same problem. See Boltzmann Brains:

    http://www.nytimes.com/2008/01/15/science/15brain.html?_r=1

    AND:

    http://en.wikipedia.org/wiki/Boltzmann_brain

    ReplyDelete
  9. It seems to me that the most logical conclusion to draw is that consciousness is associated with information states, similar to Chalmers's "double-aspect theory of information" (http://consc.net/papers/facing.html).

    I don't think that consciousness is dependent on the causal chains that connect the states, however, because you can think of many situations where you can interrupt, shortcut, or modify the causal chains that connect the states and yet still get conscious-seeming behavior (and therefore presumably actual consciousness).

    Which leads me to conclude that it is the state information, and not the causal connections between states, that is the source of consciousness.

    ReplyDelete
  10. Okay, last post, I promise.

    This article by Hans Moravec has some good stuff related to this topic as well I think:

    http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1998/SimConEx.98.html

    ReplyDelete
  11. Hi Eric,

    Fun post!

    This perhaps echoes some of what Josh wanted to question, but I'm extremely reluctant to hang much on intuitions about SwampSlice (my intrinsic doppelganger with causal truncation fore and aft). I would have thought you'd share a similar reluctance.

    What, as philosophers of mind or cognitive scientists, should we conclude *about minds* from the fact that at least some people are willing to attribute mental states to SwampSlice? I guess an analytic metaphysician would think this constrains theorizing about minds in the way data constrain science. However, I'm inclined to a different interpretation. The ease with which we conceive of weird entities having minds comes from our not knowing what minds are. It's my young daughter's ignorance of the scientific nature of rainbows and gold that makes it easy for her to conceive of walking on a rainbow or meeting a living, breathing person made of solid gold. We shouldn't conclude from her imaginings that there are possible worlds wherein rainbows are tread upon.

    ReplyDelete
  12. What about volition? I'm having trouble seeing how a dust-mind can have it. Or does contemporary philosophy view volition as utterly separable from cognition, or reducible to cognition or natural impulse, or what? (If this is an intolerably novice-type question, I won't be offended if you say so bluntly.)

    ReplyDelete
  13. Thanks for all the comments, folks!

    Gil: Thanks for the tip, I'll have to go look that up!

    Sean: I'm not sure I entirely follow your comment. However, on your first point: My real interest is in the plethora of potential minds this suggests. "Your mind" is just a tool to get there, since if we can show that a duplicate of your mind exists spread out elsewhere then it follows that other minds exist spread out elsewhere. On spatiotemporal continuity, I agree one could dig in one's heels and insist on this, but it has some pretty counterintuitive consequences in science fiction cases (starting with Dennett's "Where Am I?").

    Tim: I agree that that's a problem with the argument. I think it's a problem with the move from (5) to (6) -- exactly the problem I meant to convey (perhaps too telegraphically) in my concern about (6). It hinges on physical assumptions about the universe and their mathematical consequences. For example, if the assumptions behind the Boltzmann brain scenario (google it to learn more!) are plausible, then the universe should be large and complex enough for the move from (5) to (6) to work.

    ReplyDelete
  14. Josh: I don't think you missed it. Make the temporal slices as thick as you want. Thicker slices decrease the probability of realization (step 6) but make the existence of experience despite distribution seem more plausible (step 2). Any compromise that saves both (6) and (2) works for the argument. In a rich enough universe, you could get the composition and decomposition of arbitrarily complex states, including all their dispositional properties, by quantum accident. For example, you might even get an *actual brain* (or something physically identical for some duration), which presumably would have the same narrowly individuated dispositions. (Throw in some environment, too, if you want.) Of course, by premise (1) it doesn't need to be anything like an actual human brain. The more closely the states must resemble our own physical organization in order to be functionally equivalent in the right way, the more pressure that puts on (6). It's a matter of cosmology and math, but in a rich enough universe, there could be a full panoply of spatially and temporally distributed cognitive states molecule-for-molecule identical to each of your own actual states.

    ReplyDelete
  15. Allen: Thanks for all the comments and links. I was originally planning to work Boltzmann brains into this post, but it was already longer than I like. I'm glad you brought up the topic! (I didn't get that last link to work, by the way.)

    Pete: I'm completely with you about the trustworthiness of intuitions about SwampSlice. I'd throw in skepticism about our intuitions about standard aliens and androids too. My overall metaphilosophical view is that our intuitions on such matters are ill-founded and incoherent and that consequently a metaphysics of consciousness is impossible, except to say that one of about eight broad hypotheses -- each counterintuitive and crazy-seeming -- must be true. I call this view crazyism. Panpsychism is among the crazy disjuncts. Following our intuitions through to their wacky consequences, as here, ultimately helps make the case for crazyism.

    Kurt: I'm a "compatibilist" about volition: I think it comes along for the ride whenever you get a flow of cognitive states of the right sort between input and response, and there's no separate kind of entity or distinct kind of causation involved in volition. I believe this is the majority view in contemporary philosophy, but it's not universally accepted. "Libertarians" about free will think you need something more than the ordinary causal flow to have genuine freedom; and perhaps such libertarians do have a way to put the brakes on the Dust argument that compatibilists do not.

    ReplyDelete
  16. Hmmmm. I still think I'm confused, but I'll take your word for it!

    I'm still not seeing why we think that a dust state is a realization of any of my cognitive states.

    Here's a way to put the worry, and you can point out which step in your argument I've missed:

    A. Mental states are identified by their causal dispositions--they are disposed to be caused by some inputs, by some interactions with other mental states, and to cause some outputs.

    B. Dust states do not have these dispositions.

    C. Therefore, dust states are not mental states.

    Obviously, B is the question begged. But even if we allow that some spatial and temporal dispersion is acceptable, why think ANY amount of spatial or temporal dispersion is OK? I think you're making it too easy to have the requisite dispositions.

    (Side note: Dennett is now not as accepting of his "Where am I?" distribution idea--see *Kinds of Minds*.)

    ReplyDelete
  17. "Dust theory" is, I think, similar to "Triviality" arguments against functionalism.

    A pretty readable example of which is:

    http://www.doc.gold.ac.uk/~mas02mb/Selected%20Papers/2004%20BICS.pdf

    ReplyDelete
  18. Josh:

    Your brain is formed of particles that evolve in time according to the laws of physics and thereby apparently produce consciousness.

    The "dust cloud" is also formed of particles that evolve in time according to the laws of physics.

    The general idea is that if there exists a mapping from the dust cloud particles' evolution to the evolution of the particles of your brain, then it could be concluded that the dust cloud also "hosts" a consciousness that is equivalent to the one hosted by the particles of your brain.

    Note that the mapping doesn't have to hold for extended periods of time...if it holds for 1 minute then you'd assume that 1 minute of consciousness was "experienced" by the dust cloud version of you.

    The dust cloud version of you would not realize that he was being "simulated" by a dust cloud. He would think he was living the same life that you are actually living.

    Basically if an accurate computer simulation of your brain would be conscious in the same way that you are, then the dust cloud would also.

    So if you have a large enough dust cloud, and you wait long enough, EVENTUALLY it will drift into the correct configuration where its particle motions can be mapped (though perhaps only through a fairly elaborate mapping function) to the particle motions of your brain.

    Again, flavors of the Boltzmann Brain argument.

    Might also throw in the Many Worlds Interpretation of QM as another way to increase the likelihood of hitting the correct configuration. After all, why limit ourselves to one branch of the universal wavefunction?

    ReplyDelete
  19. Quick question:

    Is functionalism really the majority view?

    ReplyDelete
  20. Ah, thanks Allen. Very helpful.

    The problem here is mistaking the map for the territory.

    Look, there is a mapping from my mind to the real numbers. Lots of mappings. Do those mathematical structures therefore support consciousness? I doubt it. So something about the realization base and its causal powers matters. This argument trades on a very abstract notion of function.

    (In the movie "The Mummy" the Mummy creates an evil dust storm version of himself--and it sure *looked* mad. Not sure if it really was mad...)

    ReplyDelete
  21. Josh and Allen: I'm with Allen on this one. Here's one possibility (related to Boltzmann brains): A quantum accident creates a brain molecule-for-molecule identical to yours. Would you disagree, Josh, that this brain has many of the same dispositions (not the historically individuated ones of course) as yours? If necessary we could toss in some environment too. This isn't a map-territory thing, is it?

    Clark: Yes, I think some version of functionalism is (still) the majority view, as long as one is pretty liberal in what one means by functionalism -- that is, if you allow a version of functionalism that permits evolutionary history and external events to be relevant to the individuation of mental states and if you don't insist on strict reduction to stimulus and behavior.

    ReplyDelete
  22. Some thoughts inspired by the responses to Josh...

    I'm starting to have another worry about this SwampSlice stuff. It is probably false as a general rule that if a property is instantiated over some duration then that property is instantiated by an arbitrarily selected timeslice from that duration.

    To illustrate: There are likely timeslices of me doing the Charleston ("DanceSlices") that are intrinsically indiscernible from me doing the Funky Chicken. If a SwampSlice version of my DanceSlice popped into existence off the arm of Orion, it may make little sense to say that it counts as a chunk of Charleston. Or what I'd rather say: it makes little sense to suppose that there's some fact of the matter about what dance chunk, if any, it counts as.

    This sort of point likely extends quite naturally to all sorts of psychological stuff. Take some duration wherein I am activating some short-term memory of what I wrote a few moments ago. This may very well be smeared out in time in such a way that it makes little sense to say that arbitrarily small timeslices have what it takes, intrinsically, to count as memory slices. The contrary position is a kind of HyperInternalism that seems to have little going for it beyond mere imaginability.

    ReplyDelete
  23. So it's functionalism informed by Davidson then.

    ReplyDelete
  24. I don't see this position as functionalism at all. I see it as the exact opposite of functionalism. Functionalism says that the mind is fundamentally an abstract pattern, which admittedly always supervenes on something, but that something is not ontologically relevant to it. When you divide a mind into time slices, and scatter those slices throughout the universe, the network of relationships is lost, so the mind is lost. It is the network of relationships that make the mind a mind, not its parts. You would have had a somewhat stronger case if you had talked about dividing the mind into functionally relevant parts, not just time slices. But even those parts would no longer be functionally relevant once they are removed from the network.

    A bullet that I will bite when I acknowledge this is that a mind is a mind only because it has some sort of relationship with a world. Just as neurons become thoughts only when they perform brain functions, so brain-functions become thoughts only when they are about something in a world. Accepting this dissolves the whole problem. One more reason to take the Hypothesis of Extended Cognition seriously.

    ReplyDelete
  25. Pete--

    That's awesome. Didn't I get an electronic card of you and Ray dancing the Funky Charleston Chicken off the arm of Orion?

    Eric and Allen--

    What Pete said.

    ReplyDelete
  26. Eric and Allen--

    And the first part of what Teed said.

    ReplyDelete
  27. Yeah that's the story of my life. Everybody acknowledges the premises that lead to HEC, but no one wants to accept the conclusion, or explain why they don't.

    ReplyDelete
  28. > I'm starting to have another worry about this SwampSlice stuff.

    First I will address the simplest case of Dust Theory...where we are dealing with one giant but finite cloud of dust, over a long (but perhaps not infinite) amount of time.

    In this case "Dust Theory" is really no different than the idea that an accurate computer simulation of your brain would be conscious. The exact same reasoning holds. If you say that Dust Theory in this case is incorrect, then I think you're also saying that an accurate computer simulation of your brain would not be conscious either, despite the fact that it would respond to inputs in exactly the same way as your real brain would (a necessary condition of being an "accurate" simulation).

    As I mentioned, the state of the dust cloud particles evolves in time according to the laws of physics. There is a causal connection between the state of the dust cloud at time t1 and any subsequent time (t2). Why? Because it's the same cloud of dust with particles drifting around affecting each other, in the same way that the particles of your brain drift around and affect each other.

    But, taking the total state of the dust cloud at time t2, you should be able to work back to the state of the cloud at t1 (setting aside possible problems due to quantum indeterminacy). Starting at t2 you would follow the causal chain of each particle back and eventually find out where it was at t1. Though you would need a massive amount of processing power to do this, of course, due to the n-body problem. But this is a thought experiment, so assume that we have that processing power.

    So, as is true of the dust cloud, a computer simulation of your brain is "accurate" if there exists a mapping of the state of the computer simulation to the state of your brain at a given time. A simulation run on a digital computer proceeds in arbitrarily small, discrete time slices, so at the end of each time slice you would compare the state of the computer to the state of your brain, and if there was a consistent mapping, then the computer is accurately simulating your consciousness.
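
    To make that concrete, here is a minimal sketch of the slice-by-slice comparison in Python. Everything in it (the state representation, the update rules, and the mapping function) is invented for illustration:

```python
# Toy stand-ins: real brain states and update rules would be vastly
# more complicated, but the comparison logic is the same.

def step_brain(state):
    """Hypothetical physical evolution of the brain over one time slice."""
    return [(x * 3 + 1) % 17 for x in state]

def step_simulation(state):
    """Hypothetical simulation update rule for the same time slice."""
    return [(x * 3 + 1) % 17 for x in state]

def mapping(sim_state):
    """Invented mapping from simulation states to brain states (here, identity)."""
    return sim_state

brain = [2, 5, 11]
sim = [2, 5, 11]

# The simulation counts as "accurate" if, at the end of every time
# slice, the mapped simulation state matches the brain state.
for t in range(10):
    brain = step_brain(brain)
    sim = step_simulation(sim)
    assert mapping(sim) == brain, f"mapping failed at slice {t}"
```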

    In this case, if you agree that the computer simulation is conscious, then where does consciousness exist in the simulation process? We do the calculations for each time slice one processor instruction at a time. There are probably billions of processor instructions executed per time slice. Which processor instructions cause "consciousness" to occur?

    Keep in mind that the processor may be interrupted many times to run programs other than the brain simulation. So at any time in the process of calculating the next time slice, the processor might be interrupted and diverted to another program's calculations for an arbitrary length of time. When the computer returns to the simulation it will pick up where it left off, and the integrity and accuracy of the simulation will not be affected.

    This is equivalent to the dust cloud *only sometimes* being in a state that maps to the state of your brain. So the dust cloud spends 10 minutes drifting in a configuration that maps to the state of your brain. Then it drifts out of synch and the dust cloud particles no longer map to the state of your brain. A billion years go by with the dust cloud particles out of synch. Finally the dust cloud again drifts into a configuration that maps to the state of your brain for another 10 minutes. And so on. There is a causal connection between the two 10 minute interludes in the same way that there is causal continuity with the computer simulation even when the computer is occasionally diverted to execute other programs due to the demands of preemptive multitasking.

    Also note that the speed of the computer has no impact on how the simulated consciousness perceives the passage of time. If the computer takes a year to compute 1 minute of subjective time for the simulated consciousness, that will still only feel like 1 minute for the consciousness being simulated. Conversely, if the computer runs faster than the human brain and it only takes 1 minute to compute 1 year of subjective time for the simulated consciousness, that year will still feel like a year to the simulated consciousness, even though it actually only took 1 minute of "external" time.

    So, I think this pretty much shows that the basic idea of Dust Theory is correct, IF you accept that an accurate computer simulation of a brain would also be conscious. If you don't accept that a computer simulation would be conscious, then I have a whole separate set of arguments I can make on that subject (ha!).

    So, this is just step 1 of my multi-step response to Pete and Teed. I also want to address the non-causal-chain case of Dust Theory, and also Teed's Hypothesis of Extended Cognition. But I need to go eat supper.

    Again, Hans Moravec covers a lot of this in:

    http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1998/SimConEx.98.html

    (if the link doesn't come through, google for: hans moravec simulation consciousness existence simconex)

    ReplyDelete
  29. If a being dependably acts like a sophisticated, conscious, intelligent being, it's in a wee, tiny, minuscule, microscopically small minority.

    Stay on groovin' safari,
    Tor

    ReplyDelete
  30. Oh,
    The Dust Hypothesis: to promote or demote a mote.

    I just thunk-a that.

    ReplyDelete
  31. Okay, next up, causality.

    In my opinion, causality is a physical implementation detail whose specifics vary from system to system, and even from possible universe to possible universe, but which is ultimately not important to the experience of consciousness.

    So in the previous post my goal was to show that mappings from a dust cloud to a brain are as valid as mappings from a computer simulation to a brain. And I'm making the assumption that an accurate computer simulation of a brain would produce consciousness just as an actual brain would.

    It's difficult to say much about dust cloud dynamics, whereas it's relatively easy to talk about how computers work. So assuming that there is an equivalence between computers and dust clouds, from here forward I'll mainly talk on computers.

    So, returning to the previously mentioned computer simulation, the simulation consists of two parts: data and program. The data describes a brain in arbitrarily fine detail, the program describes the steps that should be taken to change the data over time in such a way as to maintain a consistent mapping to a real brain that is also evolving over time.

    A physical computer that implements a simulation basically sets up a system of physical events that when chained together, map the input data (brain at time slice t1) to a set of output data (brain at time slice t2). The "computation" is just a mapping process, or an arbitrarily long sequence of mapping processes.

    Consider the boolean logic gates that make up a digital computer. A NAND gate, for example. So any physical system that takes two inputs that can be interpreted as "0" or "1" and maps those inputs to some sort of output that also can be interpreted as "0" or "1", and does so in such a way that two "1" inputs will produce a "0" output and all other combinations of inputs will produce a "1" output, must be said to implement the computations defined by the boolean NAND operation.

    In a digital computer, this might be done by combining two NMOS transistors and two PMOS transistors in such a way that the direction of current flow at the output line is interpreted as "0" or "1". BUT, you could also implement this same operation using dominoes, with the state of the "output domino" as fallen or standing indicating "0" or "1". Or you could do it with water, pipes, and valves, with the direction of water flow indicating "0" or "1" at the output pipe.

    Note that there doesn't need to be just two discrete values for input and output, "0" and "1". The range of values for input and output just have to be mappable to "0" and "1".

    Also note that we only need the mapping of input to output to hold for the times that we rely on it to produce correct values. We don't care if a year later the NAND gate implementation has broken. We don't care if a day later it no longer works. We don't care if 1 second later the mapping of inputs to outputs by the physical system no longer holds. All we care about is that at the time we needed it to do our NAND calculation, the mapping held and the NAND implementation produced the correct results (regardless of why it produced the correct results).
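
    Here is a toy Python version of that point: two invented "substrates" (current direction, and standing or fallen dominoes) both count as NAND gates once their outputs are interpreted through a mapping:

```python
def nand(a, b):
    """The abstract boolean NAND operation."""
    return 0 if (a == 1 and b == 1) else 1

# Substrate 1 (invented): direction of current flow on the output line.
decode_current = {"reverse": 0, "forward": 1}
def current_gate(a, b):
    return "reverse" if (a, b) == (1, 1) else "forward"

# Substrate 2 (invented): whether the output domino is left standing.
decode_domino = {"fallen": 0, "standing": 1}
def domino_gate(a, b):
    return "fallen" if (a, b) == (1, 1) else "standing"

# Under their respective mappings, both systems implement NAND.
for a in (0, 1):
    for b in (0, 1):
        assert decode_current[current_gate(a, b)] == nand(a, b)
        assert decode_domino[domino_gate(a, b)] == nand(a, b)
```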

    Okay, so we have a lot of data that describes a brain, and we have a program which describes in abstract terms the sequence of steps necessary to transform the brain data over time in such a way as to maintain a consistent mapping to an actual brain. And we want to run our program on a computer.

    There are many, many types of computers, with a large range of architectures, that would be capable of running our simulation. And depending on which we choose, we will end up with a wide variety of physical representations for the data, and also a wide variety of execution methods for the program.

    We could run the simulation on a powerful digital computer, with the data stored as bits in RAM, and the program executed on one processor sequentially or on many processors in parallel. Or we could run the simulation on a huge scaled up version of a Babbage Analytical Engine with millions of punch cards. Or we could print out the source code and the data and execute the program by hand, using a pencil and paper to store and update various values in memory (similar to Searle's Chinese Room scenario). OR we could even construct something like a mechanical brain whose structure mimics the structure of an actual human brain, with mechanical neurons paralleling the operation of actual neurons, and also with analogues for neurotransmitters and glial cells and all the rest.

    In all of these cases, the causal structure of the executing simulation would be vastly different from case to case. And yet, if there always existed a mapping from the simulation back to the original human brain, then I would assume that the simulation was accurate and was resulting in subjective experience for the simulated consciousness.

    In fact, due to things like optimizing compilers, out-of-order-execution circuitry, and branch-prediction circuitry, not to mention automatic parallelization and various forms of hyperthreading, PLUS the added causal interruptions due to preemptive multitasking -- the actual causal structure of the executing program might bear relatively little resemblance to what you would expect from examining the source code of the simulation program.

    Also note that we could do things to optimize the simulation execution like cache intermediate results in lookup tables to avoid recomputing frequently occurring values, OR even restructure the entire simulation program in a way that is mathematically equivalent to the original and produces equivalent output, but which in fact shares none of the original program's causal structure.
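
    As a toy illustration of the lookup-table point, memoizing a function changes the causal structure of the computation (repeat calls never re-run the original code path) while leaving every output unchanged. The update function here is a made-up stand-in:

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def update(inputs):
    """Invented stand-in for one expensive simulation computation."""
    global calls
    calls += 1              # count how often the "real" computation runs
    return sum(inputs) % 7

trace = [update((1, 2, 3)) for _ in range(1000)]

assert len(set(trace)) == 1   # identical outputs every time...
assert calls == 1             # ...but the computation itself ran only once
```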

    A final scenario:

    Say that we are running our simulation on a digital computer. The simulation is doing the calculations necessary to transform the brain state data at t1 into the state at t2. At a crucial moment, a cosmic ray zings in from outer space and disrupts the flow of electrons in the CPU that is doing an important calculation, and so the calculation is not done. HOWEVER, by sheer coincidence, the correct output value that would have been produced is already on the output lines of the boolean logic gates that provide the data to be written to memory, and indeed this random, but in this case correct, value is written to memory, and the simulation goes on as if nothing improper had happened.

    Now, in this case, the causal chain was broken, but due to an unlikely but not impossible stroke of good fortune, the integrity of the simulation was maintained, and presumably consciousness was still produced. Obviously the simulated consciousness wouldn't notice anything amiss, because noticing something amiss would require a change of data. And no data was changed.

    So the bottom line for all of the above is that it is possible to think of many different scenarios where the causal structure differs, and even of examples where the causal chains that form the structure are broken, but as long as the correct outputs are produced in such a way that the mapping from the simulation to the original brain holds, then I think that consciousness would still result from the simulation.

    From this I conclude that causality is an implementation detail of the system used to calculate the outputs, and that any system (even those that involve random breaks in the causal chain) that produces outputs that can be mapped to a human brain, will produce consciousness in the same way that the human brain does. Only the outputs matter. Which is to say that only the information matters.

    Being able to follow the causal chain of the simulation is important in being able to interpret the outputs of the simulation, and also is important to being able to have confidence that the simulation is actually running correctly, AND is also important in terms of knowing how to feed inputs into the simulation (assuming that the simulated consciousness isn't living in a simulated world which provides the inputs).

    So causality is critical to us in viewing, interpreting, and interacting with the simulation.

    HOWEVER, I don't see that causality is important in producing consciousness.

    Is anybody still reading this? Does this sound too crazy?

    So there is another step to come where I directly address Pete and Teed.

    ReplyDelete
  32. Josh:

    >> Look, there is a mapping from my mind to the real numbers.

    I intend to address this also.

    Tor:

    >> The Dust Hypothesis: to promote or demote a mote.

    Pretty good!


    ALSO, on the subject of Boltzmann Brains, I wonder has anyone considered the idea of a "Boltzmann Simulator"? Where instead of randomly moving particles coming together to form a brain, instead they come together to form an abstract simulation of a brain, which perceives some reality other than the maximal-entropy universe which actually hosts the simulator?

    Just a thought.

    ReplyDelete
  33. Actually, now that I've thought about it, I can address Pete's point without reference to all of the stuff I wrote above (which is still good stuff, I think, and I will use it to address Teed and Josh).

    So, Pete writes:

    >> There are likely timeslices of me doing the Charleston ("DanceSlices") that are intrinsically indiscernible from me doing the Funky Chicken.

    This seems to be wrong. If the slices are intrinsically indiscernible, then in the real world why do you continue on to do the Charleston? The "causal seeds" of the subsequent Charleston-specific slices must be contained within your supposedly totally generic dance slice...otherwise, what would explain the fact that the original Pete goes on to do Charleston-specific moves?

    If the "generic" danceslice is truely generic, then the only explanation for him continuing on to do the Charleston instead of the Funky Chicken (or slices that are neither the Charleston OR the Funky Chicken) would have to be that some external force not included in the generic danceslice (a dance partner?) pushed him into doing the Charleston.

    Right?

    ReplyDelete
  34. Allen, I think Brood Comb had a post about that case a few months back.

    ReplyDelete
  35. > Functionalism says that the mind is fundamentally an abstract pattern, which admittedly always supervenes on something, but that something is not ontologically relevant to it.

    All that is also basically true of Dust Theory.


    > When you divide a mind into time slices, and scatter those slices throughout the universe, the network of relationships is lost

    There is no CAUSAL relationship between the particles representing the information that we use to define these scattered slices, but that doesn't mean that there is NO relationship between the information in the slices.

    So let's label each slice by its position in the original timeline. Chronologically ordered, S1 is the first slice taken at time t1, S2 is the second slice taken at time t2, and so on. So let's say that each slice "S" represents the entire mental state of a guy named "Ben" at that given time "t".

    So Ben at t8 has memories of what he was doing at t1. These memories are encoded within the state of Ben at t8. Therefore encoded in some way in S8 there is some representation of the information in S1. Therefore, there is a relationship between S8 and S1 that is independent of any causal connection between the particles that were used to represent S8 and S1.

    In addition to the fact that the information represented by later slices contain "memories" of the earlier slices, there is also the fact that all of the slices contain information to the effect that this guy thinks his name is Ben and that he lives in Wisconsin, and all of the other biographical data. All of that is found within each slice and ties the various slices together. That common information provides another relationship between the slices.

    Right?
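
    Here's a toy Python picture of what I mean, with the slice contents invented for illustration. No slice causally produces the next, but later slices still encode information about earlier ones:

```python
slices = []
memories = []
for t in range(1, 9):
    memories.append(f"what Ben was doing at t{t}")
    slices.append({
        "name": "Ben",                  # common biographical data
        "lives_in": "Wisconsin",
        "time": t,
        "memories": list(memories),     # later slices encode earlier ones
    })

s1, s8 = slices[0], slices[7]
# The informational relationships hold however the slices are scattered:
assert s1["memories"][0] in s8["memories"]   # S8 "remembers" S1
assert s1["name"] == s8["name"]              # shared biographical data
```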

    Let's consider an alternate version of "Swampman". Let's get rid of the lightning bolt though (too dramatic), and just go with quantum fluctuations. Virtual particles. That sort of thing.

    In this version, Davidson is vaporized by the unlikely but not impossible simultaneous quantum tunneling of all of his constituent particles for random distances in many random directions. BUT, by another completely unrelated and hugely unlikely (but not impossible) quantum event, a whole new set of exactly identical particles materialize from the background vacuum energy of space and Swampman is born. However, the process happened so rapidly that Swampman is not even aware that he isn't Davidson, and no one watching would have noticed the change either.

    Now, 10 minutes go by, as Swampman continues his oblivious enjoyment of the murky swamp, and then suddenly the same thing happens again. Swampman is replaced by Swampman-2. And again it happens so fast that neither Swampman-2 nor any observers notice it.

    Now, 10 seconds go by and Swampman-2 is replaced by Swampman-3, via the same mechanism. And 1 second later, the same thing again. And so on until you get down to whatever timeslice you like.

    So in this case, neither Davidson/Swampman nor anyone else who is observing him OR interacting with him will have any idea that anything is amiss. And the reason that no one notices anything wrong is that there is a continuity, and a relationship, between the information that is represented by the particles of the various versions of the Swampmen and Davidson, EVEN IF there is no causal connection between the particles themselves.

    I think this is related to the idea that every year you replace 98% of the atoms in your body (http://www.npr.org/templates/story/story.php?storyId=11893583, and also http://stevegrand.wordpress.com/2009/01/12/where-do-those-damn-atoms-go/).

    I refer you to my previous posts to make the case that dust clouds are equivalent to Swampmen.

    Okay, next up, Josh, on mapping.

    ReplyDelete
  36. Clark: Which case is that?

    ReplyDelete
  37. Teed:

    Just in case you missed it, my post at Sun Jan 25, 09:00:00 PM PST is my response to your points.

    ReplyDelete
  38. From Eric's original post:

    >> On (6): Perhaps the functional relationships necessary for sophisticated, conscious thought are so complex that even in the vast universe they would not be instantiated except in coherent, brain-like packages. But maybe that underestimates the vastness and complexity of the universe?

    The human brain weighs a little less than three pounds. The sun weighs two billion billion billion tons, and isn't particularly big for a star. There are approximately 70,000 million million million stars in the visible universe.

    We don't know how far the universe extends beyond the cosmic horizon, maybe it's infinite.

    If the universe is large enough, a crude estimate suggests that the closest identical copy of you is about 10^10^29 m away. About 10^10^91 m away, there should be a sphere of radius 100 light-years identical to the one centered here, so all perceptions that we have during the next century will be identical to those of our counterparts over there. About 10^10^115 m away, there should be an entire Hubble volume identical to ours.

    This is an extremely conservative estimate, simply counting all possible quantum states that a Hubble volume can have that are no hotter than 10^8 K. 10^115 is roughly the number of protons that the Pauli exclusion principle would allow you to pack into a Hubble volume at this temperature (our own Hubble volume contains only about 10^80 protons). Each of these 10^115 slots can be either occupied or unoccupied, giving N = 2^10^115 ~ 10^10^115 possibilities, so the expected distance to the nearest identical Hubble volume is N^(1/3) ~ 10^10^115 Hubble radii ~ 10^10^115 meters. Your nearest copy is likely to be much closer than 10^10^29 meters, since the planet-formation and evolutionary processes that have tipped the odds in your favor are at work everywhere. There are probably at least 10^20 habitable planets in our own Hubble volume alone.
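
    A quick sanity check on those numbers, working in base-10 logarithms since the quantities themselves are far too large to compute directly:

```python
from math import log10

slots = 10**115              # proton "slots" in a Hubble volume at <= 10^8 K
log10_N = slots * log10(2)   # N = 2^(10^115) possible occupation patterns

# log10(N) is about 3.0e114, i.e. N ~ 10^(3x10^114) ~ 10^10^115
# to double-exponential accuracy.
print(f"log10(N) = {log10_N:.2e}")

# Expected distance to an identical Hubble volume: ~ N^(1/3) Hubble radii.
# Converting Hubble radii (~10^26 m) to meters adds only 26 to the
# exponent, which is invisible at this scale.
print(f"log10(distance / Hubble radius) = {log10_N / 3:.2e}")
```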

    http://en.wikipedia.org/wiki/Sun
    http://www.cnn.com/2003/TECH/space/07/22/stars.survey/
    http://arxiv.org/PS_cache/astro-ph/pdf/0302/0302131v1.pdf

    ReplyDelete
  39. >Functionalism says that the mind is fundamentally an abstract pattern, which admittedly always supervenes on something, but that something is not ontologically relevant to it.

    Well, that's the issue. I would have thought that the supervenience base is relevant, but only so far as it underwrites the right causal powers. Dust arguably does not do this. Now, if you are talking about a quark-for-quark duplicate (or whatever) of my brain, including the temporal/spatial layout, then it arguably does have the right causal powers. I would think that a constraint on a theory of the right causal powers would be to include such swamp duplicates as thinkers/experiencers. So maybe we can abstract away from actual causal history to some extent. But I still don't think that arbitrarily small slices can have experiences. Consciousness seems to me to be a process--an actualized, causally-connected series of states. That we can abstract away from the process and talk about states/slices/etc. does not license a separate existence, much less consciousness, for these slices.

    One thing our brain seems to use is feedback and re-entrant loops. Such loops occur when the causal activities of various parts of the brain work in reciprocal synchrony with each other. That could be the realization of experience. Now, anything that can perform this function--that can achieve the right kind of synchronous feedback--can have experience; that is, the substrate does not matter, function does. But not ANY substrate can actualize this function. And the function is defined as a process, not a series of abstract slices.

    So, if the dust actualizes this process temporally and spatially the way my brain does, sure, it's thinking, and maybe exactly what I'm thinking. But that, I take it, is not Eric's intended conclusion. He wants something stronger, which claims that it's way easier to get duplicates than we might have thought. I'm still not seeing the argument for this level of abstraction, nor am I seeing why we should accept the temporal/spatial spread claim--at least as it applies to disjoint slices.

    (I, too, share Pete and Eric's worries about the intuitions at work here--settling questions about the ontology of states and processes is a mess, for example. But what the hey, it's a philosophy blog!)

    ReplyDelete
  40. Josh:

    > I would have thought that the supervenience base is relevant, but only so far as it underwrites the right causal powers.

    So what do you think the magic is that causal connections between slices provides, that the information within the slice does not?

    I would have thought the key thing for a functional component would be that it produces the correct output slice. Regardless of whether there is a causal connection between the output slice and the input slice.

    But you seem to come back over and over to "causality" as an essential ingredient for consciousness. What is your reasoning behind placing such weight on causal linkages between slices?

    ReplyDelete
  41. I agree with Pete on this:
    It is probably false as a general rule that if a property is instantiated over some duration then that property is instantiated by an arbitrarily selected timeslice from that duration.

    It is more than plausible that our cognitive states depend not just on the positions of their constituent particles, but also on their [relative] velocities. (Recall that electromagnetic forces, for example, depend on velocities.) But a dependence on velocities leads to other problems.

    ReplyDelete
  42. Stibbons:

    > It is probably false as a general rule that if a property is instantiated over some duration then that property is instantiated by an arbitrarily selected timeslice from that duration.

    The arbitrarily selected timeslice from that duration MUST have some property which marks it as being part of that exact sequence, OTHERWISE how do you explain that the particles in subsequent time slices end up doing the Charleston instead of the Funky Chicken?

    > It is more than plausible that our cognitive states depend not just on the positions of its constituent particles

    I don't think anyone has made the claim that only the particle positions matter. I think we have been using the more generic and encompassing term "state".

    Again, all that matters is that there is a mapping. Position in one system might map to velocity in another. It's the information that's represented that ultimately matters, not the details of the physical substrate.

    ReplyDelete
  43. Allen, I think that you are not attending sufficiently closely to what a SwampSlice is supposed to be. It is intrinsically identical to a cross section of a spacetime worm but "is causally truncated fore and aft".

    Your remarks on the Charleston chunk make it sound like the intrinsic properties of the slice alone necessitate the bringing about of a unique temporal successor. But if this were the case, the stipulated causal truncation would be impossible.

    You've twice raised the question of how it is possible that a generic chunk can have a Charleston successor on one occasion and a Funky Chicken successor on another. The first time you raised this question you also supplied what I take to be a natural (and obvious) answer. *Extrinsic* factors do the trick (environment, dance partner, etc). This is what I took to be the main point of my brief case against what I dubbed "HyperInternalism".

    But now, in your response to Stibbons, you are ignoring the externalist option that you yourself had previously hit upon. Why? Was there supposed to be something obviously untenable about this option that I'm missing?

    ReplyDelete
  44. P.S. It's worth mentioning, perhaps, that we've both been ignoring until now an indeterminist option for explaining how one slice could have different futures.

    ReplyDelete
  45. Pete:

    > It is intrinsically identical to a cross section of a spacetime worm but "is causally truncated fore and aft".

    I'm with you there. My point is that a slice is a collection of particles; these particles have states, such as position, momentum, angular velocity, whatever.

    The total state of the system at time t1 fully determines its state at any subsequent time t2.

    You can introduce quantum indeterminacy, the exact meaning of which depends on what interpretation of quantum mechanics you subscribe to, but regardless I don't think that buys you anything.

    Because, all that takes you to is: The wavefunction of the system at time t1 fully determines the possible states the system could be in at any subsequent time t2.

    It seems to me that you are doing what Stibbons said...only looking at the positions of the particles at t1, and thereby saying "based on these particle positions, we can't tell what dance this slice is from." Which is true. BUT, when you look at a slice you should be considering more than particle positions; you should examine all the states of its constituent particles. Which would then tell you that the slice could only be from a sequence of someone doing the Charleston.

    The particle positions of the slice are consistent with the Funky Chicken, but the particle velocities are NOT.

    ReplyDelete
  46. Allen--

    >I would have thought the key thing for a functional component would be that it produces the correct output slice. Regardless of whether there is a causal connection between the output slice and the input slice.

    I'm not sure what 'produces' means here if it does not mean 'causes.' That there is an abstract mapping between two slices does not entail that one produced the other, as far as I can see. Again, there is a mapping between my mental states and the real numbers, etc. Are patterns of the reals therefore thinking, or do they require the "magic" of causality, or what have you?

    >But you seem to come back over and over to "causality" as an essential ingredient for consciousness. What is your reasoning behind placing such weight on causal linkages between slices?

    Here I guess it's back to intuitions, about which I have sympathy with Pete and Eric's earlier comments. But, to give it a shot, our commonsense idea of mind is based on the example of normal human subjects. Stop brain activity and you stop the mind, at least in real life. This gives some prima facie weight to the causal process idea. Of course, good scientific theory can trump this prima facie appearance. But that hasn't been offered here. Rather, truncated time-slice machine functionalism has been assumed, with at best philosophical thought experiments for support. That's a pretty thin reed.

    Further, as I mentioned, it seems to me that things like feedback loops matter in the brain. I would have thought that causation is required for feedback. Yes, we can describe arbitrarily thin slices of this process and abstractly describe the relations between such states. But look, do you really think that if I winked into existence for a nanosecond, then disappeared, that I would have an experience of that event? If not, why not? What's missing? If yes, why? What is your reasoning placing such weight on these wafer-thin doo-dads?

    ReplyDelete
  47. Whoa! I get distracted from my blog for a few days, and miss this great, roaring conversation! It's going to take me some time to catch up.

    ReplyDelete
  48. Josh:

    >> I'm not sure what 'produces' means

    Produces means "results in an output slice with the correct state, regardless of the presence or absence of causal connections to the input slice". A random process CAN produce the right output values...it just isn't likely to, and won't do so reliably. But if it does so at the instant that we needed it to, then it fulfilled its functional role.

    It seems to me that you could do a "fading qualia" thing here. If you replace one neuron with a substitute that produces the correct outputs without any causal linkages to the inputs, does the person lose consciousness? How about 10,000 neurons? How about a billion neurons? If the neurons continue to produce the correct outputs, but without reference to the inputs, at what point does the person begin to lose consciousness?

    How to get correct output without causal linkages to the inputs? Good question. I leave it as an exercise for the reader. I also refer you back to my previous post on causality and the computer simulation.
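
    For what it's worth, though, here is a toy sketch of the neuron-replacement scenario: a "replay" unit that ignores its inputs and just emits a prerecorded trace, which (by stipulation) happens to be correct. The names and the threshold rule are invented:

```python
def live_neuron(inputs):
    """A causally connected unit: its output depends on its inputs."""
    return 1 if sum(inputs) >= 2 else 0

def make_replay_neuron(trace):
    """A unit with no causal link to its inputs: it replays a recording."""
    it = iter(trace)
    return lambda inputs: next(it)

stimuli = [(1, 1, 0), (0, 0, 1), (1, 1, 1), (0, 1, 0)]
recorded = [live_neuron(s) for s in stimuli]   # record the live outputs

replay = make_replay_neuron(recorded)
# The replay unit produces the correct outputs with no causal linkage:
assert [replay(s) for s in stimuli] == recorded
```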


    >> if it does not mean 'causes.'

    Again, what do you mean by "causes"? What happens between causally connected slice A (at t1) and slice B (at t2) that is essential for consciousness? What if we take a new slice at t1.5, halfway between t1 and t2? What happened between t1 and t1.5 that is essential to consciousness?

    How about between t1 and t1.25? Between t1 and t1.125? What if we get down to the smallest physically possible increment of time (Planck time?) and take two slices separated by that minimum increment of time? What happened during that minimum time increment that created consciousness? OR, if there is no minimum time increment, how about when we get down to such small time increments that there is no noticeable state change between slices (because the particles don't have time to change state)?


    I agree that if you hand me a system and tell me to explain how it works, I will begin by following the causal links from the inputs to whatever outputs it has. But me understanding the mechanical linkages of a system and it actually being conscious are two different things altogether.

    It seems to me that causality is a physical implementation detail of a particular system (the means to an end), and has nothing to do with the information represented by the system at a particular time (the end).

    What you perceive is determined by the informational content of your brain. If your brain lacks a representation of a particular piece of information, then you cannot be consciously aware of that information. Information is the key to consciousness (in my opinion). Not causality.

    To put it a slightly different, more Chalmersesque way: causality is the key to the "easy" problems of consciousness; information is the key to the "hard" problem of consciousness.

  49. Josh:

    You mention the real numbers thing again:

    >> Again, there is a mapping between my mental states and the real numbers, etc.

    So we began with the idea of a mapping between a dust cloud and a brain. And then I threw in the idea of mapping between a dust cloud and a computer simulation of a brain, and then also the mapping between the computer simulation of the brain and an actual brain.

    Dust clouds, computers, and brains are all physical systems. And all of these physical systems could be in states whereby the information they represent maps to the same abstract concept...your "mental state".

    For instance, let's say you write an Arabic numeral 2 on a piece of paper. And let's say you carve a Roman numeral II into a piece of rock. And let's say you have a byte with a binary value of 2 (00000010) stored on your hard drive. All three of these map to the same abstract concept of "2", even though they have very different physical representations.

    Obviously, a piece of paper with ink on it is not the same as a rock with carvings on it. And a dust cloud is not the same as a brain. But the paper and the rock can serve the same purpose (function)...representing the abstract value "2".
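
    Here's that point as a quick Python sketch. The little decoder is made up, of course; the point is just that three unlike physical representations land on one abstract value.

        ink_on_paper = "2"          # Arabic numeral written in ink
        carved_in_rock = "II"       # Roman numeral carved in stone
        byte_on_disk = 0b00000010   # binary value in a byte

        def as_abstract_number(rep):
            # Each representation decodes to the same abstract integer.
            roman = {"II": 2}
            if isinstance(rep, int):
                return rep
            return roman.get(rep) or int(rep)

        print({as_abstract_number(r)
               for r in (ink_on_paper, carved_in_rock, byte_on_disk)})
        # {2} -- one abstract concept, three physical representations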

    The "reals" (http://en.wikipedia.org/wiki/Real_number) are themselves abstract concepts, like your mental state. But any phsyical system that mapped to your mental state could also be described by some set of numbers, and you could thereby say that the numbers also describe your mental state.

    Given that dust clouds are physical systems, and computers are physical systems, and brains are physical systems, BUT the real numbers are NOT a physical system - I am, for now, going to say that the set of reals are not conscious instantiations of your mental state.

  50. Josh:

    Last post for tonight! ( I think )

    >> Rather, truncated time-slice machine functionalism has been assumed, with at best philosophical thought experiments for support.

    But they're really GOOD thought experiments! You saw the "Quantum Swampman" one, right?

    Also, Eric! Welcome back!

  51. One more, for Pete:

    On my previous "dance" response:

    >> The particle positions of the slice are consistent with the Funky Chicken, but the particle velocities are NOT.

    Also note that your "brain" particles are part of the slice.

    So, no matter how thin the slice, your brain particles are almost certainly in a state that is consistent with you thinking about what your next few moves should be in order to execute the Charleston.

    This "Charleston focused" brain state should be measureably different than that of a brain that was in the process of planning out a spectacular finish to the Funky Chicken.

  52. One more, for Pete:

    >> I take to be a natural (and obvious) answer. *Extrinsic* factors do the trick (environment, dance partner, etc).

    Let's say we give you verbal instructions to dance the Charleston.

    Then we put you in a room with a record player and music suitable for dancing either the Charleston OR the Funky Chicken, and then isolate the room from the rest of the universe. Being obedient, you then begin to dance the Charleston.

    Now, we divide your dance session up into chronological slices. Each slice includes the state of you and the entire room with all its contents. All of which is isolated from the rest of the universe.

    Now, no matter how thin the time slice, that slice MUST contain the information necessary to determine that you are indeed dancing the Charleston, and not the Funky Chicken. Even a slice from BEFORE you begin to dance will show that you INTEND to dance the Charleston.

    Why? Because in subsequent slices you do indeed go on to dance the Charleston. These subsequent Charleston moves must have been "caused" by earlier slices, because there is no other source of causal influence after you enter the room, since the room is isolated.

    If, for a given arbitrarily thin slice, we have access to ALL information (not just particle positions, but everything) about the state of the room and its contents (including your brain), we will be able to determine that the slice is a Charleston slice, and not a Funky Chicken slice.
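
    To sketch the shape of that claim in code (a toy, with everything invented; it shows only that in a closed deterministic system, the full state of any slice fixes everything that follows):

        def step(state):
            # Toy deterministic dynamics for the isolated room: the dancer
            # executes whatever plan is encoded in the current state.
            plan, moves = state
            return (plan, moves + [plan])

        def run_from(slice_state, n_steps):
            state = slice_state
            for _ in range(n_steps):
                state = step(state)
            return state[1]  # the moves that followed the slice

        pre_dance_slice = ("charleston", [])  # a slice from BEFORE the dancing
        print(run_from(pre_dance_slice, 3))
        # ['charleston', 'charleston', 'charleston']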

    Eric, back me up on this!

  53. Josh:

    Last post tonight, for sure.

    >> But look, do you really think that if I winked into existence for a nanosecond, then disappeared, that I would have an experience of that event?

    I do think you would have some conscious experience, though not necessarily of "winking into existence". There wouldn't be enough time to process anything about that. Whatever conscious state you appeared with, that's what you would experience.

    I like this section from Brian Greene's Fabric of the Cosmos:

    "Just as we envision all of space as really being out there, as really existing, we should also envision all of time as really being out there, as really existing too. As Einstein once said: 'For we convinced physicists, the distinction between past, present, and future is only an illusion, however persistent.' The only thing that's real is the whole of spacetime.

    In this way of thinking, events, regardless of when they happen from any particular perspective, just are. They all exist. They eternally occupy their particular point in spacetime. There is no flow. If you were having a great time at the stroke of midnight on New Year's Eve, 1999, you still are, since that is just one immutable location in spacetime.

    The flowing sensation from one moment to the next arises from our conscious recognition of change in our thoughts, feelings, and perceptions. Each moment in spacetime - each time slice - is like one of the still frames in a film. It exists whether or not some projector light illuminates it. To the you who is in any such moment, it is the now, it is the moment you experience at that moment. And it always will be. Moreover, within each individual slice, your thoughts and memories are sufficiently rich to yield a sense that time has continuously flowed to that moment. This feeling, this sensation that time is flowing, doesn't require previous moments - previous frames - to be sequentially illuminated."

  54. Does anyone know where I can find the 'One Self: The Logic of Experience', Inquiry, 33 (1990)? Google didn't help...

  55. Allen--

    >Given that dust clouds are physical systems, and computers are physical systems, and brains are physical systems, BUT the real numbers are NOT a physical system - I am, for now, going to say that the set of reals are not conscious instantiations of your mental state.

    And what's so great about being a physical system? My guess is that the idea of causation is going to matter in answering this question. As for what causation is, good question. People have invoked laws, regularities, counterfactuals, and other notions to try and explain it. Obviously, it's a matter of great controversy. But be that as it may, I think your idea of noncausal "production" is even less clear.

    >I do think you would have some conscious experience, though not necessarily of "winking into existence". There wouldn't be enough time to process anything about that. Whatever conscious state you appeared with, that's what you would experience.

    Right--I didn't mean experience of the fact that you're winking into existence. I meant any experience at all. My intuition is that a nanosecond swamp slice would not have any experience--experience is a process that takes time to unfold. Needless to say, intuition isn't much to go on here!

    As for Greene's eternal events, his view is certainly not uncontroversial in metaphysics and I would imagine even in physics. Look, the math works without indicating any temporal asymmetry. So physics (perhaps!) does not license a commonsense notion of time or change. But moving from the math to an interpretation, in everyday terms, of the math is tricky, to say the least. Greene has his philosopher's hat on here, and is down in the mud with the rest of us (though I do enjoy his books!). Further, there is the strong temptation, which I worried about above, of mistaking the map (the mathematical theory) for the territory (reality). Math can slice and dice ad infinitum. But whether nature is so jointed is an open question, especially when we start talking about wafer-thin mental slices.

  56. Allen,

    I don't see that continued focus on the Funky Chicken is advancing the discussion much, so let's try something different.

    There are many clear cases in which properties instantiated by the time slice of a system would not be instantiated by that slice's SwampSlice. The swamp slice of a twenty dollar bill isn't worth twenty dollars, the swamp slice of the 44th president of the USA isn't the 44th president of the USA, the swamp slice of the uniquely best violin player in the universe wouldn't be the uniquely best violin player in the universe... and so on.

    So it's false as a general rule that a property instantiated over some duration would be instantiated by an arbitrarily selected time slice.

    Maybe there are specific cases in which the rule is true. Relative position of proper parts is a clear case in which the rule is true. And, while I think velocity is more problematic than you seem to assume, I'm happy to grant it for purposes of discussion. The more interesting question, and one of the main questions up for grabs in this thread, is whether mental properties are specific cases in which the rule is true.

    Given the falsity of the general rule, it seems that the HyperInternalist has extra work to do in establishing their case (since they can't just derive it from the general rule). And I don't see that this extra work has yet been done. But maybe I missed it in this enormous thread.

  57. Josh:

    > And what's so great about being a physical system?

    Physical systems can represent information by way of assuming particular spatial and/or temporal patterns. But abstract systems ARE information. If you have an abstract mapping from one abstract system that isn't conscious to another abstract system that IS, I'm not sure what that even means. It's all too...abstract.

    But, on your previous point:

    > Do those mathematical structures therefore support consciousness?

    Is it really any stranger to say that abstract mathematical structures support consciousness than it is to say that structures of quarks and electrons support consciousness?

    I would say that consciousness isn't in the atoms of the physical system (brain or dust cloud or computer). It's in the information represented by the relative states of large numbers of atoms. Which is to say: the human brain isn't conscious...it's just a bunch of atoms. Rather, the information represented by the human brain is conscious.

    As I've alluded to a few times above, I think first person experience is a fundamental aspect of information. As Chalmers says: "Experience arises by virtue of its status as one aspect of information, when the other aspect is found embodied in physical processing."

    The same information can be physically "embodied" (i.e., represented) in many different ways, and the same information can be processed in many different ways. As an example, I refer you to the idea of domino boolean logic gates: http://www.youtube.com/watch?v=SudixyugiX4.
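
    As a quick code sketch of the same point, here is one boolean function realized in two physically unlike ways (the dominoes in the video would be a third):

        def and_via_arithmetic(a, b):
            # One realization: multiply the bits.
            return a * b

        AND_TABLE = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

        def and_via_lookup(a, b):
            # Another realization: no arithmetic at all, just table lookup.
            return AND_TABLE[(a, b)]

        for a in (0, 1):
            for b in (0, 1):
                assert and_via_arithmetic(a, b) == and_via_lookup(a, b)
        # Same function, same information processed, nothing else in common.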

    > mistaking the map (the mathematical theory) for the territory (reality)

    Perhaps instead the physical world that we perceive is the map, and the realm of information is the territory (and reality)?

    It seems to me that we have no direct access to the physical world. Information about the physical world is conveyed to us via our senses. BUT, we don't even have direct conscious access to our sensory data. All of that sensory data is instead apparently heavily processed by various neural subsystems and "feature detectors", the outputs of which are then reintegrated into a simplified mental model of reality, and THAT is what we are actually aware of. That mental model is what we think of as "the real world". So it seems to me that we can already think of ourselves as living in a world of abstract information.

    And that's what makes the Dust Hypothesis plausible to me. If the abstract information that represents my thoughts and perceptions is present (in some sense) in the dust cloud, then my consciousness should also be present.

    It seems to me that any theory of reality has to have something fundamental at the foundation that is taken as a given.

    With materialism, the foundation is energy, or maybe spacetime, or quantum fields, or some combination of all three. But unless you just accept the existence of these things as fundamental brute facts, the next question is obviously "What is energy?", or "Where did spacetime come from?", or "Why does it work that way?". Even if you introduce a more basic concept (e.g., strings, or spin networks, or whatever), then you can ask the exact same questions about that new fundamental concept.

    With a religious view, you say that some supreme being or supernatural force is at the foundation of reality. But this introduces the question of "What is God?" or "Where did God come from?" or "What is God's motivation?"

    In my view, the best candidate for the fundamental core of reality is: information. With the extra assumption (which is well grounded I think) that certain types of information have conscious first person subjective experience.

  58. Allen--

    >As I've alluded to a few times above, I think first person experience is a fundamental aspect of information. As Chalmers says: "Experience arises by virtue of its status as one aspect of information, when the other aspect is found embodied in physical processing."

    I don't see any good reason to buy this, unless one is already in the grip of Chalmersian intuitions about consciousness. Accepting some examples of multiple realizability does not entail this view.

    >And that's what makes the Dust Hypothesis plausible to me. If the abstract information that represents my thoughts and perceptions is present (in some sense) in the dust cloud, then my consciousness should also be present.

    That's fine. What's being challenged is the idea that those representations are present.

    >With materialism, the foundation is energy, or maybe spacetime, or quantum fields, or some combination of all three. But unless you just accept the existence of these things as fundamental brute facts, the next question is obviously "What is energy?", or "Where did spacetime come from?", or "Why does it work that way?".

    These concepts are justified by their role in successful scientific theory. They help predict and explain a vast range of phenomena. Why think that we need something new to explain first-person experience? That's the Chalmersian intuition again, and I don't share it.

    I'm not sure the first-person stuff is needed for Eric's original argument, though he needs more to prop up the liberal temporal/spatial realization claim, or so I've tried to argue. I'm not sure how much more there is to say, though this has been fun!

  59. The problem of information has come up on another forum. I found it useful to think about information this way:

    Def. Information is any property of any object, event, or situation that can be detected, classified, measured, or described in any way.

    1. The existence of information implies the existence of a complex physical system consisting of (a) a source with some kind of structured content (S), (b) a mechanism that systematically encodes the structure of S, (c) a channel that selectively directs the encoding of S, (d) a mechanism that selectively receives and decodes the encoding of S.

    2. A distinction should be drawn between *latent* information and what might be called *kinetic* information. All structured physical objects contain latent information. This is as true for undetected distant galaxies as it is for the magnetic pattern on a hard disc or the ink marks on the page of a book. Without an effective encoder, channel, and decoder, latent information never becomes kinetic information. Kinetic information is important because it enables systematic responses with respect to the source (S) or to what S signifies. None of this implies consciousness. (A code sketch of this distinction follows point 3 below.)

    3. A distinction should be drawn between kinetic information and *manifest* information. Manifest information is what is contained in our phenomenal experience. It is conceivable that some state-of-the-art photo-->digital translation system could output equivalent kinetic information on reading English and Russian versions of *War and Peace*, but a Russian printing of the book provides *me* no manifest information about the story, while an English version of the book allows me to experience the story. The "explanatory gap" is in the causal connection between kinetic information and manifest information.
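
    A minimal code sketch of points 1 and 2 (all names here are invented): latent structure becomes kinetic information only when the encoder-channel-decoder chain actually runs.

        source = "magnetic pattern spelling HELLO"  # structured content (S)

        def encode(s):
            return s.encode("utf-8")     # (b) systematically encodes S

        def channel(bits):
            return bits                  # (c) selectively directs the encoding

        def decode(bits):
            return bits.decode("utf-8")  # (d) receives and decodes

        latent = source  # latent: structure that nothing is reading

        kinetic = decode(channel(encode(source)))  # the full chain has run
        print(kinetic == source)  # True: systematic response to S is now possible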

    I think the brain model described here constrains what is necessary for manifest (phenomenal) information:

    http://eprints.assc.caltech.edu/355/

    Is Swampman up to it?

  60. Teed,

    Could you provide a link to a discussion of the Hypothesis of Extended Cognition? From this context, it sounds to me highly plausible bordering on obvious - so I must not be getting it.

    As to where to put on the brakes, I brake for premise 1. But I urge functionalists not to follow Allen. Hold on to causality for dear life.

  61. Hi all! I like the Funky Chicken example, and I agree with Pete that there are properties that temporally-spread out beings have at T that their temporal slices at T don't have. But insisting on temporal integrity and continuity with respect to the implementation of a mind seems to me to be in tension with the idea that minds could be implemented by computer programs. As Allen has pointed out, computers that run programs through a single processing bottleneck will put one program on hold temporarily while they prioritize another. If a supercomputer is running your mind, couldn't your mind be put on hold in that way? And if it could, then couldn't it be put on hold for 999 milliseconds out of every 1000 (or whatever ratio and gap you like), giving your mind radical temporal discontinuity?
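
    Here's a toy rendering of that schedule in Python (the numbers are invented): the mind's own state carries no record of the gaps.

        mind = {"subjective_ticks": 0}

        def advance(m):
            # One tick of "mental" processing.
            m["subjective_ticks"] += 1

        objective_time = 0
        while mind["subjective_ticks"] < 5:
            if objective_time % 1000 == 0:  # scheduled 1 tick in every 1000
                advance(mind)
            objective_time += 1

        print(mind["subjective_ticks"], objective_time)
        # 5 subjective ticks spread across 4001 objective ticks; nothing
        # inside the mind's state records the idle stretches.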

    Now of course while that's happening, there must be some way in which the information about you is stored over the gap, and maybe *that* continuity is important? But would it really matter (and could you tell a difference) if the computer holding that information were destroyed and replaced by chance by SwampComputer? And then we're back on track toward the Dust Hypothesis....

  62. Dimasok: I don't know that particular article. Analysis doesn't seem to have online archives going back that far. But any good university library should have printed back volumes of that journal.

  63. Pete:

    >> The swamp slice of a twenty dollar bill isn't worth twenty dollars

    I disagree...a SwampTwenty is worth $20 if you can spend it before it disappears. The only reason it wouldn't be worth $20 to you is if you knew the slice would end before you could get to the store to spend it.

    SO, any limitations on your example of SwampSlices come from the "Slice" part, not the "Swamp" part.


    >> while I think velocity is more problematic

    It's not just velocity, it's the overall state. Really I'm just talking about causality here. If you have slice A and slice C of a system, and between them slice B...the states of the system in slice B must be such that they will cause slice C. Therefore whatever properties C has are implicit in B, barring outside interference. Also, there's an information vs. physical substrate difference when talking about A, B, and C.


    OR, maybe we're talking about completely different things? Maybe instead you are asserting that at no time t1, t2, OR t3 am I conscious, but only when taken all together could they actually be called conscious.

    In which case, a better analogy might have been to say that a single pixel of an image isn't a picture, but when a lot of pixels are taken together they do form a picture. But whereas the pixels combine spatially to form the image, "slices" combine temporally to form a conscious experience.

    Which I think is true with respect to a certain definition of "conscious experience". In that I go from (A) "not conscious of seeing red", to (B) "conscious of seeing red", to (C) "consciously thinking: boy, that's red".

    So, transitioning through all those states happens over time (whatever time is), sure. And what does one nanosecond-long slice of brain processing contribute to this? I do say that you should theoretically be able to take any slice of any duration and determine whether it's from the (A), (B), or (C) portion of the experience.

    But I still think that each "slice" stands alone as a discrete instance of consciousness, in a way that a pixel from an image does not. Mainly because if you take my whole life as a single "Allen Slice" and then divide it into hour-long slices, I feel confident that each hour slice stands alone as an example of consciousness. Same for minute slices. And also for second-long slices. And I'm willing to go ahead and extrapolate the same principle down through nanoseconds to single instants (Planck-length slices?).

    You could counter with the example of a pound of gold. An ounce of gold is still gold. An atom of gold is still gold. BUT, if you split the atom of gold...it's not gold anymore, it's whatever new atoms the protons and neutrons recombine into. You could say that there was no "Gold" atom really to begin with...it was always just a bunch of protons and neutrons and electrons, in a particular structure that for convenience you label as "Gold".

    But if there is a smallest possible "Allen Slice" and you split it again, what is the nature of the new, smaller sub-slice? And so, on the same analogy, is there no "Allen", with "Allen" just a label we put on a bunch of slices for convenience? Mereological nihilism?

    >> But maybe I missed it in this enormous thread.

    Did you see my post on Tue Jan 27, 09:58:00 PM PST?

  64. Arnold:

    > http://eprints.assc.caltech.edu/355/

    I will give it a look!

    A better example than a picture or a dance might be a song, which, being sound, has no complex physical embodiment. The full song is recognizable, and a 5-second clip is recognizable. But is a 1-nanosecond clip recognizable? If there is only one instrument playing, then any clip shorter than a few notes will be unrecognizable, since many songs can share the same 2, 3, or even 4 note sequence. But if you had a 100-piece orchestra playing, then a very, very small slice probably would uniquely identify it as being from a specific song. The more instruments and the more complex the song, the more identifiability for a thinner time slice.
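
    A toy version of the identification idea (songs and notes made up): each additional instrument heard at an instant narrows the set of songs consistent with that instant.

        songs = {
            "song_a": {"violin": "C", "cello": "G", "flute": "E"},
            "song_b": {"violin": "C", "cello": "A", "flute": "E"},
            "song_c": {"violin": "C", "cello": "G", "flute": "F"},
        }

        def candidates(slice_notes):
            # Songs whose instant is consistent with the observed notes.
            return [name for name, notes in songs.items()
                    if all(notes.get(inst) == n for inst, n in slice_notes.items())]

        print(candidates({"violin": "C"}))                              # three candidates
        print(candidates({"violin": "C", "cello": "G"}))                # two candidates
        print(candidates({"violin": "C", "cello": "G", "flute": "E"}))  # unique match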

    What does this mean for consciousness? No idea!

  66. Josh:

    >> Accepting some examples of multiple realizability does not entail this view

    If the same consciousness is found realized in two different systems with two different physical makeups and two different internal causal structures...then what would you say the common factor shared between them is?

    >> What's being challenged is the idea that those representations are present.

    I refer back to my earlier post: if a computer simulation is a valid representation, then so is a dust cloud. A computer simulation representation is easier to interpret because we configure its causal structure so that we can get clear output about the information states the computer traverses.

    A dust cloud representation would be harder to interpret because you can't rely on your understanding of the causal structure to tell you where to look ahead of time for "output states". Instead you have to find them after the fact, by sort of knowing what you're looking for. But if the computer simulation has a first person point of view independent of third person interpretation, then so should the dust cloud.

    >> That's the Chalmersian intuition again, and I don't share it.

    Fair enough.

    >> I'm not sure how much more there is to say, though this has been fun!

    I've enjoyed it as well! I appreciate you taking the time to debate a bit with me on this. Thanks!

    Beautiful! Dust minds absolutely do exist. To imagine they do not is to imagine that reality is limited.

    A mind that wants to limit itself believes in a limited reality. Occam's Razor would say right off the bat that reality has no limits. Who or what would set those limits?

    I've enjoyed all the creativity and intelligence on this post. Thank you!

  68. Nice post. Keep exploring!

    As for infinity, take a line of any length on a piece of paper. Now name 2 points on this line. Mathematically, there will always be another point to find between any two points (1/5 of the way along, 1/1000000 of the way, etc.), infinitely. It's the in-between factor. Infinity is always one more than now.
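
    In symbols, for any two distinct points a < b on the line:

        a < \frac{a + b}{2} < b

    so between any two points there is always a third, and the halving never terminates.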

    http://heterodoxies.wordpress.com/

  69. Bloody hell. I just thought of the dust theory myself and then found this page using Google.

    If any of this is true then this must say something significant about our reality, or the reality that is simulating us.

    Imagine consecutive brain states t1, t2, t3, t4 that produce a conscious thought. It seems like there should be no reason why these can't be distributed in time and/or space. This is bizarre. Somehow, these temporally disconnected brain states can be connected together to create consciousness. Perhaps the time order doesn't even matter and they could go backwards and forwards in time? If this is true then consciousness must surely be everywhere and constant in the universe. Bizarre!

    I believe in all of (1-8), so I believe dust-minds are possible, but the probability of this happening is so EXTREMELY small that it almost never happens, compared to regular brain-based conscious beings. It's just like the fact that all the air molecules in your room could gather in one corner and you would suffocate, but the chance of this is so small that no one worries about it. For us it's as if it never happened - even though in an infinite universe it should happen infinitely many times.

    Or you could imagine a version of the Earth where every time someone says "I summon you, Thor almighty!" a lightning bolt strikes nearby. If the universe is infinite, there are infinitely many such Earths. But of course that's not a reason to believe in Thor.

    It's a similar kind of argument to the one people make about the Many-Worlds Interpretation of quantum mechanics. A good example is here: https://youtu.be/kTXTPe3wahc?t=1015 (16:55-17:25)

  71. To clarify my last comment:
    I agree that dust-minds are absurd. But their absurdity may come from the fact that they are absurdly improbable, not impossible.
