Friday, May 16, 2014

Group Organisms and the Fermi Paradox

I've been thinking recently about group organisms and group minds. And I've been thinking, too, about the Fermi Paradox -- about why we haven't yet discovered alien civilizations, given the vast number of star systems that could presumably host them. Here's a thought on how these two ideas might meet.

Species that contain relatively few member organisms, in a small habitat, are much more vulnerable to extinction than are species that contain many member organisms distributed widely. A single shock can easily wipe them out. So my thought is this: If technological civilizations tend to merge into a single planetwide superorganism, then they become essentially species constituted by a single organism in one small habitat (small relative to the size of the organism) -- and thus highly vulnerable to extinction.

This is, of course, a version of the self-destruction solution to Fermi's paradox: Technological civilizations might frequently arise in the galaxy, but they always destroy themselves quickly, so none happen to be detectable right now. Self-destruction answers to Fermi's paradox tend to focus on the likelihood of an immensely destructive war (e.g., nuclear or biological), environmental catastrophe, or the accidental release of destructive technology (e.g., nanobots). My hypothesis is compatible with all of those, but it's also, I think, a bit different: A single superorganism might die simply of disease (e.g., a self-replicating flaw) or malnutrition (e.g., a risky bet about next year's harvest) or suicide.

For this "solution" -- or really, at best I think, partial solution -- to work, at least three things would have to be true:

(1.) Technological civilizations would have to (almost) inevitably merge into a single superorganism. I think this is at least somewhat plausible. As technological capacities develop, societies grow more intricately dependent on the functioning of all their parts. Few Californians could make it, now, as subsistence farmers. Our lives are entirely dependent upon a well-functioning system of mass agriculture and food delivery. Maybe this doesn't make California, or the United States, or the world as a whole, a full-on superorganism yet (though the case could be made). But if an organism is a tightly integrated system each of whose parts (a.) contributes in a structured way to the well-being of the system as a whole and (b.) cannot effectively survive or reproduce outside the organismic context, then it's easy to see how increasing technology might lead a civilization ever more in that direction -- as the individual parts (individual human beings or their alien equivalents) gain efficiency through increasing specialization and increased reliance upon the specializations of others. Also, if we imagine competition among nation-level societies, the most-integrated, most-organismic societies might tend to outcompete the others and take over the planet.

(2.) The collapse of the superorganism would have to result in the near-permanent collapse of technological capacity. The individual human beings or aliens would have to go entirely extinct, or at least be so technologically reduced that the overwhelming majority of the planet's history is technologically primitive. One way this might go -- though not the only way -- is for something like a Maynard Smith & Szathmary major transition to occur. Just as individual cells invested their reproductive success into a germline when they merged into multicellular organisms (so that the only way for a human liver cell to continue into the next generation is for it to participate in the reproductive success of the human being as a whole), so also human reproduction might become germline-dependent at the superorganism level. Maybe our descendants will be generated from government-controlled genetic templates rather than in what we now think of as the normal way. If these descendants are individually sterile, either because that's more efficient (and thus either consciously chosen by the society or evolutionarily selected for) or because the powers-that-be want to keep tight control on reproduction, then there will be only a limited number of germlines, and the superorganism will be more susceptible to shocks to the germline.

(3.) The habitat would have to be small relative to the superorganism, with the result that there were only one or a few superorganisms. For example, the superorganism and the habitat might both be planet sized. Or there might be a few nation-sized superorganisms on one planet or across several planets -- but not millions of them distributed across multiple star systems. In other words, space colonization would have to be relatively slow compared to the life expectancy of the merged superorganisms. Again, this seems at least somewhat plausible.

To repeat: I don't think this could serve as a full solution to the Fermi paradox. If high-tech civilizations evolve easily and abundantly and visibly, we probably shouldn't expect all of them to collapse swiftly for these reasons. But perhaps it can combine with some other approaches, toward a multi-pronged solution.

It's also something to worry about, in its own right, if you're concerned about existential risks to humanity.


35 comments:

  1. I don't see the Fermi paradox as much of a paradox. Interstellar distances are very large, so the assumptions behind the paradox may well be unrealistic.

  2. Neil: The paradox can be conceived not only in terms of visitation but also in terms of the visibility of organized patterns in the electromagnetic spectrum, in which case the distances are fairly small. Also, even for visitation, although the distances might take hundreds or thousands or hundreds of thousands of years to travel, that's relatively quick on a cosmological scale -- and even if generational starships seem a bit too resource-intensive, unpopulated probes could presumably be fairly inexpensive. At least, that's some of the thinking behind the paradox.

  3. Isaac Asimov might differ. He has stories on this topic. His science fiction is equivalent to a thought experiment.

  4. It's not clear to me that there's any substance in calling something a "superorganism" or not. A given collection of stuff can often be viewed as one organism or as many depending on contextual needs. For example, we can describe planet Earth as a superorganism, with some overall homeostasis, or we can pull the lens in and treat individual humans or dandelions as organisms.

    So when you suggest that intelligent life tends towards a superorganism, that needs to be cashed out in terms like: intelligent life tends to organize itself with certain kinds of internal homogeneities and risk tolerances, such that its continued existence is especially fragile.

    It's not clear why that should be the case. As you have argued, there seem to be some ways earth intelligence is now more subject to extinction risk than it was in earlier epochs. But the relationship is complicated and it seems like the curve could change direction.

    For most of the Homo sapiens era, total human populations were in the range of five or six figures -- this suggests that comparatively small shocks could have wiped them out. The recent population explosion of Homo would seem to damp out that kind of extinction risk. (At least for a while -- perhaps the risk declined for a long time and then began increasing.)

    Similarly, if we are imagining that the "superorganism" is something with a high degree of centralized coordination, it seems like it could mitigate monoculture risks through planning. Coordination is fairly expensive for current humanity -- so we can't agree on things like cutting industrial output to avoid global warming, or nuclear arms reduction.

    But if a superorganism has cheap coordination, it could (presumably) plan for or avoid many existential risks. It could ensure genetic diversity, make sure resources are banked in case of catastrophic shock, or include safeguards against suicide. If we are assuming some kind of superintelligence, intelligent and coordinated risk management seems even more likely.

    Maybe I am getting distracted by the "superorganism" language and should really be thinking of this in terms of increasing optimization. To the extent that all the members of the civilization depend on coordination with everyone else, to finer and finer tolerances, the danger of a catastrophic shock seems to increase.

    Vernor Vinge made this argument in the interstellar-civilization context in Deepness in the Sky, which is a true SF classic. The broader idea can be found (sort of) in the thought of Mancur Olson.

  5. Howie: Absolutely, a lot of the best stuff on the Fermi paradox has been in science fiction.

  6. Grobstein: Thanks for that thoughtful and detailed comment! I agree that the main issue is the extent to which "intelligent life tends to organize itself with certain kinds of internal homogeneities and risk tolerances, such that its continued existence is especially fragile" -- that's what's under the "superorganism" talk, and if you don't like that word, it's not essential to the argument.

    I agree that a centralized society, specialized and cross-dependent, will put a high premium on mitigating risks. But if it becomes very interdependent, then it goes extinct if it badly misjudges even once. And it's not clear what systemic pressures, other than the dubious long-term wisdom of the parts, are going to make it an excellent judge of long-term risk.

    I agree that Vinge has done some interesting thinking about both systemic risks to civilizations and about group minds in Deepness in the Sky (and Fire Upon the Deep).

  7. Thanks for the response, Eric.

    Cf. Nick Bostrom on the "singleton."

  8. Thanks for that link! I'm not at all surprised to see Bostrom working through some of these issues. I hadn't seen that one in particular.

  9. The solution to the Fermi Paradox I like most is the one by psychologist Geoffrey Miller:

    "I suggest a different, even darker solution to Fermi's Paradox. Basically, I think the aliens don't blow themselves up; they just get addicted to computer games."

    Here is why:

    http://edge.org/q2006/q06_9.html

  10. Rolf: I think that approach works best when combined with the group mind or superorganism approach I suggest. If there are millions of minds, it is less plausible that they will *all* become addicted to video games (or similar useless self-stimulation) than if they first merge into one group mind or superorganism (especially if selective pressures favor the environmentally-attuned minority).

  11. Actually, I believe the solution to the Fermi Paradox is much simpler: intelligent life is much rarer than we like to think. I've been making a movie on that subject (and other related ones), and the section on Fermi's Paradox is viewable at the website: www.crossofthemoment.com. It comprises interviews with Robin Hanson on the Great Filter and with Don Brownlee and Peter Ward on Rare Earth theory. Check it out, I think you'll dig it!

  12. Rolf Degen: I'm quite fond of that explanation although I disagree vehemently with his moral interpretation of it. Favoring the "real" world over virtual reality seems like pure chauvinism. And the more specific idea that organisms have an interest in being evolutionarily fit seems to get evolution wrong. An individual organism doesn't gain much from having its genes live on for millions of years after it's dead. It might like the idea, but it might like other things too. Organisms have no duty to go along with evolution's "plans." Nor do superorganisms, for that matter, following Eric's post.

    And if I'm right that plugging into virtual reality is inherently more reasonable, that makes the explanation even more plausible. For then it would not be enough for people to be "practical-minded breeders"; they'd have to be actively irrational. Even after a population is full of people with very sober, practical-minded goals, why not plug into a sober, practical-minded virtual reality? And a population incentivized to be unreasonable might find its civilization collapsing for other reasons.

  13. I'm also one of those folks who's not impressed by the Fermi Paradox. My favorite explanation for the absence of signals is the one we've already demonstrated here: any advanced civilization quickly learns to compress or encrypt its signals to the point where they become almost indistinguishable from noise. If you were looking at Earth's electromagnetic radiation today, it would mostly be noise, due to compression. A similar solution points out that we've replaced most radiant signalling with cables anyway.
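
    A quick way to see the point, in a rough Python sketch (my own illustration, not anything from the post): well-compressed data has a nearly uniform byte distribution, so by a simple statistical measure it looks like noise.

        # Sketch: compare the byte entropy of a compressible "signal" with its
        # zlib-compressed form. 8.0 bits/byte is the maximum, i.e., a uniform
        # byte distribution indistinguishable from random noise by this measure.
        import math, random, zlib
        from collections import Counter

        def byte_entropy(data: bytes) -> float:
            """Shannon entropy of the byte distribution, in bits per byte."""
            counts = Counter(data)
            n = len(data)
            return -sum((c / n) * math.log2(c / n) for c in counts.values())

        # A compressible "signal": pseudo-text built from a tiny vocabulary.
        random.seed(0)
        words = ["signal", "noise", "alien", "radio", "earth", "the", "and", "of"]
        message = " ".join(random.choice(words) for _ in range(20000)).encode()

        print(f"raw message:     {byte_entropy(message):.2f} bits/byte")
        print(f"zlib-compressed: {byte_entropy(zlib.compress(message, 9)):.2f} bits/byte")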

  14. Thanks for the continuing comments, folks!

    Jacob: Cool, I'll check it out. I don't see why "rare earth" couldn't be an important part of the story -- though it seems a little strained, to me, to make it the *whole* story.

    UserGoogol: I'm inclined to think that to whatever extent there are trans-species universal norms governing intelligent beings, there's a norm against self-genocide (unless perhaps in the service of replacement by a better species) -- because intelligent life is an inherently valuable thing. I don't see anything *inherently* wrong with video games, but to the extent they lead you to privilege virtual reward over what the species needs for survival, that runs up against the norm against self-genocide. (This is central to my story in draft, "Last Janitor of the Divine".)

    Eric: Right, I think that's an appealing piece of the puzzle. But I think it's still a little odd not to expect *any* detectable organized leakage from *anywhere*, unless some other Fermi response is also part of the truth (such as rare earth or self-destruction). I'd have thought you'd be attracted to the zoo response, too.

  15. This comment has been removed by the author.

  16. My inclination is to think either that we are, unfortunately, among the first intelligent life in the universe or that we are among the only intelligent life in the universe. (Reading Power, Sex, Suicide, about how multicellular organisms require mitochondria and how incredibly lucky we are to have them, makes me think that those who don't appreciate this fact expect intelligent life to be much more common than it is.)

    Another thought I've had is that maybe the reason we haven't had any visitations is that there is some very easy way to communicate or travel across vast distances, and that it is an easy technology for sufficiently intelligent civilizations to discover, much easier than conventional means of transporting oneself from point A to point B. The aliens are just waiting for us to stumble upon this super transportation/communication technology so that we can join them at that level. There is no point in visiting us, because we will probably stumble upon such technology much more quickly and join the advanced civilizations. (However, this still leaves the question of why we haven't detected any organized radiation given off by intelligent civilizations with wireless broadcast communications.)

    Your thought about superorganism death reminds me of recently hearing that a fungus is currently killing the banana we are all used to eating and is likely to wipe it out, because we have homogenized our banana crops too much. That banana may go extinct, but there are other types of bananas we can turn to (this has apparently happened before, with what was previously the main kind of banana everyone ate). Maybe there is something about parasites such that they will eventually outrun every biological organism, beat it at evolution, and destroy it. (However, I'd imagine that intelligent civilizations would have observed the problem of a homogenized gene pool and taken measures to prevent the risks associated with it.)

    But then what about silicon intelligence? If we get wiped out because of our biological vulnerabilities, I'd expect there to be at least some extinct alien species whose civilizations survive through the intelligent machines they gave rise to.

  17. Thanks for the thoughtful comment, Carlos! It's hard to assess how lucky we have been, with an N of 1. The Fermi Paradox suggests maybe very lucky (and thus the rare earth hypothesis!) -- and maybe there's some evidence on earth for such luck, e.g., mitochondria. I do find it a bit hard to believe, without firm evidence, that we've been so lucky as to be the first technological civilization in the galaxy; but of course that's just a semi-informed hunch.

    I'm not sure about your point about visitation, though. Why wouldn't the aliens come to us, if it's so easy? There would have to be some sort of ban on contact, or something -- like Star Trek's prime directive, maybe? Is that what you're thinking, or something else?

    I agree that a sufficiently risk-averse technological civilization would avoid the risks of homogenization, but as in the banana case you mention, it's evident that sometimes short-term pressures favor homogenization. So there would have to be some *durable guarantee* that a merged superorganism / highly interdependent technological world would always keep the long view and avoid the short-term benefit. But without selective pressures favoring the long view, it's hard to see why that would be so. The first selective pressure would, as it were, also be the last.

  18. Sorry, I should've specified more carefully what I meant. I meant that there might be some very easy way to travel vast distances, but that it is a manner of travel that only works to destinations where it has already been discovered. Conventional travel, on the other hand, is very difficult, risky, and expensive.

    Therefore, aliens haven't visited us because visiting us is too costly, because there is a very easy alternative manner of transportation, and because it is only a matter of time before a technological civilization discovers that manner of cosmic transportation. Again, that manner of transportation only works to destinations where it has already been discovered (because, say, it requires some sort of terminal to be built).

  19. Unrelated: you got a shout-out for your "IIT implies Conscious US" piece on Scott Aaronson's blog.

    http://www.scottaaronson.com/blog/?p=1799

    (It's an interesting argument, but the mathematics is entirely beyond my humble level to comment on)

  20. Carlos: Ah, I see now! An interesting possibility.

    Jorge: Cool. Thanks for the heads-up!

  21. Hi, Eric,

    Regarding detecting “leakage”: our present-day capabilities to detect signals (and potential “leaks”) seem to me very limited, and in my view they're often overestimated in discussions about the Drake Equation.

    The SETI program was designed to look for deliberate attempts at communication, and it seems that at the present moment, apart from such attempts, it would be very difficult to detect anything at all.

    For example:

    http://www.seti.org/faq#obs12

    According to that (at least as far as SETI's capabilities go, though as far as I know no one else has anything much better, or better at all), barring intentional transmissions, among the kinds of transmission we use here on Earth only something like high-powered radar might be detected, and only if used within tens or at best hundreds of light years and only if SETI happens to be looking in that particular direction, which might not be the case.

    However, if no civilization within that range has anything like those radars (because they're more advanced, or not advanced enough) or any other kind of powerful transmission (maybe one we don't use on Earth), or if there are some such civilizations but SETI hasn't looked in their direction yet, we wouldn't have detected anything. That's compatible with there being thousands of civilizations in the galaxy, or even many more.

    A question is of course whether civilizations that make it to another star would be likely to keep colonizing the galaxy, which is difficult to assess given that we don't even know their psychological makeup, in addition to some other factors. But when it comes to detecting signals, we're just getting started, and as long as there is no galactic civilization, I wouldn't expect us to detect alien civilizations even if plenty of them are out there, broadcasting.

    So, it seems to me that maybe we've not detected them yet because we've not been trying long enough or with sufficiently powerful means, and a few decades from now we or our successors will have detected them.



  22. With regard to self-genocide and trans-species universal norms: it seems plausible to me that evolution would tend to favor norms against self-genocide, but on the other hand I wouldn't agree that there are trans-species universal norms, even if there were common points, and perhaps a rule against self-genocide would be one of them.
    However, going into virtual reality instead of trying to spread one's genes is not genocide, in my view. As I see it, if every human on Earth decided not to have children, we wouldn't be guilty of genocide even though our civilization, and our species, would end.

    In any case, as long as a species has colonized one or perhaps a few planetary systems orbiting red dwarfs with trillions of years left to live, going into virtual reality for trillions of years wouldn't seem to be a threat to their existence any time soon (and eventually, all civilizations will end).
    No civilization in our galaxy has existed for trillions of years, so they might be at the beginning of a very long virtual-reality life, before (perhaps) moving to another planetary system around a star with some remaining life in it, until eventually they run out of stars.
    By the way, our planetary system may not be a particularly interesting target for colonization for a civilization bent on long-term survival. Our star will not provide energy for more than a few billion years.

    That aside (and related to the issue of norms too), there may be a reason for aliens to hide: namely, not giving away their location to potentially hostile aliens.

    For example, if by “intrinsically valuable” you mean that an intelligent species is a good thing regardless of any relation to other things (if not, please clarify), I'm not sure about that (it would seem to depend on the species, in my view), but even if it's true, I wouldn't count on other intelligent species caring about good or bad, intrinsic or not.
    Maybe members of species#183476 (to give it a number/name) care about species#183476-good and species#183476-bad, and a scenario in which some individuals of species#183476 hunt humans for sport is a bad situation, but maybe a species#183476-good thing, and so if we send a signal and the wrong (but maybe not species#183476-wrong!) aliens detect it, we're in trouble. Other civilizations may prefer to hide for that reason (i. e., they don't know what kind of entity might detect their signals).

  23. Thanks for the thoughtful comments, Angra! I agree that it's quite possible that we simply haven't detected leakage because our means are too weak. That could definitely be a prong of a multi-pronged answer to Fermi, though it doesn't account for probes or the lack of highly visible intentional or unintentional events that one might think would sometimes occur if there were lots of technological civilizations.

    I agree that self-genocide would be selected out in group-level competitions among species; and I agree that ducking into videogames for a trillion years is not self-genocide. But I don't think that works as an answer to this post. If there were *lots* of separately viable technological group-organisms competing (as in my Ringworld example), then the selectionist answer works, but my thought in this post is that it *won't* work when you have a single organism. And my point about videogames in response to the earlier post was not that it *constituted* self-genocide but rather that tuning out of base-level reality could result in the death of the organism, either through suicide or through accident (as in my "Last Janitor" story).

    I also maybe wasn't clear enough that my claim about a universal norm against self-genocide was meant to capture a normative fact I accept, not a descriptive fact about actual alien psychology.

  24. This comment has been removed by the author.

  25. Eric, thanks for the reply, and sorry I got the numbers wrong in the previous post (that's why I deleted it). I hope they're correct now.

    Regarding “highly visible” events, what do you have in mind?
    Given the info about SETI, I can easily imagine that there are, say, 10000 advanced alien civilizations, none of them within (say) 1000 light years, and all of them invisible to presently used means.

    For example, the volume of the Milky Way is (if this is roughly correct: http://spacemath.gsfc.nasa.gov/universe/5Page65.pdf ) roughly 7.9 * 10^12 cubic light years. Let's say that at least roughly 10% of the volume contains inhabitable systems, say 8 * 10^11 cubic light years to simplify.

    That's still roughly 200 times (somewhat less, actually) the volume of a ball with a 1000-light-year radius.
    If there are, say, 100 or 200 civilizations, it wouldn't be surprising if we were the only one in such a ball. Also, 8 * 10^11 is over 1500 times the volume of a ball with a 500-light-year radius, so even if there were over 1000 civilizations, it would be unsurprising if we were the only one within that radius. And it may be that more than 10% of the volume contains inhabitable systems.

    Of course, this is a very rough approximation; we would also need to consider other factors to figure out how they would more likely be distributed, but the point is I wouldn't expect that we would have detected them by now, even if they number in the hundreds, a thousand, or maybe more.
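
    To make that arithmetic explicit, here's a rough back-of-envelope sketch in Python (my own illustration, assuming, as above, a uniform distribution of civilizations through a habitable volume of about 10% of the galaxy):

        # Back-of-envelope check of the volume comparison above.
        from math import pi

        galaxy_volume = 7.9e12                    # cubic light years (NASA worksheet figure)
        habitable_volume = 0.1 * galaxy_volume    # assume ~10% contains inhabitable systems

        def ball_volume(radius_ly):
            """Volume of a sphere of the given radius, in cubic light years."""
            return (4.0 / 3.0) * pi * radius_ly ** 3

        for radius in (1000, 500):
            ratio = habitable_volume / ball_volume(radius)
            print(f"habitable volume / {radius}-light-year-radius ball: ~{ratio:.0f}x")

        # With N civilizations scattered uniformly, the expected number within
        # 1000 light years of us is roughly N / 190, well below 1 for N in the
        # low hundreds -- so finding none that close would not be surprising.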

    Granted, some civilizations might make detectable structures (like Dyson spheres). However, maybe none would make them. And even if some did make them, maybe they did it around long-lived stars, instead of solar analogues, and SETI seems to be looking for them only around solar analogues for now. And even if some civilizations made Dyson spheres around solar analogues, we've only begun searching for them, so it would be unsurprising if we hadn't found any yet.

    Fermilab is looking in a different way, and actually there are some “ambiguous” cases that can't be confirmed one way or another with present-day tech, but that's as far as present-day tech goes – and in any case, Fermilab is just getting started as well.


  26. Regarding evolution and self-genocide, I wasn't thinking about group-level competition necessarily, but just how social beings usually develop – they tend not to wipe out their group. But we don't seem to disagree on the result, however they got there.

    Regarding the norm about self-genocide and normative vs. descriptive facts: sorry if I misread, though I think there is a very different view about norms lurking here, which may have contributed to the misunderstanding. Roughly, I don't see norms as non-descriptive; we can describe the norms of pecking order in vultures, or species-specific norms in wolves, bats, different primates, etc., and the same might be expected for aliens. I got the impression that you thought that aliens would likely conform to that norm.

    That aside, yes, I agree of course that being disconnected from base reality would be dangerous, no doubt. But I don't see why any individual would want that. If the superorganism is actually like present-day countries, you would still get individuals trying to survive, and it's hard to see why those individuals would disappear, except perhaps if some strong AI takes over, in which case who knows what it would do.

    I agree that the parts may well not be wise when it comes to long-term survival. However, in some cases they may very well be – i. e., maybe in some civilizations the parts are good at assessing and acting upon long-term survival considerations, and in others, they're not. I wouldn't expect a universal answer to that.

  27. Someone has already suggested something similar to you:
    http://www.science20.com/alpha_meme/deadly_proof_published_your_mind_stable_enough_read_it-126876

  28. Anon, May 23: Thanks for the link. That looks interesting!

  29. Angra: I'm not particularly invested in any one way of reading the SETI numbers, though not everyone seems to be as optimistic as you. It depends a lot on how likely strong emissions are. A supernova-sized patterned emission, even if very rarely produced in catastrophe or construction, could obviously be detected from huge distances.

    On being disconnected from reality: I agree that an organism probably won't want that. But if you're down to one germ line, all it takes is one misjudgment. And then the question is whether the sub-individuals (the human-like particular organisms from which the superorganism is made) could survive and reproduce without the larger organism.

  30. Eric,

    I agree many people are much more pessimistic about the chances of long-term survival, so I guess with respect to theirs my view would be seen as optimistic. On the other hand, if there are plenty of aliens out there (a matter I take no stance on), I wouldn't call that a good thing. I guess I'm pessimistic in the sense that I don't expect most aliens to even care about good or evil, even if they may care about some species-specific analogue with some degree of extensional overlap.

    In any case, regarding our present-day limited detection capabilities, while I think that would explain why we have not detected anything in case there are many civilizations, it does not explain why the Earth has not been colonized yet – that would require a different explanation.

    Regarding the superorganism, a question is whether it would have backups, like germ lines on Earth, Mars, and some other parts of the Solar System. It seems reasonable to do that, so that in case of mistakes, help can go from one planet to the next. But perhaps you're suggesting that civilizations collapse before colonizing nearby planets?

    That might happen, though I'm not sure why all civilizations would collapse before reaching that state. It seems unlikely to me that not a single one of them – if many develop to our present-day level – would manage to have autonomous new settlements in a few places in their home planetary system.

  31. That all seems sensible, Angra. I would suggest that even if there's a little planet colonization, if it's not very many planets it's still a small population size and thus vulnerable to extinction, since what kills one might kill another (e.g., a transmissible disease).

  32. Eric,

    It may not be impossible that what kills one kills another, but it seems very improbable to me, even for one civilization, and especially when considering several.

    For example, the following events would not do it.

    1. Asteroid or comet.
    2. Supervolcano.
    3. Gamma ray burst.

    Moreover, an advanced civilization would probably have time to develop defenses against 1 and, with more time, 2, and probably 3 if they dig underground (which in any case they would do if they colonize other planets).
    Also, 3. is rare so it's not going to affect many civilizations, and 1. and 2. are probably rare on any planet on which a civilization developed in the first place.

    An AI going Skynet on them might do it, but then, the civilization or superorganism would only be replaced by a more advanced one. Similarly, an alien invasion might do it, but it would only replace an interplanetary civilization with an interstellar one, which is tougher.

    There is still the chance of wars, but that also seems unlikely to afflict all sufficiently intelligent species. It seems more likely that their propensity to wage war would be quite variable across species, given widely variable evolutionary conditions.
    Also, a rogue black hole could do it, but that's extremely rare.

    There is also the possibility that you mention: illness. However, a question is how the illness could reach all of the colonies, and then kill everyone. It seems unlikely that it would, for the following reasons:

    a. There is no airborne transmission, or any transmission that does not depend on spaceships.
    b. Interplanetary travel, even with advanced tech, takes a lot longer than air travel. It's also a lot more expensive in terms of resources. So, it seems probable that there would be relatively few passenger ships traveling when the illness is detected, and in any case, chances are there will be time to stop them.
    c. It's probable that a civilization advanced enough to colonize a few planets or moons (and/or establish orbital colonies, etc.) will also be advanced enough to deal with any sort of illness.

    Still, it might happen in one colonized system, though that seems improbable to me. That it would happen not in one system but in, say, dozens seems much less probable still.
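
    To put toy numbers on that (just a sketch; the per-period probability and the independence assumption are mine, purely for illustration):

        # If losing any one colony in a given period has probability p, and the
        # colonies fail roughly independently, the chance that all n are lost
        # together falls off as p ** n.
        p = 0.01                      # hypothetical chance of losing one colony
        for n in (1, 3, 12):
            print(f"all {n} colonies lost together: ~{p ** n:.0e}")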

  33. Angra: You might be thinking of "illness" too narrowly. As we've seen with computer viruses, and with speculations about "gray goo" nanotech disasters, as systems get more complex and develop new capacities, there are changes in the types of illness (or infection or self-replicating flaw or central-system collapse) to which they are open. If there is communication between the colonies, there are vectors for illness.

  34. Eric,

    With regard to “gray goo” (or similar nanotech disasters), current analyses indicate that that's very improbable, as far as I know, but I've not read that much on this, and I may well have missed a number of scenarios. Do you have any links to papers, articles, etc., defending the view that there is a significant risk?

    In any case, let's say a gray goo epidemic starts on a planet, moon or space station. Then, the governments/superorganisms would almost certainly quarantine the different planets/moons/space stations before it reaches them. The goo would likely be confined to the planet where it originates. In addition to quarantine measures, there is a self-quarantine mechanism of sorts, in that any spaceship with goo on it – if there were any, but that seems improbable due to quarantine – would probably be destroyed by the goo (the goo is not intelligent).

    So, it seems to me that goo going from one planet/moon/space station to another one would be improbable. If there are several inhabited planets/moons/space stations, it seems even much more improbable to me that the goo would be able to reach them all. And if it happened on one solar system, that's also unlikely to happen in others.

    With respect to central-system collapse, that seems more plausible for a single superorganism; I'm thinking of a number of autonomous colonies across a planetary system, with no central system.

    As for computer viruses, if there is no centralized system and the core systems of each colony are not accessible to new programs without some serious prior testing (as one could expect, at least for many, perhaps most, civilizations), it would be difficult for a virus to get through. And AIs can figure out how to defend themselves better.
    That said, if there is a lot of conflict in that civilization, then it may be that computer warfare does the trick, but that's a subset of the “war” scenario, which I think may well happen in some civilizations, but probably not in all – it depends on the psychology of each species.
    Still, a question would be whether non-designed computer viruses (or mutated ones) are likely to get through the defenses of whatever advanced computers they might have. Maybe that does it, at least in many cases. But I think it's not probable.

    Incidentally, what about second-generation civilizations? (i. e., civilizations that evolve on a planet and find their planetary system littered with the remains of a previous civilization?)
    It seems to me that they would likely be a lot more cautious, and thus more likely to survive – on the other hand, perhaps no such civilization exists yet...

  35. We don't really understand curiosity, creativity, aesthetic appreciation, or the drive to work hard. Perhaps without those things not much progress would get made, especially once the basic urges are reliably satisfied. It might be particularly difficult for such abilities to evolve, and they might be vulnerable to genetic degradation or to damage through environmental changes. And they might not scale up: a mega-organism might be functionally efficient and powerful, and so able to deal with pre-existing threats (those it was brought into existence to deal with), but, lacking those virtues, it might fail to create solutions to new threats…
