tag:blogger.com,1999:blog-26951738.post1199531604427726304..comments2024-03-28T19:14:33.619-07:00Comments on The Splintered Mind: The Washout Argument Against LongtermismEric Schwitzgebelhttp://www.blogger.com/profile/11541402189204286449noreply@blogger.comBlogger14125tag:blogger.com,1999:blog-26951738.post-15169180955421313382023-07-02T15:22:26.373-07:002023-07-02T15:22:26.373-07:00Regarding the book "what we owe the future&qu...Regarding the book "What We Owe the Future", I think the reduce-suffering site has a rather theoretical essay on erasing past suffering. The basic premise is "why should humans/living beings in the future merit our help more than those in the past?"<br />Food for thought, regardless of whether it's possible or not (to affect the past)!javierhttps://www.blogger.com/profile/06585551921719700354noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-54957828627851063462022-10-30T04:51:34.090-07:002022-10-30T04:51:34.090-07:00Consider the man with a loaded gun in his mouth, w...Consider the man with a loaded gun in his mouth, who is typically too bored by the gun to bother discussing it. Rational?<br /><br />Now consider the man who is willing to discuss the gun in sweeping philosophical terms, but expresses little to no interest in practical steps for removing the gun.
Rational?<br /><br />This is who is conjuring up theories about longtermism and other such philosophies: we who have thousands of hydrogen bombs aimed down our own throats, we who consider practical discussions of how to remove that gun to be mere politics, largely unworthy of our attention.<br /><br />If we are unable or unwilling to understand ourselves and our relationship of denial with the existential threat that we face today, it's hard to imagine why anything we might have to say about the long term should be considered credible.<br /><br />Critical thinking involves not only how we approach any particular topic, but also the ability to prioritize which topics most merit our attention.<br /><br />What is the rational argument for prioritizing any topic over the gun that is in our mouth right now, today?<br /><br />If we're unable to achieve clarity on that question, there's no point in discussing the long term, because there isn't going to be one.<br /><br /><br />Phil Tannyhttps://www.facebook.com/phil.tanny/posts/pfbid028vNnknjphbS3kQdGW8eat6KDp1teZTfMu2TAtj6eKQUg1cVE6VgFekzy8g38cp4jlnoreply@blogger.comtag:blogger.com,1999:blog-26951738.post-34491234395464655972022-08-29T10:08:50.260-07:002022-08-29T10:08:50.260-07:00Even 'indefinitely is a presumption of persist...Even 'indefinitely' is a presumption of persistent effects...<br />...but from affects there may be room to wonder...<br /><br />The affects of quanta and qualia and in between...<br />...what in movement and change is up to us...<br /><br />Philosophies of psychology and mind for...<br />...phenomena of wonder... from affects of persistence...Arnoldhttps://www.blogger.com/profile/02580641063222662041noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-9204987597851150752022-08-24T21:47:07.048-07:002022-08-24T21:47:07.048-07:00I have a couple of quibbles. One is with the model...I have a couple of quibbles.
One is with the model of causation and its interaction with utility across cosmological times. You assume some strong non-linearity cum classical chaos in the propagation of utility into the future from any single event, so ending up with actual infinities. At one level, this is just the usual Knightian uncertainty argument against utilitarianism or any kind of planning, so we don't actually need the infinities - as you comment later, the immediate extinction of humanity _could_ be a long-term gain for universal goodness, depending on how you specify utility. Ditto my becoming quadriplegic after being hit by falling SpaceX debris - maybe my net long-term life satisfaction might be increased!<br /><br />But in reality, much of the evolution of physics seems pretty regular, even if the fine details are completely indeterministic. So it seems just as attractive to assume a predictable moral progress made up of the massed actions of billions of well-intentioned people (including post-humans). So we must merely act in such a way as to maximize the opportunity of those future generations to do well (ObSF, you might see me as channelling Graydon Saunders here).David Duffyhttp://users.tpg.com.au/davidd02/noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-5184581771135023092022-08-24T17:12:15.975-07:002022-08-24T17:12:15.975-07:00Longtermism seems to be the complete opposite of a...Longtermism seems to be the complete opposite of anti-natalism. It asserts a strong obligation to the wellbeing of a huge number of future generations. It further seems to assert an obligation for procreation. Presumably, such a position would have an answer to anti-natalism, given the enormous harm anti-natalism implies. <br />If anti-natalism is correct, or even has a tiny probability of being correct, then procreation should be approached with caution. <br />I am not sure whether anti-natalists like Benatar believe that human life could evolve to warrant abandoning their position.
Daniel Polowetzky https://www.blogger.com/profile/04299950687312400826noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-51042863316041818202022-08-24T12:44:15.176-07:002022-08-24T12:44:15.176-07:00Hi Eric
Do you doubt human survival in the centur...Hi Eric<br /><br />Do you doubt human survival in the centuries and millennia ahead? Is that by some kind of Asimovian Psychohistory?<br />Is it simply making the case that we, or some of us, are naturally violent and will acquire even more violent tools? Some, like Bostrom, make that argument; but I'm not sure his facts are facts by fiat, ceteris paribus.<br />I think these arguments confuse the future with history - they make for lively conversation, but even statistically the centuries ahead are in principle unknowable.Howardnoreply@blogger.comtag:blogger.com,1999:blog-26951738.post-58997590359423886872022-08-24T05:55:42.996-07:002022-08-24T05:55:42.996-07:00I confess ignorance of the term. But, the premise ...I confess ignorance of the term. But, the premise for pursuit of it just does not feel right, somehow. Planning and organization trump stochasticism in any real-world setting. But, not everything undertaken can be assumed or expected to achieve desired ends...that is pie-in-the-sky enthusiasm, not real-world reality.Paul D. Van Pelthttps://www.blogger.com/profile/13508874039164282696noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-20798133552332093502022-08-23T16:16:50.198-07:002022-08-23T16:16:50.198-07:00Thanks for the comments, everyone!
Dan: Yes, I se...Thanks for the comments, everyone!<br /><br />Dan: Yes, I see a relation in the skeptical motivations for both.<br /><br />Arnold: Thanks for the kind words, and of course the "measurement problem" is a huge morass right at the core of philosophy of quantum mechanics.<br /><br />Drake: Thanks for the detailed comment! I agree with your Point 2, so I'll focus on Point 1. You write, "But preventing extinction now is not committing to any one course of action! It's buying us time to figure out what it is we want, what we value, what the right course of action is, how our choices will affect things billions of years down the line." I agree but I think that statement is incomplete, since it could be *costing* some better species time for the same types of reflection -- or rather, not just time, but the very chance at existence. I hope it's obvious that I don't actually support human extinction, but I think that my reasoning in this part of the argument stands: The longer we take to go extinct (as I think we probably will, and within a few hundred or few thousand years), the more likely that we will have done irreparable harm to the planet, such as wiping out good candidate replacement species, that will prevent a more durable species from taking our place. So although what you say is correct, the benefits of giving ourselves more time must be placed in the balance against the downsides of giving ourselves more time.<br /><br />Daniel: Yes, that's a good point. To be fair to MacAskill, he addresses that question explicitly and makes a case that future humans' lives would likely overall be worth living (because our lives are and because it's likely that the technological potential to improve lives will stay at present levels or higher -- though of course that's just a best guess).<br /><br />SelfAware: I suspect that it's salient to us when predictions about the future go wrong, but of course many go right. 
For example, the writers of the Constitution of the United States anticipated that there would still be countries in 200 years, and that it would make sense to have some form of representative government. Benjamin Franklin invested $1000 to collect interest and be paid out for the public benefit after 200 years of accumulation, and that turned out pretty well. Early works of science fiction in the 1700s that portrayed the distant future imagined cities would still have horses, which was wrong, but they also imagined that people would want transportation, care about clothing, get in arguments with each other, and live in a society of unequal wealth. Lots has *not* changed. While I can't be totally sure, I do feel like it's a good guess that a large-scale nuclear war next year would be something that future humans, if they exist, would look back on with regret.Eric Schwitzgebelhttps://www.blogger.com/profile/16274774112862434865noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-76278443647956442672022-08-23T14:01:08.185-07:002022-08-23T14:01:08.185-07:00It's not even clear to me that the next few ce...It's not even clear to me that the next few centuries are particularly predictable. Historically, it seems like predictions made more than a few decades out tend to be increasingly wrong. The predictions are typically dominated by the preoccupations of whatever time they're made in. <br /><br />Overall, my impression is that longtermers are rationalizing things they want to see happen but can't find any reasonably foreseeable justifications for. SelfAwarePatternshttps://www.blogger.com/profile/11856665627652130336noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-79448939588184576132022-08-23T13:04:30.065-07:002022-08-23T13:04:30.065-07:00I find the focus on preventing human extinction a ...I find the focus on preventing human extinction a bit peculiar.
For me, any appeal of longtermism is based on the assumption that humans will exist in the distant future and that our actions should be aimed at causing them minimum suffering and maximizing their welfare, or something of that ilk. Merely assuring their existence, independent of the quality of their lives, has a religious quality. <br />Would longtermism focus on preventing human extinction in the distant future if there were reason to believe that life would not be worth living?<br />Daniel Polowetzkynoreply@blogger.comtag:blogger.com,1999:blog-26951738.post-180203734331804672022-08-23T12:20:56.298-07:002022-08-23T12:20:56.298-07:002. We have some nonzero information about how to r...<b>2. We have some nonzero information about how to reduce the odds of extinction</b><br /><br />Predicting the future even a few years out is tough, but I don't think it's so intractable that we need to throw our hands up in defeat. I'll happily defend 60+% confidence in the truth of claims like "Careful non-capabilities-advancing research into AI alignment reduces the odds of human extinction this century" or "Creating far better PPE than we have in 2022 will make it more likely that extremely virulent pandemics don't kill everyone". A thriving human civilization is very hard to predict out a billion years, but a dead one is very easy - if I want to bring down the chance of the world where everyone is dead in 10 billion years, I can straightforwardly do that by making it less likely that everyone is dead in 2040. <br /><br />I think "the assumption that we will somehow see ourselves safely past catastrophic risk if we survive the next few centuries" is a pretty reasonable one. Perhaps it ends up being the case that in the limit of technological capability and cultural development, offense is fundamentally easier than defense, and single actors will be able to unilaterally bring down stable systems.
But perhaps not - I would personally put well over 80% on the existence of stable enduring ways for the world to be that are extremely unlikely to destroy themselves - and if there are such possibilities, it seems to me that we stand a good chance of finding them given some time to work things out. <br /><br />I agree that in the absence of further information, you should expect that your odds of affecting extremely large groups of people substantially are quite small. But we don't have an absence of further information - there are all these giant, glaring, dangerous problems that we can go make material progress on! The pathways to reducing extinction risk aren't a result of armchair reasoning, but from people looking at the world and seeing concrete ways we might prevent that from happening.Drake Thomashttps://www.blogger.com/profile/12004425305561057648noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-25890875141992792212022-08-23T12:20:47.686-07:002022-08-23T12:20:47.686-07:00[Posting this comment in two parts, to fit within ...[Posting this comment in two parts, to fit within the character limit]<br /><br />I agree that there's huge uncertainty over the long-term consequences of almost any action you take, such that in almost all respects it's basically hopeless to try and predict even the sign of things. (And that things get very murky with unbounded utility functions, especially once you add uncertainty over your world model.)<br /><br />But I think you're mischaracterizing the longtermist arguments for pursuing certain courses of action in the 21st century. The claim is not that we can work out, with high certainty, which butterfly wing flaps will end up averting more hurricanes than they cause. The (much weaker and more plausible, in my view) claim is specifically that (1) human extinction this century would be extremely bad and (2) that we have some nonzero information right now about how to reduce its chances.<br /><br /><b>1. 
Human extinction this century would be extremely bad</b><br /><br />As you point out, we have a lot of moral uncertainty over how things should go in the future - perhaps a more grown-up human civilization would decide that the best course of action involves things that seem bizarre or wrong to us intuitively.<br /><br />But preventing extinction <i>now</i> is not committing to any one course of action! It's buying us time to figure out what it is we want, what we value, what the right course of action is, how our choices will affect things billions of years down the line. Maybe human existence is very bad for the world in ways we don't understand, and it'll take us 10,000 years to work that out. But those extra 10,000 years seem quite a small price to pay when there's a good chance we might have wanted to continue on for 10^100. <br /><br />Not going extinct gives us an option on the future, one we can choose to exercise or not when we're older and wiser as a species. Unilaterally denying that choice to our potential descendants, to figure out how the future should go with much better information than we have now, seems to me a much more rash choice than extending our current state of affairs forward another few thousand years to figure stuff out.Drake Thomashttps://www.blogger.com/profile/12004425305561057648noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-62251696505037046452022-08-23T12:15:20.759-07:002022-08-23T12:15:20.759-07:00I googled Jacob Barandes to see what he has found ...I googled Jacob Barandes to see what he has found about physics and knowledge...<br />... 
with the googled question: How does an electron know it's being observed?...<br /><br />This seems to me the state of affairs of philosophy today...<br />...that knowing and knowledge are different activities for us humans...<br /><br />Throw in understanding...<br />...and 'evolution in/of time' may begin to appear...<br /><br />Really great reads...Arnoldhttps://www.blogger.com/profile/02580641063222662041noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-88675523645700077002022-08-23T11:49:28.972-07:002022-08-23T11:49:28.972-07:00Your position versus longtermism seems similar to ...Your position versus longtermism seems similar to that of rule utilitarianism versus act utilitarianism. <br />While one can endorse act utilitarianism as the ultimate basis for action, given the uncertainties of the consequences of a specific <br />act, for practical purposes we rely on rule utilitarianism. Daniel Polowetzkynoreply@blogger.com