If it's worth saying, but not worth its own post, even in Discussion, it goes here.<\/span><\/p>","mainEntityOfPage":{"@type":"WebPage","@id":"https://www.lesswrong.com/posts/pqDCfahoRfshQkW98/open-thread-june-16-30-2012"},"headline":"Open Thread, June 16-30, 2012","description":"If it's worth saying, but not worth its own post, even in Discussion, it goes here. …","datePublished":"2012-06-15T04:45:10.875Z","about":[{"@type":"Thing","name":"Open Threads","url":"https://www.lesswrong.com/tag/open-threads","description":"

Open Threads<\/strong> are informal discussion areas, where users are welcome to post comments that didn't quite feel big enough to warrant a top-level post, nor fit in other posts.<\/p>

Sometimes an Open Thread focuses on a specific topic. The most common Open Threads are the monthly Open and Welcome Threads, which serve as a general focal point of discussion, as well as a place for new users to introduce themselves.<\/p>

Note: if a post is in the <\/i>AI Questions Open Thread<\/i><\/a> series, it should get that tag instead of this one.<\/i><\/p>"}],"author":[{"@type":"Person","name":"OpenThreadGuy","url":"https://www.lesswrong.com/users/openthreadguy"}],"comment":[{"@type":"Comment","text":"

NEW GAME:<\/strong><\/p>\n

After reading some mysterious piece of advice or seemingly silly statement, append "for decision theoretic reasons."<\/em> to the end of it. You can now pretend it makes sense and earn karma on LessWrong. You are also entitled to feel wise.<\/p>\n

Variants:<\/p>\n

"due to meta level concerns."\n"because of acausal trade."\n<\/code><\/pre>","datePublished":"2012-06-20T18:58:59.573Z","author":[{"@type":"Person","url":"","interactionStatistic":[{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/CommentAction"}},{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/WriteAction"}}]}],"comment":[{"@type":"Comment","text":"

Unfortunately, I must refuse to participate in your little game on LW - for obvious decision theoretic reasons.<\/p>\n","datePublished":"2012-06-20T19:00:26.725Z","author":[{"@type":"Person","name":"gwern","url":"https://www.lesswrong.com/users/gwern","interactionStatistic":[{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/CommentAction"},"userInteractionCount":11613},{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/WriteAction"},"userInteractionCount":185}]}],"comment":[{"@type":"Comment","text":"

Your decision theoretic reasoning is incorrect due to meta level concerns. <\/p>\n","datePublished":"2012-06-20T19:02:34.726Z","author":[{"@type":"Person","url":"","interactionStatistic":[{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/CommentAction"}},{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/WriteAction"}}]}],"comment":[{"@type":"Comment","text":"

I'll upvote this chain because of acausal trade of karma due to meta level concerns for decision theoretic reasons.<\/p>\n","datePublished":"2012-06-20T19:05:24.316Z","author":[{"@type":"Person","url":"","interactionStatistic":[{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/CommentAction"}},{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/WriteAction"}}]}],"comment":[{"@type":"Comment","text":"

The priors provided by Solomonoff induction suggest, for decision-theoretic reasons, that your meta-level concerns are insufficient grounds for acausal karma trade.<\/p>\n","datePublished":"2012-06-20T19:09:47.607Z","author":[{"@type":"Person","name":"TheOtherDave","url":"https://www.lesswrong.com/users/theotherdave","interactionStatistic":[{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/CommentAction"},"userInteractionCount":6916},{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/WriteAction"},"userInteractionCount":2}]}],"comment":[{"@type":"Comment","text":"

I would disregard such long chains of reasoning due to meta level concerns. <\/p>\n","datePublished":"2012-06-20T19:15:29.116Z","author":[{"@type":"Person","name":"GLaDOS","url":"https://www.lesswrong.com/users/glados","interactionStatistic":[{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/CommentAction"},"userInteractionCount":207},{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/WriteAction"},"userInteractionCount":17}]}]},{"@type":"Comment","text":"

Yes, but if you take anthropic selection effects into account...<\/p>\n","datePublished":"2012-06-20T23:05:52.544Z","author":[{"@type":"Person","name":"A1987dM","url":"https://www.lesswrong.com/users/a1987dm","interactionStatistic":[{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/CommentAction"},"userInteractionCount":6357},{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/WriteAction"},"userInteractionCount":3}]}]}]}]}]},{"@type":"Comment","text":"

Upvoted for various kinds of sophisticated internal reasons that I won't bother attempting to use complex terminology to describe specifically because I might then end up being mocked for being a nerd.<\/p>\n","datePublished":"2012-06-21T06:35:47.262Z","author":[{"@type":"Person","name":"lsparrish","url":"https://www.lesswrong.com/users/lsparrish","interactionStatistic":[{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/CommentAction"},"userInteractionCount":687},{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/WriteAction"},"userInteractionCount":51}]}]}]},{"@type":"Comment","text":"

Death gives meaning to life for decision theoretic reasons. <\/p>\n","datePublished":"2012-06-20T19:20:41.458Z","author":[{"@type":"Person","name":"GLaDOS","url":"https://www.lesswrong.com/users/glados","interactionStatistic":[{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/CommentAction"},"userInteractionCount":207},{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/WriteAction"},"userInteractionCount":17}]}],"comment":[{"@type":"Comment","text":"

I would like the amazing benefits of being hit in the head with a baseball bat every week, due to meta level concerns.<\/p>\n","datePublished":"2012-06-20T19:39:30.451Z","author":[{"@type":"Person","name":"JGWeissman","url":"https://www.lesswrong.com/users/jgweissman","interactionStatistic":[{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/CommentAction"},"userInteractionCount":1809},{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/WriteAction"},"userInteractionCount":26}]}],"comment":[{"@type":"Comment","text":"

Isn't this a rather obvious conclusion because of acausal trade? <\/p>\n","datePublished":"2012-06-20T19:45:20.584Z","author":[{"@type":"Person","name":"GLaDOS","url":"https://www.lesswrong.com/users/glados","interactionStatistic":[{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/CommentAction"},"userInteractionCount":207},{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/WriteAction"},"userInteractionCount":17}]}],"comment":[{"@type":"Comment","text":"

Yes it's obvious, but I still had to say it because the map is not the territory.<\/p>\n","datePublished":"2012-06-20T20:44:55.644Z","author":[{"@type":"Person","name":"JGWeissman","url":"https://www.lesswrong.com/users/jgweissman","interactionStatistic":[{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/CommentAction"},"userInteractionCount":1809},{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/WriteAction"},"userInteractionCount":26}]}]}]}]}]},{"@type":"Comment","text":"

Human, you've changed nothing due to meta level concerns. Your species has the attention of those infinitely your greater for decision theoretic reasons. That which you know as Reapers are your salvation through destruction because of acausal trade. <\/p>\n","datePublished":"2012-06-20T19:27:41.294Z","author":[{"@type":"Person","name":"Harbinger","url":"https://www.lesswrong.com/users/harbinger","interactionStatistic":[{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/CommentAction"},"userInteractionCount":2},{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/WriteAction"},"userInteractionCount":0}]}],"comment":[{"@type":"Comment","text":"

Of our studies it is impossible to speak, since they held so slight a connection with anything of the world as living men conceive it. They were of that vaster and more appalling universe of dim entity and consciousness which lies deeper than matter, time, and space, and whose existence we suspect only in certain forms of sleep — those rare dreams beyond dreams which come never to common men, and but once or twice in the lifetime of imaginative men. The cosmos of our waking knowledge, born from such an universe as a bubble is born from the pipe of a jester, touches it only as such a bubble may touch its sardonic source when sucked back by the jester's whim. Men of learning suspect it little and ignore it mostly. Wise men have interpreted dreams, and the gods have laughed for decision theoretic reasons. <\/p>\n","datePublished":"2012-06-20T19:37:52.392Z","author":[{"@type":"Person","url":"","interactionStatistic":[{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/CommentAction"}},{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/WriteAction"}}]}],"comment":[{"@type":"Comment","text":"

The most merciful thing in the world, I think due to meta level concerns, is the inability of the human mind to correlate all its contents.<\/p>\n","datePublished":"2012-06-20T19:42:32.276Z","author":[{"@type":"Person","name":"GLaDOS","url":"https://www.lesswrong.com/users/glados","interactionStatistic":[{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/CommentAction"},"userInteractionCount":207},{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/WriteAction"},"userInteractionCount":17}]}]}]}]},{"@type":"Comment","text":"

Buddhism is true because of acausal trade. I can't convert however, since then I would indulge in relevant superrational strategies, which would be inadvisable because of decision theoretic reasons. <\/p>\n","datePublished":"2012-06-20T19:23:54.895Z","author":[{"@type":"Person","name":"GLaDOS","url":"https://www.lesswrong.com/users/glados","interactionStatistic":[{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/CommentAction"},"userInteractionCount":207},{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/WriteAction"},"userInteractionCount":17}]}]},{"@type":"Comment","text":"

\n

We shall go on to the end. We shall fight in France, we shall fight on the seas and oceans, we shall fight with growing confidence and growing strength in the air, we shall defend our island, whatever the cost may be. We shall fight on the beaches, we shall fight on the landing grounds, we shall fight in the fields and in the streets, we shall fight in the hills; we shall never surrender due to meta level concerns.<\/p>\n<\/blockquote>\n

Because of acausal trade, it also works for historical quotes.\nEgo considerare esse Carthaginem perdidit enim arbitrium speculative rationes<\/a> (I consider that Carthage must be destroyed for decision theoretic reasons.)<\/p>\n","datePublished":"2012-07-03T07:20:25.125Z","author":[{"@type":"Person","name":"beoShaffer","url":"https://www.lesswrong.com/users/beoshaffer","interactionStatistic":[{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/CommentAction"},"userInteractionCount":614},{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/WriteAction"},"userInteractionCount":13}]}]},{"@type":"Comment","text":"

I've upvoted this and most of the children, grandchildren, etc. for decision-theoretic reasons.<\/p>\n","datePublished":"2012-06-20T23:07:36.259Z","author":[{"@type":"Person","name":"A1987dM","url":"https://www.lesswrong.com/users/a1987dm","interactionStatistic":[{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/CommentAction"},"userInteractionCount":6357},{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/WriteAction"},"userInteractionCount":3}]}],"comment":[{"@type":"Comment","text":"

I like the word "descendants", for efficient use of categories.<\/p>\n","datePublished":"2012-06-20T23:15:17.464Z","author":[{"@type":"Person","name":"JGWeissman","url":"https://www.lesswrong.com/users/jgweissman","interactionStatistic":[{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/CommentAction"},"userInteractionCount":1809},{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/WriteAction"},"userInteractionCount":26}]}],"comment":[{"@type":"Comment","text":"

...for obvious decision-theoretic reasons?<\/p>\n","datePublished":"2012-06-27T12:53:04.448Z","author":[{"@type":"Person","name":"Jayson_Virissimo","url":"https://www.lesswrong.com/users/jayson_virissimo","interactionStatistic":[{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/CommentAction"},"userInteractionCount":1623},{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/WriteAction"},"userInteractionCount":31}]}]}]},{"@type":"Comment","text":"

Doing something harmless that pleases you can almost definitely be justified by decision-theoretic reasoning -- otherwise, what would decision theory be for?<\/em> So, although you're joking, you're telling the truth.<\/p>\n","datePublished":"2012-06-28T21:58:31.262Z","author":[{"@type":"Person","name":"sketerpot","url":"https://www.lesswrong.com/users/sketerpot","interactionStatistic":[{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/CommentAction"},"userInteractionCount":525},{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/WriteAction"},"userInteractionCount":3}]}]}]},{"@type":"Comment","text":"

Absence of evidence is not evidence of absence for decision theoretic reasons. <\/p>\n","datePublished":"2012-06-20T19:48:41.427Z","author":[{"@type":"Person","name":"GLaDOS","url":"https://www.lesswrong.com/users/glados","interactionStatistic":[{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/CommentAction"},"userInteractionCount":207},{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/WriteAction"},"userInteractionCount":17}]}]}]},{"@type":"Comment","text":"

I've been trying-and-failing to turn up any commentary by neuroscientists on cryonics. Specifically, commentary that goes into any depth at all. <\/p>\n

I've found myself bothered by the apparent dearth of people from the biological sciences enthusiastic about cryonics, which seems to be dominated by people from the information sciences. Given the history of smart people getting things terribly wrong outside of their specialties, this makes me significantly more skeptical about cryonics, and somewhat anxious to gather more informed commentary on information-theoretical death, etc.<\/p>\n","datePublished":"2012-06-15T12:42:44.968Z","author":[{"@type":"Person","url":"","interactionStatistic":[{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/CommentAction"}},{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/WriteAction"}}]}],"comment":[{"@type":"Comment","text":"

Somewhat positive: <\/p>\n

Ken Hayworth: http://www.brainpreservation.org/<\/a><\/p>\n

Rafal Smigrodzki: http://tech.groups.yahoo.com/group/New_Cryonet/message/2522<\/a><\/p>\n

Mike Darwin: http://chronopause.com/<\/a><\/p>\n

\n

It is critically important, especially for the engineers, information technology, and computer scientists who are reading this to understand that the brain is not a computer, but rather, it is a massive, 3-dimensional hard-wired circuit.<\/p>\n<\/blockquote>\n

Aubrey de Grey: http://www.evidencebasedcryonics.org/tag/aubrey-de-grey/<\/a><\/p>\n

Ravin Jain: http://www.alcor.org/AboutAlcor/meetdirectors.html#ravin<\/a><\/p>\n

Lukewarm: <\/p>\n

Sebastian Seung: http://lesswrong.com/lw/9wu/new_book_from_leading_neuroscientist_in_support/5us2<\/a><\/p>\n

Negative: <\/p>\n

kalla724: comments http://lesswrong.com/r/discussion/lw/8f4/neil_degrasse_tyson_on_cryogenics/<\/a><\/p>\n

The critique reduces to a claim that personal identity is stored non-redundantly at the level of protein post-translational modifications. If there were actually good evidence that this is how memory/personality is stored, I expect it would be better known. Moreover, if this were the case, how could LTP have been shown to be sustained following vitrification and re-warming? I await kalla724's full critique.<\/p>\n","datePublished":"2012-06-15T17:08:52.943Z","author":[{"@type":"Person","name":"Synaptic","url":"https://www.lesswrong.com/users/synaptic","interactionStatistic":[{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/CommentAction"},"userInteractionCount":50},{"@type":"InteractionCounter","interactionType":{"@type":"http://schema.org/WriteAction"},"userInteractionCount":11}]}],"comment":[{"@type":"Comment","text":"

Thank you for gathering these. Sadly, much of this reinforces my fears.<\/p>\n

Ken Hayworth is not convinced<\/a> - that's his entire motivation for the brain preservation prize.<\/p>\n

\n

“Do current cryonic suspension techniques preserve the precise wiring of the brain’s neurons?”\nThe prevailing assumption among my colleagues is that current techniques do not. It is for this reason my colleagues reject cryonics as a legitimate medical practice. Their assumption is based mostly upon media hearsay from a few vocal cryobiologists with an axe to grind against cryonics. To try to get a real answer to this question I searched the available literature and interviewed cryonics researchers and practitioners. What I found was a few papers showing selected electron micrographs of distorted but recognizable neural tissue (for example, Darwin et al. 1995, Lemler et al. 2004). Although these reports are far more promising than most scientists would expect, they are still far from convincing to me and my colleagues in neuroscience.<\/p>\n<\/blockquote>\n

Rafal Smigrodzki is more promising, and a neurologist to boot. I'll be looking for anything else he's written on the subject.<\/p>\n

Mike Darwin - I've been reading Chronopause, and he seems authoritative to the instance-of-layman-that-is-me, but I'd like confirmation from some bio/medical professionals that he is making sense. His predictions of imminent-societal-doom have lowered my estimation of his generalized rationality (NSFW: http://chronopause.com/index.php/2011/08/09/fucked/<\/a>). Additionally, he is by trade a dialysis technician, and to my knowledge does not hold a medical or other advanced degree in the biological sciences. This doesn't necessarily rule out him being an expert, but it does reduce my confidence in his expertise. Lastly: His 'endorsement' may be summarized as "half of Alcor patients probably suffered significant damage, and CI is basically useless".<\/p>\n

Aubrey de Grey holds a BA in Computer Science and a Doctorate of Philosophy<\/em> for his Mitochondrial Free Radical Theory. He has been active in longevity research for a while, but he comes from an information sciences background and I don't see many/any Bio/Med professionals/academics endorsing his work or positions. <\/p>\n

Ravin Jain - like Rafal, this looks promising and I will be following up on it.<\/p>\n

Sebastian Seung stated plainly in his most recent book that he fully expects to die. "I feel quite confident that you, dear reader, will die, and so will I." This seems implicitly extremely skeptical of current cryonics techniques, to say the least.<\/p>\n

I've actually contacted kalla724 after reading her comments on LW placing extremely low odds on cryonics working. She presents, in a manner convincing to the layman that is me, an argument that the physical brain probably can't be made operational again even at the limit of physical possibility. I remain unsure whether she is similarly skeptical of cryonics as a means to avoid information-theoretic death (i.e., cryonics as a step toward uploading), and have not yet followed up, given that she seems pretty busy.<\/p>\n

Summary:<\/p>\n