In my first piece, I settled on conceiving of radical empathy as an object view. Here I highlight some other potential moral implications of object views:
I illustrate and defend actualist object views as my conception of radical empathy, i.e., being concerned with exactly what we would actually care about. As a kind of asymmetric person-affecting view, its most important implication for cause prioritization is probably a lower priority for extinction risk reduction relative to total utilitarianism.
Fatebook is a website that makes it extremely low friction to make and track predictions.
It's designed to be very fast - just open a new tab, go to fatebook.io, type your prediction, and hit enter. Later, you'll get an email reminding you to resolve your question as YES, NO, or AMBIGUOUS.
It's private by default, so you can track personal questions and give forecasts that you don't want to share publicly. You can also share questions with specific people, or publicly.
Fatebook syncs with Fatebook for Slack - if you log in with the email you use for Slack, you’ll see all of your questions on the website.
As you resolve your forecasts, you'll build a track record - Brier score, Relative Brier score, and see your calibration chart. You can...
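For reference, the Brier score Fatebook reports is a standard accuracy metric: the mean squared error between your probability forecasts and the binary outcomes. A minimal sketch (the function name and sample data are my own, for illustration; AMBIGUOUS resolutions would simply be excluded from scoring):

```python
def brier_score(forecasts):
    """Mean squared error between probability forecasts and outcomes.

    forecasts: list of (predicted probability, outcome) pairs,
    where outcome is 1 for YES and 0 for NO.
    Lower is better; always guessing 50% scores 0.25.
    """
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

history = [(0.9, 1), (0.3, 0), (0.7, 1)]
print(round(brier_score(history), 4))  # → 0.0633
```

A calibration chart then groups forecasts into probability buckets and compares each bucket's stated probability with the observed frequency of YES resolutions.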
Interesting! I think the problem is that dense/compressed information can be represented in ways that are not easily retrievable for a given decoder. The Standard Model written in Chinese is a very compressed representation of human knowledge of the universe, and completely inscrutable to me.
Or take some maximally compressed code and pass it through a permutation. The information content is obviously the same but it is illegible until you reverse the permutation.
In some ways it is uniquely easy to do this to codes with maximal entropy because per definit...
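The permutation point is easy to demonstrate concretely. In this sketch (the example text and seed are arbitrary), a compressed string is passed through a secret byte-position permutation: the map is a bijection, so the information content is unchanged, but the decoder can no longer read it until the permutation is reversed:

```python
import random
import zlib

message = b"a stand-in for some highly compressed knowledge"
compressed = zlib.compress(message, level=9)

# The "key" is just the seed of the permutation.
rng = random.Random(0)
perm = list(range(len(compressed)))
rng.shuffle(perm)

# Scrambled bytes carry the same information but look like noise to zlib.
scrambled = bytes(compressed[i] for i in perm)

# Invert the permutation, then decompress as usual.
inverse = [0] * len(perm)
for pos, src in enumerate(perm):
    inverse[src] = pos
restored = bytes(scrambled[inverse[i]] for i in range(len(scrambled)))
assert zlib.decompress(restored) == message
```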
Epistemic Status: This post is an attempt to condense some ideas I've been thinking about for quite some time. I took some care grounding the main body of the text, but some parts (particularly the appendix) are pretty off the cuff, and should be treated as such.
The magnitude and scope of the problems related to AI safety have led to an increasingly public discussion about how to address them. Risks of sufficiently advanced AI systems involve unknown unknowns that could impact the global economy, national and personal security, and the way we investigate, innovate, and learn. Clearly, the response from the AI safety community should be as multi-faceted and expansive as the problems it aims to address. In a previous post, we framed fruitful collaborations between applied...
Thanks for the comment! I do hope that the thoughts expressed here can inspire some action, but I'm not sure I understand your questions. Do you mean 'centralized', or are you thinking about the conditions necessary for many small scale trading zones?
In this way, I guess the emergence of big science could be seen as a phase transition from decentralization -> centralization.
Something like a crux here is that I believe the trajectories non-trivially matter for which end-points we get. I don't think it's like entropy, where we can easily determine the end-point without considering the intermediate trajectory, because I genuinely think some path-dependence is present in history. This is why, even if I were far more charitable towards communism, I don't think this was ever defensible:
...[...] Marx was philosophically opposed, as a matter of principle, to any planning about the structure of communist governments or economies. He
This is a follow-up to last week's D&D.Sci scenario: if you intend to play that, and haven't done so yet, you should do so now before spoiling yourself.
There is a web interactive here you can use to test your answer, and generation code available here if you're interested, or you can read on for the ruleset and scores.
The following encounters existed:
| Encounter Name | Threat (Surprised) | Threat (Alerted) | Alerted By | Tier |
|---|---|---|---|---|
| Whirling Blade Trap | -- | 2 | -- | 1 |
| Goblins | 1 | 2 | Anything | 1 |
| Boulder Trap | -- | 3 | -- | 2 |
| Orcs | 2 | 4 | Anything | 2 |
| Clay Golem | -- | 4 | -- | 3 |
| Hag | 3 | 6 | Tier 2 and up | 3 |
| Steel Golem | -- | 5 | -- | 4 |
| Dragon | 4 | 8 | Tier 3 and up | 4 |
Each encounter had a Threat that determined how dangerous it was to adventurers. When adventurers met that encounter, they would roll [Threat]d2 to determine how challenging they found it.
However, many encounters had two different Threat levels, depending on whether they were alerted to the adventurers or not. (A dragon that's woken up from...
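The roll mechanic above is straightforward to sketch (the function name is my own; this is an illustration of the rule, not the actual generation code):

```python
import random

def challenge_roll(threat, rng=random):
    """Roll [Threat]d2: sum `threat` dice, each showing 1 or 2."""
    return sum(rng.randint(1, 2) for _ in range(threat))

# Expected challenge is 1.5 * Threat, so alertness matters a lot:
# e.g. a surprised Dragon rolls 4d2 (expected 6), while an
# alerted one rolls 8d2 (expected 12).
```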
Notes on my performance:
. . . huh! I was really expecting to either take first place for being the only player putting serious effort into the right core mechanics, or take last place for being the only player putting serious effort into the wrong core mechanics; getting the main idea wrong but doing everything else well enough for silver was not on my bingo card. (I'm also pleasantly surprised to note that I figured out which goblin I could purge with least collateral damage: I can leave Room 7 empty without changing my position on the leaderboard.)...
Edit 2: I'm now fairly confident that this is just the Presumptuous Philosopher problem in disguise, which is explained clearly in Section 6.1 here: https://www.lesswrong.com/s/HFyami76kSs4vEHqy/p/LARmKTbpAkEYeG43u
This is my first post ever on LessWrong. Let me explain my problem.
I was born in a unique situation — I shall omit the details of exactly what this situation was, but for my argument's sake, assume I was born as the tallest person in the entire world. Or instead suppose that I was born into the richest family in the world. In other words, take as an assumption that I was born into a situation entirely unique relative to all other humans on an easily measurable dimension such as height or wealth (i.e., not some niche measure like "longest tongue"). And indeed, my...
The answer is trivially yes: under a wide enough conception of computation, basically everything is simulatable, so everything counts as evidence for the simulation hypothesis, because the hypothesis effectively includes everything.
It will not help you infer anything else though.
More below:
http://www.amirrorclear.net/academic/ideas/simulation/index.html