I illustrate and defend actualist object views as my conception of radical empathy: being concerned with exactly what we would actually care about. As a kind of asymmetric person-affecting view, their most important implication for cause prioritization is probably a lower priority for extinction risk reduction relative to total utilitarianism.
Fatebook is a website that makes it extremely low friction to make and track predictions.
It's designed to be very fast - just open a new tab, go to fatebook.io, type your prediction, and hit enter. Later, you'll get an email reminding you to resolve your question as YES, NO, or AMBIGUOUS.
It's private by default, so you can track personal questions and give forecasts that you don't want to share publicly. You can also share questions with specific people, or publicly.
Fatebook syncs with Fatebook for Slack - if you log in with the email you use for Slack, you’ll see all of your questions on the website.
As you resolve your forecasts, you'll build a track record - Brier score and Relative Brier score - and see your calibration chart. You can...
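For reference, a minimal sketch of how a standard Brier score is computed (the textbook formula, not Fatebook's internal code):

```python
# Standard Brier score: mean squared difference between stated
# probabilities and resolved 0/1 outcomes (lower is better).
# This is the textbook formula, not Fatebook's internal code.
def brier_score(forecasts, outcomes):
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# e.g. three resolved questions, forecast at 90%, 60%, and 20%
print(brier_score([0.9, 0.6, 0.2], [1, 1, 0]))  # 0.07
```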
Interesting! I think the problem is that dense/compressed information can be represented in ways that are not easily retrievable by a given decoder. The Standard Model written in Chinese is a very compressed representation of human knowledge of the universe, and completely inscrutable to me.
Or take some maximally compressed code and pass it through a permutation. The information content is obviously the same but it is illegible until you reverse the permutation.
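A quick sketch of that point (my own illustration; the data and the choice of permutation are arbitrary):

```python
# Compress some data, permute the bytes, and show the content is
# unrecoverable until the permutation is reversed. Illustrative only.
import random
import zlib

message = b"the quick brown fox jumps over the lazy dog " * 20
compressed = zlib.compress(message)

# Apply a fixed pseudo-random permutation to the compressed bytes.
rng = random.Random(42)
perm = list(range(len(compressed)))
rng.shuffle(perm)
scrambled = bytes(compressed[i] for i in perm)

# The information content is unchanged, but the bytes are illegible
# to the decoder as-is.
try:
    zlib.decompress(scrambled)
except zlib.error:
    print("scrambled bytes are not decodable as-is")

# Invert the permutation and decompression succeeds again.
inverse = [0] * len(perm)
for new_pos, old_pos in enumerate(perm):
    inverse[old_pos] = new_pos
restored = bytes(scrambled[i] for i in inverse)
assert zlib.decompress(restored) == message
print("restored after inverting the permutation")
```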
In some ways it is uniquely easy to do this to codes with maximal entropy because per definit...
Epistemic Status: This post is an attempt to condense some ideas I've been thinking about for quite some time. I took some care grounding the main body of the text, but some parts (particularly the appendix) are pretty off the cuff, and should be treated as such.
The magnitude and scope of the problems related to AI safety have led to an increasingly public discussion about how to address them. Risks of sufficiently advanced AI systems involve unknown unknowns that could impact the global economy, national and personal security, and the way we investigate, innovate, and learn. Clearly, the response from the AI safety community should be as multi-faceted and expansive as the problems it aims to address. In a previous post, we framed fruitful collaborations between applied...
Thanks for the comment! I do hope that the thoughts expressed here can inspire some action, but I'm not sure I understand your questions. Do you mean 'centralized', or are you thinking about the conditions necessary for many small scale trading zones?
In this way, I guess the emergence of big science could be seen as a phase transition from decentralization -> centralization.
Something like a crux here is that I believe the trajectories non-trivially matter for which end-points we get. I don't think it's like entropy, where we can easily determine the end-point without considering the intermediate trajectory, because I do genuinely think history exhibits some path dependence, which is why, even if I were far more charitable towards communism, I don't think this was ever defensible:
...[...] Marx was philosophically opposed, as a matter of principle, to any planning about the structure of communist governments or economies. He
This is a follow-up to last week's D&D.Sci scenario: if you intend to play that, and haven't done so yet, you should do so now before spoiling yourself.
There is a web interactive here you can use to test your answer, and generation code available here if you're interested, or you can read on for the ruleset and scores.
The following encounters existed:
| Encounter Name | Threat (Surprised) | Threat (Alerted) | Alerted By | Tier |
|---|---|---|---|---|
| Whirling Blade Trap | -- | 2 | -- | 1 |
| Goblins | 1 | 2 | Anything | 1 |
| Boulder Trap | -- | 3 | -- | 2 |
| Orcs | 2 | 4 | Anything | 2 |
| Clay Golem | -- | 4 | -- | 3 |
| Hag | 3 | 6 | Tier 2 and up | 3 |
| Steel Golem | -- | 5 | -- | 4 |
| Dragon | 4 | 8 | Tier 3 and up | 4 |
Each encounter had a Threat level that determined how dangerous it was to adventurers. When adventurers met that encounter, they would roll [Threat]d2 to determine how challenging they found it.
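As a rough illustration (my own sketch, not the scenario's actual generation code), the [Threat]d2 roll could look like this:

```python
# Roll [Threat]d2 for an encounter -- illustrative sketch only.
import random

def challenge_roll(threat: int, rng: random.Random) -> int:
    """Roll `threat` d2s and sum them."""
    return sum(rng.randint(1, 2) for _ in range(threat))

rng = random.Random(0)
# e.g. a surprised Dragon (Threat 4) vs. an alerted Dragon (Threat 8)
print(challenge_roll(4, rng), challenge_roll(8, rng))
```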
However, many encounters had two different Threat levels, depending on whether they were alerted to the adventurers or not. (A dragon that's woken up from...
Notes on my performance:
. . . huh! I was really expecting to either take first place for being the only player putting serious effort into the right core mechanics, or take last place for being the only player putting serious effort into the wrong core mechanics; getting the main idea wrong but doing everything else well enough for silver was not on my bingo card. (I'm also pleasantly surprised to note that I figured out which goblin I could purge with least collateral damage: I can leave Room 7 empty without changing my position on the leaderboard.)...
Edit 2: I'm now fairly confident that this is just the Presumptuous Philosopher problem in disguise, which is explained clearly in Section 6.1 here: https://www.lesswrong.com/s/HFyami76kSs4vEHqy/p/LARmKTbpAkEYeG43u
This is my first post ever on LessWrong. Let me explain my problem.
I was born in a unique situation — I shall omit the details of exactly what this situation was, but for my argument's sake, assume I was born as the tallest person in the entire world. Or instead suppose that I was born into the richest family in the world. In other words, take as an assumption that I was born into a situation entirely unique relative to all other humans on an easily measurable dimension such as height or wealth (i.e., not some niche measure like "longest tongue"). And indeed, my...
The answer is yes, trivially, because under a wide enough conception of computation, basically everything is simulatable, so everything is evidence for the simulation hypothesis because it includes effectively everything.
It will not help you infer anything else though.
More below:
http://www.amirrorclear.net/academic/ideas/simulation/index.html
How many years will pass before transformative AI is built? Three people who have thought about this question a lot are Ajeya Cotra from Open Philanthropy, Daniel Kokotajlo from OpenAI, and Ege Erdil from Epoch. Despite each spending at least hundreds of hours investigating this question, they still disagree substantially about the relevant timescales. For instance, here are their median timelines for one operationalization of transformative AI:
| | Median estimate for when 99% of currently fully remote jobs will be automatable |
|---|---|
| Daniel | 4 years |
| Ajeya | 13 years |
| Ege | 40 years |
You can see the strength of their disagreements in the graphs below, where they give very different probability distributions over two questions relating to AGI development (note that these graphs are very rough and are only intended to capture high-level differences, and especially aren't very...
I've updated toward the views Daniel expresses here and I'm now about halfway between Ajeya's views in this post and Daniel's (in geometric mean).
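For intuition, using the medians from the table above (a rough back-of-the-envelope of mine, not the commenter's own numbers), halfway in geometric mean between Daniel's 4 years and Ajeya's 13 years would be:

```python
# "Halfway in geometric mean" between two median timelines (illustrative
# arithmetic only, using the 4-year and 13-year medians quoted above).
halfway = (4 * 13) ** 0.5
print(f"{halfway:.1f} years")  # ~7.2 years
```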
What were the biggest factors that made you update? (I obviously have some ideas, but curious what seemed most important to you.)
Crossposted from my personal blog. I was inspired to cross-post this here given the discussion elicited by this post on the role of capital in an AI future.
When discussing the future of AI, I semi-often hear an argument along the lines that in a slow-takeoff world, even as AIs automate ever more of the economy, humanity will remain in the driving seat because of its ownership of capital. On this picture, humanity effectively becomes a rentier class living well off the vast productivity of the AI economy: despite contributing little to no value, it can extract most or all of the surplus value created, due to its ownership of capital alone.
This is a possibility, and indeed is perhaps closest to what a ‘positive singularity’ looks...
This is only true if you restrict "nobility" to Great Britain and only count as "nobles" those who are considered such in the present day. That conflates the current British noble titles (specifically, members of the Peerage of Great Britain) with the land-owning rentier class that existed before the industrial revolution. For our discussion, we need to look at the second one.
I do not have specific numbers for the UK, but quoting for Europe from Wikipedia (https://en.wikipedia.org/wiki/Nobility#Europe):
"The countries with the highest proportion of nobles ...