(Cross-post from https://amistrongeryet.substack.com/p/are-we-on-the-brink-of-agi, lightly edited for LessWrong. The original has a lengthier introduction and a bit more explanation of jargon.)
No one seems to know whether transformational AGI is coming within a few short years. Or rather, everyone seems to know, but they all have conflicting opinions. Have we entered what will in hindsight be seen as not even the early stages, but the middle stage, of the mad tumbling rush into singularity? Or are we just witnessing the exciting early period of a new technology, full of discovery and opportunity, akin to the boom years of the personal computer and the web?
AI is approaching elite skill at programming, possibly barreling into superhuman status at advanced mathematics, and only picking up speed. Or so the framing goes. And...
Thanks for the mention, Thane. I think you make excellent points, and I agree with all of them to some degree. Yet I'm expecting huge progress in AI algorithms to be unlocked by AI researchers.
I'll quote from my comments on .
...How closely are they adhering to the "main path" of scaling existing techniques with minor tweaks? If you want to know how a minor tweak affects your current large model at scale, that is a very compute-heavy, researcher-time-light type of experiment. On the other hand, if you want to test many novel paths at much smaller scales
This is a follow-up to last week's D&D.Sci scenario: if you intend to play that, and haven't done so yet, you should do so now before spoiling yourself.
There is a web interactive here you can use to test your answer, and generation code available here if you're interested, or you can read on for the ruleset and scores.
The following encounters existed:
| Encounter Name | Threat (Surprised) | Threat (Alerted) | Alerted By | Tier |
|---|---|---|---|---|
| Whirling Blade Trap | -- | 2 | -- | 1 |
| Goblins | 1 | 2 | Anything | 1 |
| Boulder Trap | -- | 3 | -- | 2 |
| Orcs | 2 | 4 | Anything | 2 |
| Clay Golem | -- | 4 | -- | 3 |
| Hag | 3 | 6 | Tier 2 and up | 3 |
| Steel Golem | -- | 5 | -- | 4 |
| Dragon | 4 | 8 | Tier 3 and up | 4 |
Each encounter had a Threat that determined how dangerous it was to adventurers. When adventurers encountered it, they would roll [Threat]d2 to determine how challenging they found it.
However, many encounters had two different Threat levels, depending on whether they were alerted to the adventurers or not. (A dragon that's woken up from...
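To make the mechanic concrete, here's a minimal Python sketch of that roll (the function name and example values are my own illustration, not the published generation code):

```python
import random

def challenge_roll(threat: int) -> int:
    """Roll [Threat]d2: the sum of `threat` dice, each showing 1 or 2."""
    return sum(random.randint(1, 2) for _ in range(threat))

# Example: per the table, a Hag has Threat 3 when surprised and 6 when
# alerted, so alerting her roughly doubles the expected challenge (4.5 -> 9).
surprised_challenge = challenge_roll(3)
alerted_challenge = challenge_roll(6)
```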
I think puzzling out the premise could have been a lot more fun if we hadn't known the entry and exit squares going in.
I think this would have messed up the difficulty curve a bit: telling players 'here is the entrance and exit' is part of what lets 'stick a tough encounter at the entrance/exit' be a simple strategy.
The writing was as fun and funny as usual - if not more so! - but seemed less . . . pointed?/ambitious?/thematically-coherent? than I've come to expect.
This is absolutely true, though I'm surprised it's obvious: my originally-planned scenario did...
In my first piece, I settled on conceiving radical empathy as an object view. I highlight some other potential moral implications of object views:
I illustrate and defend actualist object views as my conception of radical empathy: being concerned with exactly what we would actually care about. Because this is a kind of asymmetric person-affecting view, its most important implication for cause prioritization is probably lower priority for extinction risk reduction relative to total utilitarianism.
Fatebook is a website that makes it extremely low friction to make and track predictions.
It's designed to be very fast - just open a new tab, go to fatebook.io, type your prediction, and hit enter. Later, you'll get an email reminding you to resolve your question as YES, NO, or AMBIGUOUS.
It's private by default, so you can track personal questions and give forecasts that you don't want to share publicly. You can also share questions with specific people, or publicly.
Fatebook syncs with Fatebook for Slack - if you log in with the email you use for Slack, you’ll see all of your questions on the website.
As you resolve your forecasts, you'll build a track record - Brier score and Relative Brier score - and see your calibration chart. You can...
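For reference, the Brier score is the standard mean squared error between your stated probabilities and the 0/1 outcomes. A minimal sketch (my own illustration, not Fatebook's actual code):

```python
def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between probability forecasts and binary outcomes.
    0.0 is perfect; always answering 0.5 scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Three questions, forecast at 80%, 30%, 60%, resolved YES, NO, YES:
print(brier_score([0.8, 0.3, 0.6], [1, 0, 1]))  # ~0.097
```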
Interesting! I think the problem is that dense/compressed information can be represented in ways in which it is not easily retrievable by a certain decoder. The Standard Model written in Chinese is a very compressed representation of human knowledge of the universe, and completely inscrutable to me.
Or take some maximally compressed code and pass it through a permutation. The information content is obviously the same, but it is illegible until you reverse the permutation.
In some ways it is uniquely easy to do this to codes with maximal entropy because per definit...
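As a toy demonstration of the permutation point (a sketch only; any compressed byte string would do, here via zlib):

```python
import random
import zlib

data = zlib.compress(b"some perfectly legible text, " * 20)

# Scramble the compressed bytes with a random permutation.
perm = list(range(len(data)))
random.shuffle(perm)
scrambled = bytes(data[i] for i in perm)
# `scrambled` contains exactly the same bytes (same information content)
# as `data`, but zlib.decompress(scrambled) would fail: it is illegible.

# Invert the permutation and the content is recoverable again.
inverse = [0] * len(perm)
for new_pos, old_pos in enumerate(perm):
    inverse[old_pos] = new_pos
restored = bytes(scrambled[inverse[i]] for i in range(len(data)))
assert restored == data and zlib.decompress(restored)
```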
Epistemic Status: This post is an attempt to condense some ideas I've been thinking about for quite some time. I took some care grounding the main body of the text, but some parts (particularly the appendix) are pretty off the cuff, and should be treated as such.
The magnitude and scope of the problems related to AI safety have led to an increasingly public discussion about how to address them. Risks of sufficiently advanced AI systems involve unknown unknowns that could impact the global economy, national and personal security, and the way we investigate, innovate, and learn. Clearly, the response from the AI safety community should be as multi-faceted and expansive as the problems it aims to address. In a previous post, we framed fruitful collaborations between applied...
Thanks for the comment! I do hope that the thoughts expressed here can inspire some action, but I'm not sure I understand your questions. Do you mean 'centralized', or are you thinking about the conditions necessary for many small scale trading zones?
In this way, I guess the emergence of big science could be seen as a phase transition from decentralization -> centralization.
Something like a crux here is that I believe trajectories non-trivially matter for which end-points we get. I don't think it's like entropy, where we can easily determine the end-point without considering the intermediate trajectory; I genuinely think some path-dependence is present in history. That's why, even if I were far more charitable towards communism, I don't think this was ever defensible:
...[...] Marx was philosophically opposed, as a matter of principle, to any planning about the structure of communist governments or economies. He