Ruby

LessWrong Team

 

I have signed no contracts or agreements whose existence I cannot mention.

Sequences

LW Team Updates & Announcements
Novum Organum

Comments

Ruby42

Curated. The wiki pages collected here, despite being written in 2015–2017, remain excellent resources on concepts and arguments for key AI alignment ideas (both those still widely used and those now lesser known). Even for concepts/arguments like the orthogonality thesis and corrigibility, I felt a gain in crispness from reading these pages. Other concepts, e.g. epistemic and instrumental efficiency, I didn't have at all, yet they feel useful in thinking about the rise of increasingly powerful AI.

Of course, there's also non-AI content that got imported. The Bayes guide likely remains the best resource for building Bayes intuition, and the guide on logarithms is likewise extremely thorough.

Ruby42

I think the guide should be 10x more prominent in this post.

Ruby20

You should see the option when you click on the triple dot menu (next to the Like button).

Ruby20

So the nice thing about karma is that if someone thinks a wikitag is worthy of attention for any reason (article, tagged posts, importance of concept), they're able to upvote it and make it appear higher.

Much of the current karma comes from Ben Pace and me, who did a pass. Rationality Quotes didn't strike me as a page I particularly wanted to boost up the list, but if you disagree with me, you're able to Like it.

In general, I don't think having a lot of tagged posts should by itself mean a wikitag gets ranked highly. It's a consideration, but I like it flowing through people's judgments about whether or not to upvote the tag.
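
To make the ranking idea concrete, here is a minimal sketch assuming a simple karma-sorted list; the tag names and numbers are made up, and this is not LessWrong's actual implementation:

```python
# Toy model of the ranking described above: wikitags are ordered by
# accumulated karma from members' upvotes, not by tagged-post count.
# All names and numbers below are illustrative assumptions.
wikitags = [
    {"name": "Rationality Quotes", "karma": 2, "tagged_posts": 400},
    {"name": "Orthogonality Thesis", "karma": 40, "tagged_posts": 30},
    {"name": "Corrigibility", "karma": 25, "tagged_posts": 45},
]

# Tagged-post count may inform a voter's judgment, but the sort key
# itself is just karma, so an upvote directly moves a tag up the list.
ranked = sorted(wikitags, key=lambda tag: tag["karma"], reverse=True)
for tag in ranked:
    print(f'{tag["name"]}: karma={tag["karma"]}, posts={tag["tagged_posts"]}')
```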


The categorization is an interesting question. Indeed, currently only admins can do it, and that perhaps requires more thought.

Ruby30

Interesting. Doesn't replicate for me. What phone are you using?

Answer by Ruby197

It's a compass rose, thematic with the Map and Territory metaphor for rationality/truthseeking.

The real question is why does NATO have our logo. 

Ruby179

Curated! I like this post for the object-level interestingness of the cited papers, but also for pulling in some interesting models from elsewhere and generally reminding us that this is something we can do.

In times of yore, LessWrong venerated the neglected virtue of scholarship. And well, sometimes it feels like it's still neglected. It's tough because many domains do have a lot of low-quality work, especially outside the hard sciences, but I'd wager on there being a fair amount worth reading, and I appreciate Buck pointing at a domain where that seems to be the case.

Ruby20

Was the text of the post included in the email, or just a link to it?

Ruby50

Curated. I was reluctant to curate this post because I found myself bouncing off it somewhat due to length – I guess in pedagogy there's a tradeoff between explaining at length (you convey enough info, but lose people) and keeping it brief (people read it, but don't get enough). Based on a private convo, Raemon thinks the length is warranted.

I'm curating because I do think this kind of project is valuable. Every day it feels easier to lose our minds entirely to AI, and I think it's important to remember we can think better or worse, and we should be trying to do the former.

I have mixed feelings about Raemon's project overall. Parts of it feel good, and something feels missing (I'm partial to John Wentworth's claim elsewhere that you need a bunch of technical study in the recipe), but I expect that engaging with the stuff Raemon is developing will be helpful for anyone working to get better at thinking.

Ruby246

This doesn't seem right. Suppose there are two main candidate routes for how to get there, I-5 and J-6 (but who knows, maybe we'll be surprised by a K-7), and I don't know which Alice will choose. If I know there's already a Very General Helper and a Kinda Decent Generalizer, then I might say "I assign 65% chance that Alice is going to choose I-5, and I will try to contribute having conditioned on that." This seems like a reasonable thing to do. It might be for naught, but I'd guess that in many cases the EV of something definitely helpful if we go down I-5 beats the EV of finding something that's helpful no matter the choice.

One should definitely track the major route they're betting on, make updates, and maybe switch, but it seems okay to say your plan is conditioned on some bigger plan.
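
To make the EV comparison concrete, here is a minimal sketch; only the 65% figure comes from the comment above, and the payoff numbers are illustrative assumptions:

```python
# Toy model of the expected-value argument above. The 0.65 is the
# probability assigned to Alice choosing I-5; the payoffs are made up.
p_i5 = 0.65

specialized_value = 10.0  # value of route-specific work if Alice picks I-5
general_value = 5.0       # value of route-agnostic work, whichever route wins

# Route-specific work pays off only if the bet on I-5 is right.
ev_specialized = p_i5 * specialized_value + (1 - p_i5) * 0.0
ev_general = general_value

print(f"EV(condition on I-5): {ev_specialized:.2f}")  # 6.50
print(f"EV(route-agnostic):   {ev_general:.2f}")      # 5.00
```

Under these assumed numbers the conditional bet wins; with a lower probability or a smaller specialized payoff, the route-agnostic work would win instead, which is why tracking the route you're betting on and updating matters.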
