Reply to: A \"Failure to Evaluate Return-on-Time\" Fallacy<\/a><\/p>\n
Lionhearted writes:

> [A] large majority of otherwise smart people spend time doing semi-productive things, when there are massively productive opportunities untapped.
>
> A somewhat silly example: Let's say someone aspires to be a comedian, the best comedian ever, and to make a living doing comedy. He wants nothing else, it is his purpose. And he decides that in order to become a better comedian, he will watch re-runs of the old television cartoon 'Garfield and Friends' that was on TV from 1988 to 1995....
>
> I'm curious as to why.

Why will a randomly chosen eight-year-old fail a calculus test? Because most possible answers are wrong, and there is no force to guide him to the correct answers. (There is no need to postulate a "fear of success"; *most* ways of writing or not writing on a calculus test constitute failure, and so people, and rocks, fail calculus tests by default.)

Why do most of us, most of the time, choose to "pursue our goals" through routes that are far less effective than the routes we could find if we tried?[1] My guess is that here, as with the calculus test, the main problem is that *most* courses of action are extremely ineffective, and that there has been no strong evolutionary or cultural force sufficient to focus us on the very narrow behavior patterns that would actually be effective.

To be more specific: there are clearly at least some limited senses in which we have goals. We: (1) tell ourselves and others stories of how we're aiming for various "goals"; (2) search out modes of activity that are consistent with the role, and goal-seeking, that we see ourselves as doing ("learning math"; "becoming a comedian"; "being a good parent"); and sometimes even (3) feel glad or disappointed when we do/don't achieve our "goals".

But there are clearly also heuristics that would be useful to goal-achievement (or that would be part of what it means to "have goals" at all) that we do *not* automatically carry out. We do *not* automatically:

(a) Ask ourselves what we're trying to achieve;

(b) Ask ourselves how we could tell if we achieved it ("what does it look like to be a good comedian?") and how we can track progress;

(c) Find ourselves strongly, intrinsically curious about information that would help us achieve our goal;

(d) Gather that information (e.g., by asking how folks commonly achieve our goal, or similar goals, or by tallying which strategies have and haven't worked for us in the past);

(e) Systematically test many different conjectures for how to achieve the goals, including methods that aren't habitual for us, while tracking which ones do and don't work;

(f) Focus most of the energy that *isn't* going into systematic exploration, on the methods that work best;

(g) Make sure that our "goal" is really our goal, that we coherently want it and are not constrained by fears or by uncertainty as to whether it is worth the effort, and that we have thought through any questions and decisions in advance so they won't continually sap our energies;

(h) Use environmental cues and social contexts to bolster our motivation, so we can keep working effectively in the face of intermittent frustrations, or temptations based in hyperbolic discounting;

.... or carry out any number of other useful techniques. Instead, we mostly just do things. We act from habit; we act from impulse or convenience when primed by the activities in front of us; we remember our goal and choose an action that *feels associated* with our goal. We do any number of things. But we do not systematically choose the narrow sets of actions that would effectively optimize for our claimed goals, or for any other goals.

Why? Most basically, because humans are only just on the cusp of general intelligence. Perhaps 5% of the population has enough abstract reasoning skill to verbally *understand* that the above heuristics would be useful *once these heuristics are pointed out*. That is not at all the same as the ability to *automatically implement these heuristics*. Our verbal, conversational systems are much better at abstract reasoning than are the motivational systems that pull our behavior. I have enough abstract reasoning ability to understand that I'm safe on the glass floor of a tall building, or that ice cream is not healthy, or that exercise furthers my goals... but this *doesn't* lead to an automatic updating of the reward gradients that, absent rare and costly conscious overrides, pull my behavior.
I can train my automatic systems, for example by visualizing ice cream as disgusting and artery-clogging and yucky, or by walking across the glass floor often enough to persuade my brain that I can't fall through the floor... but systematically training one's motivational systems in this way is *also* not automatic for us. And so it seems far from surprising that most of us have not trained ourselves in this way, and that most of our "goal-seeking" actions are far less effective than they could be.

Still, I'm keen to train. I know people who are far more strategic than I am, and there seem to be clear avenues for becoming far more strategic than they are. It also seems that having goals, in a much more pervasive sense than (1)-(3), is part of what "rational" should mean, will help us achieve what we care about, and hasn't been taught in much detail on LW.

So, to second Lionhearted's questions: does this analysis seem right? Have some of you trained yourselves to be substantially more strategic, or goal-achieving, than you started out? How did you do it? Do you agree with (a)-(h) above? Do you have some good heuristics to add? Do you have some good ideas for how to train yourself in such heuristics?

[1] For example, why do many people go through long training programs "to make money" without spending a few hours doing salary comparisons ahead of time? Why do many who type for hours a day remain two-finger typists, without bothering with a typing tutor program? Why do people spend their Saturdays "enjoying themselves" without bothering to track which of their habitual leisure activities are *actually* enjoyable? Why do even unusually numerate people fear illness, car accidents, and bogeymen, and take safety measures, but not bother to look up statistics on the relative risks? Why do most of us settle into a single, stereotyped mode of studying, writing, social interaction, or the like, without trying alternatives to see if they work better -- even when such experiments as we have tried have sometimes given great boosts?