A post on はてな匿名ダイアリー (Hatena Anonymous Diary) arguing that Japanese users tend to see something divine in generative AI, whereas Westerners pragmatically treat it as a mere tool, recently became a topic of discussion. As it happens, Daron Acemoglu makes a related point in an interview with the Richmond Fed's Econ Focus (H/T Mostly Economics): illusions of this kind date back to the dawn of the AI field, and they are getting in the way of regulating AI from an economics standpoint.
EF: Arguments for regulating AI along economic lines seem uncommon now. More usually, one sees arguments about AI and alignment*1, about AI and long-term threats.
Acemoglu: Those arguments really confuse the debate. I'm not worried about artificial general intelligence coming and taking over humanity.
EF: Why do you think economic policy arguments about AI aren't more salient?
Acemoglu: There are many reasons. I think one of them is Hollywood and science fiction. I love science fiction, don't get me wrong, but it has conditioned us to think about the scenario in which the machines become humanlike and compete against humans.
But second, even more importantly (and this is, to me, a foundational mistake in the AI community, going back to Turing's work and to the [1956] Dartmouth Conference on AI*2), it was a mistake framing the objective as machines being intelligent, developing humanlike capabilities, doing better than humans. I think we should have framed the question from the beginning as a machine that's useful. We don't want machine intelligence in itself; we want machines that are useful to us, having some high-level capabilities and functions.
Today, still, the way you get status in AI research is by achieving humanlike capabilities. On top of that prestige, the biggest sources of funding right now for engineering, computer science, and AI are companies like Google and Microsoft. Put the two effects together and you have an amazing bias.
And then the third is the economics profession. You know, economists are right: We owe today's prosperity to technology. We would not be 30 times as prosperous as our great-great grandparents who lived 250 years ago if it wasn't for the huge breakthroughs of industrialization, of communication, of improvements in pharmaceuticals, all of these things. Yet that does not imply that technological change is always good for workers or always good for society. So we really need to develop a perspective of how can we harness technology for the better. But if you subscribe to the view that technology is always and everywhere good, it's like a sin to ask questions about regulation of technology within the economics profession. And if you put that together with the ideological disposition of the AI community, I think you get the current picture.
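As a quick back-of-the-envelope check (my own arithmetic, not a figure given in the interview), being roughly 30 times as prosperous as people living 250 years ago implies an average compound growth rate of about 1.4 percent per year:

\[
(1+g)^{250} = 30 \quad\Rightarrow\quad g = 30^{1/250} - 1 \approx 0.0137 \approx 1.4\%\ \text{per year}
\]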
*1: cf. AI alignment - Wikipedia.
*2: cf. ダートマス会議 - Wikipedia.