A series in which Menhera-chan casually walks through the latest deep learning papers! This post collects the slides I posted on Twitter! Thumbnail image / Text extracted from the slides (for search engines): Learning the Latest Deep Learning Papers with Menhera-chan. Produced by: Ryobot. Introduction. Author: • Ryobot (ryobot) • Second-year master's student at NAIST, at RIKEN AIP (2017/7~) • Researching personality and diversity in chatbots • Introduces favorite papers on Twitter @_Ryobot. Slide overview: • Menhera-chan casually discusses the latest papers • The field is mainly natural language processing (machine translation and language understanding) • A collection of the slides posted on Twitter. Menhera-chan: • A set of original LINE stickers • Released by their creator as free ma…
Hello, Ryobot (ryobot) here! This paper proposes the Transformer, a neural machine translation model that uses only attention, with no RNNs or CNNs! It achieves an overwhelming state of the art with remarkably little training, living up to its title in style! The paper also generalizes attention into a very simple formula and classifies it into additive attention, dot-product attention, source-target attention, and self-attention. Of these, self-attention is an especially general and powerful technique that can be carried over to nearly any other neural network! Its WMT'14 BLEU scores take first place: English-French 41.0 and English-German 28.4. Attention Is All You Need [Łukasz Kaiser et al., arXiv, 2017/06] Transformer: A Novel Neural Network Architecture f…
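The attention formula the paper generalizes is easy to state in code. Below is a minimal NumPy sketch of scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V, with self-attention obtained by letting queries, keys, and values all come from the same sequence; the shapes and names are illustrative, not the authors' reference implementation.

```python
# Minimal sketch of scaled dot-product (self-)attention, NumPy only.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (len_q, len_k) similarity matrix
    weights = softmax(scores, axis=-1)   # each query's distribution over keys
    return weights @ V                   # weighted sum of value vectors

# Self-attention: queries, keys, and values all come from the same sequence.
x = np.random.randn(5, 64)               # 5 tokens, d_model = 64
out = scaled_dot_product_attention(x, x, x)
print(out.shape)                          # (5, 64)
```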
A method using back-translation has pulled off an astonishing feat*1! "Back-translation is so strong, the scores it's hammering out shocked me! If I recall correctly, it's the biggest performance gain since attention https://t.co/ssaQw2s22f Deep learning is fun because simple methods suddenly blow everything away" pic.twitter.com/RwyrjCn8Rx — Ryobot | ryobot (@_Ryobot) November 15, 2018. It took first place in the human evaluation at the shared task*2 of WMT18, the international machine translation conference held every year, and on the machine translation benchmark it achieves 35.0 against a previous best score of 29.8! The figure below compares methods on the machine translation benchmark*3. The Transformer*4, the translation model that appeared last year, raised evaluation scores considerably, but back-translation brings an even larger ga…
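For readers unfamiliar with the recipe: back-translation augments the parallel training data by translating monolingual target-side text back into the source language with a reverse-direction model, keeping the clean human-written text on the target side. The sketch below illustrates only that data-generation step; `reverse_model.translate` and the trailing `train` call are hypothetical stand-ins, not a real API.

```python
# Sketch of the back-translation data-generation step (hypothetical interfaces).

def back_translate(monolingual_tgt, reverse_model):
    """Create synthetic (source, target) pairs from target-only sentences."""
    synthetic_pairs = []
    for tgt_sentence in monolingual_tgt:
        # The reverse model translates target -> source; its output may be
        # noisy, but the human-written target side keeps the pair useful.
        src_sentence = reverse_model.translate(tgt_sentence)
        synthetic_pairs.append((src_sentence, tgt_sentence))
    return synthetic_pairs

# Training then proceeds on the mixture of real and synthetic data:
# train(model, real_parallel + back_translate(mono_tgt, reverse_model))
```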
Advances in technology have made it possible even for non-specialists to build translation programs using neural networks. That said, it is hard for someone with no background to understand how they work. So the writer Samuel Lynn-Evans researched the topic on his own, built a translation program from zero, and explains the mechanism he came to understand along the way without using any equations. Found in translation: Building a language translator from scratch with deep learning https://blog.floydhub.com/language-translator/ Language is extremely complex, and machine translation used to require many specialists. With the development of artificial intelligence (AI), however, machine translation has become possible even without such experts. Until now, specialist…
Just last week there was astonishing progress in machine translation! "Unsupervised machine translation has evolved unbelievably fast, it stunned me! Isn't improving the BLEU score from 15 to 25 in just half a year a breakthrough? https://t.co/SVQlYYu2Pt Machine translation of this quality from unsupervised learning is genuinely moving! Let me talk about it for a bit!" pic.twitter.com/fBllGtTkgb — Ryobot | ryobot (@_Ryobot) April 23, 2018. In short, unsupervised learning can now match the performance that supervised machine translation had only a short while ago. In this article I want to explain the almost magical mechanism of unsupervised machine translation so that even beginners who know nothing about machine translation can follow it. The limits of supervised learning: machine translation is one of the fields that progressed most dramatically once deep learning was applied. Google Translate introduced neural machine translation…
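As rough orientation before the full explanation: the unsupervised systems in this line of work typically combine two ingredients, denoising autoencoding on monolingual text and iterative back-translation between two direction-specific models. The sketch below illustrates that training loop under assumed interfaces; every function here is a hypothetical stand-in, not any particular paper's code.

```python
# Sketch of one unsupervised-NMT training step (hypothetical interfaces).

def training_step(model_s2t, model_t2s, mono_src, mono_tgt, noise):
    # 1) Denoising autoencoding: reconstruct each sentence from a corrupted
    #    copy, which teaches each model the structure of its language.
    loss = model_s2t.reconstruct(noise(mono_src), mono_src)
    loss += model_t2s.reconstruct(noise(mono_tgt), mono_tgt)

    # 2) Iterative back-translation: each model translates monolingual text,
    #    and the opposite-direction model learns to translate it back.
    pseudo_tgt = model_s2t.translate(mono_src)
    loss += model_t2s.train_on(pseudo_tgt, mono_src)
    pseudo_src = model_t2s.translate(mono_tgt)
    loss += model_s2t.train_on(pseudo_src, mono_tgt)
    return loss
```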
2. Self-introduction • Researcher, Japan Science and Technology Agency (JST) – Japanese-Chinese / Chinese-Japanese machine translation commercialization project (FY2013-2017) • Chair, NLP Young Researchers' Association (YANS) 2017 http://yans.anlp.jp – 3/14 YANS meetup @ Akihabara • Tickets still available! – 8/27-29 (planned) 12th YANS Symposium @ ??? • Sponsor recruitment planned (please consider it!) • AMC Diamond member 2 3. Very helpful references • If you're wondering what deep learning even is: – https://www.slideshare.net/yutakikuchi927/deep-learning-26647407 • If you'd rather read English materials than Japanese: – https://sites.google.com/site/acl16nmt/ – https://arxiv.org/abs…
This is the day-17 article of the DeepLearning Advent Calendar 2016. Introduction: Nice to meet you. I'm @eve_yk, an engineer at a company called Liaro. Only a little of this year remains, so let's look back at this year's major deep learning results. This article is a broad and shallow summary of deep learning research published in 2016; I hope it helps you survey this year's progress in deep learning research. For each item I'll briefly summarize the key points and my impressions. Research I found especially important is marked with a ★. The post became very long, so please feel free to read only the fields that interest you. Excuses and requests: where I found code I link to it, so let's call this a programming-related article… The selection of fields is quite bias…
ByteNet is a neural network for machine translation proposed in "Neural Machine Translation in Linear Time", a paper posted by DeepMind on 2016/10/31. Like WaveNet, which was proposed as a neural network for speech synthesis, it introduces dilation, which lets it learn correlations across distant time steps. Furthermore, its training time is linear in the length of the sentence, so it is said to be comparatively fast. Overview of ByteNet: it is a neural network for the translation task. Writing the source-language string as ${\bf s}=s_0,\dots,s_{N_s}$ and the target-language string as ${\bf t}=t_0,\dots,t_{N_t}$, the task is to estimate $p({\bf t}|{\bf s})$. ByteNet models this probability distribution…
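To make the dilation idea concrete, here is a minimal PyTorch sketch of a stack of dilated 1D convolutions whose dilation rate doubles per layer, so the receptive field grows exponentially with depth. This shows only the mechanism ByteNet shares with WaveNet, not the paper's full architecture, which also adds residual blocks and causal masking on the decoder side.

```python
# Minimal dilated-convolution stack: dilation 1, 2, 4, 8 over four layers.
import torch
import torch.nn as nn

class DilatedStack(nn.Module):
    def __init__(self, channels=64, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([
            # padding == dilation keeps the sequence length unchanged
            # for kernel_size=3: out = in + 2*d - d*(3-1) = in
            nn.Conv1d(channels, channels, kernel_size=3,
                      dilation=2 ** i, padding=2 ** i)
            for i in range(n_layers)
        ])

    def forward(self, x):                  # x: (batch, channels, seq_len)
        for conv in self.layers:
            x = torch.relu(conv(x))
        return x

x = torch.randn(1, 64, 100)                # a 100-step sequence
print(DilatedStack()(x).shape)             # torch.Size([1, 64, 100])
```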
Posted by Quoc V. Le & Mike Schuster, Research Scientists, Google Brain Team. Ten years ago, we announced the launch of Google Translate, together with the use of Phrase-Based Machine Translation as the key algorithm behind this service. Since then, rapid advances in machine intelligence have improved our speech recognition and image recognition capabilities, but improving machine translation remains a challenging goal.
Among artificial intelligence (AI) technologies, the one drawing the most attention right now is deep learning. Deep learning is a machine learning method based on neural networks, which take their inspiration from the brain's neural circuitry. By analyzing and learning from vast amounts of image, audio, and text data, an AI can derive responses suited to the situation at hand. In some domains this ability is said to equal, or even already exceed, that of humans. Two things are indispensable for building systems that use deep learning: GPUs (graphics processing units) and a development environment. We visited NVIDIA, the top company in both of these fields, and asked Takeshi Izaki, who spoke at TREND EXPO TOKYO 2016, about the current state of artificial intelligence and the benefits it may bring to people's lives. He graduated from the Department of Materials Engineering, Faculty of Engineering, the University of Tokyo in 1997, and in 1999 completed the metallurgical engineering program at the University of Tokyo's Graduate School of Engineering…
This is the day-5 article of the Deep Learning Advent Calendar! Introduction: everyone, it has been a while. This is olanleed! I have finally become one of those hopeless people who never update their blog outside of Advent Calendars. I keep thinking it's about time to post, but with paper submissions to conferences and journals it has been difficult. This December I signed up for an absurd number of Advent Calendars and lightning talks, so a torrent of updates is coming. Please bear with me! With that, let's get to the main topic. Machine translation with RNNs: now that deep learning is producing major results across many fields, research applying Recurrent Neural Networks (RNNs) to statistical machine translation is also showing strong results. This time I'll cover one translation model that uses an RNN (LSTM): Sequence…
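As context for the article, the Sequence-to-Sequence idea is that an encoder LSTM compresses the source sentence into a fixed-size state, and a decoder LSTM generates the target sentence conditioned on that state. Below is a minimal PyTorch sketch with illustrative hyperparameters and teacher forcing; it is a simplified outline, not the original paper's setup.

```python
# Minimal encoder-decoder (Seq2Seq) translation model with LSTMs.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb=128, hidden=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Encode the whole source sentence; keep only the final (h, c) state.
        _, state = self.encoder(self.src_emb(src_ids))
        # Decode the target sentence from that state (teacher forcing).
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.out(dec_out)           # logits over the target vocabulary

model = Seq2Seq(src_vocab=1000, tgt_vocab=1000)
src = torch.randint(0, 1000, (2, 7))        # batch of 2 source sentences
tgt = torch.randint(0, 1000, (2, 9))        # their target-side inputs
print(model(src, tgt).shape)                # torch.Size([2, 9, 1000])
```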