Hello! I'm an engineer here who loves good food.
The other day I was given a box of assorted cookies. Please don't think "oh, just cookies": these are from Murakami Kaishindo (村上開新堂), a long-established Western confectionery in Kojimachi, Tokyo, where an order takes months to arrive and you need an introduction from an existing customer before you can buy at all.
* Not to be confused with the shop of the same name in Kyoto.
Even though I had just eaten, I ended up trying all 27 varieties (they say sweets go into a separate stomach, though that probably depends on the person). What surprised me is how easy they are to eat and how hard they are to tire of. I imagine the shop has kept its simple, honest flavors while quietly refining them to suit each era. Murakami Kaishindo's history begins in 1868, the first year of the Meiji era. Looking at the history of Japanese confectionery around that time:
- 1868 (Meiji 1): Murakami Mitsuyasu (村上光保) is ordered to learn Western confectionery-making as part of national policy, which marks the beginning of Murakami Kaishindo's history
- 1875 (Meiji 8): Yonezu Fugetsudo begins full-scale, machine-made biscuit production
- 1899 (Meiji 32): Morinaga Taichiro starts making candy in Akasaka, Tokyo (the forerunner of Morinaga & Co.)
- 1910 (Meiji 43): Fujiya opens in Motomachi, Yokohama
- 1916 (Taisho 5): Tokyo Seika is established (becomes Meiji Seika in 1924)
- 1921 (Taisho 10): Glico is founded
- 1924 (Taisho 13): Otake Seika and Tokyo Kashi Seika merge, and the company is renamed Meiji Seika
Source: Timeline of Japanese Confectionery History (日本のお菓子歴史年表) | お菓子何でも情報館
From the latter half of the Meiji era into the early Taisho era, many of today's major confectionery makers made their debut.
It was practically a Cambrian explosion of confectionery companies Σ(￣ロ￣) Who knew!!
...
Recap of the previous post
Last time we looked at how dataflow computation works in TensorFlow. Along the way, terms you may not hear every day came up: steepest descent, training, loss, and so on. These are terms common to machine learning as a whole. Deep learning is a kind of machine learning, and you can't really understand deep learning without some machine learning background. So this time I'd like to sort out the terminology and mechanics that machine learning methods share.
Introduction: what is machine learning for?
All around us are things whose processing steps are unclear, things that somehow just work without us knowing why. At the supermarket, we somehow recognize the apples on the shelf as apples. With practice, we somehow become able to ride a bicycle.
Why we can recognize them, or how we end up able to ride, isn't a process we can spell out. Yet we learn, and we act on what we've learned. Machine learning is the field that asks whether a computer, too, can handle phenomena like these, where the reason or the procedure isn't explicit.
For example, modeling the ability to recognize an apple by some method and reproducing it on a computer falls under pattern recognition, one branch of machine learning.
On the other hand, tasks whose processing steps are explicit are easy to automate and streamline. If a person specifies each processing step and programs it so that every step always yields the same result, the computer can automate the whole thing. Most of the systems we rely on in daily life are of this kind: their procedures are explicit.
In short, different purposes call for different processing methods and systems.
Examples of machine learning in use
Arthur Samuel is said to have been the first to define machine learning, as "the field of study that gives computers the ability to learn without being explicitly programmed." That was in 1959. Today it is put to work in a great many areas:
- Spam mail filters
- Product recommendation
- Stock price prediction
- Credit card fraud detection
Particularly hot topics lately include:
- Voice assistants and chatbots
- Smart home devices
- Autonomous driving
Many companies are stepping up their investment in artificial intelligence. Whether we call it AI or Deep Learning isn't really the point. What matters is that all sorts of machines are now connected over networks and beginning to interact with one another, and that machine learning is being used for their control, their user interfaces, and more.
How machine learning works
Having touched on the purpose and the use cases, let's look at how machine learning actually works.
The phrase "machine learning" gives the impression that the machine studies away all on its own, but...
If this were a fairy tale, elves would sneak in at night and do the work for us; in reality, nothing so convenient happens.
Machine learning needs two things: a learner (a program that finds regularities and patterns) and training data to feed into that learner. The regularities and patterns the learner extracts from the training data are called the learned model. We then use this model to produce answers for all sorts of data.
The machine learning process can be split into a training phase and an inference (judgment) phase.
1. Training phase
Feed the training data into the learner and generate a learned model.
2. Inference phase
Use the model generated in the training phase to make judgments about unseen data.
So the basic mechanism, and working definition, of machine learning is this: for a task whose procedure isn't explicit, build a model that can carry out the task from training data, and then use that model to judge data it has never seen.
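Here is a minimal sketch of those two phases in code. This is not the sample program discussed later in this post: the dataset (scikit-learn's bundled digits) and the classifier (logistic regression) are just stand-ins to make the train/predict split visible.

# Minimal sketch of the two phases (illustrative; not the MNIST sample below)
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                               # input data + labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0)

learner = LogisticRegression(max_iter=1000)          # the "learner"
model = learner.fit(X_train, y_train)                # 1. training phase -> learned model
print(model.predict(X_test[:5]))                     # 2. inference phase on unseen data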
Learning approaches in machine learning
The training data used in the training phase also comes in several flavors. Let's classify machine learning by approach.
- Supervised learning: Input data is prepared in pairs with labels (teacher data) saying what each item is, and the pairs are fed to the learner. At first its answers are wrong, but by adjusting some set of parameters it learns to bring its answers closer to the correct ones.
- Unsupervised learning: No teacher data is prepared; only the input data is fed to the learner. The main goal is to form classes (clustering) based on distances, similarity, or statistical properties among the inputs.
- Semi-supervised learning: Preparing teacher data for supervised learning is laborious. Semi-supervised learning trains on both labeled input data and unlabeled input data.
- Reinforcement learning: The learner is given a reward based on some evaluation criterion and learns so as to increase that reward. It's the approach to use when an evaluation only arrives at the end of a whole sequence of actions.
Reinforcement learning can also be seen as something in between, with traits of both supervised and unsupervised learning: there is no fixed correct answer (as in unsupervised learning), and yet there is a reward (being handed something like a correct answer, as in supervised learning).
Reinforcement learning can be used, for example, to acquire strategic knowledge through playing game matches.
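The difference between the first two comes down to whether the training data carries answers. A tiny illustration (iris-style flower measurements, used here only as an example):

# Supervised: each input comes paired with its correct answer (the label)
training_pairs = [
    ([5.1, 3.5, 1.4, 0.2], "setosa"),
    ([7.0, 3.2, 4.7, 1.4], "versicolor"),
]

# Unsupervised: inputs only; the learner has to find structure by itself
unlabeled_inputs = [
    [5.1, 3.5, 1.4, 0.2],
    [7.0, 3.2, 4.7, 1.4],
]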
Categories of machine learning algorithms
So which algorithm should you actually pick when you sit down to do machine learning? According to scikit-learn's algorithm map (its cheat-sheet for choosing an estimator), machine learning algorithms fall into four groups.
- Classification: given input data, returns the data's attribute or class. The prediction target is a category.
- Regression: given input data, outputs a value. The prediction target is a number.
- Clustering: returns a grouping of the supplied data.
- Dimensionality reduction: summarizes multivariate data. Strictly speaking, it transforms the data into a lower-dimensional form while losing as little of the original information as possible.
Example (classification): deciding whether an e-mail is spam
Example (regression): predicting a new restaurant's electricity consumption from data on existing restaurants
Example (clustering): grouping customer profiles from purchase data in some meaningful way and discovering their characteristics and tendencies
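To make the four groups concrete, here is one representative scikit-learn estimator per group. This is only a sketch: these particular estimators are my own picks for illustration, not necessarily the ones the scikit-learn map would route you to.

# One illustrative estimator per family, on the bundled iris data
from sklearn.datasets import load_iris
from sklearn.svm import SVC                         # classification
from sklearn.linear_model import LinearRegression   # regression
from sklearn.cluster import KMeans                  # clustering
from sklearn.decomposition import PCA               # dimensionality reduction

X, y = load_iris(return_X_y=True)

print(SVC().fit(X, y).predict(X[:3]))                                 # predicts categories
print(LinearRegression().fit(X[:, :3], X[:, 3]).predict(X[:3, :3]))   # predicts numbers
print(KMeans(n_clusters=3).fit_predict(X)[:10])                       # returns group labels
print(PCA(n_components=2).fit_transform(X).shape)                     # 4 features -> 2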
The more features you have, the more parameters there are to tune, and the higher the risk of overfitting. Overfitting means the training becomes so specialized to the training data that the model actually loses its ability to generalize to unseen data.
Conversely, the state where the model hasn't yet properly captured the data is called underfitting. Note that overfitting and underfitting sit in a trade-off with each other.
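You can get a feel for overfitting with a few lines of NumPy. This is a made-up toy setup: ten noisy points from a straight line, fitted with a degree-1 and a degree-9 polynomial; the flexible fit hugs the training points but tends to do worse on fresh data.

# Toy overfitting demo (illustrative): compare test error of a simple vs. a flexible fit
import numpy as np

rng = np.random.RandomState(0)
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(scale=0.1, size=10)   # truly linear, plus noise
x_test = np.linspace(0, 1, 100)
y_test = 2 * x_test                                      # noiseless "unseen" data

for degree in (1, 9):
    coefs = np.polyfit(x_train, y_train, degree)
    test_error = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    print(degree, round(test_error, 4))   # the degree-9 fit usually shows the larger test error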
That covers the approaches and categories of machine learning, each with its own rough share of real-world use and its own handful of representative algorithms.
Deep learning algorithms
You may have noticed that nothing specific to deep learning has come up yet. Deep learning also comes in many varieties; here are five of the main ones.
- CNN (Convolutional Neural Network): a feed-forward network containing convolution layers and pooling layers, used above all for image recognition.
- RNN (Recurrent Neural Network): sequence data such as speech, language, and video carries meaning in the ordering (context) of its elements; RNNs are neural networks built to handle that kind of sequence data.
- AE (AutoEncoder): a neural network whose goal is to acquire features that represent the data well. It is used, for example, in pre-training to obtain initial values for the weights of a deep network.
- DBM (Deep Boltzmann Machine): a neural network whose connections between units are bidirectional. Because the network's behavior can be described probabilistically, it is used as a generative model of data.
- DQN (Deep Q-Network) / deep reinforcement learning: an algorithm from Google's subsidiary DeepMind that combines deep learning with reinforcement learning. The Go AI AlphaGo, for example, presumably learns by treating how much each chosen move improved or worsened its position as a quantitative reward to be increased.
There are other ways to slice all of this. Here I simply hope to convey that supervised learning is the most commonly used approach, and how the representative algorithms map onto the approaches above.
Sample code: pulling the pieces of machine learning together
Let's use a small program to tie the basics together. MNIST, often called the "Hello World" of machine learning algorithms, has sample source code on GitHub (mnist_softmax.py).
MNIST is a dataset of handwritten digit images of the numbers 0 through 9, and it includes label data saying which digit each image represents.
Running it:
$ python mnist_softmax.py
prints the following result:
0.9167
What does this number mean? Let's look at the source code.
"""A very simple MNIST classifier. See extensive documentation at http://tensorflow.org/tutorials/mnist/beginners/index.md """ from __future__ import absolute_import from __future__ import division from __future__ import print_function # 1.Import data from tensorflow.examples.tutorials.mnist import input_data import tensorflow as tf flags = tf.app.flags FLAGS = flags.FLAGS flags.DEFINE_string('data_dir', '/tmp/data/', 'Directory for storing data') mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True) sess = tf.InteractiveSession() # 2.Create the model x = tf.placeholder(tf.float32, [None, 784]) W = tf.Variable(tf.zeros([784, 10])) b = tf.Variable(tf.zeros([10])) y = tf.nn.softmax(tf.matmul(x, W) + b) # 3.Define loss and optimizer y_ = tf.placeholder(tf.float32, [None, 10]) cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1])) train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) # 4.Train tf.initialize_all_variables().run() for i in range(1000): batch_xs, batch_ys = mnist.train.next_batch(100) train_step.run({x: batch_xs, y_: batch_ys}) # 5.Test trained model correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) print(accuracy.eval({x: mnist.test.images, y_: mnist.test.labels}))
The docstring calls this "a very simple MNIST classifier." It's a short piece of code, but a lot of important ideas are packed into it, so let's take it apart.
The rough flow of the processing
The code performs a training phase and then an evaluation phase, just like machine learning in general. The overall flow breaks down as follows.
Key points
Let's go over the key points at each stage of the processing.
Note: explanations of the equations, the functions, and neural networks themselves are largely omitted here. I hope to cover them in more detail, step by step, in later posts.
1. Data preparation
The sample uses a separate file, input_data.py, to manage the MNIST data. This file downloads the data split into three parts:
- train
- validation
- test
In machine learning it is important to keep training data and test data separate. If you evaluate with the very data you used to build the model, the measured accuracy says nothing about how well the model generalizes. That's why separate train and test sets are prepared.
The validation set isn't used in this sample code; it's data for tuning parameters further.
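As a quick sanity check of the three splits, you can print their shapes. This is just a sketch; the sizes in the comments are the defaults that input_data.py happens to use, so treat them as an assumption rather than something fixed.

from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('/tmp/data/', one_hot=True)
print(mnist.train.images.shape)        # e.g. (55000, 784): used to fit the model
print(mnist.validation.images.shape)   # e.g. (5000, 784): for extra tuning (unused in this sample)
print(mnist.test.images.shape)         # e.g. (10000, 784): held out for the final evaluation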
2. Building the model
The mathematical model built in this sample takes the weighted sum of the inputs (the "inner products of nodes and edges"), adds a bias, and feeds the result to the softmax function. It's a feed-forward network.
W: weights, x: input (training) data, b: bias
How they are combined is written out in the formula below.
The return value of the softmax function, y here, is a probability value such as 0.87666..., so adding up y1, y2, y3, ... always gives exactly 1.
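Written out as formulas (my reconstruction of what the model-building code expresses; x is one 784-dimensional image vector and there are 10 output classes):

y = \mathrm{softmax}(xW + b), \qquad \mathrm{softmax}(z)_i = \frac{e^{z_i}}{\sum_{j=1}^{10} e^{z_j}}, \qquad \sum_{i=1}^{10} y_i = 1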
3. Defining the loss and the optimizer
Cross-entropy is used as the loss.
y: the predicted probability distribution, y': the true distribution
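As a formula, this is the quantity that the -tf.reduce_sum(y_ * tf.log(y), ...) line computes for each sample (with y_ playing the role of y'):

H_{y'}(y) = -\sum_i y'_i \log y_i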
The optimizer is the gradient descent algorithm, and the 0.5 in the sample is the learning coefficient. For multi-class classification the cross-entropy function is the usual choice of loss, while regression problems use the squared-error function.
Training a feed-forward network means minimizing, with respect to the network's parameters, an error function computed on the training data. In other words, the goal of training is to find the network parameters that minimize the chosen error function (that is, the loss function).
A local minimum is found by starting from some initial values and updating the parameters iteratively. The simplest such iterative method (among minimization methods for nonlinear functions) is gradient descent.
Gradient descent updates the network's parameters by nudging them, a little at a time, in the direction of the negative gradient (vector). The constant that sets the size of each nudge is the learning coefficient (learning rate).
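As an update rule this is usually written as follows (a standard formulation rather than something taken from the sample itself; θ stands for the parameters W and b, L for the loss, and η for the learning rate, i.e. the 0.5 passed to GradientDescentOptimizer):

\theta^{(t+1)} = \theta^{(t)} - \eta \, \nabla_{\theta} L\left(\theta^{(t)}\right), \qquad \eta = 0.5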
4. Training the model
This sample uses mini-batch training, grabbing 100 randomly chosen data points at a time; that number (100) is the batch size. The ways of feeding training data in machine learning fall into three patterns (sketched in code right after this list).
- Batch learning: computes each update from all of the training samples. The error over the samples is measured with the cross-entropy function or the squared-error function.
- Online learning (sequential learning): feeds the samples one at a time: update the network with one sample, then evaluate the next sample with the updated network.
- Mini-batch learning: splits the training set into small subsets and uses one subset per update; a middle ground between batch and online learning. The computation can be parallelized, which makes the numerical work more efficient.
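Roughly, the three differ only in how many samples go into one update. A hypothetical sketch (update() here is a stand-in for a single gradient step; it is not part of the sample code):

import numpy as np

def update(params, X_part, Y_part):
    # stand-in for one gradient-descent step computed on the given samples
    return params

X, Y = np.random.rand(1000, 784), np.random.rand(1000, 10)   # dummy data
params = np.zeros((784, 10))

# Batch learning: one update uses every sample
params = update(params, X, Y)

# Online learning: one update per sample
for i in range(len(X)):
    params = update(params, X[i:i+1], Y[i:i+1])

# Mini-batch learning: one update per subset (batch size 100, as in the sample)
for start in range(0, len(X), 100):
    params = update(params, X[start:start+100], Y[start:start+100])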
5. Evaluating the model
The model is evaluated by comparing its predictions against the correct answers and computing the accuracy.
That evaluation result then serves as the yardstick for changing and improving the model and the data.
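The 0.9167 printed at the beginning is exactly this figure: the accuracy on the test set, i.e. about 91.7% of the test images classified correctly. In NumPy terms, the last three lines of the sample compute roughly the following (a restatement for clarity, not the original code):

import numpy as np

def accuracy(y_pred, y_true):
    # fraction of samples where the predicted class (argmax of the softmax output)
    # matches the labeled class (argmax of the one-hot label)
    return float(np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_true, axis=1)))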
In closing
Systems that make use of machine learning are only going to become a more familiar part of our lives.
Deep learning is one kind of machine learning. Once you have a picture of why machine learning is needed and how it works, understanding deep learning should go much more smoothly.
On a completely different note: we recently expanded our offices. In addition to the existing Hibiya office, the new one is near Shimbashi and Uchisaiwaicho stations. It has an event space, and to put that shiny new space to use... that's right... let's hold a study session.
Study session overview
For machine learning centered on Deep Learning, aimed at people who
- aren't sure what it can actually do
- want to try building things but don't know where to start
- have built a little but can't quite get it to work well
we'd be happy if the session helps your understanding and your development move forward, even a little, from where they are today.
This time there will also be lightning talks: war stories on specific topics, and a startup founder sharing their plans.
Date: Thursday, July 28, 2016
Venue: Saiwai Building 8F, 1-3-1 Uchisaiwaicho, Chiyoda-ku, Tokyo
- 18:45 - 19:30 Registration
- 19:30 - 20:45 Study session
- 20:45 - 21:45 Social gathering
The venue building cannot be entered after 20:00, so please take care if you are arriving late.
References
深層学習 (機械学習プロフェッショナルシリーズ) ("Deep Learning", Machine Learning Professional Series)
機械学習と深層学習 ―C言語によるシミュレーション― ("Machine Learning and Deep Learning: Simulation in C")
初めてのディープラーニング ("Your First Deep Learning")
ITエンジニアのための機械学習理論入門 ("An Introduction to Machine Learning Theory for IT Engineers")
フリーソフトではじめる機械学習入門 ("An Introduction to Machine Learning with Free Software")
TensorFlow, the TensorFlow logo and any related marks are trademarks of Google Inc.
Announcement
We're looking for people to come work with us.