2017 Personal Retrospective
Main events of the year
- Left my company
- Quit life as a salaryman
- Founded a company (essentially a sole-proprietor-style company of my own)
- Joined BizteX (became Co-founder & CTO)
- Built the product from zero
- Team building
- Product release (beta and official)
- Turned 40
- First order for the product
This year was a turbulent one for me personally.
At my previous job, where I had worked for a long time, I was well paid, but I always felt that could vanish the moment the winds changed. To survive in a Japan that keeps declining, I had been thinking for five or six years about quitting salaried work and living on my own abilities and judgment.
I had helped out at a startup and built prototypes for business ideas of my own, but around November of last year I decided to first set up an LLC that could at least support me as an individual.
Once I had made that decision, I set several goals aimed at building an economic foundation for the next few years.
Goals at the start of the year
- Establish a company
- Improve my development skills
- Work as a full-on programmer for at least the next three years
- Do machine learning
- Do smart contracts
- Do sales
- Do anything and everything
For the development goals, I emphasized absorbing the latest technology as a kind of technical investment.
Also, at my previous job I had done plenty of customer-facing communication, supporting sales and acting as PM on troubled projects, but I figured that making it on my own would require pure sales ability, hence "sales".
And then there was registering the company.
Company establishment
I handled everything for the company's establishment myself, including the registration. I also tried things like filing a trademark application, and my takeaway is that you can get a surprising amount done alone without spending much time. The company is currently dormant, so the establishment itself was arguably wasted effort, but I think it was a good experience.
Development
On the development side I focused on "programming skills", "machine learning", and "smart contracts".
For "programming skills" I did things like:
- Simply writing a lot of code
- Reading books (Readable Code, The Pragmatic Programmer, test automation, and others)
- Code reviews
Before joining BizteX I was heads-down building a web service I had planned on my own, and from about two months before joining I was heads-down developing BizteX cobit (https://www.biztex.co.jp/cobit.html). Since joining BizteX I have been doing code reviews with the other members (both giving and receiving), and that has contributed a lot to leveling up my skills.
For "machine learning", the big step was starting the book "Deep Learning from Scratch", which got me interested in the field; I also read a few other books and took the "Coursera Machine Learning" course that Stanford University offers online. After that I built a few models for fun with libraries like TensorFlow and Keras, but the reality is that product development has eaten my time and I haven't touched it since. Next year I want to build it into the product.
For "smart contracts", I learned about Bitcoin a few years ago, and around the end of last year I learned about another cryptocurrency, Ethereum. I found it fascinating and at the same time thought it was a technology that could change society, so it went on the goal list. Unfortunately I only got as far as reading books and briefly touching Solidity, the language for programming smart contracts. Next year I'll play with it a bit more as a personal hobby.
Join
Well, the biggest event of the year was surely meeting Shimada, the current CEO of BizteX.
At first I worked under a contracting arrangement through my newly founded company and built prototypes to show him, but the technical challenges, how interesting the work was, and Shimada's passion for the business and his character won me over, and I decided to join. As I wrote above, I had wanted to do everything myself to raise my individual abilities, and the deciding factor was that I could come in not as a regular member but as a founder and executive with real discretion.
After joining, I first spent about six months doing nothing but development to bring the product into the world from zero. (I normally love going out drinking, but during those six months I hardly went at all, heh.) After the beta release I kept developing while interviewing customers about usability and doing on-site and email support, and I have also been doing sales support and sales. With that, I think I've managed "sales" and "do anything and everything" to some degree.
Also, I don't believe in fortune-telling at all, but according to my birth date, 2017 was supposed to be a once-in-twelve-years period of great fortune in which I would meet someone who would change the course of my life. I'm starting to believe, just a little, that this person was Shimada.
First order
And the event that made me happiest this year was our first order. I believe the biggest hurdle for a SaaS startup is the first order, and when we won it I was moved beyond words.
To get there, the whole team threw itself into development and sales, solving the users' problems one by one and pushing through the "just one more step" moments in several deals. Winning an order through problem-solving that felt as though our service had been built for that very customer is exactly the role IT is meant to play, and I see it as the first step in proving BizteX's reason to exist.
KPT
The dev team currently runs a KPT retrospective every iteration, so as a personal retrospective I'll do a KPT for 2017. I did this quickly, so only the representative items are listed, but next year I want to focus on the KEEPs and TRYs below and use them to accelerate the business.
KEEP
Development (sodeyama)
- Keep improving my development skills
- Keep reading books
Development (team)
- Code reviews
- KPT every iteration
Sales
- Sales support
- Sales
Management
- Keep one key management metric and revise it frequently
- Schedule by working backwards from the long-term vision
- Referral hiring for development
Problem
Development (sodeyama)
- Weak at reading the code of other products
- My machine learning study has lapsed
- Too little investment in technologies currently unrelated to the product, such as smart contracts
Development (team)
- Want to raise the team's development speed further
- Team stability (in the sense of sustained motivation)
- Improving the product's UX
- Making remote development more efficient
Support
- Scaling customer support: we need support that scales while keeping quality high (and that feeds naturally into customer success)
Sales
- Scaling sales support
Management
- Establishing the company's brand on the technology side
- Future engineer hiring
- Web marketing
- Global expansion
Try
Development (sodeyama)
- Read the code of well-known products
- Build a prototype for bringing machine-learning-based features into the product
- [Hobby] Write experimental contracts in Solidity
Development (team)
- Raise code quality further
- Raise unit test (code) coverage
- Run test cases in parallel on CI
- Further distribute discretion over and responsibility for the product to each team member
- Contract with a UX specialist to improve the actual product's UX
- Make remote development more efficient
Support
- Design support with customer success in mind and introduce the right tools
Sales
- Hire interns for sales support and train support staff
Management
- Open up our internal technology culture: the development process and what we value
- Co-host technical study sessions
- Use several HR services for engineer hiring
- Internationalize the product (planned for the end of 2018)
Machine learning: criteria for choosing SVM
This is week 7 of Stanford's machine learning course.
An SVM (Support Vector Machine) is a learning model that is not a neural network. Whether to use logistic regression or an SVM can be decided roughly as follows.
Let n be the number of features used for learning (for predicting house prices: floor area, number of rooms, nearest station, and so on) and m the number of training examples.
When n is large relative to m
For example, n is 10,000 and m is around 10 to 10,000: use logistic regression or an SVM without a kernel (linear kernel).
When n is small relative to m
For example, n is 1 to 1,000 and m is around 10 to 10,000: use an SVM with a Gaussian kernel. Note: m should be at most around 50,000; beyond that the Gaussian kernel becomes slow.
When n is small and m is extremely large
For example, n is 1 to 1,000 and m is 50,000 up to several million: add more features yourself (collect new ones?), then use logistic regression or an SVM without a kernel (linear kernel).
On using neural networks
A properly built neural network model works well for all of the cases above, but it can be slow to compute, so it is often better to use logistic regression or an SVM where appropriate.
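To make the rule of thumb above concrete, here is a minimal sketch in Python (my own illustration, not code from the course) using scikit-learn; the helper name choose_model and the exact thresholds are assumptions for illustration only.

from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC, SVC

def choose_model(n_features, m_examples):
    # n large relative to m: a linear decision boundary is usually enough
    if n_features >= m_examples:
        return LogisticRegression()    # or LinearSVC(), i.e. SVM without a kernel
    # n small, m up to ~50,000: SVM with a Gaussian (RBF) kernel
    if m_examples <= 50000:
        return SVC(kernel='rbf')
    # n small, m very large: add features yourself, then use a linear model again
    return LinearSVC()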
Machine learning: debugging a model
This is week 6 of Stanford's machine learning course.
It was a pretty important week, so here is a short summary.
Techniques for debugging a learning algorithm
Checking with cross validation
To check that learning is on track, split the data into:
- training set
- cross validation set
- test set
The training set is used for training; the other two are used to check validity. The recommended split is roughly 60%:20%:20%. The cross validation set is the data you use to confirm that learning still works correctly when you change the number of features, adjust the regularization (λ), or add polynomial features (deriving x^2, x^3, and so on from a single feature x). The procedure for checking how learning is going is as follows (a sketch in code follows the list):
1. Let m be the size of the training set; inside a loop for i = 1:m, repeat steps 2 to 5.
2. Train on the first i training examples using the regularized cost function and find the parameters that minimize it.
3. Using the parameters from step 2, compute the unregularized cost on those i training examples.
4. Using the parameters from step 2, compute the unregularized cost on the entire cross validation set.
5. From the results of steps 3 and 4, check whether the model is in a high-bias or a high-variance state.
6. Adjust the parameters to match the situation and repeat steps 1 to 5.
7. Once things look optimal, compute the error rate on the test set.
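As a concrete illustration of steps 1 to 7, here is a rough sketch in Python (my own example, not the course's Octave code), using ridge regression so that the "regularized cost" maps onto the model's built-in penalty and the "unregularized cost" is approximated by the plain mean squared error.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

def learning_curves(X_train, y_train, X_cv, y_cv, lam=1.0):
    # For each i, train on the first i examples (step 2) and record the
    # unregularized error on those i examples (step 3) and on the whole
    # cross validation set (step 4).
    train_err, cv_err = [], []
    for i in range(1, len(X_train) + 1):
        model = Ridge(alpha=lam)    # regularized cost used for training
        model.fit(X_train[:i], y_train[:i])
        train_err.append(mean_squared_error(y_train[:i], model.predict(X_train[:i])))
        cv_err.append(mean_squared_error(y_cv, model.predict(X_cv)))
    return train_err, cv_err

# Step 5: both curves converging to a high error -> high bias;
# train error staying well below CV error -> high variance.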
The high-bias state
This comes from underfitting: the training set error is high and the cross validation error is also high. As training progresses, the training and cross validation errors become nearly equal. Since the error they converge to is high, adding more training data is pointless, so as soon as this state appears, rethink the model.
The high-variance state
This comes from overfitting: the training set error is low but the cross validation error is high. As training progresses, the training error stays below the cross validation error and the gap gradually narrows. The converged error approaches the value you are hoping for, so you should either collect more training data or tune with the other remedies.
Remedies for each situation (a sketch in code follows this list)
high bias
- Add features -> add hidden layers, or increase the number of units per hidden layer
- Add polynomial features -> add x^2, x^3 or x1*x2 terms; for a neural network, though, the method above may be enough?
- Decrease the regularization λ -> a λ set too high to correct overfitting can itself cause high bias
high variance
- Add more training data
- Reduce the number of features -> trim when there are too many hidden layers or too many parameters per hidden layer
- Increase the regularization λ -> but be careful, since too large a λ causes high bias; increase it if it is not yet optimal
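To connect this back to the neural-network knobs mentioned above, here is a hedged Keras sketch (my own illustration, with assumed layer sizes and an MNIST-style 784-dimensional input) of the usual remedies: more or wider hidden layers against high bias, and L2 regularization or dropout against high variance.

from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.regularizers import l2

def build_model(hidden_units=100, n_hidden=1, weight_decay=0.0, dropout_rate=0.0):
    # Against high bias: raise hidden_units or n_hidden (more "features").
    # Against high variance: raise weight_decay (the lambda above) or
    # dropout_rate, or simply collect more training data.
    model = Sequential()
    model.add(Dense(hidden_units, activation='relu', input_dim=784,
                    kernel_regularizer=l2(weight_decay)))
    for _ in range(n_hidden - 1):
        model.add(Dense(hidden_units, activation='relu',
                        kernel_regularizer=l2(weight_decay)))
    if dropout_rate > 0:
        model.add(Dropout(dropout_rate))
    model.add(Dense(10, activation='softmax'))
    model.compile(optimizer='sgd', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model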
Taking Stanford University's online machine learning course
After finishing the book "Deep Learning from Scratch" I had worked through several TensorFlow and Keras tutorials, but I wanted to study things a bit more systematically,
so I am taking the online machine learning course run by Stanford University. It is mostly video lectures, with occasional multiple-choice and fill-in quizzes; you move on to the next lesson once you score 80% or better. There are 11 weeks of lectures in total, plus roughly one programming assignment per week where you write and submit code.
The lecture videos have Japanese subtitles available. The English in the lectures is also very plain, so if you are used to a modest amount of technical vocabulary you can probably follow along with English subtitles. The math is explained in painstaking detail, so you may be able to get through without much prior math background.
The professor recommends doing the exercises in Octave, a language specialized for numerical computation, saying it is the best language for studying machine learning. Octave's matrix operations are indeed quite close to how you would compute on paper, which makes it more approachable than a more computer-ish style like NumPy. Apparently people in Silicon Valley also often reach for Octave when experimenting with machine learning models.
My impression after finishing Week 2 (of 11) is that the lectures are superbly designed to keep knowledge that is not machine learning itself (math, tool usage, and so on) from getting in the way.
Trying TensorFlow, Part 5
Trying Keras
I tried contrib.learn, TensorFlow's high-level library, but it was still a bit hard to follow, so I'm also going to try Keras, another high-level library.
To start, I'll build the simplest possible network: two layers, with gradient descent as the optimizer.
Compared with contrib.learn, it is very clear what each layer is doing.
However, the resulting accuracy was around 0.86, considerably lower than the 0.97 I got last time with plain TensorFlow.
When I looked into why, it turned out I had set lr, the learning rate (how far the weights move on each gradient step), to 0.01, which is quite low.
Setting it to 0.5, the same value as in the plain TensorFlow version, brought the accuracy up to 0.956, much closer.
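For reference, here is a minimal sketch of the kind of two-layer model described above (my own reconstruction; the original snippet isn't shown here, and the layer sizes are assumptions), written in the same Keras 1-style API as the weight-initialization snippet below.

from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.optimizers import SGD

model = Sequential()
model.add(Dense(output_dim=100, input_dim=784))   # MNIST input -> 100 hidden units
model.add(Activation('relu'))
model.add(Dense(output_dim=10))
model.add(Activation('softmax'))

# lr=0.01 gave ~0.86 accuracy; lr=0.5 (same as the plain TensorFlow run) gave ~0.956
model.compile(optimizer=SGD(lr=0.5),
              loss='categorical_crossentropy',
              metrics=['accuracy'])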
I also tried matching the weight initialization, another hyperparameter that matters for accuracy, as shown below, but the accuracy actually dropped to about 0.952.
std_deviation = 0.01

def my_init(shape, name=None):
    # initialize weights from a normal distribution with standard deviation 0.01
    print(shape)
    value = std_deviation * np.random.randn(*shape)
    return K.variable(value, name=name)

model.add(Dense(output_dim=100, input_dim=784, init=my_init))
Trying Keras, Part 2
I wrote a convolutional neural network in Keras.
As expected, it is easier to see what is going on than with contrib.learn, and trial and error feels much easier.
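The full code isn't included in this excerpt, so here is a minimal sketch (my own reconstruction, with assumed layer sizes) of a Keras convolutional network along the lines described, mirroring the structure of the TensorFlow tutorial in Part 4 below.

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential()
model.add(Conv2D(32, (5, 5), activation='relu', padding='same',
                 input_shape=(28, 28, 1)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (5, 5), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])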
The result was an accuracy of 0.991.
Trying TensorFlow, Part 4
Working through the convolutional neural network tutorial
Deep MNIST for Experts | TensorFlow
I'll work through this page from "Build a Multilayer Convolutional Network" onward.
Note: I changed the code to use the MNIST data obtained for the "Deep Learning from Scratch" book.
A few excerpts of things that caught my attention:
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def main(_):
    ...
    W_conv1 = weight_variable([5, 5, 1, 32])
The ordering of the filter tensor differs from the "from scratch" book, but internally it should be computed correctly, so that's fine. TensorFlow's convolution filter order is height, width, channels (color), output channels.
def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1], padding='SAME')
In the array passed to strides, the 0th and 3rd elements must be 1, and the two middle values (kept equal) are the stride. Here the stride is 1.
Two kinds of padding can be specified, 'SAME' and 'VALID'. With SAME, the padding is adjusted automatically so that the input and output sizes match; with VALID, nothing is added.
The strides and padding defined in max_pool_2x2 work the same way; ksize is the size of the pooling filter, 2x2.
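As a quick sanity check of the padding behavior (my own toy example, not part of the tutorial): a 28x28 single-channel input convolved with a 5x5 filter at stride 1 keeps its size with 'SAME' and shrinks with 'VALID'.

import numpy as np
import tensorflow as tf

x = tf.constant(np.zeros((1, 28, 28, 1), dtype=np.float32))
w = tf.constant(np.zeros((5, 5, 1, 32), dtype=np.float32))

same = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')
valid = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='VALID')

print(same.shape)   # (1, 28, 28, 32) -> input size preserved
print(valid.shape)  # (1, 24, 24, 32) -> shrinks by (filter size - 1)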
W_fc1 = weight_variable([7*7*64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
The data, reduced by two rounds of pooling from 28x28x1 → 14x14x32 → 7x7x64, is projected here onto 1024 neurons (an affine transformation). (Each 2x2 filter applied with stride 2 halves the spatial size.) I'm still not quite sure why this step is done. To compress the data, perhaps?
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
A dropout layer, which blocks propagation through some of the neurons to prevent overfitting, is inserted here.
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2

y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
At the end, the output is reduced to 10 values (another affine transformation) to classify the digits.
Incidentally, training on MNIST with this gives:
step 11000, training accuracy 1
step 11100, training accuracy 1
step 11200, training accuracy 1
step 11300, training accuracy 0.99
step 11400, training accuracy 1
step 11500, training accuracy 1
step 11600, training accuracy 1
step 11700, training accuracy 1
step 11800, training accuracy 1
step 11900, training accuracy 1
step 12000, training accuracy 1
step 12100, training accuracy 0.99
step 12200, training accuracy 1
so it got above 0.99.
The code I tried:
experiment2.py · GitHub
Trying TensorFlow, Part 3
Testing with digits I wrote myself
Using the sample code from last time, I tried to see whether it could recognize the following digits, which I actually drew in a drawing app on my Mac.
Testing only on MNIST, you can never quite tell whether it is really getting things right, can you?
The result was
0.9746 [1 5 7]
so it recognized even my messy digits without trouble.
The digits are loaded with PIL in grayscale (convert("L")), turned into a matrix with NumPy, flattened into a 1-D array, then inverted (subtracted from 255) and normalized (divided by 255).
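A minimal sketch of that preprocessing (my own reconstruction from the description above; the file name is a placeholder):

import numpy as np
from PIL import Image

def load_digit(path):
    img = Image.open(path).convert("L")                  # load as grayscale
    arr = np.asarray(img, dtype=np.float32).flatten()    # matrix -> 1-D array
    arr = 255.0 - arr                                    # invert (subtract from 255)
    return arr / 255.0                                   # normalize (divide by 255)

x = load_digit("digit.png")   # "digit.png" is a placeholder file name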