When I used a TPU, accuracy dropped considerably. This is because LearningRateScheduler (keras.callbacks), which contributes to the accuracy gains, does not work on TPU. Changing the learning rate inside a callback had no effect, so the only options seem to be working around it with TensorFlow's low-level API or waiting until the bug is fixed. Below are the error curves for TPU (top) and GPU (bottom); both are Keras examples. Tuning the learning rate on real-world data is not that common, but for CIFAR the learning-rate schedule matters, so that one point needs care. Incidentally, the speed is extremely fast: on the GPU, training slows down the deeper the network, which is the natural result, whereas the TPU finished in roughly constant time. If anything, with shallow networks some other factor seems to become the bottleneck on the TPU, so its raw compute performance is not fully brought out…
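For reference, a minimal sketch of the kind of callback being discussed (assuming the Keras bundled with TensorFlow of that era; the schedule values and the commented-out model call are illustrative, not the author's):

from tensorflow import keras

def step_decay(epoch):
    # drop the learning rate at fixed epochs, as is common for CIFAR training
    if epoch < 50:
        return 0.1
    elif epoch < 75:
        return 0.01
    return 0.001

lr_callback = keras.callbacks.LearningRateScheduler(step_decay)
# model.fit(x_train, y_train, epochs=100, callbacks=[lr_callback])
# On GPU the schedule takes effect each epoch; per the article, on TPU (at the time) it was silently ignored.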
2019/5/11 PR: A book containing this material, 図解速習DEEP LEARNING, was published on May 11, 2019. Part of the content can also be found in [2019年5月版] 機械学習・深層学習を学び、トレンドを追うためのリンク150選 - Qiita. 2019/3/9 An experimental Slack workspace for exchanging information about Colaboratory has been launched; please register and join via the link. 2019/3/3 TensorBoard is now officially supported, and the runtime's free RAM/disk space can now be checked at a glance; details will be added to the article later. Introduction: Colaboratory is a Jupyter notebook environment that is free to use, works in most major browsers, and requires no setup. Google provides it at no charge so that it is easy to use for machine-learning education and research. Roughly speaking…
This article explains Turing, NVIDIA's new GPU architecture. The architecture is used in the Quadro RTX 8000 announced in August 2018, as well as the GeForce RTX 2080, 2080 Ti, and 2070. The defining feature of the Turing architecture is its built-in RT cores for ray tracing, which make hardware ray tracing possible; according to NVIDIA, it can trace rays at up to 10 gigarays per second. The Streaming Multiprocessor (SM) inside NVIDIA GPUs has also been revised in the new architecture: it incorporates the Tensor Cores for matrix operations introduced with the Volta architecture, and in addition, integer operations and single-precision floating point (Single Precision floating point num…
By using the newly opened IBM System x CAD on VDI Validation Center, customers can carry out a POC (Proof of Concept) in a short time using the latest hardware environment. On April 22, 2014, IBM Japan announced that it would open the IBM System x CAD on VDI Validation Center at its headquarters to support evaluation of workstation virtualization deployments. Together with partner companies, it will support the evaluation of virtual desktop infrastructure (VDI) solutions centered on 3D CAD and CG. Virtualizing design environments has been attracting attention not only because globalization is pushing design sites overseas, but also from the standpoints of BCP and prevention of data theft. In the past, limits on computing power and networks made it impossible to build an environment adequate for design work, but as that performance has improved, more flexible design…
I want a GPU machine so I can play with deep learning as a hobby. GPUs really are amazing: training that takes about two hours on my MacBook Air finishes in under five minutes with a GPU. It makes doing deep learning on a CPU alone feel ridiculous. But even if I built a GPU machine at home, I wouldn't be running computations around the clock, so it would go to waste; I'd rather keep costs down by using a cloud service.1 Moreover, there is now a Docker plugin called NVIDIA Docker that lets you use the GPU from inside Docker containers, and with it you can spin up as many training environments as you like without dirtying the GPU machine's own environment. This time I built a hobby deep-learning environment using an Amazon EC2 GPU instance and NVIDIA Docker.
Hello everyone, this is Taruishi, CTO of Retty. This article is day 25 of the Retty Advent Calendar. Merry Christmas! Yesterday's post was @ttakeoka's piece on Retty's work toward MFI. Only a little of the year is left; how are you spending it? At Retty, the number of engineers has roughly doubled over the past year. That means more people sharing what they know, and more who can take part in the Advent Calendar. Everyone seems to be having fun with it, which is great. Retty Inc. Advent Calendar 2016 - Qiita. Now, since this is the last Retty Advent Calendar article of the year, I first thought about making it a year-in-review. But that would be ordinary and dull, so I'd like to close instead with a geeky technical article that should make a good story.
Yesterday this press release was making the rounds: www.sakura.ad.jp With "high firepower" in the name, it certainly looks powerful. A little while back this one was also a hot topic: "Dedicated deep-learning GPU server farm Kurisu (紅莉栖) built" (News | PR | DWANGO Co., Ltd.), composed of roughly 100 GPU servers carrying Maxwell-generation CUDA cores, the highest-performing available at the time. Impressive; that looks strong too. After reading articles like these and taking a hard look at the machine running at my feet, its sheer lack of firepower is almost embarrassing. Measured with a watt checker it doesn't even reach 500 W, about the level of an actual microwave oven. Truly low firepower. A home-built machine for low-firepower deep learning: to someone whose company or lab has a rig with four Titans installed, my setup may look like a toy, but I'd like to show the environment I use while doing deep learning casually, at the level of a personal hobby…
On September 21, Fujitsu Laboratories announced that it had developed a technique for using GPU memory more efficiently, aimed at higher-accuracy neural-network training. Because deep-learning training requires a huge amount of computation, GPUs (graphics processors) are now commonly used. To exploit a GPU's high-speed arithmetic performance, the data must be stored in the GPU's internal memory, and that capacity limited the size of the neural networks that could be trained. Fujitsu Laboratories focused on the fact that, in the training of each layer of a neural network, the computation that derives the intermediate error data from the weight data and the computation that derives the weight errors from the intermediate data can be carried out independently. They developed a method that analyzes the structure of each layer when training starts and reorders the computations so that memory regions holding large data can be reused, reducing memory consumption. This reduced memory usage by more than 40%, and the GP…
This time, too, I'll be reading through the Numba documentation: Numba — numba 0.15.1 documentation. Or so I thought, but as I read on I realized there is surprisingly little to introduce. Simplicity is a good thing, though, so this time I'll cover UFuncs, introduce the GPU side next time, and then wrap up the introductory series for now. After that I plan to move on to applied topics, centered on what motivated me to study Numba in the first place. UFunc UFuncs — numba 0.15.1 documentation A UFunc (universal function) is, simply put, a function that operates element-wise on a numpy.ndarray, like feeding an array to numpy.log:
>>> import numpy
>>> hoge = numpy.arange(1, 5)
>>> hoge
array([1, 2, 3, 4])
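As a concrete illustration of building such a ufunc yourself, here is a minimal hypothetical sketch assuming Numba's vectorize decorator (the feature the UFuncs page above covers); the function and values are made up for illustration:

>>> from numba import vectorize
>>> @vectorize(["float64(float64, float64)"])
... def rel_diff(a, b):
...     return 2.0 * (a - b) / (a + b)
...
>>> rel_diff(numpy.arange(1.0, 5.0), 2.0 * numpy.arange(1.0, 5.0))
array([-0.66666667, -0.66666667, -0.66666667, -0.66666667])

Once compiled, rel_diff broadcasts over arrays element-wise exactly like numpy.log and the other built-in ufuncs.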
GPU-Accelerated Computing with Python NVIDIA's CUDA Python provides a driver and runtime API for existing toolkits and libraries to simplify GPU-based accelerated processing. Python is one of the most popular programming languages for science, engineering, data analytics, and deep learning applications. However, as an interpreted language, it's been considered too slow for high-performance computing…
Film photos from the first half of 2024. It suddenly strikes me that little of 2024 remains. And looking back, the photos posted on this blog have been almost entirely ones shot with the GR III, but that by no means signals that I've grown tired of film; I keep quietly shooting film as well, monochrome and color alike, … at home…
[GTC 2015] Will machine-learning AI powered by GPUs become smarter than humans? A report on the NVIDIA CEO's GTC 2015 keynote. Writer: Zenji Nishikawa. Jen-Hsun Huang (Co-Founder and CEO, NVIDIA); these days, whenever he takes the stage to speak, he is invariably wearing this jacket with studs on the shoulders. On March 17, 2015 (North American time), NVIDIA's GPU computing developer conference, GPU Technology Conference 2015 (GTC 2015), opened in San Jose in the United States. On the first day, NVIDIA president and CEO Jen-Hsun Huang took the stage to deliver the keynote, covering the company's latest products and the latest trends in the GPGPU field. At the start of the talk, Huang said there were four things to announce that day, and put them up on a sli…
On December 10 (US time), Facebook announced plans to open-source its artificial intelligence (AI) hardware. In a blog post dated the 10th, Facebook engineers Kevin Lee and Serkan Piantino emphasized that the AI hardware to be open-sourced, built from scratch, is more efficient and more versatile than off-the-shelf products, because the servers can run in data centers based on the Open Compute Project (OCP) specifications. "Many high-performance computing systems require special infrastructure to operate, such as dedicated cooling. But because these new servers are optimized for thermal and power efficiency, they can also run in OCP-compliant data centers that use free-air cooling," Lee and Piantino explained. Named "Big Sur"…
CUDA Zone CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphical processing units (GPUs). With CUDA, developers are able to dramatically speed up computing applications by harnessing the power of GPUs. In GPU-accelerated applications, the sequential part of the workload runs on the CPU, which is optimized for single-threaded performance…
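To make that CPU/GPU split concrete, here is a minimal hypothetical sketch (assuming the Numba package as one common way to drive CUDA from Python, since this page itself is language-agnostic; the kernel and sizes are illustrative):

import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(x, y, out):
    i = cuda.grid(1)              # global thread index on the GPU
    if i < x.size:
        out[i] = x[i] + y[i]      # the data-parallel part runs across thousands of GPU threads

# The sequential part (setup, kernel launch, result handling) stays on the CPU.
n = 1 << 20
x = np.arange(n, dtype=np.float32)
y = 2 * x
out = np.zeros_like(x)
threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks, threads_per_block](x, y, out)  # arrays are copied to and from the device automatically
print(out[:4])                                    # [0. 3. 6. 9.]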
This article presents basic information on setting up the CUDA environment needed to run libraries such as Torch7, Theano, and Caffe on the GPU, along with common stumbling points and their solutions. Note: depending on your environment (board configuration and so on), building a GPU environment can require specialized Linux knowledge. In deep-learning research it is hardly an exaggeration to say that being able to compute on a GPU is essentially a requirement, because computation times differ by a factor of several times to roughly 10x depending on whether a GPU is used. In deep-learning research, which often uses large amounts of training data, that difference is frequently fatal. There are several implementations of GPU computing environments; OpenCL and CUDA are the representative GPGPU libraries. Major frameworks such as Torch7, Pylearn2, and Caffe…
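As a quick, hypothetical sanity check before building those libraries (an assumption on my part, not a step from the article), you can confirm from Python that the NVIDIA driver and the CUDA toolkit are visible at all:

import shutil
import subprocess

# Look for the driver utility and the CUDA compiler on PATH.
for tool in ("nvidia-smi", "nvcc"):
    path = shutil.which(tool)
    print(tool, "->", path if path else "not found (check the driver / CUDA install)")

# If the driver utility is present, list the GPUs it can see.
if shutil.which("nvidia-smi"):
    subprocess.run(["nvidia-smi", "-L"], check=False)

If either tool is missing, the environment-specific setup the article warns about is the first thing to revisit.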