$ whoami FUJIWARA Shuhei (藤原秀平) Twitter: @shuhei_fujiwara GitHub: @sfujiwara ▶ Google Developer Expert (Machine Learning) ▶ TensorFlow User Group Tokyo Organizer
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR I0000 00:00:1723689002.526086 112933 cuda_executor.cc:1015] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355 I0000 00:00:17
Posted by The TensorFlow MLIR Team The TensorFlow ecosystem contains a number of compilers and optimizers that operate at multiple levels of the software and hardware stack. As a day-to-day user of TensorFlow, this multi-level stack might manifest itself as hard-to-understand compiler and runtime errors when using different kinds of hardware (GPUs, TPUs, mobile). These components, starting from th
Last year we held a machine learning seminar in our London office, which was an opportunity to reproduce some classical deep learning results with a nice twist: we used OCaml as a programming language rather than Python. This allowed us to train models defined in a functional way in OCaml on a GPU using TensorFlow. Specifically we looked at a computer vision application, Neural Style Transfer, and
On September 12, developers at LinkedIn (a Microsoft subsidiary) announced TensorFlow on YARN (TonY), an open-source project for running TensorFlow natively on Apache Hadoop. TonY is a framework developed inside LinkedIn for operating machine learning in a distributed fashion on large-scale Apache Hadoop deployments; it can run single-node or distributed TensorFlow training as a Hadoop application. According to the development team, they had tried "TensorFlow on Spark" and Intel's "TensorFlowOnYARN", but found both lacking in reliability and flexibility, so they decided to build their own. TonY handles tasks such as resource negotiation and container environment setup, through which TensorFlow…
This chapter intends to introduce the main objects and concepts in TensorFlow. We also introduce how to access the data for the rest of the book and provide additional resources for learning about TensorFlow. General Outline of TF Algorithms Here we introduce TensorFlow and the general outline of how most TensorFlow algorithms work. Creating and Using Tensors How to create and initialize tensors i
What is data parallelism? The previous post covered the Distributed TensorFlow tutorial up through model parallelism; this time we try training with data parallelism. There are two kinds of parallelization, model parallelism and data parallelism. Roughly speaking: model parallelism is one huge computation over 1,000 data points split across 100 people; data parallelism is the data divided into chunks of 10 points each, with 100 people each taking one chunk. Since model parallelism inevitably depends on the model, reducing the amount of data handled at once through data parallelism is the more broadly applicable approach. Sharing parameters: in data-parallel training you make several copies of a model with identical parameters, split the data into chunks, hand one chunk to each model copy, and have each copy compute its gradients individually. In other words, a model with the same parameters must exist on every device, and to keep the handling of this small…
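The division of labor described above can be sketched without Distributed TensorFlow at all. The following NumPy toy (the data, shapes, and linear model are invented for illustration) splits one batch across four simulated workers, lets each compute the gradient of a shared linear model on its shard, and averages the shard gradients; with equal shard sizes the average reproduces the full-batch gradient exactly, which is why the model copies stay in sync after each synchronized update.

```python
# Framework-agnostic sketch of one data-parallel training step.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # 100 examples, 3 features
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

w = np.zeros(3)                        # shared parameters (one copy per worker)

def shard_gradient(Xs, ys, w):
    """Gradient of mean squared error on one worker's shard."""
    return 2.0 * Xs.T @ (Xs @ w - ys) / len(ys)

# Split the batch into 4 equal shards, one per simulated worker.
shards = zip(np.split(X, 4), np.split(y, 4))
grads = [shard_gradient(Xs, ys, w) for Xs, ys in shards]

# Averaging equal-sized shard gradients reproduces the full-batch gradient.
avg_grad = np.mean(grads, axis=0)
full_grad = shard_gradient(X, y, w)

w = w - 0.1 * avg_grad                 # single synchronized update
```

The equal-shard condition matters: averaging per-shard mean gradients only equals the global mean gradient when every shard has the same size.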
Introduction: TensorFlow can be fairly unapproachable at first. Its style of writing code is idiosyncratic, so it takes a while to get used to. Perhaps you have worked through the official MNIST tutorial but still have no idea how to write a deep neural network (DNN) of your own design. In this article, I take a simple problem as the subject and explain, in a minimal setup, how to write the DNN you want using the low-level API, without relying on copy-and-paste (a bit late to the party, perhaps). It is based on the Low Level APIs section of the official documentation. The contents are: the minimal building blocks of TensorFlow; fitting a linear function in TensorFlow; implementing a DNN in TensorFlow. The intended audience is…
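As a framework-free illustration of the same "write the network yourself" spirit (this is a NumPy stand-in, not the article's TensorFlow code; the target function, layer sizes, and learning rate are all made up here), this one-hidden-layer network keeps explicit parameters, a forward pass, and hand-derived gradient updates:

```python
# A one-hidden-layer network fitted to y = x^2 with manual backprop.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
y = x ** 2                                  # target: a simple nonlinear function

# Explicit parameters -- the moral equivalent of tf.Variable in the low-level API.
W1 = rng.normal(scale=0.5, size=(1, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)                # hidden layer
    return h, h @ W2 + b2                   # linear output layer

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

_, pred0 = forward(x)
mse_initial = mse(pred0, y)

lr = 0.1
for _ in range(3000):
    h, pred = forward(x)
    err = 2.0 * (pred - y) / len(x)         # d(MSE)/d(pred)
    dW2 = h.T @ err;  db2 = err.sum(0)
    dh = err @ W2.T * (1 - h ** 2)          # backprop through tanh
    dW1 = x.T @ dh;   db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

_, predN = forward(x)
mse_final = mse(predN, y)
```

The low-level TensorFlow version trades the manual backprop lines for automatic differentiation, but the structure (parameters, forward pass, loss, update) is the same.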
Introduction: This article is my notes on setting up TensorFlow on a p2-series machine, the most powerful of AWS's GPU instances. The eventual goal is to combine AWS Step Functions and AWS Lambda to launch the machine built here as a spot instance and run training automatically. Note: p2 instances are not currently available in the Tokyo region; use a region such as Oregon or Virginia. Choosing an image: the official TensorFlow documentation appears to assume Ubuntu for Linux, so I chose an Ubuntu instance: Ubuntu Server 16.04 LTS (HVM), SSD Volume Type. An Amazon Linux AMI that comes with CUDA and the like preinstalled…
Visualizing the beginner tutorial in TensorBoard: I took TensorFlow's beginner tutorial, MNIST For ML Beginners, and output it to TensorBoard for visualization. However, since I barely understand how to read TensorBoard (especially anything other than the Graph tab), this is only the code plus a brief explanation. It looks like it would be very useful once you are seriously practicing deep learning, but I am not at that level yet... Environment: Python 3.5, TensorFlow 1.1. Reference links: Installing TensorFlow on Windows, easy even for a Python beginner / TensorFlow basic structure and concepts (a guide for beginners) / The TensorFlow MNIST tutorial for beginners (a guide for beginners) / TensorFlow API notes / Introduction to TensorBoard: T…
(Update: a revised version of this article has been posted - http://qiita.com/TomokIshii/items/0a7041ad337f68f71286) Last week (2015/11/9), the deep learning framework "TensorFlow" was released, but I am not convinced by the part of its documentation that says MNIST (handwritten digit classification) is the "Hello World" of machine learning. As in Coursera's Machine Learning course (Stanford), when learning machine learning from the basics, the first topic should surely be linear regression (in my personal view). In this article, I first examine the linear regression code and then write logistic regression code, to get a feel for TensorFlow…
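As a taste of what that first example amounts to, here is linear regression by gradient descent in plain NumPy rather than TensorFlow (the data, coefficients, and learning rate are made up for illustration; the article's actual code uses TensorFlow ops):

```python
# Fit y = a*x + b by gradient descent on mean squared error.
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + 2.0                      # ground truth: a = 3, b = 2

a, b = 0.0, 0.0
lr = 0.5
for _ in range(300):
    err = a * x + b - y                # residuals
    a -= lr * 2.0 * np.mean(err * x)   # d(MSE)/da
    b -= lr * 2.0 * np.mean(err)       # d(MSE)/db
```

In TensorFlow the same loop becomes a loss tensor plus an optimizer step, with the derivatives computed automatically; the update rule being executed is identical.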
Slides for a talk at the dot DeepLearning club. It is a rehash of earlier material, but somewhat easier to follow, and slides giving an overview of TensorFlow have been added.
Sometimes you want to train on a small dataset, or sweep a model's parameters to see rough trends. scikit-learn has GridSearchCV, an aptly named convenience feature that performs grid search and cross-validation together. Wouldn't it be nice to do the same with TensorFlow, remotely and in parallel, easily from a Jupyter Notebook, and without writing tedious distributed-processing code? Yes, Google Cloud Dataflow makes that possible! Preparation: this is almost the same as the previous article, but depending on the model you may want to change the machine type, which can be specified via worker_options.machine_type. Also, specifying num_workers appears to disable autoscaling; this time I set it to 6, so six workers at once…
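What GridSearchCV automates can be reduced to a few lines. This sketch uses a toy grid and a stand-in scoring function (not the article's model) to enumerate the parameter grid and score each combination; because every candidate is independent, they can be fanned out to remote workers with no coordination beyond collecting the scores, which is what makes the Dataflow approach attractive.

```python
# Enumerate a hyperparameter grid and pick the best-scoring combination.
from itertools import product

grid = {"learning_rate": [0.01, 0.1, 1.0], "hidden_units": [16, 32, 64]}

def score(learning_rate, hidden_units):
    # Stand-in for "train a model and return its validation loss".
    return (learning_rate - 0.1) ** 2 + (hidden_units - 32) ** 2 / 1024

keys = list(grid)
candidates = [dict(zip(keys, vals)) for vals in product(*(grid[k] for k in keys))]
results = [(score(**params), params) for params in candidates]
best_loss, best_params = min(results)
```

Replacing the list comprehension over `candidates` with a distributed map (a Dataflow pipeline, or even `concurrent.futures`) parallelizes the search without changing its logic.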
This example demonstrates a workflow that uses named Step objects for better traceability and organization. The pattern maintains simple sequential execution while providing clear step identification and enhanced logging. Agno 2….
TensorFlowOnSpark brings scalable deep learning to Apache Hadoop and Apache Spark clusters. By combining salient features from the TensorFlow deep learning framework with Apache Spark and Apache Hadoop, TensorFlowOnSpark enables distributed deep learning on a cluster of GPU and CPU servers. It enables both distributed TensorFlow training and inferencing on Spark clusters, with a goal to minimize t
For several years now, every few months I think I should try that machine learning thing everyone is talking about, start on it, and then stall somewhere; one reason I can never keep going, I realized, is execution time. I suspect that other people who lose their study motivation unless they can touch something that actually runs also struggle with execution time. I had heard that using a GPU speeds things up around 10x and wanted to try it, and since TensorFlow now supports Windows, I first ran it on the Windows laptop I use every day. (See: TensorFlow 0.12 adds Windows support / Comparing TensorFlow execution speed on CPU/GPU/AWS.) Preparation. Environment: Windows 10 (64-bit); Python 3.5 (this time I used the official distribution; Anaconda or whatever you prefer is fine); an NVIDIA GPU; CUDA + cuDNN…
What this is about: how to run distributed training of TensorFlow using Google Cloud ML. I am publishing this quickly as a memo for myself. There are several patterns for distributed training; here I describe the simplest one, data parallelism. Each node applies its own share of the training data to the same model and computes the gradient vectors for updating the Variables; the gradient vectors computed on each node are then used to update the shared Variables. Background: distributed training in TensorFlow uses three kinds of nodes. - Parameter Server: updates the Variables using the gradient vectors computed by the Workers. - Worker: computes gradient vectors from the training data. - Master: does the same work as a Worker and, in addition, saves the trained model and evaluates it on the test set…
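The three roles can be mocked up in a few lines of plain Python to show the data flow (the class, names, and toy problem are illustrative; real Distributed TensorFlow wires these roles together across a cluster): workers compute gradients from their shard of the training data, and the parameter server averages them and updates the shared Variable.

```python
# Toy simulation of the Parameter Server / Worker split for one model.
import numpy as np

class ParameterServer:
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)         # the shared Variable
        self.lr = lr

    def apply_gradients(self, grads):
        # Average the workers' gradients, then update the shared Variable.
        self.w -= self.lr * np.mean(grads, axis=0)

def worker_gradient(w, X, y):
    """Worker role: gradient of MSE on this worker's training data."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(1)
X = rng.normal(size=(90, 2))
y = X @ np.array([0.7, -1.3])

ps = ParameterServer(dim=2)
worker_data = list(zip(np.split(X, 3), np.split(y, 3)))  # 3 workers
for step in range(200):
    grads = [worker_gradient(ps.w, Xi, yi) for Xi, yi in worker_data]
    ps.apply_gradients(grads)
```

A Master node would run the same loop as a Worker while also checkpointing `ps.w` and evaluating it on held-out data.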