List of articles for the month starting 2023-12-01
Environment Preparation Training Creating point cloud data from video Resources used Environment Google Colab (high-memory) Preparation Install the libraries and related tools. %cd /content !git clone -b dev https://github.com/camenduru/4DGen %cd /content/4DGen !wget https:/…
Introduction Environment Preparation Inference Loading the model Sample prompt Madoka prompt Resources used Introduction New models came out toward the end of the year, so let's try them. The commercially usable Japanese LLMs "Karasu" and "Qarasu" have been released; on MT-Bench, the best among publicly released Japanese models…
Preparation Inference Loading the model Sample prompt Fun lines Madoka prompt Resources used Preparation Install the following: !pip install torch !pip install transformers !pip install sentencepiece Inference Loading the model Following the sample code, in bf16…
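The "sample prompt" sections in these posts depend on each model's chat template. As a generic, hedged illustration (the `[INST]`/`<<SYS>>` tags below follow the Llama-2 chat convention and are an assumption here, not any particular model's documented format), a prompt builder might look like:

```python
def build_prompt(system: str, user: str) -> str:
    """Generic instruction-style prompt builder. The [INST]/<<SYS>> tags
    follow the Llama-2 chat convention; check each model's card for its
    own template, since Japanese releases frequently differ."""
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = build_prompt("あなたは親切なアシスタントです。", "面白いセリフを言ってください。")
print(prompt)
```

Always verify against the model card's stated template before running inference.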
Introduction Environment Preparation Environment setup Getting a Notion API token Fetching the list of Hatena Blog posts Fetching the list of Zenn articles Adding articles to a Notion DB Fetching each data source, sorting by date, and writing Running periodically with GitHub Actions Introduction My portfolio…
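The fetch-sort-write flow listed above can be sketched with stdlib types only; the `Article` type, field names, and sample entries are illustrative assumptions, not the article's actual code:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Article:
    title: str
    url: str
    published: datetime  # timezone-aware, so entries from different feeds compare safely
    source: str

def merge_by_date(*feeds: list) -> list:
    """Merge article lists from multiple sources, newest first."""
    merged = [a for feed in feeds for a in feed]
    return sorted(merged, key=lambda a: a.published, reverse=True)

# Toy data standing in for parsed Hatena / Zenn feed entries
hatena = [Article("4DGen on Colab", "https://example.com/1",
                  datetime(2023, 12, 25, tzinfo=timezone.utc), "hatena")]
zenn = [Article("Bert-VITS2 training", "https://example.com/2",
                datetime(2023, 12, 28, tzinfo=timezone.utc), "zenn")]

for a in merge_by_date(hatena, zenn):
    print(a.published.date(), a.source, a.title)
```

The merged list would then be written to the Notion DB via its API, one page per article.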
Introduction Environment Preparation Inference Loading the model Sample prompt Madoka test Resources used Introduction Environment L4 GPU OS: Ubuntu 22.04 Preparation Install the required libraries !pip install accelerate !pip install torch !pip install transformers Inference Loading the mo…
Introduction Environment Setting up the dev environment Converting an LLM to GGUF Preparing llama.cpp Downloading the model Converting to GGUF Quantizing the GGUF Running the GGUF model Building an environment to run llama.cpp on GPU Installing cmake Installing the CUDA Toolkit Enabling cuBLAS cuBLA…
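As a toy illustration of what the quantize step buys (this is plain symmetric round-to-nearest 4-bit quantization, not llama.cpp's actual Q4 block format):

```python
def quantize_4bit(weights: list) -> tuple:
    """Symmetric 4-bit quantization of one block: store one float scale
    plus integers in [-8, 7] instead of full floats. A toy version of
    the idea behind llama.cpp's quantize step, not its real Q4 layout."""
    scale = max(abs(w) for w in weights) / 7 or 1.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return scale, q

def dequantize_4bit(scale: float, q: list) -> list:
    return [scale * v for v in q]

block = [0.12, -0.53, 0.97, -0.01]
scale, q = quantize_4bit(block)
restored = dequantize_4bit(scale, q)
# each restored value is within one quantization step of the original
assert all(abs(a - b) <= scale for a, b in zip(block, restored))
```

The trade-off is the same as in the real format: roughly 4x smaller weights in exchange for bounded rounding error per block.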
Introduction Environment Preparation Inference Translation Madoka Introduction Following on from ayousanz.hatenadiary.jp, a GGUF build has come out, so let's run it huggingface.co Environment Google Colab (T4) Preparation Download llama.cpp !git clone https://github.com/ggerganov/llama.cpp.git …
Introduction Environment Preparation Inference Loading the model Translation prompt Madoka prompt Resources used 4-bit quantized version Additional libraries Model-loading settings Inference Resources used Introduction It has been released, so let's try it. rinna has released Japanese continual pre-training models based on Qwen-7B and 14B…
Environment Preparation Loading the model Inference Sample prompt Madoka prompt Inference speed Resources used Environment L4 GPU Preparation Install the required libraries !pip -q install --upgrade accelerate autoawq !pip install torch==2.1.0+cu121 torchtext==0.16.0+cpu torchd…
Introduction Preparation Inference Loading the model Inference from a prompt Results Introduction Preparation !pip install torch !pip install transformers !pip install sentencepiece !pip install accelerate !pip install protobuf Inference Loading the model import torch from transformer…
Introduction Environment Preparation Inference Inference server Calling the inference API Results Resources used References Introduction Let's run the following LLM huggingface.co The GitHub repo appears to be here github.com Environment Linux CLI GPU (L4 GPU: 24 GB GPU RAM) Preparation conda create -y --name openchat…
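Calling the inference API can be sketched as below; the `/v1/chat/completions` route, the port, and the model name assume an OpenAI-compatible server and should be checked against the server's own docs:

```python
import json
from urllib import request

def chat_request(base_url: str, model: str, content: str) -> request.Request:
    """Build a POST against an OpenAI-compatible /v1/chat/completions
    endpoint. Route, port, and model name are assumptions here; verify
    them against the inference server's documentation."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }
    return request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = chat_request("http://localhost:18888", "openchat_3.5", "こんにちは")
# with request.urlopen(req) as resp:   # uncomment with the server running
#     print(json.load(resp))
```

Building the request separately from sending it makes the payload easy to inspect before the server is up.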
Introduction Environment Installing the libraries Downloading the model Creating a sample voice Creating a prompt from the sample voice Creating tokens from text Generating the audio Resources used Impressions Introduction With TTS libraries like Bert-VITS2 and VALL-E X on the rise…
Environment Preparation Inference Answers Environment L4 GPU Preparation # TheBloke/Mixtral-Fusion-4x7B-Instruct-v0.1-GGUF # Clone the mixtral branch of llama.cpp (once merged, -b mixtral is no longer needed) !git clone -b mixtral https://github.com/ggerganov/llama.cpp %cd llama.cpp !…
Environment Execution Preparation Inference Results Log Environment L4 GPU (24 GB GPU RAM) Execution Load the model, then run inference Preparation # Clone the mixtral branch of llama.cpp (once merged, -b mixtral is no longer needed) !git clone -b mixtral https://github.com/ggerganov/llama…
Introduction Environment Cloning the repository at a specific hash Building a virtual environment Creating a new environment Activating the environment Installing the libraries Creating the folders and files Changing the config files Downloading the model Uploading the audio files and preparing the transcription files con…
Environment Preparation Inference Sample question Madoka question Asking about the charms of Japan Required resources Environment Google Colab T4 Preparation !pip install transformers !pip install accelerate Inference Load the model. from transformers import AutoModelForCausalLM, Aut…
Introduction Environment Preparation Inference Loading the model Transcribing audio to text Resources used Introduction rinna has again released a model that could rival Whisper, so let's try it. rinna has released "Nue ASR", a Japanese speech recognition model that leverages the large language model GPT. Pre-training…
Introduction Error Fix Introduction While trying magic-animate I ran into a CUDA version problem, so here is a memo of the fix github.com Error RuntimeError: Detected that PyTorch and torchvision were compiled with different CUDA versions. PyTorch has CUDA Version=11.7 an…
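The mismatch is visible in the wheel version tags themselves; a small check (the version strings below are examples mirroring the error, not pinned recommendations) plus the usual fix:

```python
import re

def cuda_tag(version: str):
    """Extract the CUDA build tag from a wheel version like '2.0.1+cu117';
    returns None for CPU-only or untagged builds."""
    m = re.search(r"\+cu(\d+)", version)
    return m.group(1) if m else None

# Example strings mirroring the RuntimeError in the article
torch_ver, tv_ver = "2.0.1+cu117", "0.16.0+cu118"
if cuda_tag(torch_ver) != cuda_tag(tv_ver):
    print(f"mismatch: torch cu{cuda_tag(torch_ver)} vs torchvision cu{cuda_tag(tv_ver)}")
    # fix: reinstall both from the same CUDA wheel index, e.g.
    # pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
```

Reinstalling both packages from the same `--index-url` keeps their CUDA builds in lockstep.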
Introduction Preparation Execution Introduction A multimodal model seems to have come out, so let's try it Announcing Nous Hermes 2.5 Vision! @NousResearch's latest release builds on my Hermes 2.5 model, adding powerful new vision capabilities thanks to @stablequan…
Introduction Environment Preparation Execution CLI Introduction The following multimodal model has come out, so let's try it github.com Environment Google Colab Preparation !git clone https://github.com/qnguyen3/hermes-llava.git %cd hermes-llava !pip install --upgrade pip # enable PEP …
Introduction Environment Preparation Creating a Docker Hub token Registering a GitHub Secret Building the Docker image Automating with GitHub Actions Creating an Action that uploads to DockerHub on push Creating an Action for debugging the uploaded image Introduction When using Docker, building an image yourself…
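A minimal workflow sketch for the push-to-DockerHub step above, assuming the standard `docker/login-action` and `docker/build-push-action`; the secret names and image tag are placeholders to replace with your own:

```yaml
name: docker-publish
on:
  push:
    branches: [main]
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Log in with the Docker Hub token registered as a GitHub Secret
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      # Build from the repo's Dockerfile and push on every main push
      - uses: docker/build-push-action@v5
        with:
          push: true
          tags: your-name/your-image:latest
```

The secret names must match whatever was registered in the repository's Settings → Secrets.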
Introduction Let's train with Bert-VITS2, which is said to be more accurate than VITS2 and better at producing acting variations Conclusion The training itself completed, but the generated audio did not come out well, so it was not a success Environment Google Colab Reference site I'll proceed with reference to this article zenn.dev…