"Qwen3.5-9B" is one of the high-performance local LLMs released in 2026; it can be downloaded for free from HuggingFace or LM Studio. Photo: Takumi Kamiyama. Generative AI kept evolving in 2026, but OpenAI's GPT and Anthropic's Claude are not the only models making gains. "Local LLMs" — models published for free that you can download and run on your own PC — have also seen major progress. This article summarizes local-LLM trends through the first half of 2026. 1. Local LLMs keep getting more capable. Image recognition running on "Gemma-4-31B", a very high-performance local LLM that Google released in April 2026. Image: Takumi Kamiyama. What stands out about local LLMs in 2026 is the large jump in performance. As for specific models, the one released in March…
Anthropic's AI-agent infrastructure "Claude Managed Agents" has added a "dreaming" feature that improves agent behavior while reducing memory consumption. New in Claude Managed Agents: dreaming, outcomes, and multiagent orchestration | Claude https://claude.com/blog/new-in-claude-managed-agents "Live from Code with Claude: we're launching dreaming in Claude Managed Agents as a research preview. Outcomes, multiagent orchestration, and webhooks are now i…"
AI developer Subquadratic has announced an AI model called "SubQ". SubQ is built on an architecture different from the mainstream Transformer-based models and supports an enormous context window of up to 12 million tokens. Its test model, "SubQ 1M-Preview", also significantly outperforms Claude Opus 4.7 in processing throughput on very long inputs. Subquadratic — Efficiency is Intelligence https://subq.ai/ Introducing SubQ: The First Fully Subquadratic LLM https://subq.ai/introducing-subq As of this writing, mainstream AI models are built on the "Transformer" machine-learning architecture.
The annotation feature added in Playwright CLI v0.1.9 is a convenient way to give visual feedback to AI agents. With it, you can select an element in the browser and leave a comment attached to that element. Because an AI agent can easily identify the annotated element, it becomes much easier for the agent to decide which code to fix. When developing a front end with an AI agent, how to provide visual feedback is a recurring challenge. An AI agent can improve code quality by iterating on feedback from tests and linters against the code it wrote. In front-end development, however, things like how the CSS is actually being applied, or how the JavaScr…
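The annotation workflow described above can be sketched in plain Python. Note this is a hypothetical data model for illustration only — the snippet does not show the actual format the Playwright CLI feature uses. The idea: each annotation pairs a CSS selector with a reviewer comment, and a helper renders them into text an agent can act on.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    selector: str   # CSS selector of the annotated element (hypothetical shape)
    comment: str    # reviewer feedback attached to that element

def render_feedback(annotations):
    """Turn visual annotations into a textual task list for an AI agent."""
    lines = ["Visual feedback from the reviewer:"]
    for a in annotations:
        lines.append(f"- element `{a.selector}`: {a.comment}")
    return "\n".join(lines)

notes = [
    Annotation("header .logo", "logo overflows its container on mobile"),
    Annotation("#submit-btn", "button color should match the brand palette"),
]
print(render_feedback(notes))
```

The point is that feedback anchored to a concrete element is far easier for an agent to map back to source code than a free-form screenshot description.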
In the local-LLM world, headlines claiming "up to ◯× faster" appear almost weekly. This week, two landed at once. One is Google's own "Multi-Token Prediction (MTP)" wrapper for the Gemma 4 family; the other is "MTPLX", an MLX fork dedicated to Apple Silicon. The former claims up to 3×, the latter 2.24× — very attractive if you judge by the numbers alone. As usual, I checked whether they run on an 8GB MacBook Neo. The short answer: neither runs in my current environment. But the reasons they fail differ, and each turned out to be an interesting lens on current local-LLM optimization trends, so I have written them up here. Part 1: the Gemma 4 MTP wrapper (straight from Google). In a blog post published in May 2026, "Accelerating Gemma 4…
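For readers unfamiliar with where MTP-style speedups come from: several tokens are drafted per forward pass, and only the prefix the base model agrees with is kept (the same acceptance idea as speculative decoding). A toy acceptance loop, with a made-up deterministic "verifier" standing in for the base model:

```python
def mtp_accept(draft_tokens, verify_token):
    """Keep the longest prefix of drafted tokens that the base model
    would also have produced; the first mismatch stops acceptance."""
    accepted = []
    for t in draft_tokens:
        if verify_token(accepted, t):   # base model agrees with this draft
            accepted.append(t)
        else:
            break
    return accepted

# Toy verifier: the "base model" deterministically continues 1, 2, 3, ...
verify = lambda prefix, tok: tok == len(prefix) + 1
print(mtp_accept([1, 2, 3, 9], verify))   # → [1, 2, 3]
```

The advertised "up to 3×" figures correspond to the best case where most drafted tokens are accepted; when acceptance is low, the extra drafting work can even slow things down.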
※ "――" in the Japanese column denotes a Japanese-specialized model (Llama 3.1-based, further trained on Japanese data). ※ Licenses: Apache 2.0 and MIT allow commercial use and free modification; Meta/Llama Community allows commercial use but requires agreeing to the terms of use. Qwen is strong in Japanese and its license is permissive — at first I thought that settled it. It should have. Then Qwen started to creak. While researching Qwen 3.5 before adopting it, I came across a disturbing article. I fact-checked it, and it all turned out to be true. Honestly, I did not expect it to be this bad. What happened from March 2026 onward: the tech lead (Lin Junyang) was effectively removed — a "resignation" was announced, but team members hinted it was not his own decision; core members left for Meta and ByteDance; his successor is a Gemini developer recruited from Google DeepMind, and the commercializ…
Privacy-audit specialist Alexander Han points out that "Google Chrome" downloads an on-device AI model of roughly 4 GB to users' PCs without asking for explicit confirmation. "Google Chrome silently installs a 4 GB AI model on your device without consent. At a billion-device scale the climate costs are insane." — That Privacy Guy! https://www.thatprivacyguy.com/blog/chrome-silent-nano-install/ The file in question is named "weights.bin" and is stored in a folder called "OptGuideOnDeviceModel" inside the Chrome profile…
When getting started with AI on a local PC, sorting out Docker configuration and GPU-driver dependencies can be a real time sink. "Puget Systems Docker App Packs", published by workstation maker Puget Systems, is an open-source setup tool that automatically builds execution environments for generative AI and large language models with a single command. From image generation with "ComfyUI" to a team-facing local LLM server, you simply pick from multiple environment templates matched to your use case and can start an AI workflow right away. Puget-Systems/puget-docker-app-packs: Puget Systems App Pack is Designed to Get AI & Data Scientists https://github.com/Puget-Syste
A hands-on workshop where you write every piece of a GPT training pipeline yourself, understanding what each component does and why. Andrej Karpathy's nanoGPT was my first real exposure to LLMs and transformers. Seeing how a working language model could be built in a few hundred lines of PyTorch completely changed how I thought about AI and inspired me to go deeper into the space. This workshop is
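The "few hundred lines of PyTorch" in nanoGPT center on one component: causal self-attention. As a flavor of what the workshop builds, here is a minimal NumPy rendition of that single block (random projection weights, no training — a sketch of the mechanism, not nanoGPT's actual code):

```python
import numpy as np

def causal_self_attention(x, n_head):
    """Single-block causal self-attention, the core of a GPT layer.
    x: (T, C) token activations; returns (T, C)."""
    T, C = x.shape
    rng = np.random.default_rng(0)
    Wqkv = rng.normal(scale=0.02, size=(C, 3 * C))   # fused q/k/v projection
    q, k, v = np.split(x @ Wqkv, 3, axis=1)
    hs = C // n_head                                  # per-head dimension
    out = np.zeros_like(x)
    for h in range(n_head):
        qh, kh, vh = (m[:, h * hs:(h + 1) * hs] for m in (q, k, v))
        att = qh @ kh.T / np.sqrt(hs)
        # causal mask: token t may only attend to tokens <= t
        att = np.where(np.tril(np.ones((T, T))) == 1, att, -np.inf)
        att = np.exp(att - att.max(axis=1, keepdims=True))
        att /= att.sum(axis=1, keepdims=True)
        out[:, h * hs:(h + 1) * hs] = att @ vh
    return out

y = causal_self_attention(np.zeros((8, 32)), n_head=4)
print(y.shape)  # (8, 32)
```

Everything else in a GPT — embeddings, MLP blocks, layer norm, the training loop — wraps around this routine.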
Local LLMs compared across 6 model sizes: gemma3 / qwen3 / gpt-oss measured with Ollama. Using Ollama, we quantitatively benchmarked locally runnable LLMs from 3 families and 6 sizes across 5 use-case categories. Use the results as input for deciding which model and size to pick. Experimental environment and settings. Hardware: Item | Value
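For measurements like these, Ollama's /api/generate response includes eval_count (tokens generated) and eval_duration (in nanoseconds), from which a tokens-per-second figure follows directly. A small helper, applied to a made-up sample response (not a real measurement):

```python
def tokens_per_second(resp: dict) -> float:
    """Ollama reports eval_duration in nanoseconds."""
    return resp["eval_count"] / (resp["eval_duration"] / 1e9)

# Dict shaped like an Ollama /api/generate response (values invented)
sample = {"eval_count": 240, "eval_duration": 4_000_000_000}
print(round(tokens_per_second(sample), 1))  # → 60.0
```

The same response also carries prompt_eval_count and prompt_eval_duration, so prompt-processing speed can be computed the same way.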
The Gemma 4 series can be sorted into two groups by target device and performance: E2B and E4B, lightweight models that run on an ordinary laptop; and 26B-A4B, a high-performance model that needs a high-end machine. All of them are highly practical, so you can simply pick whichever your hardware can run. They are provided under the Apache 2.0 license, so commercial use is permitted as well as personal use. Specialist sites rate them highly, too. Google says it "developed Gemma 4 to push the limits of local LLMs, building on the research and technology behind Gemini 3" — and that is not mere marketing copy. A performance chart of selected local LLMs on the AI evaluation site "Arena.ai": gemma-4-31B and gemma-4-26B-A4B score high relative to their size. Image: Google. On the AI evaluation site "Arena.ai"…
"Genuinely practical": trying a free "Claude Code" with "OpenCode" × a local LLM. 2026.05.02 13:00 45,597 Takumi Kamiyama. Running the open-source AI coding agent "OpenCode" on the local LLM "Gemma-4-26b-a4b" to build a simple game. Image: Takumi Kamiyama. AI can do all sorts of things, but it can also get expensive — which is exactly why this has been on my mind lately. New AI apps and services keep appearing, but their running costs are often steep, and I found myself searching daily for an AI app that is cheap to run and actually useful. Then I found a combination that fits. I tried the open-source AI coding agent "OpenCode" together with Google's latest local LLM "Gemma 4", and…
Qwen3.6 and Gemma 4 released similar-sized models at about the same time, so let's compare them. Qwen3.6-35B-A3B: Agentic Coding Power, Now Open to All. Qwen3.6-27B: Flagship-Level Coding in a 27B Dense Model. Gemma 4 model card | Google AI for Developers. Only for Gemma 4 31B: I previously tested Q4_K_M, and am now testing Q4_K_XL-UD.

Model            | Released | 4K     | 36K    | 260K   | Input | Output
Qwen3.6-35B-A3B  | 4/15     | 23.7GB | 24.4GB | 28.6GB | 2.3s  | 73.7 tok/sec
Qwen3.6-27B      | 4/22     | 20.0GB | 24.5GB | 49.8GB | 9.7s  | 25.9 tok/sec
Gemma 4 26B-A4B
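A rough way to sanity-check file sizes like those above: a 4-bit GGUF quantization such as Q4_K_M stores roughly 4.5–5 bits per parameter once quantization scales and a few higher-precision tensors are included. The bits-per-weight figure below is an assumption for illustration, not a measured value:

```python
def gguf_size_gb(params_billion: float, bits_per_weight: float = 4.8) -> float:
    """Approximate on-disk size of a quantized model file.
    bits_per_weight ~4.8 is an assumed average for Q4_K_M-class quants."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# e.g. a 35B-parameter model at ~4.8 bits/weight
print(round(gguf_size_gb(35), 1))  # → 21.0
```

Actual files come out somewhat larger or smaller depending on which tensors the quant keeps at higher precision, which is why published sizes never match the back-of-envelope number exactly.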
# start a disposable Python container and install the dependencies
docker run -it --name hf python:3.14 bash
apt update && apt -y install vim
pip install transformers torch

# inside the container, run a minimal text-generation pipeline
from transformers import pipeline
pipe = pipeline(task="text-generation", model="distilgpt2")
print(pipe("Hello"))

Can I Run AI locally? Overview: https://www.canirun.ai/ A site that uses technologies such as WebGPU to inspect the specs of the visiting PC and list models suited to those specs. It also shows estimates of download size, memory consumption, and speed. Llama.cpp Overview: LLaMa, Mistral, Gemma
LLMs have been showing limitations when it comes to cultural coverage and competence, and in some cases show regional biases such as amplifying Western and Anglocentric viewpoints. While there have been works analysing the cultural capabilities of LLMs, there has not been specific work on highlighting LLM regional preferences when it comes to cultural-related questions. In this work, we propose a