NOV 21, 2024 / Mobile
The winners of the Gemini API Developer Competition showcased the potential of the Gemini API in creating impactful solutions, from AI-powered personal assistants to tools for accessibility and creativity.
NOV 19, 2024 / Firebase
Explore Firebase's new AI-powered app development tools and resources, including demos, documentation, and best practices at Firebase Demo Day 2024.
NOV 14, 2024 / Gemini
Integrating Gemini 1.5 models with Sublayer's Ruby-based AI agent framework lets development teams automate their documentation process, streamline workflows, and build AI-driven applications.
NOV 13, 2024 / Gemma
vLLM's continuous batching and Dataflow's model manager optimize LLM serving and simplify deployment, a powerful combination that helps developers build high-performance LLM inference pipelines more efficiently.
OCT 23, 2024 / Cloud
The Responsible Generative AI Toolkit is expanding with new features that support responsible AI development with any LLM, including SynthID Text for watermarking AI-generated text.
OCT 08, 2024 / Cloud
With the launch of the Google Chat API, developers can build Chat apps that connect Google Chat with other systems for real-time collaboration.
OCT 02, 2024 / Gemma
This season's showcase features new applications of Gemma, including a personal AI code assistant and projects for non-English tasks and business email processing.
SEP 26, 2024 / AI
Vertex AI Prompt Optimizer, a new managed, automated prompt engineering service, saves time and effort on prompt engineering while delivering high-performing prompts ready for your generative AI applications.
SEP 03, 2024 / DeepMind
Controlled Generation for Gemini 1.5 Pro and Flash improves the handoff from data science teams to developers, making AI output easier to integrate and ensuring generated responses adhere to a defined schema.
AUG 16, 2024 / Gemma
Use the Gemma language model to gauge customer sentiment, summarize conversations, and assist with crafting responses in near real time with minimal latency.