Speech-to-Text - Release notes

November 13, 2025

Feature

Speech-to-Text has launched chirp_3 in Public Preview in the asia-south1, europe-west2, europe-west3, and northamerica-northeast1 regions.

For more information about the Chirp 3 model, see Chirp 3 Transcription: Enhanced multilingual accuracy.

October 13, 2025

Feature

Speech-to-Text is excited to announce the General Availability (GA) of Chirp 3: Transcription, the latest generation of Google's multilingual, Automatic Speech Recognition (ASR)-specific generative model, delivering state-of-the-art ASR accuracy and multilingual capabilities. Available exclusively in the Speech-to-Text API V2, Chirp 3 delivers significant improvements in transcription accuracy and speed over previous versions. Under the new chirp_3 model identifier, you can now leverage powerful new capabilities, including speaker diarization to identify different speakers and automatic language detection for multilingual audio. The model supports all major recognition methods (StreamingRecognize, Recognize, and BatchRecognize), making it suitable for both real-time and batch processing. Chirp 3 also offers advanced features such as speech adaptation for custom vocabularies and a built-in denoiser that improves results on noisy audio.

To explore the new Chirp 3: Transcription model's capabilities and learn how to leverage its full potential, please visit our updated documentation page.
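For illustration, here is a minimal Python sketch of a synchronous Recognize call against the V2 API with the chirp_3 model. The project ID, region, and file name are placeholders rather than details from this note, and us-central1 is only an assumed region; check the documentation for current availability.

    # Minimal sketch: synchronous recognition with chirp_3 (Speech-to-Text V2).
    # PROJECT_ID, REGION, and audio.wav are placeholders, not from this release note.
    from google.api_core.client_options import ClientOptions
    from google.cloud.speech_v2 import SpeechClient
    from google.cloud.speech_v2.types import cloud_speech

    PROJECT_ID = "your-project-id"  # placeholder
    REGION = "us-central1"          # assumed region; confirm chirp_3 availability

    client = SpeechClient(
        client_options=ClientOptions(api_endpoint=f"{REGION}-speech.googleapis.com")
    )

    config = cloud_speech.RecognitionConfig(
        auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
        language_codes=["en-US"],
        model="chirp_3",
    )

    with open("audio.wav", "rb") as f:
        content = f.read()

    response = client.recognize(
        request=cloud_speech.RecognizeRequest(
            recognizer=f"projects/{PROJECT_ID}/locations/{REGION}/recognizers/_",
            config=config,
            content=content,
        )
    )
    for result in response.results:
        print(result.alternatives[0].transcript)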

August 29, 2025

Feature

Speech-to-Text has just launched chirp_3 in Public Preview. With this Public Preview, Chirp 3: Transcription expands transcription support to more than 85 languages and locales, and adds support for StreamingRecognize and SyncRecognize requests for real-time and short-form audio. Under the chirp_3 model flag, you can experience significant improvements in accuracy and speed, and leverage powerful features like speaker diarization and language-agnostic transcription.

To explore the new Chirp 3: Transcription model's capabilities and learn how to leverage its full potential, please visit our official documentation page.
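To give a flavor of the StreamingRecognize flow, here is a hedged Python sketch against the V2 API; the project, region, and chunking scheme are illustrative assumptions. The first streaming request carries the configuration, and later requests carry only audio bytes.

    # Sketch: streaming recognition with chirp_3 (Speech-to-Text V2).
    # Project, region, and chunk size are illustrative assumptions.
    from google.api_core.client_options import ClientOptions
    from google.cloud.speech_v2 import SpeechClient
    from google.cloud.speech_v2.types import cloud_speech

    REGION = "us-central1"  # assumed region
    RECOGNIZER = f"projects/your-project-id/locations/{REGION}/recognizers/_"

    client = SpeechClient(
        client_options=ClientOptions(api_endpoint=f"{REGION}-speech.googleapis.com")
    )

    streaming_config = cloud_speech.StreamingRecognitionConfig(
        config=cloud_speech.RecognitionConfig(
            auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
            language_codes=["en-US"],
            model="chirp_3",
        )
    )

    def requests(chunks):
        # The first message carries the config; subsequent messages carry audio.
        yield cloud_speech.StreamingRecognizeRequest(
            recognizer=RECOGNIZER, streaming_config=streaming_config
        )
        for chunk in chunks:
            yield cloud_speech.StreamingRecognizeRequest(audio=chunk)

    with open("audio.wav", "rb") as f:  # stand-in for a live audio source
        chunks = iter(lambda: f.read(25600), b"")
        for response in client.streaming_recognize(requests=requests(chunks)):
            for result in response.results:
                print(result.alternatives[0].transcript)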

April 11, 2025

Feature

Speech-to-Text has launched chirp_3 in Private Preview. Chirp 3: Transcription is the latest generation of Google's multilingual, Automatic Speech Recognition (ASR)-specific generative models, further enhancing ASR accuracy and multilingual capabilities. Under the new chirp_3 model flag, you can experience significant improvements in accuracy and speed, and leverage powerful new features like speaker diarization and language-agnostic transcription. Chirp 3 supports BatchRecognize requests within the Speech-to-Text v2 API, making it ideal for transcribing long-form audio.

To explore the new Chirp 3: Transcription model's capabilities and to learn how to leverage its full potential, please visit our official documentation page. To gain access to this Private Preview, please contact our sales team.
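As an illustration of the BatchRecognize path for long-form audio, here is a hedged Python sketch. The project, region, and gs:// URIs are placeholders, and the inline-output field names reflect recent google-cloud-speech client versions.

    # Sketch: batch recognition of long-form audio with chirp_3 (Speech-to-Text V2).
    # Project, region, and gs:// URIs are placeholders.
    from google.api_core.client_options import ClientOptions
    from google.cloud.speech_v2 import SpeechClient
    from google.cloud.speech_v2.types import cloud_speech

    REGION = "us-central1"  # assumed region
    AUDIO_URI = "gs://your-bucket/long-audio.wav"

    client = SpeechClient(
        client_options=ClientOptions(api_endpoint=f"{REGION}-speech.googleapis.com")
    )

    request = cloud_speech.BatchRecognizeRequest(
        recognizer=f"projects/your-project-id/locations/{REGION}/recognizers/_",
        config=cloud_speech.RecognitionConfig(
            auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
            language_codes=["en-US"],
            model="chirp_3",
        ),
        files=[cloud_speech.BatchRecognizeFileMetadata(uri=AUDIO_URI)],
        recognition_output_config=cloud_speech.RecognitionOutputConfig(
            inline_response_config=cloud_speech.InlineOutputConfig(),
        ),
    )

    operation = client.batch_recognize(request=request)
    response = operation.result(timeout=900)  # long-running operation
    for result in response.results[AUDIO_URI].inline_result.transcript.results:
        print(result.alternatives[0].transcript)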

January 27, 2025

Feature

The Chirp 2 model is now generally available (GA) in Speech-to-Text in asia-southeast1, us-central1, and europe-west4.

For more information about the Chirp 2 model, see Chirp 2: Enhanced multilingual accuracy. For code samples, see Get started with Chirp 2 using Speech-to-Text V2 SDK in GitHub.
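Chirp 2 is addressed through regional endpoints, so a client must point at one of the regions above. A minimal sketch, with the project ID as a placeholder:

    # Sketch: pointing the V2 client at a region where chirp_2 is GA.
    from google.api_core.client_options import ClientOptions
    from google.cloud.speech_v2 import SpeechClient

    REGION = "us-central1"  # or asia-southeast1 / europe-west4, per this release

    client = SpeechClient(
        client_options=ClientOptions(api_endpoint=f"{REGION}-speech.googleapis.com")
    )
    recognizer = f"projects/your-project-id/locations/{REGION}/recognizers/_"
    # Requests made with this client select the model via model="chirp_2"
    # in their RecognitionConfig.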

October 07, 2024

Feature

Speech-to-Text has updated the Generally Available Chirp 2 model, further enhancing its ASR accuracy and multilingual capabilities. Under the existing chirp_2 model flag, you can experience significant improvements in accuracy and speed, as well as support for word-level timestamps, model adaptation, and speech translation. Finally, Chirp 2 now supports StreamingRecognize requests, in addition to the already supported sync and batch recognition requests, allowing its use in real-time applications.

Explore the new chirp_2 model's capabilities and learn how to leverage its full potential by visiting our updated documentation and tutorials.
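As one example of the new capabilities, the sketch below requests word-level timestamps through the V2 RecognitionFeatures field; the project, region, and file name are placeholders.

    # Sketch: word-level timestamps with chirp_2 (Speech-to-Text V2).
    # Project, region, and file name are placeholders.
    from google.api_core.client_options import ClientOptions
    from google.cloud.speech_v2 import SpeechClient
    from google.cloud.speech_v2.types import cloud_speech

    REGION = "us-central1"
    client = SpeechClient(
        client_options=ClientOptions(api_endpoint=f"{REGION}-speech.googleapis.com")
    )

    config = cloud_speech.RecognitionConfig(
        auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
        language_codes=["en-US"],
        model="chirp_2",
        features=cloud_speech.RecognitionFeatures(enable_word_time_offsets=True),
    )

    with open("audio.wav", "rb") as f:
        response = client.recognize(
            request=cloud_speech.RecognizeRequest(
                recognizer=f"projects/your-project-id/locations/{REGION}/recognizers/_",
                config=config,
                content=f.read(),
            )
        )

    for result in response.results:
        for word in result.alternatives[0].words:
            print(word.word, word.start_offset, word.end_offset)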

January 09, 2024

Feature

Model adaptation is now available for latest_long models in 13 languages. Model adaptation quality has also been substantially improved for latest_short models. To determine whether this feature is available for your language, see Language support.

January 08, 2024

Feature

Speech-to-Text has launched a new model, named chirp_telephony, to bring the accuracy gains of our chirp model to telephony-specific use cases. The new model is a version of our very successful chirp model, which is based on the Universal Speech Model (USM) architecture, fine-tuned on audio that originates from phone calls, typically recorded at an 8 kHz sampling rate. For more information, see Speech-to-Text supported languages.

November 06, 2023

Feature

Speech-to-Text has launched two models, named telephony and telephony_short. The two models are customized to recognize audio that originates from a phone call, and they correspond to the most recent versions of the existing phone_call model. For more information, see Speech-to-Text supported languages.

February 07, 2023

Announcement

We are removing the SpeechContext.strength field within the next 4 weeks because it has been deprecated and unused for more than a year. The documentation no longer references this field, and clients are not expected to use it.

November 11, 2022

Change

Speech-to-Text has updated its pricing policy. Enhanced models are no longer priced differently from standard models; usage of all models is now reported and priced as standard-model usage. Also, all Cloud Speech-to-Text requests are now rounded up to the nearest 1 second, with no minimum audio length (requests were previously rounded up to the nearest 15 seconds). For example, a 4.2-second request is now billed as 5 seconds rather than 15. See the Pricing page for details.

October 03, 2022

Feature

Speaker diarization is now available for "Latest" models in en-US. This feature recognizes multiple speakers in the same audio clip. The "Latest" models use a newer diarization model than previous models. For more information, see Speaker diarization.
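A minimal V1 Python sketch of diarization with a "Latest" model; the file name and speaker counts are illustrative.

    # Sketch: speaker diarization with a "Latest" model (Speech-to-Text V1, en-US).
    from google.cloud import speech

    client = speech.SpeechClient()

    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
        model="latest_long",
        diarization_config=speech.SpeakerDiarizationConfig(
            enable_speaker_diarization=True,
            min_speaker_count=2,  # illustrative bounds
            max_speaker_count=4,
        ),
    )

    with open("conversation.wav", "rb") as f:  # placeholder file
        audio = speech.RecognitionAudio(content=f.read())

    response = client.recognize(config=config, audio=audio)
    # With diarization, the final result aggregates all words with speaker tags.
    for word in response.results[-1].alternatives[0].words:
        print(word.speaker_tag, word.word)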

April 21, 2022

Change

"Latest" models are available in more than 20 languages. These models employ new end-to-end machine learning techniques and can improve the accuracy of your recognized speech. For more information see Latest models.

November 08, 2021

Feature

Speech-to-Text has launched two new medical speech models, which are tailored for recognition of words that are common in medical settings. See the medical models documentation for more details.

July 21, 2021

Announcement

Speech-to-Text has launched a GA version of the Spoken Emoji and Spoken Punctuation features. See the documentation for details.
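A minimal V1 sketch of both features together; the bucket URI is a placeholder.

    # Sketch: enabling Spoken Punctuation and Spoken Emoji (Speech-to-Text V1).
    from google.cloud import speech

    client = speech.SpeechClient()

    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
        enable_spoken_punctuation=True,  # e.g. the word "comma" becomes ","
        enable_spoken_emojis=True,       # e.g. "winking face" becomes the emoji
    )

    audio = speech.RecognitionAudio(uri="gs://your-bucket/dictation.wav")  # placeholder
    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        print(result.alternatives[0].transcript)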

June 28, 2021

Feature

Speech-to-Text now supports multi-region endpoints as a GA feature. See the multi-region endpoints documentation for more information.

May 24, 2021

Feature

Speech-to-Text now supports Spoken Punctuation and Spoken Emoji as Preview features. See the documentation for details.

May 07, 2021

Feature

The Speech-to-Text model adaptation feature is now generally available (GA). See the model adaptation concepts page for more information about using this feature.

March 23, 2021

Feature

Speech-to-Text now allows you to upload your long-running transcription results directly into a Cloud Storage bucket. See the asynchronous speech recognition documentation for more details.
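A minimal sketch, assuming the V1 LongRunningRecognizeRequest output_config field; the bucket and object names are placeholders.

    # Sketch: writing asynchronous transcription results to Cloud Storage
    # (Speech-to-Text V1). gs:// URIs are placeholders.
    from google.cloud import speech

    client = speech.SpeechClient()

    request = speech.LongRunningRecognizeRequest(
        config=speech.RecognitionConfig(
            encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
            sample_rate_hertz=16000,
            language_code="en-US",
        ),
        audio=speech.RecognitionAudio(uri="gs://your-bucket/long-audio.wav"),
        output_config=speech.TranscriptOutputConfig(
            gcs_uri="gs://your-bucket/results/transcript.json"
        ),
    )

    operation = client.long_running_recognize(request=request)
    operation.result(timeout=600)  # the transcript lands in the bucket when done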

March 15, 2021

Feature

Speech-to-Text has launched the Model Adaptation feature. You can now create custom classes and build phrase sets to improve your transcription results.
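A hedged sketch of building a phrase set and referencing it during recognition, using the v1p1beta1 AdaptationClient available at the time; the project ID, phrase set ID, and phrases are illustrative.

    # Sketch: creating a phrase set and using it via model adaptation (v1p1beta1).
    from google.cloud import speech_v1p1beta1 as speech

    adaptation_client = speech.AdaptationClient()
    parent = "projects/your-project-id/locations/global"  # placeholder

    phrase_set = adaptation_client.create_phrase_set(
        parent=parent,
        phrase_set_id="product-names",  # illustrative ID
        phrase_set=speech.PhraseSet(
            phrases=[{"value": "Chirp", "boost": 10.0}],
        ),
    )

    # Reference the phrase set resource in a recognition request.
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
        adaptation=speech.SpeechAdaptation(
            phrase_set_references=[phrase_set.name],
        ),
    )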

January 26, 2021

Feature

Speech-to-Text now supports regional EU and US endpoints. See the multi-region endpoints documentation for more information.

August 25, 2020

Feature

Speech-to-Text has launched the new On-Prem API. Speech-to-Text On-Prem enables easy integration of Google speech recognition technologies into your on-premises solution.

March 05, 2020

Feature

The speaker diarization, automatic punctuation, speech adaptation boost, and enhanced telephony model features are now available for new languages. See the supported languages page for a complete list.

Feature

Cloud Speech-to-Text now supports seven new languages: Burmese, Estonian, Uzbek, Punjabi, Albanian, Macedonian, and Mongolian.

Change

Class tokens are now available for general use. You can use class tokens with speech adaptation to help the model recognize concepts in your recorded audio data.
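For example, a class token can stand in for a whole category of phrases in a speech context; the surrounding text is illustrative.

    # Sketch: using a class token in speech adaptation (Speech-to-Text V1).
    # $OOV_CLASS_DIGIT_SEQUENCE tells the model to expect a digit sequence here.
    from google.cloud import speech

    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
        speech_contexts=[
            speech.SpeechContext(phrases=["Account number $OOV_CLASS_DIGIT_SEQUENCE"])
        ],
    )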

November 26, 2019

Change

Automatic punctuation is now available for general use. Cloud Speech-to-Text can insert punctuation into transcription results, including commas, periods, and question marks.
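A minimal V1 sketch; the bucket URI is a placeholder.

    # Sketch: enabling automatic punctuation (Speech-to-Text V1).
    from google.cloud import speech

    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
        sample_rate_hertz=44100,
        language_code="en-US",
        enable_automatic_punctuation=True,  # commas, periods, question marks
    )
    audio = speech.RecognitionAudio(uri="gs://your-bucket/audio.flac")  # placeholder
    response = client.recognize(config=config, audio=audio)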

July 23, 2019

Feature

Cloud Speech-to-Text has several endless streaming tutorials that demonstrate how to transcribe an infinite audio stream.

Feature

You can now use speech adaptation to provide 'hints' to Cloud Speech-to-Text when it performs speech recognition. This feature is now in beta.
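A minimal sketch of passing such 'hints' with a boost through the beta (v1p1beta1) API; the phrases and boost value are illustrative.

    # Sketch: speech adaptation 'hints' with boost (v1p1beta1, Beta at the time).
    from google.cloud import speech_v1p1beta1 as speech

    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
        speech_contexts=[
            # Bias recognition toward expected phrases; boost value is illustrative.
            speech.SpeechContext(phrases=["weather", "forecast"], boost=15.0)
        ],
    )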

June 18, 2019

Feature

Cloud Speech-to-Text has expanded the limit for streaming recognition to 5 minutes. To use streaming recognition with the 5-minute limit, you must use the v1p1beta1 API version.

Feature

Cloud Speech-to-Text now supports transcription of MP3 encoded audio data. As this feature is in beta, you must use the v1p1beta1 API version.
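A minimal v1p1beta1 sketch; the bucket URI and sample rate are placeholders.

    # Sketch: transcribing MP3 audio (v1p1beta1, Beta at the time).
    from google.cloud import speech_v1p1beta1 as speech

    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.MP3,
        sample_rate_hertz=44100,  # match the MP3's actual sample rate
        language_code="en-US",
    )
    audio = speech.RecognitionAudio(uri="gs://your-bucket/audio.mp3")  # placeholder
    response = client.recognize(config=config, audio=audio)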

April 04, 2019

Deprecated

The v1beta version of the service is no longer available for use. You must migrate your solutions to either the v1 or v1p1beta1 version of the API.

February 20, 2019

Feature

Data logging is now available for general use. When you enable data logging, you can reduce the cost of using Cloud Speech-to-Text by allowing Google to log your data in order to improve the service.

Feature

Enhanced models are now available for general use. Using enhanced models can improve audio transcription results.

Change

Using enhanced models no longer requires you to opt in to data logging. Enhanced models are available to any transcription request, at a different price than standard models.

Feature

Selecting a transcription model is now available for general use. You can select different speech recognition models when you send a request to Cloud Speech-to-Text, including a model optimized for transcribing audio data from video files.
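A minimal V1 sketch of selecting the video-optimized model (pairing it with the enhanced variant, per the change above); the bucket URI is a placeholder.

    # Sketch: selecting the video-optimized transcription model (Speech-to-Text V1).
    from google.cloud import speech

    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
        model="video",      # audio sourced from video files
        use_enhanced=True,  # use the enhanced variant where available
    )
    audio = speech.RecognitionAudio(uri="gs://your-bucket/video-audio.wav")  # placeholder
    response = client.recognize(config=config, audio=audio)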

Feature

Cloud Speech-to-Text can transcribe audio data that includes multiple channels. This feature is now available for general use.
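A minimal V1 sketch of transcribing each channel of a stereo file separately; the bucket URI is a placeholder.

    # Sketch: separate recognition per audio channel (Speech-to-Text V1).
    from google.cloud import speech

    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
        audio_channel_count=2,
        enable_separate_recognition_per_channel=True,
    )
    audio = speech.RecognitionAudio(uri="gs://your-bucket/stereo.wav")  # placeholder
    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        print(result.channel_tag, result.alternatives[0].transcript)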

Feature

You can now include more details about your audio source files in transcription requests to Cloud Speech-to-Text in the form of recognition metadata, which can improve the results of the speech recognition. This feature is now available for general use.

July 24, 2018

Feature

Cloud Speech-to-Text provides word-level confidence. Developers can use this feature to get the degree of confidence on a word-by-word level. This feature is in Beta.
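A minimal sketch via the beta (v1p1beta1) API; the bucket URI is a placeholder.

    # Sketch: word-level confidence (v1p1beta1, Beta at the time).
    from google.cloud import speech_v1p1beta1 as speech

    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
        enable_word_confidence=True,
    )
    audio = speech.RecognitionAudio(uri="gs://your-bucket/audio.wav")  # placeholder
    response = client.recognize(config=config, audio=audio)
    for word in response.results[0].alternatives[0].words:
        print(word.word, word.confidence)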

Feature

Cloud Speech-to-Text can automatically detect the language used in an audio file. To use this feature, developers must specify alternative languages in their transcription request. This feature is in Beta.
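A minimal sketch of specifying alternative languages via the beta (v1p1beta1) API; the language candidates and bucket URI are illustrative.

    # Sketch: automatic language detection via alternative languages
    # (v1p1beta1, Beta at the time).
    from google.cloud import speech_v1p1beta1 as speech

    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",                          # primary language
        alternative_language_codes=["es-ES", "fr-FR"],  # candidates to detect
    )
    audio = speech.RecognitionAudio(uri="gs://your-bucket/audio.wav")  # placeholder
    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        # Each result reports which language was detected.
        print(result.language_code, result.alternatives[0].transcript)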

Feature

Cloud Speech-to-Text can identify different speakers present in an audio file. This feature is in Beta.

Feature

Cloud Speech-to-Text can transcribe audio data that includes multiple channels. This feature is in Beta.

April 09, 2018

Feature

Cloud Speech-to-Text now provides data logging and enhanced models. Developers who want to take advantage of the enhanced speech recognition models can opt in to data logging. This feature is in Beta.

Feature

Cloud Speech-to-Text can insert punctuation into transcription results, including commas, periods, and question marks. This feature is in Beta.

Feature

You can now select different speech recognition models when you send a request to Cloud Speech-to-Text, including a model optimized for transcribing audio from video files. This feature is in Beta.

Feature

You can now include more details about your audio source files in transcription requests to Cloud Speech-to-Text in the form of recognition metadata, which can improve the results of the speech recognition. This feature is in Beta.
