Quickstart: Generate text using the Vertex AI Gemini API

In this quickstart, you send the following multimodal requests to the Vertex AI Gemini API and view the responses:

  • A text prompt
  • A prompt and an image
  • A prompt and a video file (with an audio track)

You can complete this quickstart by using a programming language SDK in your local environment or the REST API.

Prerequisites

Completing this quickstart requires you to:

  • Set up a Google Cloud project and enable the Vertex AI API
  • On your local machine:
    • Install, initialize, and authenticate with the Google Cloud CLI
    • Install the SDK for your language

Set up a Google Cloud project

Set up your Google Cloud project and enable the Vertex AI API.

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Google Cloud project.

  4. Enable the Vertex AI API.

    Enable the API
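
    Alternatively, if you already have the Google Cloud CLI installed (see the following section), you can enable the API from the command line. This is an optional shortcut, assuming PROJECT_ID is the ID of your project:

    gcloud services enable aiplatform.googleapis.com --project=PROJECT_ID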

Set up the Google Cloud CLI

On your local machine, set up and authenticate with the Google Cloud CLI. If you are familiar with the Gemini API in Google AI Studio, note that the Vertex AI Gemini API uses Identity and Access Management instead of API keys to manage access.

  1. Install and initialize the Google Cloud CLI.

  2. If you previously installed the gcloud CLI, ensure your gcloud components are updated by running this command.

    gcloud components update
  3. To authenticate with the gcloud CLI, generate a local Application Default Credentials (ADC) file by running this command. The command launches a web flow where you provide your user credentials.

    gcloud auth application-default login

    For more information, see Set up Application Default Credentials.
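
    Optionally, to confirm that the ADC file works, you can request a short-lived access token by running this command. This check isn't required; if a token prints, your local credentials are ready to use.

    gcloud auth application-default print-access-token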

Set up the SDK for your programming language

On your local machine, click one of the following tabs to install the SDK for your programming language.

Python

Install and update the Vertex AI SDK for Python by running this command.

pip3 install --upgrade "google-cloud-aiplatform>=1.64"

Node.js

Install or update the Vertex AI SDK for Node.js by running this command.

npm install @google-cloud/vertexai

Java

To add google-cloud-vertexai as a dependency, add the appropriate code for your environment.

Maven with BOM

Add the following XML to your pom.xml:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.google.cloud</groupId>
      <artifactId>libraries-bom</artifactId>
      <version>26.34.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
<dependencies>
  <dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-vertexai</artifactId>
  </dependency>
</dependencies>

Maven without BOM

Add the following to your pom.xml:

<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud-vertexai</artifactId>
  <version>0.4.0</version>
</dependency>

Gradle without BOM

Add the following to your build.gradle:

implementation 'com.google.cloud:google-cloud-vertexai:0.4.0'

Go

Review the available Vertex AI API Go packages to determine which package best meets your project's needs.

  • (Recommended) cloud.google.com/go/vertexai

    vertexai is a human-authored package that provides access to common capabilities and features.

    This package is recommended as the starting point for most developers building with the Vertex AI API. To access capabilities and features not yet covered by this package, use the auto-generated aiplatform package instead.

    To install this package, run this command.

    go get cloud.google.com/go/vertexai
  • cloud.google.com/go/aiplatform

    aiplatform is an auto-generated package.

    This package is intended for projects that require access to Vertex AI API capabilities and features not yet provided by the human-authored vertexai package.

    To install this package, run this command.

    go get cloud.google.com/go/aiplatform

C#

Install the Google.Cloud.AIPlatform.V1 package from NuGet. Use your preferred method of adding packages to your project. For example, right-click the project in Visual Studio and choose Manage NuGet Packages....
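
Alternatively, if you use the .NET CLI, you can add the package from your project directory with a command like the following (the version resolves to the latest available unless you pin one):

dotnet add package Google.Cloud.AIPlatform.V1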

REST

  1. Configure your environment variables by entering the following. Replace PROJECT_ID with the ID of your Google Cloud project.

    MODEL_ID="gemini-1.5-flash-002"
    PROJECT_ID="PROJECT_ID"
  2. Use the Google Cloud CLI to provision the endpoint by running this command.

    gcloud beta services identity create --service=aiplatform.googleapis.com --project=${PROJECT_ID}

Send a prompt to the Vertex AI Gemini API

Use the following code to send a prompt to the Vertex AI Gemini API. This sample returns a list of possible names for a specialty flower store.

You can run the code from the command line, by using an IDE, or by including the code in your application.
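
For example, if you save the Python sample that follows as a file named text_prompt.py (a name chosen here only for illustration), you can run it from your terminal like this:

python3 text_prompt.py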

Python

To send a prompt request, create a Python file (.py) and copy the following code into the file. Set the value of PROJECT_ID to the ID of your Google Cloud project. After updating the values, run the code.

import vertexai
from vertexai.generative_models import GenerativeModel

# TODO(developer): Update and un-comment below line
# PROJECT_ID = "your-project-id"
vertexai.init(project=PROJECT_ID, location="us-central1")

model = GenerativeModel("gemini-1.5-flash-002")

response = model.generate_content(
    "What's a good name for a flower shop that specializes in selling bouquets of dried flowers?"
)

print(response.text)
# Example response:
# **Emphasizing the Dried Aspect:**
# * Everlasting Blooms
# * Dried & Delightful
# * The Petal Preserve
# ...

Node.js

To send a prompt request, create a Node.js file (.js) and copy the following code into the file. Replace PROJECT_ID with the ID of your Google Cloud project. After updating the values, run the code.

const {VertexAI} = require('@google-cloud/vertexai');

/**
 * TODO(developer): Update these variables before running the sample.
 */
async function generate_from_text_input(projectId = 'PROJECT_ID') {
  const vertexAI = new VertexAI({project: projectId, location: 'us-central1'});

  const generativeModel = vertexAI.getGenerativeModel({
    model: 'gemini-1.5-flash-001',
  });

  const prompt =
    "What's a good name for a flower shop that specializes in selling bouquets of dried flowers?";

  const resp = await generativeModel.generateContent(prompt);
  const contentResponse = await resp.response;
  console.log(JSON.stringify(contentResponse));
}

Java

To send a prompt request, create a Java file (.java) and copy the following code into the file. Set your-google-cloud-project-id to your Google Cloud project ID. After updating the values, run the code.

import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import com.google.cloud.vertexai.generativeai.GenerativeModel;
import com.google.cloud.vertexai.generativeai.ResponseHandler;
import java.io.IOException;

public class TextInput {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "your-google-cloud-project-id";
    String location = "us-central1";
    String modelName = "gemini-1.5-flash-001";
    String textPrompt =
        "What's a good name for a flower shop that specializes in selling bouquets of"
            + " dried flowers?";

    String output = textInput(projectId, location, modelName, textPrompt);
    System.out.println(output);
  }

  // Passes the provided text input to the Gemini model and returns the text-only response.
  // For the specified textPrompt, the model returns a list of possible store names.
  public static String textInput(
      String projectId, String location, String modelName, String textPrompt) throws IOException {
    // Initialize client that will be used to send requests. This client only needs
    // to be created once, and can be reused for multiple requests.
    try (VertexAI vertexAI = new VertexAI(projectId, location)) {
      GenerativeModel model = new GenerativeModel(modelName, vertexAI);

      GenerateContentResponse response = model.generateContent(textPrompt);
      String output = ResponseHandler.getText(response);
      return output;
    }
  }
}

Go

To send a prompt request, create a Go file (.go) and copy the following code into the file. Replace projectID with the ID of your Google Cloud project. After updating the values, run the code.

import (
	"context"
	"encoding/json"
	"fmt"
	"io"

	"cloud.google.com/go/vertexai/genai"
)

func generateContentFromText(w io.Writer, projectID string) error {
	location := "us-central1"
	modelName := "gemini-1.5-flash-001"

	ctx := context.Background()
	client, err := genai.NewClient(ctx, projectID, location)
	if err != nil {
		return fmt.Errorf("error creating client: %w", err)
	}
	gemini := client.GenerativeModel(modelName)
	prompt := genai.Text(
		"What's a good name for a flower shop that specializes in selling bouquets of dried flowers?")

	resp, err := gemini.GenerateContent(ctx, prompt)
	if err != nil {
		return fmt.Errorf("error generating content: %w", err)
	}
	// See the JSON response in
	// https://pkg.go.dev/cloud.google.com/go/vertexai/genai#GenerateContentResponse.
	rb, err := json.MarshalIndent(resp, "", "  ")
	if err != nil {
		return fmt.Errorf("json.MarshalIndent: %w", err)
	}
	fmt.Fprintln(w, string(rb))
	return nil
}

C#

To send a prompt request, create a C# file (.cs) and copy the following code into the file. Set your-project-id to your Google Cloud project ID. After updating the values, run the code.


using Google.Cloud.AIPlatform.V1;
using System;
using System.Threading.Tasks;

public class TextInputSample
{
    public async Task<string> TextInput(
        string projectId = "your-project-id",
        string location = "us-central1",
        string publisher = "google",
        string model = "gemini-1.5-flash-001")
    {

        var predictionServiceClient = new PredictionServiceClientBuilder
        {
            Endpoint = $"{location}-aiplatform.googleapis.com"
        }.Build();
        string prompt = @"What's a good name for a flower shop that specializes in selling bouquets of dried flowers?";

        var generateContentRequest = new GenerateContentRequest
        {
            Model = $"projects/{projectId}/locations/{location}/publishers/{publisher}/models/{model}",
            Contents =
            {
                new Content
                {
                    Role = "USER",
                    Parts =
                    {
                        new Part { Text = prompt }
                    }
                }
            }
        };

        GenerateContentResponse response = await predictionServiceClient.GenerateContentAsync(generateContentRequest);

        string responseText = response.Candidates[0].Content.Parts[0].Text;
        Console.WriteLine(responseText);

        return responseText;
    }
}

REST

To send this prompt request, run the curl command from the command line or include the REST call in your application.

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/publishers/google/models/${MODEL_ID}:generateContent -d \
$'{
  "contents": {
    "role": "user",
    "parts": [
      {
        "text": "What\'s a good name for a flower shop that specializes in selling bouquets of dried flowers?"
      }
    ]
  }
}'
The model returns a response. Note that the response is generated in sections, with each section separately evaluated for safety.

Send a prompt and an image to the Vertex AI Gemini API

Use the following code to send a prompt that includes text and an image to the Vertex AI Gemini API. This sample returns a description of the provided image.

Python

To send a prompt request, create a Python file (.py) and copy the following code into the file. Set the value of PROJECT_ID to the ID of your Google Cloud project. After updating the values, run the code.

import vertexai

from vertexai.generative_models import GenerativeModel, Part

# TODO(developer): Update and un-comment below line
# PROJECT_ID = "your-project-id"
vertexai.init(project=PROJECT_ID, location="us-central1")

model = GenerativeModel("gemini-1.5-flash-002")

response = model.generate_content(
    [
        Part.from_uri(
            "gs://cloud-samples-data/generative-ai/image/scones.jpg",
            mime_type="image/jpeg",
        ),
        "What is shown in this image?",
    ]
)

print(response.text)
# That's a lovely overhead shot of a rustic-style breakfast or brunch spread.
# Here's what's in the image:
# * **Blueberry scones:** Several freshly baked blueberry scones are arranged on parchment paper.
# They look crumbly and delicious.
# ...

Node.js

To send a prompt request, create a Node.js file (.js) and copy the following code into the file. Replace PROJECT_ID with the ID of your Google Cloud project. After updating the values, run the code.

const {VertexAI} = require('@google-cloud/vertexai');

/**
 * TODO(developer): Update these variables before running the sample.
 */
async function createNonStreamingMultipartContent(
  projectId = 'PROJECT_ID',
  location = 'us-central1',
  model = 'gemini-1.5-flash-001',
  image = 'gs://generativeai-downloads/images/scones.jpg',
  mimeType = 'image/jpeg'
) {
  // Initialize Vertex with your Cloud project and location
  const vertexAI = new VertexAI({project: projectId, location: location});

  // Instantiate the model
  const generativeVisionModel = vertexAI.getGenerativeModel({
    model: model,
  });

  // For images, the SDK supports both Google Cloud Storage URI and base64 strings
  const filePart = {
    fileData: {
      fileUri: image,
      mimeType: mimeType,
    },
  };

  const textPart = {
    text: 'what is shown in this image?',
  };

  const request = {
    contents: [{role: 'user', parts: [filePart, textPart]}],
  };

  console.log('Prompt Text:');
  console.log(request.contents[0].parts[1].text);

  console.log('Non-Streaming Response Text:');

  // Generate a response
  const response = await generativeVisionModel.generateContent(request);

  // Select the text from the response
  const fullTextResponse =
    response.response.candidates[0].content.parts[0].text;

  console.log(fullTextResponse);
}

Java

To send a prompt request, create a Java file (.java) and copy the following code into the file. Set your-google-cloud-project-id to your Google Cloud project ID. After updating the values, run the code.

import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import com.google.cloud.vertexai.generativeai.ContentMaker;
import com.google.cloud.vertexai.generativeai.GenerativeModel;
import com.google.cloud.vertexai.generativeai.PartMaker;
import java.io.IOException;

public class Quickstart {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "your-google-cloud-project-id";
    String location = "us-central1";
    String modelName = "gemini-1.5-flash-001";

    String output = quickstart(projectId, location, modelName);
    System.out.println(output);
  }

  // Analyzes the provided Multimodal input.
  public static String quickstart(String projectId, String location, String modelName)
      throws IOException {
    // Initialize client that will be used to send requests. This client only needs
    // to be created once, and can be reused for multiple requests.
    try (VertexAI vertexAI = new VertexAI(projectId, location)) {
      String imageUri = "gs://generativeai-downloads/images/scones.jpg";

      GenerativeModel model = new GenerativeModel(modelName, vertexAI);
      GenerateContentResponse response = model.generateContent(ContentMaker.fromMultiModalData(
          PartMaker.fromMimeTypeAndData("image/jpeg", imageUri),
          "What's in this photo"
      ));

      return response.toString();
    }
  }
}

Go

To send a prompt request, create a Go file (.go) and copy the following code into the file. Replace projectID with the ID of your Google Cloud project. After updating the values, run the code.

import (
	"context"
	"encoding/json"
	"fmt"
	"io"

	"cloud.google.com/go/vertexai/genai"
)

func tryGemini(w io.Writer, projectID string, location string, modelName string) error {
	// location := "us-central1"
	// modelName := "gemini-1.5-flash-001"

	ctx := context.Background()
	client, err := genai.NewClient(ctx, projectID, location)
	if err != nil {
		return fmt.Errorf("error creating client: %w", err)
	}
	gemini := client.GenerativeModel(modelName)

	img := genai.FileData{
		MIMEType: "image/jpeg",
		FileURI:  "gs://generativeai-downloads/images/scones.jpg",
	}
	prompt := genai.Text("What is in this image?")

	resp, err := gemini.GenerateContent(ctx, img, prompt)
	if err != nil {
		return fmt.Errorf("error generating content: %w", err)
	}
	rb, err := json.MarshalIndent(resp, "", "  ")
	if err != nil {
		return fmt.Errorf("json.MarshalIndent: %w", err)
	}
	fmt.Fprintln(w, string(rb))
	return nil
}

C#

To send a prompt request, create a C# file (.cs) and copy the following code into the file. Set your-project-id to your Google Cloud project ID. After updating the values, run the code.


using Google.Api.Gax.Grpc;
using Google.Cloud.AIPlatform.V1;
using System.Text;
using System.Threading.Tasks;

public class GeminiQuickstart
{
    public async Task<string> GenerateContent(
        string projectId = "your-project-id",
        string location = "us-central1",
        string publisher = "google",
        string model = "gemini-1.5-flash-001"
    )
    {
        // Create client
        var predictionServiceClient = new PredictionServiceClientBuilder
        {
            Endpoint = $"{location}-aiplatform.googleapis.com"
        }.Build();

        // Initialize content request
        var generateContentRequest = new GenerateContentRequest
        {
            Model = $"projects/{projectId}/locations/{location}/publishers/{publisher}/models/{model}",
            GenerationConfig = new GenerationConfig
            {
                Temperature = 0.4f,
                TopP = 1,
                TopK = 32,
                MaxOutputTokens = 2048
            },
            Contents =
            {
                new Content
                {
                    Role = "USER",
                    Parts =
                    {
                        new Part { Text = "What's in this photo?" },
                        new Part { FileData = new() { MimeType = "image/jpeg", FileUri = "gs://generativeai-downloads/images/scones.jpg" } }
                    }
                }
            }
        };

        // Make the request, returning a streaming response
        using PredictionServiceClient.StreamGenerateContentStream response = predictionServiceClient.StreamGenerateContent(generateContentRequest);

        StringBuilder fullText = new();

        // Read streaming responses from server until complete
        AsyncResponseStream<GenerateContentResponse> responseStream = response.GetResponseStream();
        await foreach (GenerateContentResponse responseItem in responseStream)
        {
            fullText.Append(responseItem.Candidates[0].Content.Parts[0].Text);
        }

        return fullText.ToString();
    }
}

REST

You can send this prompt request from the command line or from your IDE, or you can embed the REST call into your application where appropriate.

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/publishers/google/models/${MODEL_ID}:generateContent -d \
$'{
  "contents": {
    "role": "user",
    "parts": [
      {
      "fileData": {
        "mimeType": "image/jpeg",
        "fileUri": "gs://generativeai-downloads/images/scones.jpg"
        }
      },
      {
        "text": "Describe this picture."
      }
    ]
  }
}'

The model returns a response. Note that the response is generated in sections, with each section separately evaluated for safety.

Send a prompt and a video to the Vertex AI Gemini API

Use the following code to send a prompt that includes text and a video file with an audio track to the Vertex AI Gemini API. This sample returns a description of the provided video, including anything important from the audio track.

You can send this prompt request from the command line, by using your IDE, or by including the REST call in your application.

Python

To send a prompt request, create a Python file (.py) and copy the following code into the file. Set the value of PROJECT_ID to the ID of your Google Cloud project. After updating the values, run the code.


import vertexai
from vertexai.generative_models import GenerativeModel, Part

# TODO(developer): Update and un-comment below line
# PROJECT_ID = "your-project-id"

vertexai.init(project=PROJECT_ID, location="us-central1")

model = GenerativeModel("gemini-1.5-flash-002")

prompt = """
Provide a description of the video.
The description should also contain anything important which people say in the video.
"""

video_file = Part.from_uri(
    uri="gs://cloud-samples-data/generative-ai/video/pixel8.mp4",
    mime_type="video/mp4",
)

contents = [video_file, prompt]

response = model.generate_content(contents)
print(response.text)
# Example response:
# Here is a description of the video.
# ... Then, the scene changes to a woman named Saeko Shimada..
# She says, "Tokyo has many faces. The city at night is totally different
# from what you see during the day."
# ...

Node.js

To send a prompt request, create a Node.js file (.js) and copy the following code into the file. Replace PROJECT_ID with the ID of your Google Cloud project. After updating the values, run the code.

const {VertexAI} = require('@google-cloud/vertexai');

/**
 * TODO(developer): Update these variables before running the sample.
 */
async function analyze_video_with_audio(projectId = 'PROJECT_ID') {
  const vertexAI = new VertexAI({project: projectId, location: 'us-central1'});

  const generativeModel = vertexAI.getGenerativeModel({
    model: 'gemini-1.5-flash-001',
  });

  const filePart = {
    file_data: {
      file_uri: 'gs://cloud-samples-data/generative-ai/video/pixel8.mp4',
      mime_type: 'video/mp4',
    },
  };
  const textPart = {
    text: `
    Provide a description of the video.
    The description should also contain anything important which people say in the video.`,
  };

  const request = {
    contents: [{role: 'user', parts: [filePart, textPart]}],
  };

  const resp = await generativeModel.generateContent(request);
  const contentResponse = await resp.response;
  console.log(JSON.stringify(contentResponse));
}

Java

To send a prompt request, create a Java file (.java) and copy the following code into the file. Set your-google-cloud-project-id to your Google Cloud project ID. After updating the values, run the code.


import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import com.google.cloud.vertexai.generativeai.ContentMaker;
import com.google.cloud.vertexai.generativeai.GenerativeModel;
import com.google.cloud.vertexai.generativeai.PartMaker;
import com.google.cloud.vertexai.generativeai.ResponseHandler;
import java.io.IOException;

public class VideoInputWithAudio {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "your-google-cloud-project-id";
    String location = "us-central1";
    String modelName = "gemini-1.5-flash-001";

    videoAudioInput(projectId, location, modelName);
  }

  // Analyzes the given video input, including its audio track.
  public static String videoAudioInput(String projectId, String location, String modelName)
      throws IOException {
    // Initialize client that will be used to send requests. This client only needs
    // to be created once, and can be reused for multiple requests.
    try (VertexAI vertexAI = new VertexAI(projectId, location)) {
      String videoUri = "gs://cloud-samples-data/generative-ai/video/pixel8.mp4";

      GenerativeModel model = new GenerativeModel(modelName, vertexAI);
      GenerateContentResponse response = model.generateContent(
          ContentMaker.fromMultiModalData(
              "Provide a description of the video.\n The description should also "
                  + "contain anything important which people say in the video.",
              PartMaker.fromMimeTypeAndData("video/mp4", videoUri)
          ));

      String output = ResponseHandler.getText(response);
      System.out.println(output);

      return output;
    }
  }
}

Go

To send a prompt request, create a Go file (.go) and copy the following code into the file. Replace projectID with the ID of your Google Cloud project. After updating the values, run the code.

import (
	"context"
	"errors"
	"fmt"
	"io"
	"mime"
	"path/filepath"

	"cloud.google.com/go/vertexai/genai"
)

// generateMultimodalContent shows how to send video and text prompts to a model, writing the response to
// the provided io.Writer.
func generateMultimodalContent(w io.Writer, projectID, location, modelName string) error {
	// location := "us-central1"
	// modelName := "gemini-1.5-flash-001"
	ctx := context.Background()

	client, err := genai.NewClient(ctx, projectID, location)
	if err != nil {
		return fmt.Errorf("unable to create client: %w", err)
	}
	defer client.Close()

	model := client.GenerativeModel(modelName)

	// Given a video file URL, prepare video file as genai.Part
	part := genai.FileData{
		MIMEType: mime.TypeByExtension(filepath.Ext("pixel8.mp4")),
		FileURI:  "gs://cloud-samples-data/generative-ai/video/pixel8.mp4",
	}

	res, err := model.GenerateContent(ctx, part, genai.Text(`
			Provide a description of the video.
			The description should also contain anything important which people say in the video.
	`))
	if err != nil {
		return fmt.Errorf("unable to generate contents: %w", err)
	}

	if len(res.Candidates) == 0 ||
		len(res.Candidates[0].Content.Parts) == 0 {
		return errors.New("empty response from model")
	}

	fmt.Fprintf(w, "generated response: %s\n", res.Candidates[0].Content.Parts[0])
	return nil
}

C#

To send a prompt request, create a C# file (.cs) and copy the following code into the file. Set your-project-id to your Google Cloud project ID. After updating the values, run the code.


using Google.Cloud.AIPlatform.V1;
using System;
using System.Threading.Tasks;

public class VideoInputWithAudio
{
    public async Task<string> DescribeVideo(
        string projectId = "your-project-id",
        string location = "us-central1",
        string publisher = "google",
        string model = "gemini-1.5-flash-001")
    {

        var predictionServiceClient = new PredictionServiceClientBuilder
        {
            Endpoint = $"{location}-aiplatform.googleapis.com"
        }.Build();

        string prompt = @"Provide a description of the video.
The description should also contain anything important which people say in the video.";

        var generateContentRequest = new GenerateContentRequest
        {
            Model = $"projects/{projectId}/locations/{location}/publishers/{publisher}/models/{model}",
            Contents =
            {
                new Content
                {
                    Role = "USER",
                    Parts =
                    {
                        new Part { Text = prompt },
                        new Part { FileData = new() { MimeType = "video/mp4", FileUri = "gs://cloud-samples-data/generative-ai/video/pixel8.mp4" }}
                    }
                }
            }
        };

        GenerateContentResponse response = await predictionServiceClient.GenerateContentAsync(generateContentRequest);

        string responseText = response.Candidates[0].Content.Parts[0].Text;
        Console.WriteLine(responseText);

        return responseText;
    }
}

REST

To send this prompt request, run the curl command from the command line or include the REST call in your application.

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/publishers/google/models/${MODEL_ID}:generateContent -d \
$'{
  "contents": {
    "role": "user",
    "parts": [
      {
      "fileData": {
        "mimeType": "video/mp4",
        "fileUri": "gs://cloud-samples-data/generative-ai/video/pixel8.mp4"
        }
      },
      {
        "text": "Provide a description of the video. The description should also contain anything important which people say in the video."
      }
    ]
  }
}'

The model returns a response. Note that the response is generated in sections, with each section separately evaluated for safety.

What's next