Configure connectors in the Shared VPC host project
If your organization uses Shared VPC, you can set up Serverless VPC Access connectors in either the service project or the host project. This guide shows how to set up a connector in the host project.
If you need to set up a connector in a service project, see Configure connectors in service projects. To learn about the advantages of each method, see Connecting to a Shared VPC network.
Before you begin
- Check the Identity and Access Management (IAM) roles for the account you are using. The active account must have the IAM roles required to create and manage Serverless VPC Access connectors on the host project.
- Select the host project in your preferred environment.
- Set the default project in the gcloud CLI to the host project by running the following in your terminal:
gcloud config set project HOST_PROJECT_ID
Replace the following:
- HOST_PROJECT_ID: the ID of the Shared VPC host project
Create a Serverless VPC Access connector
To send requests to your VPC network and receive the corresponding responses, you must create a Serverless VPC Access connector. You can create a connector by using the Google Cloud CLI or Terraform:
gcloud

Update gcloud components to the latest version:

gcloud components update
Enable the Serverless VPC Access API for your project:
gcloud services enable vpcaccess.googleapis.com
Create a Serverless VPC Access connector:
gcloud compute networks vpc-access connectors create CONNECTOR_NAME \
    --region=REGION \
    --subnet=SUBNET \
    --subnet-project=HOST_PROJECT_ID \
    # Optional: specify minimum and maximum instance values between 2 and 10; default is 2 min, 10 max.
    --min-instances=MIN \
    --max-instances=MAX \
    # Optional: specify machine type; default is e2-micro.
    --machine-type=MACHINE_TYPE
Replace the following:
- CONNECTOR_NAME: a name for your connector. The name must follow the Compute Engine naming convention and be less than 21 characters. Hyphens (-) count as two characters.
- REGION: a region for your connector; this must match the region of your serverless service. If your service is in the region us-central or europe-west, use us-central1 or europe-west1.
- SUBNET: the name of an unused /28 subnet.
  - Subnets must be used exclusively by the connector. They cannot be used by other resources such as VMs, Private Service Connect, or load balancers.
  - To confirm that your subnet is not used for Private Service Connect or Cloud Load Balancing, check that the subnet purpose is PRIVATE by running the following command in the gcloud CLI:
    gcloud compute networks subnets describe SUBNET_NAME
    Replace the following:
    - SUBNET_NAME: the name of your subnet
- HOST_PROJECT_ID: the ID of the host project
- MIN: the minimum number of instances to use for the connector. Use an integer between 2 and 9. Default is 2. To learn about connector scaling, see Throughput and scaling.
- MAX: the maximum number of instances to use for the connector. Use an integer between 3 and 10. Default is 10. If traffic requires it, the connector scales out to MAX instances, but does not scale back in. To learn about connector scaling, see Throughput and scaling.
- MACHINE_TYPE: f1-micro, e2-micro, or e2-standard-4. To learn about connector throughput, including machine type and scaling, see Throughput and scaling.
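For example, a complete invocation might look like the following. All of the values here are hypothetical placeholders, not values defined by this guide:

gcloud compute networks vpc-access connectors create my-connector \
    --region=us-central1 \
    --subnet=connector-subnet \
    --subnet-project=my-host-project \
    --min-instances=2 \
    --max-instances=10 \
    --machine-type=e2-micro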
For more details and optional arguments, see the gcloud reference.

Verify that your connector is in the READY state before using it:

gcloud compute networks vpc-access connectors describe CONNECTOR_NAME \
    --region=REGION
Replace the following:
- CONNECTOR_NAME: the name of your connector; this is the name that you specified in the previous step
- REGION: the region of your connector; this is the region that you specified in the previous step

The output should contain the line state: READY.
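If you only want the state field, the describe command accepts gcloud's standard --format flag. For example (the connector name and region are hypothetical):

gcloud compute networks vpc-access connectors describe my-connector \
    --region=us-central1 \
    --format="value(state)"

This prints READY once the connector is ready to use.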
Terraform
You can use a Terraform resource to enable the vpcaccess.googleapis.com API.
You can use Terraform modules to create a VPC network and subnet and then create the connector.
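As a minimal sketch, assuming an existing, unused /28 subnet in the host project (the project, subnet, and connector names below are hypothetical, and the instance and machine-type values mirror the gcloud defaults above), the configuration might look like this:

# Enable the Serverless VPC Access API in the host project.
resource "google_project_service" "vpcaccess" {
  project = "my-host-project"
  service = "vpcaccess.googleapis.com"
}

# Create the connector on the existing subnet in the host project.
resource "google_vpc_access_connector" "connector" {
  name    = "my-connector"
  project = "my-host-project"
  region  = "us-central1"

  subnet {
    name       = "connector-subnet"
    project_id = "my-host-project"
  }

  min_instances = 2
  max_instances = 10
  machine_type  = "e2-micro"

  depends_on = [google_project_service.vpcaccess]
}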
Enable Cloud Run functions for the service project
Enable the Cloud Run functions API for the service project. This is necessary for adding IAM roles in subsequent steps and for the service project to use Cloud Run functions.
Run the following in your terminal:
gcloud services enable cloudfunctions.googleapis.com --project=SERVICE_PROJECT_ID
Replace the following:
- SERVICE_PROJECT_ID: the ID of the service project
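One way to confirm that the API is enabled is to list the service project's enabled services and look for cloudfunctions.googleapis.com:

gcloud services list --enabled --project=SERVICE_PROJECT_ID | grep cloudfunctions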
Provide access to the connector
Provide access to the connector by granting the service project's Cloud Run functions service agent the Serverless VPC Access User IAM role on the host project. You must also grant the same role to the Cloud Run service agent.
Run the following command in your terminal:
gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member=serviceAccount:service-SERVICE_PROJECT_NUMBER@gcf-admin-robot.iam.gserviceaccount.com \
    --role=roles/vpcaccess.user
Replace the following:
- HOST_PROJECT_ID: the ID of the Shared VPC host project
- SERVICE_PROJECT_NUMBER: the project number associated with the service project. This is different from the project ID. You can find the project number by running the following command:
  gcloud projects describe SERVICE_PROJECT_ID
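For example, to print only the project number, you can use gcloud's standard output formatting:

gcloud projects describe SERVICE_PROJECT_ID --format="value(projectNumber)"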
Also grant the role to the Cloud Run Service Agent by running the following command:
gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member=serviceAccount:service-SERVICE_PROJECT_NUMBER@serverless-robot-prod.iam.gserviceaccount.com \
    --role=roles/vpcaccess.user
Make the connector discoverable
On the host project's IAM policy, you must grant the following two predefined roles to the principals who deploy Cloud Run services:
- Serverless VPC Access Viewer (vpcaccess.viewer): Required.
- Compute Network Viewer (compute.networkViewer): Optional but recommended. Allows the IAM principal to enumerate subnets in the Shared VPC network.

Alternatively, you can use custom roles or other predefined roles that include all the permissions of the Serverless VPC Access Viewer (vpcaccess.viewer) role.
Run the following commands in your terminal:
gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member=PRINCIPAL \
    --role=roles/vpcaccess.viewer

gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member=PRINCIPAL \
    --role=roles/compute.networkViewer
Replace the following:
- HOST_PROJECT_ID: the ID of the Shared VPC host project
- PRINCIPAL: the principal who deploys Cloud Run services. Learn more about the --member flag.
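To spot-check the bindings afterward, you can inspect the host project's IAM policy. This is a sketch; the filter expression assumes the role names used above:

gcloud projects get-iam-policy HOST_PROJECT_ID \
    --flatten="bindings[].members" \
    --filter="bindings.role:roles/vpcaccess.viewer" \
    --format="table(bindings.members)"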
Configure your function to use the connector
For each function that requires access to your Shared VPC, you must specify the connector for the function. The following steps show how to configure your function to use a connector.
Set the gcloud CLI to use the project containing the function:
gcloud config set project PROJECT_ID

Replace the following:
- PROJECT_ID: the ID of the project containing the function that requires access to your Shared VPC. If the function is in the host project, this is the host project ID. If the function is in a service project, this is the service project ID.
Use the --vpc-connector flag and deploy your function:

gcloud functions deploy FUNCTION_NAME --vpc-connector=CONNECTOR_NAME
Replace the following:
- FUNCTION_NAME: the name of your function
- CONNECTOR_NAME: the name of your connector. Use the fully qualified name when deploying from a Shared VPC service project (as opposed to the host project), for example:
  projects/HOST_PROJECT_ID/locations/CONNECTOR_REGION/connectors/CONNECTOR_NAME
  where HOST_PROJECT_ID is the ID of the host project, CONNECTOR_REGION is the region of your connector, and CONNECTOR_NAME is the name that you gave your connector.
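For example, a deployment from a service project might look like the following. The function, project, and connector names are hypothetical, and other required deployment flags, such as the runtime and trigger, are omitted for brevity:

gcloud functions deploy my-function \
    --vpc-connector=projects/my-host-project/locations/us-central1/connectors/my-connector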
For more control over which requests are routed through the connector, see Egress settings.
Next steps
- Connect to Memorystore from Cloud Run functions.
- Configure network settings for Cloud Run functions.
- Monitor admin activity with Serverless VPC Access audit logging.
- Protect resources and data by creating a service perimeter with VPC Service Controls.
- Learn about the Identity and Access Management (IAM) roles associated with Serverless VPC Access. See Serverless VPC Access roles in the IAM documentation for a list of permissions associated with each role.