Configure connectors in the Shared VPC host project
If your organization uses Shared VPC, you can set up Serverless VPC Access connectors in either the service project or the host project. This guide shows how to set up a connector in the host project.
If you need to set up a connector in a service project, see Configure connectors in service projects. To learn about the advantages of each method, see Connecting to a Shared VPC network.
Before you begin
- Check the Identity and Access Management (IAM) roles for the account you are currently using. The active account must have the required roles on the host project.
Select the host project in your preferred environment.
Console
Open the Google Cloud console dashboard.
In the menu bar at the top of the dashboard, click the project dropdown menu and select the host project.
gcloud
Set the default project in the gcloud CLI to the host project by running the following in your terminal:
gcloud config set project HOST_PROJECT_ID
Replace the following:
HOST_PROJECT_ID: the ID of the Shared VPC host project
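To confirm that the default project changed, you can print the active project. This is an optional sanity check, not a required step, and it assumes the gcloud CLI is installed and initialized:

```shell
# Prints the project ID that subsequent gcloud commands will target.
gcloud config get-value project
```

The output should be the host project ID you just set.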
Create a Serverless VPC Access connector
To send requests to your VPC network and receive the corresponding responses, you must create a Serverless VPC Access connector. You can create a connector by using the Google Cloud console, Google Cloud CLI, or Terraform:
Console
Enable the Serverless VPC Access API for your project.
Go to the Serverless VPC Access overview page.
Click Create connector.
In the Name field, enter a name for your connector. The name must follow the Compute Engine naming convention and be less than 21 characters. Hyphens (-) count as two characters.
In the Region field, select a region for your connector. This must match the region of your serverless service. If your service is in the region us-central or europe-west, use us-central1 or europe-west1.
In the Network field, select the VPC network to attach your connector to.
Click the Subnetwork pulldown menu and select an unused /28 subnet.
- Subnets must be used exclusively by the connector. They cannot be used by other resources such as VMs, Private Service Connect, or load balancers.
- To confirm that your subnet is not used for Private Service Connect or Cloud Load Balancing, check that the subnet purpose is PRIVATE by running the following command in the gcloud CLI:
gcloud compute networks subnets describe SUBNET_NAME
Replace SUBNET_NAME with the name of your subnet.
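For a non-interactive check, the same verification can be narrowed with flags that the subnets command supports: --region avoids the interactive region prompt, and --format prints only the purpose field. A sketch, with placeholder values:

```shell
# Prints only the subnet's purpose field; a usable subnet prints PRIVATE.
gcloud compute networks subnets describe SUBNET_NAME \
  --region=REGION \
  --format="value(purpose)"
```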
(Optional) To set scaling options for additional control over the connector, click Show Scaling Settings to display the scaling form.
- Set the minimum and maximum number of instances for your connector, or use the defaults, which are 2 (min) and 10 (max). The connector scales out to the maximum specified as traffic increases, but it does not scale back in when traffic decreases. You must use values between 2 and 10, and the MIN value must be less than the MAX value.
- In the Instance Type pulldown menu, choose the machine type to be used for the connector, or use the default e2-micro. As you choose the instance type, note the cost sidebar on the right, which displays bandwidth and cost estimates.
Click Create.
A green check mark appears next to the connector's name when it is ready to use.
gcloud
Update gcloud components to the latest version:
gcloud components update
Enable the Serverless VPC Access API for your project:
gcloud services enable vpcaccess.googleapis.com
Create a Serverless VPC Access connector:
gcloud compute networks vpc-access connectors create CONNECTOR_NAME \
  --region=REGION \
  --subnet=SUBNET \
  --subnet-project=HOST_PROJECT_ID \
  --min-instances=MIN \
  --max-instances=MAX \
  --machine-type=MACHINE_TYPE
The --min-instances, --max-instances, and --machine-type flags are optional. The defaults are a minimum of 2 instances, a maximum of 10 instances, and the e2-micro machine type.
Replace the following:
CONNECTOR_NAME: a name for your connector. The name must follow the Compute Engine naming convention and be less than 21 characters. Hyphens (-) count as two characters.
REGION: a region for your connector; this must match the region of your serverless service. If your service is in the region us-central or europe-west, use us-central1 or europe-west1.
SUBNET: the name of an unused /28 subnet.
- Subnets must be used exclusively by the connector. They cannot be used by other resources such as VMs, Private Service Connect, or load balancers.
- To confirm that your subnet is not used for Private Service Connect or Cloud Load Balancing, check that the subnet purpose is PRIVATE by running the following command in the gcloud CLI:
gcloud compute networks subnets describe SUBNET_NAME
Replace SUBNET_NAME with the name of your subnet.
HOST_PROJECT_ID: the ID of the host project
MIN: the minimum number of instances to use for the connector. Use an integer between 2 and 9. Default is 2. To learn about connector scaling, see Throughput and scaling.
MAX: the maximum number of instances to use for the connector. Use an integer between 3 and 10. Default is 10. If traffic requires it, the connector scales out to MAX instances, but does not scale back in. To learn about connector scaling, see Throughput and scaling.
MACHINE_TYPE: f1-micro, e2-micro, or e2-standard-4. To learn about connector throughput, including machine type and scaling, see Throughput and scaling.
For more details and optional arguments, see the gcloud reference.
Verify that your connector is in the READY state before using it:
gcloud compute networks vpc-access connectors describe CONNECTOR_NAME \
  --region=REGION
Replace the following:
CONNECTOR_NAME: the name of your connector; this is the name that you specified in the previous step
REGION: the region of your connector; this is the region that you specified in the previous step
The output should contain the line state: READY.
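If you create connectors from a script, you can poll for the READY state rather than checking once. A minimal sketch, assuming an authenticated gcloud CLI and the same describe command shown above:

```shell
# Poll every 10 seconds until the connector reports READY.
until [ "$(gcloud compute networks vpc-access connectors describe CONNECTOR_NAME \
    --region=REGION --format='value(state)')" = "READY" ]; do
  echo "Waiting for connector..."
  sleep 10
done
```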
Terraform
You can use a Terraform resource to enable the vpcaccess.googleapis.com API.
You can use Terraform modules to create a VPC network and subnet and then create the connector.
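As a rough sketch of what those Terraform resources can look like (the resource names and values below are illustrative, not prescriptive; check the google provider documentation for current attributes):

```hcl
# Enable the Serverless VPC Access API in the host project.
resource "google_project_service" "vpcaccess" {
  project = "HOST_PROJECT_ID"
  service = "vpcaccess.googleapis.com"
}

# Create the connector on an existing unused /28 subnet in the host project.
resource "google_vpc_access_connector" "connector" {
  name    = "my-connector" # hypothetical name, under 21 characters
  project = "HOST_PROJECT_ID"
  region  = "us-central1"

  subnet {
    name       = "my-connector-subnet" # hypothetical unused /28 subnet
    project_id = "HOST_PROJECT_ID"
  }

  min_instances = 2
  max_instances = 10
  machine_type  = "e2-micro"

  depends_on = [google_project_service.vpcaccess]
}
```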
Enable Cloud Run functions for the service project
Enable the Cloud Run functions API for the service project. This is necessary for adding IAM roles in subsequent steps and for the service project to use Cloud Run functions.
Console
Open the page for the Cloud Run functions API.
In the menu bar at the top of the dashboard, click the project dropdown menu and select the service project.
Click Enable.
gcloud
Run the following in your terminal:
gcloud services enable cloudfunctions.googleapis.com --project=SERVICE_PROJECT_ID
Replace the following:
SERVICE_PROJECT_ID: the ID of the service project
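To confirm the API is enabled, you can list the service project's enabled services, filtered to Cloud Run functions. A sketch, assuming an authenticated gcloud CLI:

```shell
# Lists the Cloud Run functions API only if it is enabled on the project.
gcloud services list --enabled \
  --project=SERVICE_PROJECT_ID \
  --filter="config.name=cloudfunctions.googleapis.com"
```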
Provide access to the connector
Provide access to the connector by granting the service project's Cloud Run functions Service Agent the Serverless VPC Access User IAM role on the host project. You must also grant the same role to the Cloud Run Service Agent.
Console
Open the IAM page.
Click the project dropdown menu and select the host project.
Click Grant access.
In the New principals field, enter the email address of the Cloud Run functions Service Agent for the service project:
service-SERVICE_PROJECT_NUMBER@gcf-admin-robot.iam.gserviceaccount.com
Replace the following:
SERVICE_PROJECT_NUMBER: the project number associated with the service project. This is different from the project ID. You can find the project number on the service project's Project Settings page in the Google Cloud console.
In the New principals field, also enter the email address of the Cloud Run Service Agent for the service project:
service-SERVICE_PROJECT_NUMBER@serverless-robot-prod.iam.gserviceaccount.com
In the Role field, select Serverless VPC Access User.
Click Save.
gcloud
Run the following command in your terminal:
gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
  --member=serviceAccount:service-SERVICE_PROJECT_NUMBER@gcf-admin-robot.iam.gserviceaccount.com \
  --role=roles/vpcaccess.user
Replace the following:
HOST_PROJECT_ID: the ID of the Shared VPC host project
SERVICE_PROJECT_NUMBER: the project number associated with the service project. This is different from the project ID. You can find the project number by running the following command:
gcloud projects describe SERVICE_PROJECT_ID
Also grant the role to the Cloud Run Service Agent by running the following command:
gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
  --member=serviceAccount:service-SERVICE_PROJECT_NUMBER@serverless-robot-prod.iam.gserviceaccount.com \
  --role=roles/vpcaccess.user
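To verify that both bindings landed on the host project's policy, you can flatten and filter the IAM policy. A sketch using flags the get-iam-policy command supports:

```shell
# Prints the members that hold roles/vpcaccess.user on the host project.
gcloud projects get-iam-policy HOST_PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.role:roles/vpcaccess.user" \
  --format="value(bindings.members)"
```

Both service agent email addresses should appear in the output.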
Make the connector discoverable
On the host project's IAM policy, you must grant the following two predefined roles to the principals who deploy Cloud Run services:
- Serverless VPC Access Viewer (vpcaccess.viewer): Required.
- Compute Network Viewer (compute.networkViewer): Optional but recommended. Allows the IAM principal to enumerate subnets in the Shared VPC network.
Alternatively, you can use custom roles or other predefined roles that include all the permissions of the Serverless VPC Access Viewer (vpcaccess.viewer) role.
Console
Open the IAM page.
Click the project dropdown menu and select the host project.
Click Grant access.
In the New principals field, enter the email address of the principal that should be able to see the connector from the service project. You can enter multiple emails in this field.
In the Role field, select both of the following roles:
- Serverless VPC Access Viewer
- Compute Network Viewer
Click Save.
gcloud
Run the following commands in your terminal:
gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
  --member=PRINCIPAL \
  --role=roles/vpcaccess.viewer

gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
  --member=PRINCIPAL \
  --role=roles/compute.networkViewer
Replace the following:
HOST_PROJECT_ID: the ID of the Shared VPC host project
PRINCIPAL: the principal who deploys Cloud Run services. Learn more about the --member flag.
Configure your function to use the connector
For each function that requires access to your Shared VPC, you must specify the connector for the function. The following steps show how to configure your function to use a connector.
Console
Open the Cloud Run functions overview page.
Click the project dropdown menu and select the service project.
Click Create function. Alternatively, click an existing function to go to its details page, and click Edit.
Expand the advanced settings by clicking Runtime, build....
In the Connections tab under Egress settings, select your connector in the VPC connector field.
gcloud
Set the gcloud CLI to use the project containing the function:
gcloud config set project PROJECT_ID
Replace the following:
PROJECT_ID: the ID of the project containing the function that requires access to your Shared VPC. If the function is in the host project, this is the host project ID. If the function is in a service project, this is the service project ID.
Use the --vpc-connector flag and deploy your function:
gcloud functions deploy FUNCTION_NAME --vpc-connector=CONNECTOR_NAME
Replace the following:
FUNCTION_NAME: the name of your function
CONNECTOR_NAME: the name of your connector. Use the fully qualified name when deploying from a Shared VPC service project (as opposed to the host project), for example:
projects/HOST_PROJECT_ID/locations/CONNECTOR_REGION/connectors/CONNECTOR_NAME
where HOST_PROJECT_ID is the ID of the host project, CONNECTOR_REGION is the region of your connector, and CONNECTOR_NAME is the name that you gave your connector.
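In deployment scripts, it can be convenient to build the fully qualified connector name from variables rather than typing it inline. A minimal sketch with hypothetical values:

```shell
# Hypothetical values for illustration; substitute your own.
HOST_PROJECT_ID="my-host-project"
CONNECTOR_REGION="us-central1"
CONNECTOR_NAME="my-connector"

# Build the fully qualified name expected by --vpc-connector when
# deploying from a Shared VPC service project.
VPC_CONNECTOR="projects/${HOST_PROJECT_ID}/locations/${CONNECTOR_REGION}/connectors/${CONNECTOR_NAME}"
echo "${VPC_CONNECTOR}"
# prints projects/my-host-project/locations/us-central1/connectors/my-connector
```

You would then pass the result as --vpc-connector="${VPC_CONNECTOR}" to gcloud functions deploy.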
For more control over which requests are routed through the connector, see Egress settings.
Next steps
- Connect to Memorystore from Cloud Run functions.
- Configure network settings for Cloud Run functions.
- Monitor admin activity with Serverless VPC Access audit logging.
- Protect resources and data by creating a service perimeter with VPC Service Controls.
- Learn about the Identity and Access Management (IAM) roles associated with Serverless VPC Access. See Serverless VPC Access roles in the IAM documentation for a list of permissions associated with each role.