Self-hosted agents allow you to run env0 deployment workloads on your own Kubernetes cluster.
  • Execution is contained within your own servers/infrastructure
  • The agent requires outbound internet access, but no inbound network access.
  • Secrets can be stored on your own infrastructure.
Feature Availability: Self-hosted agents are only available to Enterprise-level customers. Click here for more details.

Requirements

Cluster Installation: The agent can be run on an existing Kubernetes cluster in a dedicated namespace, or you can create a cluster just for the agent. Use our k8s-modules repository, which contains Terraform code for easier cluster installation. You can use the main provider folder for a complete installation, or a specific module to fulfill only certain requirements.

Autoscaler (recommended, but optional)

  • While optional, configuring horizontal auto-scaling allows your cluster to adapt to the concurrency and deployment requirements of your env0 usage. Otherwise, your deployment concurrency will be limited by the cluster’s capacity. See also Job Limits if you wish to cap the number of concurrent deployments.
  • The env0 agent will create a new pod for each deployment you run on env0.
    Pods are ephemeral and will be destroyed after a single deployment.
  • A pod running a single deployment requires at least cpu: 460m and memory: 1500Mi, so the cluster nodes must be able to satisfy this resource request. Requests and limits can be adjusted by providing custom configuration during chart installation (see the sketch after this list).
  • Minimum node requirements: an instance with at least 2 CPU and 8GiB memory.
For the EKS cluster, you can use this TF example.
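For example, a minimal values.customer.yaml sketch for tuning the per-deployment pod resources might look like the following. The top-level key names here are illustrative placeholders, not the chart's actual option names; check Custom/Optional Configuration for the exact names in your chart version.
    # Illustrative sketch only: the value keys below are hypothetical placeholders.
    # Consult Custom/Optional Configuration for the actual names in your chart version.
    cat > values.customer.yaml <<'EOF'
    deploymentPodResources:   # hypothetical key
      requests:
        cpu: 460m             # documented minimum request
        memory: 1500Mi
      limits:
        cpu: "1"
        memory: 3Gi
    EOF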

Persistent Volume/Storage Class (optional)

  • env0 will store the deployment state and working directory on a persistent volume in the cluster.
  • Must support Dynamic Provisioning and ReadWriteMany access mode.
  • The requested storage space is 300Gi.
  • The cluster must include a StorageClass named env0-state-sc.
  • The Storage Class should be set up with reclaimPolicy: Retain, to prevent data loss in case the agent needs to be replaced or uninstalled.
We recommend the current implementations for the major cloud providers:
  • AWS: EFS CSI (for an EKS cluster, you can use this TF example - EFS CSI-Driver/StorageClass)
  • GCP: Filestore, OpenSource NFS
  • Azure: Azure Files
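As a concrete illustration for AWS, a minimal EFS CSI StorageClass satisfying the requirements above (dynamic provisioning, ReadWriteMany, Retain) might look like this sketch; the file system ID is a placeholder for your own EFS file system.
    # Minimal sketch: an EFS CSI StorageClass meeting the requirements above.
    # fs-0123456789abcdef0 is a placeholder for your own EFS file system ID.
    kubectl apply -f - <<'EOF'
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: env0-state-sc
    provisioner: efs.csi.aws.com
    reclaimPolicy: Retain
    parameters:
      provisioningMode: efs-ap
      fileSystemId: fs-0123456789abcdef0
      directoryPerms: "700"
    EOF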
PVC Alternative: By default, the deployment state and working directory are stored in a PV (Persistent Volume) configured on your Kubernetes cluster. If PV creation or management is difficult, or not required, you can use env0-Hosted Encrypted State with env0StateEncryptionKey.

Sensitive Secrets

  • Secrets stored on the env0 platform cannot be used with self-hosted agents, since the point of a self-hosted agent is to keep secrets on your own infrastructure.
  • Customers using self-hosted agents may use their own Kubernetes Secret to store sensitive values - see env0ConfigSecretName below.
  • This restriction covers sensitive configuration variables, SSH keys, and cloud deployment credentials. The values for these secrets should be replaced with references to your secret store, as detailed below.
  • If you are migrating from SaaS to a self-hosted agent, any deployment that still references platform-stored secrets will fail.
  • To use an external secret store, authentication to the secret store must be configured using a custom Helm values file. The required parameters are detailed in Custom/Optional Configuration.
  • The following secret stores are supported:
  • AWS Secrets Manager
    Reference format: ${ssm:<secret-name>}
    Region: set by the awsSecretsRegion Helm value; defaults to us-east-1.
    Permissions: the role must have secretsmanager:GetSecretValue.
  • GCP Secrets Manager
    Reference format: ${gcp:<secret-id>}
    Region: your GCP project’s default region.
    Permissions: access to the secret must be possible using the customerGoogleCredentials configuration or GKE workload identity. The customerGoogleProject configuration must be supplied and will be used to access secrets in that project only. The ‘secrets.versions.access’ permission is required.
  • Azure Key Vault
    Reference format: ${azure:<secret-name>@<vault-name>}
    Region: your Azure subscription’s default region.
  • HashiCorp Vault
    Reference format: ${vault:<path>.<key>@<namespace>}, where @<namespace> is optional.
  • OCI Vault Secrets
    Reference format: ${oci:<secret-id>}
    Region: the region defined in the credentials provided in the agent configuration.
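For example (with hypothetical secret names), the value of a sensitive variable in env0 would be the reference string itself, which the agent resolves against your secret store at deployment time:
    # Hypothetical examples; the agent resolves these references at deployment time.
    ${ssm:prod/terraform-api-key}                        # AWS Secrets Manager
    ${vault:secret/data/prod.db_password@my-namespace}   # HashiCorp Vault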
Allow storing secrets in env0: Alternatively, you can explicitly allow env0 to store secrets on its platform by opting in via your organization’s policy. For more info, read here.

Internal Values

The following secrets are required for the agent components to communicate with env0’s backend. They are generated by env0 and supplied in your values file.
  • awsAccessKeyIdEncoded
  • awsSecretAccessKeyEncoded
  • env0ApiGwKeyEncoded

Custom/Optional Configuration

env0 provides a Helm values.yaml containing the required configuration. To enable specific features, you can supply an additional values.customer.yaml with optional values. For the complete list of configuration options, see Custom/Optional Configuration.
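For instance, a minimal values.customer.yaml sketch might set options named elsewhere on this page; the values shown are placeholders for your own environment.
    # Minimal sketch using option names mentioned elsewhere on this page; see
    # Custom/Optional Configuration for the complete list. Values are placeholders.
    cat > values.customer.yaml <<'EOF'
    awsSecretsRegion: eu-west-1           # region for AWS Secrets Manager lookups
    customerGoogleProject: my-gcp-project # hypothetical project ID for GCP secret access
    EOF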

Further Configuration

The env0 agent externalizes a wide array of values that can be set to configure the agent. We do our best to support all common configuration scenarios, but sometimes a more exotic or pre-release configuration is required. For such advanced cases, see this reference example of using Kustomize alongside Helm post-rendering to further customize our chart.
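As a rough sketch of the post-rendering mechanism (the wrapper script and patch layout below are illustrative, not taken from the reference example): Helm pipes the rendered manifests to the post-renderer’s stdin and reads the final manifests from its stdout.
    # Illustrative sketch of Helm post-rendering with Kustomize; the wrapper
    # script and kustomization layout are hypothetical.
    cat > post-renderer.sh <<'EOF'
    #!/usr/bin/env bash
    # Helm pipes rendered manifests to stdin; emit the kustomized result to stdout.
    cat > all.yaml
    kustomize build .
    EOF
    chmod +x post-renderer.sh
    # kustomization.yaml would list all.yaml as a resource plus your patches.
    helm upgrade --install env0-agent env0/env0-agent --namespace env0-agent \
      -f <your_agent_key>_values.yaml --post-renderer ./post-renderer.sh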

Job Limits

You may wish to limit the number of concurrent runs. To do so, add a ResourceQuota to the agent namespace with a count/jobs.batch parameter, as sketched below. See here for more details.
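A minimal sketch, assuming the agent runs in the env0-agent namespace and a cap of 10 deployment Jobs (the quota name is arbitrary):
    # Cap the number of Job objects (and thus concurrent deployments) at 10.
    kubectl apply -n env0-agent -f - <<'EOF'
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: env0-job-limit
    spec:
      hard:
        count/jobs.batch: "10"
    EOF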

Installation

  1. Add our Helm Repo
    helm repo add env0 https://env0.github.io/self-hosted
    
  2. Update Helm Repo
    helm repo update
    
  3. Download the configuration file: <your_agent_key>_values.yaml from Organization Settings -> Agents tab
  4. Install the Helm Chart
    helm install --create-namespace env0-agent env0/env0-agent --namespace env0-agent -f <your_agent_key>_values.yaml -f values.customer.yaml
    # values.customer.yaml should contain any optional configuration options as detailed above
    
TF example: Example of installing via Helm. The Terraform Helm provider version must be >= 2.5.0.
Installing from source: If you decide not to install the Helm chart from our Helm repo and instead install from the source code (for example, via git clone), you might need to run:
helm dependency build <path-to-the-source-code>
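A sketch of a full from-source install; the repository URL is inferred from the Helm repo URL above (env0.github.io/self-hosted), and the chart path may differ if the chart lives in a subdirectory, so verify both before use.
    # Sketch of installing from source; verify the repository URL and chart path.
    git clone https://github.com/env0/self-hosted.git
    helm dependency build ./self-hosted
    helm install --create-namespace env0-agent ./self-hosted \
      --namespace env0-agent -f <your_agent_key>_values.yaml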

Upgrade

helm upgrade env0-agent env0/env0-agent --namespace env0-agent
Upgrade Changes: Previously, you had to download the values.yaml file again for each upgrade. This is no longer required. However, we do recommend keeping the version of the values.yaml file you installed the agent with, in case a rollback is required during the upgrade process.
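If an upgrade misbehaves, Helm’s built-in rollback can restore the previous release revision:
    # Roll back to the previous release revision (or pass an explicit revision number).
    helm rollback env0-agent --namespace env0-agent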
Custom Agent Docker Image: If you have extended the agent’s Docker image, you should update the agent version in your custom image as well.

Verify Installation/Upgrade

After installing a new version of the env0 agent Helm chart, we highly recommend verifying the installation by running:
helm test env0-agent --namespace env0-agent --logs --timeout 1m

Using the helm template command

As an alternative to installing the agent directly with Helm, you can use helm template to generate the Kubernetes YAML files, then apply them with a different Kubernetes pipeline, such as kubectl apply or ArgoCD. To generate the YAML files using helm template, first add the env0 Helm repo:
helm repo add env0 https://env0.github.io/self-hosted
helm repo update
Then, run the following command. If your Kubernetes cluster is version 1.21 and up:
helm template env0-agent env0/env0-agent --kube-version=<KUBERNETES_VERSION> --api-version=batch/v1/CronJob -n <MY_NAMESPACE> -f values.yaml
If your Kubernetes cluster version is less than 1.21:
helm template env0-agent env0/env0-agent --kube-version=<KUBERNETES_VERSION> -n <MY_NAMESPACE> -f values.yaml
  • <KUBERNETES_VERSION> is the version of your kubernetes cluster
  • <MY_NAMESPACE> is the k8s namespace in which the agent will be installed
  • values.yaml is the values file downloaded from env0’s Organization Settings -> Agents tab. You can also add your own custom values to this file.
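For example, one way to consume the rendered output is to pipe it straight into kubectl:
    # Render the chart and apply the manifests in one step.
    helm template env0-agent env0/env0-agent --kube-version=<KUBERNETES_VERSION> \
      -n <MY_NAMESPACE> -f values.yaml | kubectl apply -n <MY_NAMESPACE> -f -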
Using env0ConfigSecretName with the helm template command: With helm template, the feature that checks the Kubernetes Secret defined by the env0ConfigSecretName Helm value (to determine whether the PVC should be created) will not function, because it relies on an active connection to the cluster.

🌐 Required Outbound Domains

  • *.env0.com, *.amazonaws.com: env0 SaaS Platform; required for the agent to communicate with the env0 SaaS platform.
  • ghcr.io: GitHub Container Registry; hosts the Docker image of the env0 agent.
  • *.hashicorp.com: HashiCorp; used to download Terraform binaries.
  • registry.terraform.io, registry.opentofu.org: Module Registries; used to download public modules from the Terraform or OpenTofu registries.
  • github.com, gitlab.com, bitbucket.org: Version Control Systems (VCS); used for Git operations over ports 22, 9418, 80, and 443.
  • *.infracost.io: Infracost; used for cost estimation functionality.
  • github.com: External Tools & Integrations; used to download external tools required for custom flows or env0 native integrations.
  • dl.k8s.io: Kubernetes; used to download and install kubectl.
  • get.helm.sh: Helm; used to download and install helm.
  • dl.google.com: Google Cloud SDK; used to download and install gcloud.

💡 Note: All domains listed above require outbound HTTPS (port 443) access from the env0 agent.
Only open access to the domains for the features you are actually using.
Firewall Rules: Note that if your cluster is behind a managed firewall, you might need to whitelist the cluster API server’s FQDN and its corresponding public IP.