Installing APIM on Kubernetes is easy with the help of our Helm Chart. This Helm Chart supports versions 3.0.x and higher and deploys the following:
- Gravitee Management API
- Gravitee Management UI
- Gravitee Portal UI
- Gravitee Gateway
- MongoDB replica set or PostgreSQL (optional dependency)
- Elasticsearch cluster (optional dependency)
Add the Gravitee.io Helm charts repo using the command below:
$ helm repo add graviteeio https://helm.gravitee.io
Now, install the chart from the Helm repo with the release name graviteeio-apim3x.
To prevent potential issues, it is best practice to create a dedicated namespace for your installation rather than using the default Kubernetes namespace. The installation command below assumes this practice is followed, but it is not mandatory.
To install the Helm Chart using a dedicated namespace (we use gravitee-apim as an example), run the following command:
helm install graviteeio-apim3x graviteeio/apim3 --create-namespace --namespace gravitee-apim
To install the Helm Chart using the default namespace (not recommended), run the following command:
helm install graviteeio-apim3x graviteeio/apim3
Note
If you choose to modify the values.yml configuration file prior to installation, make sure to include it by adding -f values.yaml as an argument. For example: $ helm install graviteeio-apim3x graviteeio/apim3 --create-namespace --namespace gravitee-apim -f values.yaml
You can also package this chart directory into a chart archive by running:
$ helm package .
Now, to install the chart using the chart archive, run:
$ helm install graviteeio-apim3x apim3-3.0.0.tgz
Specify each parameter using the --set key=value[,key=value] argument to helm install.
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
$ helm install my-release -f values.yaml gravitee
Tip: You can use the default values.yaml.
Note
When you install APIM, it automatically uses the default values from the values.yml configuration file, which can be overridden using the parameters listed in the tables at the end of this page. For example, you can change the default host name from apim.example.com to your specific host name.
The following tables list the configurable parameters of the Gravitee chart and their default values.
Since APIM 3.15.0, you can rely on Kubernetes ConfigMaps and Secrets to initialize Gravitee settings. To use this feature, create the ServiceAccount that allows APIM to connect to the Kubernetes API (the Helm chart should do this by default), then define your application settings as follows:
- for a Secret: kubernetes://<namespace>/secrets/<my-secret-name>/<my-secret-key>
- for a ConfigMap: kubernetes://<namespace>/configmaps/<my-configmap-name>/<my-configmap-key>
Here is an example for the MongoDB URI initialized from the mongo secret deployed in the default namespace:
mongo:
uri: kubernetes://default/secrets/mongo/mongouri
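As an illustration, the mongo secret referenced above could be created from a manifest like the following. The connection string is a placeholder; replace it with your real MongoDB URI:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongo
  namespace: default
type: Opaque
stringData:
  # placeholder connection string; replace with your real MongoDB URI
  mongouri: mongodb://mongodb-replicaset:27017/gravitee
```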
Tip
If you need to access a secret, you have to create a role within your namespace. If you are deploying in another namespace and need to access a secret there, create a separate role in that namespace. The two roles can have the same name, but they are completely separate objects; each role only gives access to the namespace it is created in. For more information about roles, see Role and ClusterRole in the Kubernetes documentation.
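A minimal Role and RoleBinding of this kind might look like the sketch below. The names, namespace, and ServiceAccount are illustrative; adjust them to your installation:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gravitee-config-reader
  namespace: gravitee-apim
rules:
  - apiGroups: [""]
    resources: ["secrets", "configmaps"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gravitee-config-reader
  namespace: gravitee-apim
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: gravitee-config-reader
subjects:
  - kind: ServiceAccount
    # illustrative ServiceAccount name; use the one created by your release
    name: graviteeio-apim3x-apim
    namespace: gravitee-apim
```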
If you want to use an external configuration file, such as gravitee.yml for the Gateway or Management API, or constant.json for the UI, add the following lines to the Helm chart:
extraVolumes: |
  - name: config
    configMap:
      name: gravitee-config-configmap-name
where gravitee-config-configmap-name is the name of the ConfigMap containing the external configuration file.
External configuration files are only available for the AE Helm chart 1.1.42 and above, the AM Helm chart 1.0.53 and above, and the APIM Helm chart 3.1.60 and above.
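As an example, a gateway section of values.yml might combine the volume with a mount like this. The extraVolumeMounts key and the mount path are assumptions; verify them against your chart version:

```yaml
gateway:
  extraVolumes: |
    - name: config
      configMap:
        name: gravitee-config-configmap-name
  # extraVolumeMounts is assumed to exist alongside extraVolumes; check your chart version
  extraVolumeMounts: |
    - name: config
      mountPath: /opt/graviteeio-gateway/config/gravitee.yml
      subPath: gravitee.yml
```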
To configure common features such as:
- chaos testing (see the chaoskube chart)
- configuration database (see the mongodb chart)
- logs database (see the elasticsearch chart)
| Parameter | Description | Default |
|---|---|---|
| | Enable Chaos test | false |
| | Enable OAuth login | true |
| | Enable LDAP login | false |
To install MongoDB via Helm, run the following command:
helm install mongodb bitnami/mongodb --set auth.rootPassword=r00t
There are three ways to configure the MongoDB connection.
- The simplest way is to provide the MongoDB URI.
| Parameter | Description | Default |
|---|---|---|
| | Mongo URI | |
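A values.yml fragment providing only the URI could look like the sketch below; the host and connection options are placeholders:

```yaml
mongo:
  # a single connection string replaces the servers/dbname/auth settings
  uri: mongodb://mongodb-replicaset:27017/gravitee?connectTimeoutMS=30000
```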
- If no mongo.uri is provided, you can provide a mongo.servers raw definition in combination with mongo.dbname, plus optional authentication configuration:
mongo:
servers: |
- host: mongo1
port: 27017
- host: mongo2
port: 27017
dbname: gravitee
auth:
enabled: false
username:
password:
- If neither mongo.uri nor mongo.servers is provided, you must define the following configuration options:
| Parameter | Description | Default |
|---|---|---|
| | Whether Mongo replicaset is enabled or not | |
| | Mongo replicaset name | |
| | Mongo host address | |
| | Mongo host port | |
| | Mongo DB name | |
| | Enable Mongo DB authentication | |
| | Mongo DB username | |
| | Mongo DB password | |
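Assuming these options map to the mongo.* keys found in the chart's default values.yml (the key names below are assumptions; verify them for your chart version), a sketch might be:

```yaml
mongo:
  # key names assumed from the chart's default values.yml; verify for your version
  rsEnabled: true
  rs: rs0
  dbhost: mongodb-replicaset
  dbport: 27017
  dbname: gravitee
  auth:
    enabled: false
```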
| Parameter | Description | Default |
|---|---|---|
| | Enable deployment of Mongo replicaset | |
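To have the chart deploy its bundled replica set for local testing, a values.yml sketch could be as follows. The mongodb-replicaset key name is an assumption; verify it against your chart version:

```yaml
# key name assumed from the chart's optional dependencies; verify for your version
mongodb-replicaset:
  enabled: true
```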
See MongoDB for detailed documentation on this Helm chart.
Please be aware that the mongodb-replicaset installed by Gravitee is NOT recommended in production; it is intended only for testing purposes and for running APIM locally.
Note
You may encounter issues while running this Helm chart on Apple Silicon M1 (see bitnami/charts#7305). If you want to deploy MongoDB on M1, we encourage you to switch to another Helm chart for deploying MongoDB.
To install a new PostgreSQL database, use the command below, updating the username, password, and databasename parameters:
helm install --set postgresqlUsername=postgres --set postgresqlPassword=P@ssw0rd --set postgresqlDatabase=graviteeapim postgres-apim bitnami/postgresql
Check that the PostgreSQL pod is up and running before proceeding by running kubectl get pods, as indicated below.
$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
postgres-apim-postgresql-0   1/1     Running   0          98s
For PostgreSQL, use the information below in values.yml, replacing the username, password, URL, and database name with details for your specific instance.
jdbc:
  driver: https://jdbc.postgresql.org/download/postgresql-42.2.23.jar
  url: jdbc:postgresql://postgres-apim-postgresql:5432/graviteeapim
  username: postgres
  password: P@ssw0rd
management:
  type: jdbc
| Parameter | Description | Default |
|---|---|---|
| | Elasticsearch username and password enabled | false |
| | Elasticsearch username | |
| | Elasticsearch password | |
| | Elasticsearch TLS enabled | false |
| | Elasticsearch TLS keystore type (jks, pem or pfx) | |
| | Elasticsearch TLS keystore path (jks, pfx) | |
| | Elasticsearch TLS keystore password (jks, pfx) | |
| | Elasticsearch TLS certs (only pems) | |
| | Elasticsearch TLS keys (only pems) | |
| | Elasticsearch index | |
| | Elasticsearch endpoint array | |
| Parameter | Description | Default |
|---|---|---|
| | Enable deployment of Elasticsearch cluster | |
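Combining the two tables above, a sketch that disables the bundled cluster and points APIM at an external endpoint could look like this. The elasticsearch.enabled and es.* key names are assumptions; verify them against your chart version, and the endpoint and credentials are placeholders:

```yaml
elasticsearch:
  enabled: false   # do not deploy the bundled (non-production) cluster
es:
  endpoints:
    - http://elasticsearch.example.com:9200
  security:
    enabled: true
    username: gravitee
    password: changeme
```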
See Elasticsearch for detailed documentation on this optional dependency's Helm chart.
Please be aware that the Elasticsearch installed by Gravitee is NOT recommended in production; it is intended only for testing purposes and for running APIM locally.
To install Redis, use the command below:
helm install --set auth.password=p@ssw0rd redis-apim bitnami/redis
See Redis for detailed documentation on this Helm chart (such as how to use Sentinel).
Check that the Redis pods are up and running before proceeding by running kubectl get pods, as indicated below.
$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
redis-apim-master-0     1/1     Running   0          105s
redis-apim-replicas-0   1/1     Running   0          105s
redis-apim-replicas-1   1/1     Running   0          68s
redis-apim-replicas-2   1/1     Running   0          40s
To use Redis for the rate limit policy, use the information below in values.yml, replacing the host, port, and password with details for your specific instance. You can enable SSL by setting ssl to true.
ratelimit:
  type: redis
gateway:
  ratelimit:
    redis:
      host: redis-apim-master
      port: 6379
      password: p@ssw0rd
      ssl: false
If you want to connect to a Sentinel cluster, you need to specify the master and the nodes.
gateway:
  ratelimit:
    password: p@ssw0rd
    ssl: false
    sentinel:
      master: redis-master
      nodes:
        - host: sentinel1
          port: 26379
        - host: sentinel2
          port: 26379
| Parameter | Description | Default |
|---|---|---|
| | UI service name | |
| | Base URL to access to the Management API (if set to | |
| | UI Portal title (if set to | |
| | UI Management title (if set to | |
| | UI link to documentation (if set to | |
| | API Key header name (if set to | |
| | Whether to enable developer mode (if set to | |
| | Whether to enable user creation (if set to | |
| | Whether to enable support features (if set to | |
| | Whether to enable API rating (if set to | |
| | Whether to enable analytics features (if set to | |
| | Tracking ID used for analytics (if set to | |
| | How many replicas of the UI pod | |
| | Gravitee UI image repository | |
| | Gravitee UI image tag | |
| | K8s image pull policy | |
| | K8s image pull secrets, used to pull both Gravitee UI image and | |
| | Whether auto-scaling is enabled or not | |
| | If | |
| | If | |
| | If | |
| | UI service name | |
| | K8s publishing service type | |
| | K8s UI service external port | |
| | K8s UI service internal port (container) | |
| | K8s UI service internal port name (container) | |
| | Whether Ingress is enabled or not | |
| | If | |
| | Supported Ingress annotations to configure ingress controller | |
| | Ingress TLS K8s secret name containing the TLS private key and certificate | |
| | K8s pod deployment limits definition for CPU | |
| | K8s pod deployment limits definition for memory | |
| | K8s pod deployment requests definition for CPU | |
| | K8s pod deployment requests definition for memory | |
| | K8s pod deployment postStart command definition | |
| | K8s pod deployment preStop command definition | |
| Parameter | Description | Default |
|---|---|---|
| | API service name | |
| | Whether to enable API debug logging or not | |
| | Logging level for Gravitee classes | |
| | Logging level for Jetty classes | |
| | Logback standard output encoder pattern | |
| | Whether to enable file logging or not | |
| | Logback file rolling policy configuration | |
| | Logback file encoder pattern | |
| | List of additional logback loggers. Each logger is defined by a | |
| | API exposition through HTTPS protocol activation | |
| | Keystore type for API exposition through HTTPS protocol | |
| | Keystore path for API exposition through HTTPS protocol | |
| | Keystore password for API exposition through HTTPS protocol | |
| | Truststore type for client authentication through 2 way TLS | |
| | Truststore path for client authentication through 2 way TLS | |
| | Truststore password for client authentication through 2 way TLS | |
| | HTTP core service authentication password | |
| | HTTP core service port exposed in container | |
| | HTTP core service bind IP or host inside container (0.0.0.0 for exposure on all interfaces) | |
| | HTTP core service authentication password | |
| | Ingress for HTTP core service authentication (requires | |
| | The ingress path which should match for incoming requests to the management technical API | |
| | If | |
| | Supported Ingress annotations to configure ingress controller | |
| | Ingress TLS K8s secret name containing the TLS private key and certificate | |
| | Whether a service is added or not for technical API | |
| | K8s service external port (internal port is defined by | |
| | Listening path for the API | |
| | HTTP client global timeout | |
| | HTTP client proxy type | |
| | HTTP client proxy host for HTTP protocol | |
| | HTTP client proxy port for HTTP protocol | |
| | HTTP client proxy username for HTTP protocol | |
| | HTTP client proxy password for HTTP protocol | |
| | HTTP client proxy host for HTTPS protocol | |
| | HTTP client proxy port for HTTPS protocol | |
| | HTTP client proxy username for HTTPS protocol | |
| | HTTP client proxy password for HTTPS protocol | |
| | Whether to enable default application creation on first user authentication | |
| | Whether to enable user anonymization on deletion | |
| | Whether to enable support feature | |
| | Whether to enable API rating feature | |
| | Email sending activation | |
| | SMTP server host | |
| | SMTP server port | |
| | Email sending address | |
| | SMTP server username | |
| | SMTP server password | |
| | Email subjects template | |
| | SMTP server authentication activation | |
| | SMTP server TLS activation | |
| | Hostname that is resolvable by the SMTP server | |
| | The portal URL used in emails | |
| | Policy to restart K8s pod | |
| | If api.updateStrategy.type is set to The deployment controller will stop the bad rollout automatically and will stop scaling up the new replica set. This depends on the | |
| | How many replicas for the API pod | |
| | Gravitee API image repository | |
| | Gravitee API image tag | |
| | K8s image pull policy | |
| | K8s image pull secrets, used to pull both Gravitee Management API image and | |
| | Environment variables, defined as a list of | |
| | K8s publishing service type | |
| | K8s service external port | |
| | K8s service internal port (container) | |
| | K8s service internal port name (container) | |
| | Whether auto-scaling is enabled or not | |
| | If | |
| | If | |
| | If | |
| | Whether Ingress is enabled or not | |
| | The ingress path which should match for incoming requests to the management API | |
| | If | |
| | Supported Ingress annotations to configure ingress controller | |
| | Ingress TLS K8s secret name containing the TLS private key and certificate | |
| | Whether to use HTTP or HTTPS to communicate with Management API, defaults to https | |
| | Whether to use HTTP or HTTPS to communicate with Management API, defaults to https | |
| | K8s pod deployment limits definition for CPU | |
| | K8s pod deployment limits definition for memory | |
| | K8s pod deployment requests definition for CPU | |
| | K8s pod deployment requests definition for memory | |
| | K8s pod deployment postStart command definition | |
| | K8s pod deployment preStop command definition | |
| Parameter | Description | Default |
|---|---|---|
| | Gateway service name | |
| | Whether to enable Gateway debug logging or not | |
| | List of additional logback loggers. Each logger is defined by a | |
| | API exposition through HTTPS protocol activation | |
| | Keystore type for API exposition through HTTPS protocol | |
| | Keystore path for API exposition through HTTPS protocol | |
| | Keystore password for API exposition through HTTPS protocol | |
| | Client authentication through 2 way TLS activation | |
| | Truststore type for client authentication through 2 way TLS | |
| | Truststore path for client authentication through 2 way TLS | |
| | Truststore password for client authentication through 2 way TLS | |
| | Logging level for Gravitee classes | |
| | Logging level for Jetty classes | |
| | Logback standard output encoder pattern | |
| | Whether to enable file logging or not | |
| | Logback file rolling policy configuration | |
| | Logback file encoder pattern | |
| | Gateway deployment type: | |
| | How many replicas of the Gateway pod | |
| | Gravitee Gateway image repository | |
| | Gravitee Gateway image tag | |
| | K8s image pull policy | |
| | K8s image pull secrets, used to pull both Gravitee Gateway image and | |
| | Environment variables, defined as a list of | |
| | K8s publishing service type | |
| | K8s Gateway service external port | |
| | K8s Gateway service internal port (container) | |
| | K8s Gateway service internal port name (container) | |
| | Whether auto-scaling is enabled or not | |
| | If | |
| | If | |
| | If | |
| | Whether websocket protocol is enabled or not | |
| | Header used for the API Key. Set an empty value to prohibit its use. | |
| | Query parameter used for the API Key. Set an empty value to prohibit its use. | |
| | Sharding tags (comma separated list) | `` |
| | Whether Ingress is enabled or not | |
| | The ingress path which should match for incoming requests to the gateway | |
| | If | |
| | Supported Ingress annotations to configure ingress controller | |
| | Ingress TLS K8s secret name containing the TLS private key and certificate | |
| | K8s pod deployment limits definition for CPU | |
| | K8s pod deployment limits definition for memory | |
| | K8s pod deployment requests definition for CPU | |
| | K8s pod deployment requests definition for memory | |
| | K8s pod deployment postStart command definition | |
| | K8s pod deployment preStop command definition | |
| Parameter | Description | Default |
|---|---|---|
| alerts.enabled | Enables AE connectivity | |
| alerts.endpoints | Defines AE endpoints | |
| alerts.security.enabled | Enables AE secure connectivity | |
| alerts.security.username | The AE username | |
| alerts.security.password | The AE password | |
| alerts.options.sendEventsOnHttp | Send events over HTTP to AE (websocket otherwise) | |
| alerts.options.useSystemProxy | Use system proxy to connect to AE | |
| alerts.options.connectTimeout | AE connection timeout | |
| alerts.options.idleTimeout | AE idle timeout | |
| alerts.options.keepAlive | Keep the connection alive | |
| alerts.options.pipelining | Enables event pipelining | |
| alerts.options.tryCompression | Enables event compression | |
| alerts.options.maxPoolSize | Sets the maximum number of connections | |
| alerts.options.bulkEventsSize | Send events in batches | |
| alerts.options.bulkEventsWait | Duration for events to be ready to be sent | |
| alerts.options.ssl.trustall | SSL trust all | |
| alerts.options.ssl.keystore.type | Type of the keystore (jks, pkcs12, pem) | |
| alerts.options.ssl.keystore.path | Path to the keystore | |
| alerts.options.ssl.keystore.password | Password of the keystore | |
| alerts.options.ssl.keystore.certs | Keystore cert paths (array, only for pem) | |
| alerts.options.ssl.keystore.keys | Keystore key paths (array, only for pem) | |
| alerts.options.ssl.truststore.type | Type of the truststore | |
| alerts.options.ssl.truststore.path | Path to the truststore | |
| alerts.options.ssl.truststore.password | Password of the truststore | |
| alerts.engines.<cluster-name>.endpoints | Defines AE endpoints on the cluster <cluster-name> | |
| alerts.engines.<cluster-name>.security.username | The AE username on the cluster <cluster-name> | |
| alerts.engines.<cluster-name>.security.password | The AE password on the cluster <cluster-name> | |
| alerts.engines.<cluster-name>.ssl.trustall | SSL trust all on the cluster <cluster-name> | |
| alerts.engines.<cluster-name>.ssl.keystore.type | Type of the keystore (jks, pkcs12, pem) on the cluster <cluster-name> | |
| alerts.engines.<cluster-name>.ssl.keystore.path | Path to the keystore (jks, pkcs12, pem) on the cluster <cluster-name> | |
| alerts.engines.<cluster-name>.ssl.keystore.password | Password of the keystore on the cluster <cluster-name> | |
| alerts.engines.<cluster-name>.ssl.keystore.certs | Keystore cert paths (array, only for pem) on the cluster <cluster-name> | |
| alerts.engines.<cluster-name>.ssl.keystore.keys | Keystore key paths (array, only for pem) on the cluster <cluster-name> | |
| alerts.engines.<cluster-name>.ssl.truststore.type | Type of the truststore on the cluster <cluster-name> | |
| alerts.engines.<cluster-name>.ssl.truststore.path | Path to the truststore on the cluster <cluster-name> | |
| alerts.engines.<cluster-name>.ssl.truststore.password | Password of the truststore on the cluster <cluster-name> | |
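Putting a few of these parameters together, a minimal alerts section of values.yml might look like the sketch below; the endpoint and credentials are placeholders:

```yaml
alerts:
  enabled: true
  endpoints:
    - http://alert-engine.example.com:8072
  security:
    enabled: true
    username: admin
    password: changeme
```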
For Enterprise plugins, and only for them, you have to include a license in APIM. You can define it by:
- filling the license.key field in the values.yml file, or
- adding the Helm argument --set license.key=<license.key in base64>
To get the license.key value, encode your license.key file in base64:
- Linux: base64 -w 0 license.key
- macOS: base64 license.key
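As a sketch, the commands below create a throwaway license.key file and encode it in a way that works on both Linux and macOS. The file content here is a dummy placeholder, not a real license:

```shell
# create a dummy license file (placeholder content, not a real license)
printf 'dummy-license' > license.key

# base64-encode the file; `tr -d '\n'` strips line wrapping so the same
# command works with both GNU (Linux) and BSD (macOS) base64
GRAVITEESOURCE_LICENSE_B64="$(base64 < license.key | tr -d '\n')"
echo "$GRAVITEESOURCE_LICENSE_B64"
```

The resulting value can then be passed to helm install via --set license.key, as in the example below.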
Example:
export GRAVITEESOURCE_LICENSE_B64="$(base64 -w 0 license.key)"
helm install \
--set license.key=${GRAVITEESOURCE_LICENSE_B64} \
--create-namespace --namespace gravitee-apim \
graviteeio-apim3x \
graviteeio/apim3
| Parameter | Description | Default |
|---|---|---|
| license.key | string | license.key file encoded in base64 |
The Gravitee.io API Management Helm Chart supports OpenShift versions above 3.10. This chart only supports standard Ingress objects, not OpenShift-specific Routes, which is why OpenShift is supported starting from 3.10.
There are two major considerations to keep in mind when deploying Gravitee.io API Management within OpenShift:
1. Use full host domains instead of paths for all the components (ingress paths are not well supported by OpenShift).
2. Override the security context so that OpenShift automatically defines the user-id and group-id used to run the containers.
Also, for OpenShift to automatically create Routes from Ingress, you must set the ingressClassName to "none".
Here is a standard values.yaml used to deploy Gravitee.io APIM into OpenShift:
api:
ingress:
management:
ingressClassName: none
path: /management
hosts:
- api-graviteeio.apps.openshift-test.l8e4.p1.openshiftapps.com
annotations:
route.openshift.io/termination: edge
portal:
ingressClassName: none
path: /portal
hosts:
- api-graviteeio.apps.openshift-test.l8e4.p1.openshiftapps.com
annotations:
route.openshift.io/termination: edge
securityContext: null
deployment:
securityContext:
runAsUser: null
runAsGroup: null
runAsNonRoot: true
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
seccompProfile:
type: RuntimeDefault
gateway:
ingress:
ingressClassName: none
path: /
hosts:
- gw-graviteeio.apps.openshift-test.l8e4.p1.openshiftapps.com
annotations:
route.openshift.io/termination: edge
securityContext: null
deployment:
securityContext:
runAsUser: null
runAsGroup: null
runAsNonRoot: true
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
seccompProfile:
type: RuntimeDefault
portal:
ingress:
ingressClassName: none
path: /
hosts:
- portal-graviteeio.apps.openshift-test.l8e4.p1.openshiftapps.com
annotations:
route.openshift.io/termination: edge
securityContext: null
deployment:
securityContext:
runAsUser: null
runAsGroup: null
runAsNonRoot: true
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
seccompProfile:
type: RuntimeDefault
ui:
ingress:
ingressClassName: none
path: /
hosts:
- console-graviteeio.apps.openshift-test.l8e4.p1.openshiftapps.com
annotations:
route.openshift.io/termination: edge
securityContext: null
deployment:
securityContext:
runAsUser: null
runAsGroup: null
runAsNonRoot: true
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
seccompProfile:
type: RuntimeDefault
Setting runAsUser and runAsGroup to null forces OpenShift to define the correct values for you when deploying the Helm Chart.