These are my revision notes for my CKA exam. Hope someone finds them useful.
alias k=kubectl
#k() { kubectl $@; }
source <(kubectl completion bash|sed '/ *complete .*kubectl$/{;h;s/kubectl$/k/p;g;}')
v() { vim -c ":set shiftwidth=2 tabstop=2 softtabstop=-1 expandtab" ${1?} && kubectl apply -f $1; }
# replace all "k" with "kubectl" for notes portability
#gsed -i 's/^\([#(]\)*k /\1kubectl /g;s/k get/kubectl get/g;s/k apply/kubectl apply/g;s/k desc/kubectl desc/g' README.md;gif README.md
consider these:
vim configuration for yaml
# configuration for vim for... something... *.md perhaps but maybe *.yaml
# shiftwidth=sw=2
# tabstop=ts=2
# softtabstop=sts=-1
# vim: set shiftwidth=2 tabstop=2 softtabstop=-1 expandtab
ansible
1996 pp test
1997 ika package_install.yml
1998 ika package_install.yml
1999 ggrep -s --exclude-dir logs ansible .
2:main:SBGML06654:~/OD/____Future/Kubernetes/CKA/install_kubernetes_with_ansible$
Expand...
iTerm2 on Mac stuff
- Markdown docs
- VS Code as Markdown Note-Taking App
- Languages Supported by Github Flavored Markdown
- collapsed sections
- ultimate markdown cheat sheet
- About READMEs
See markdownlint Configuration and the HTML comments below here in the source file and these rules: MD022, MD031. Show VS Code preview pane: Cmd-K,V
In the many months I've been studying in my spare time, there have been some new Kubernetes releases.
Alpha feature (may now be GA)
- Ephemeral containers & PodSecurity move from alpha to beta
- dual ipv4/ipv6 stack moves to stable/GA
Kubernetes is written in Go.
Some interesting background reading...
- How To Call Kubernetes API using Go - Types and Common Machinery - annoyingly, this domain has gone away but I grabbed the HTML from Google's cache.
- more to come perhaps...
kubectl run redis -n finance --image=redis
kubectl run nginx-pod --image=nginx:alpine
kubectl run redis --image=redis:alpine --labels='tier=db,foo=bar'
#kubectl run custom-nginx --image=nginx
#kubectl expose pod custom-nginx --port=8080
kubectl run custom-nginx --image=nginx --port=8080
kubectl run httpd --image=httpd:alpine --port=80 --expose
kubectl run --restart=Never --image=busybox static-busybox --dry-run=client -o yaml --command -- sleep 1000 > /etc/kubernetes/manifests/static-busybox.yaml
Commands using PODS.template (work in progress)
kubectl get po -o jsonpath="{.items[*].spec.containers[*].image}"
kubectl get po -o custom-columns-file=PODS.template
vim PODS.template && kubectl get po -o custom-columns-file=PODS.template
watch -n1 kubectl get po -o custom-columns-file=${func_path?}/../CKA/PODS.template
While you can specify a containerPort in the pod, it is purely informational as per Should I configure the ports in the Kubernetes deployment?
ports:
- name: mysql
containerPort: 3306 # purely informational
Reference (Bookmark this page for exam. It will be very handy):
Create an NGINX Pod
kubectl run nginx --image=nginx
Generate POD Manifest YAML file (-o yaml). Don't create it (--dry-run)
kubectl run nginx --image=nginx --dry-run=client -o yaml
Get pods that are in a state of Running or not Running
kubectl get po ${ns:+-n $ns} -l app=primary-01 -o name --field-selector=status.phase=Running
kubectl get po ${ns:+-n $ns} -l app=primary-01 -o name --field-selector=status.phase!=Running
# NB: kubectl create does not support replicaset - define a ReplicaSet in YAML instead (see the sketch below)
#kubectl create replicaset foo-rs --image=httpd:2.4-alpine --replicas=2 # <- not a valid subcommand
kubectl scale replicaset new-replica-set --replicas=5
kubectl edit replicaset new-replica-set
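A minimal ReplicaSet sketch to apply instead (the app=foo labels are assumptions for illustration; name/image taken from the command above):
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: foo-rs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo          # must match spec.selector
    spec:
      containers:
      - name: httpd
        image: httpd:2.4-alpine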
kubectl create deployment httpd-frontend --image=httpd:2.4-alpine --replicas=2
kubectl create deployment webapp --image=kodekloud/webapp-color --replicas=3
kubectl create deployment webapp --image=kodekloud/webapp-color --replicas=3 -o yaml --dry-run=client | sed '/strategy:/d;/status:/d' > pink.yaml
kubectl set image deployment nginx nginx=nginx:1.18
kubectl get all
You should always specify requests and limits in a resources section for each container in a pod.
spec:
containers:
- name: nginx
image: nginx:latest
resources:
limits:
memory: 200Mi
cpu: 200m
requests:
memory: 128Mi
cpu: 100m
Create a deployment
kubectl create deployment --image=nginx nginx
Generate Deployment YAML file (-o yaml). Don't create it (--dry-run)
kubectl create deployment --image=nginx nginx --dry-run=client -o yaml
Generate Deployment YAML file (-o yaml). Don't create it (--dry-run) with 4 Replicas (--replicas=4)
In k8s version 1.19+, we can specify the --replicas option to create a deployment with 4 replicas.
kubectl create deployment --image=nginx nginx --replicas=4 --dry-run=client -o yaml > nginx-deployment.yaml
kubectl create svc clusterip redis-service --tcp=6379:6379 # worked in practice test but should use expose
kubectl expose pod redis --name=redis-service --port=6379
kubectl create ns dev-ns
kubectl create deployment redis-deploy -n dev-ns --image=redis --replicas=2
kubectl describe $(kubectl get po -o name|head -1)
kubectl replace -f nginx.yaml
kubectl delete $(kubectl get po -o name)
kubectl run nginx --image=nginx --dry-run=client -o yaml
Manual scheduling: add the nodeName property in the pod spec
---
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
nodeName: node01
containers:
- image: nginx
name: nginx
See also
labels in spec>selector and spec>template must match
kubectl get po --selector app=foo
kubectl get po --selector env=dev|grep -vc NAME
kubectl get all --selector env=prod|egrep -vc '^$|NAME'
kubectl get all --selector env=prod|grep -c ^[a-z]
kubectl get all --selector env=prod --no-headers|wc -l
kubectl get all --selector env=prod,bu=finance,tier=frontend --no-headers
kubectl get all -l env=prod # short switch
taint-effect is what happens to pods that do not tolerate the taint
- NoSchedule
- PreferNoSchedule
- NoExecute
use kubectl describe node NODE to list taints
Schedule a Pod using required node affinity
use a - suffix on the effect to remove it
# remove taint on master to allow it to run pods
kubectl taint nodes controlplane node-role.kubernetes.io/master:NoSchedule-
kubectl taint nodes controlplane node-role.kubernetes.io/control-plane:NoSchedule-
# prevent master from running pods again (default)
kubectl taint nodes controlplane node-role.kubernetes.io/master:NoSchedule
kubectl taint nodes controlplane node-role.kubernetes.io/control-plane:NoSchedule
kubectl taint nodes node1 key=value:taint-effect
kubectl taint nodes node1 app=blue:NoSchedule
kubectl taint nodes node1 app=blue:NoSchedule- # to remove
#spec>tolerations>- key: "app" (all in quotes)
kubectl describe node kubemaster | grep Taint
#kubectl get po bee -o yaml | sed '/tolerations:/a\ - key: spray\n value: mortein\n effect: NoSchedule\n operator: Equal'|kubectl apply -f -
kubectl get po bee -o yaml | sed '/tolerations:/a\ - effect: NoSchedule\n key: spray\n operator: Equal\n value: mortein'|kubectl apply -f -
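For reference, the resulting tolerations block in the pod spec looks roughly like this (values quoted as noted above; key/value match the app=blue taint example):
spec:
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"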
- label nodes first
- spec:>nodeSelector:>size:Large
kubectl label nodes node1 size=Large
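A minimal pod-spec sketch using that node label via nodeSelector:
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    size: Large       # matches the node label added above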
#spec:>template:>spec:>affinity:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- store
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: color
operator: In
values:
- blue
# this should let the pod run on a controlplane node
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: node-role.kubernetes.io/master
operator: Exists
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "app"
operator: In
values:
- tomd-nginx
# will get stuck on deployment restart if: num pods = num nodes
# though individual pod restarts seem ok
topologyKey: "kubernetes.io/hostname"
Create pod > yaml and edit/copy the resources lines
kubectl describe po elephant | grep Reason
kubectl describe po elephant | sed -n '/Last State/{;N;p}'
Same creation process as for deployments only without the replicas
kubectl get ds -A
kubectl get ds -n kube-system kube-flannel-ds
kubectl describe ds -n kube-system kube-flannel-ds|grep Image
kubectl get ds -n kube-system kube-flannel-ds -o yaml
kubectl create deployment -n kube-system elasticsearch --image=k8s.gcr.io/fluentd-elasticsearch:1.20 -o yaml --dry-run=client | \
sed 's/Deployment$/DaemonSet/;/replicas:/d;/strategy:/d;/status:/d' > ds.yaml
kubectl create -f ds.yaml
sed 's/Deployment$/DaemonSet/;/replicas:/d;/strategy:/d;/status:/d' deployment.yaml | kubectl apply -f -
- as an option in kubelet.service (systemd)
- or as a --config switch to a file containing the staticPodPath opt
# static pods will have the node name appended to the name
kubectl get po -A -o wide
# check the static pod path in the kubelet config
sudo grep staticPodPath $(ps -wwwaux | \
sed -n '/kubelet /s/.*--config=\(.*\) --.*/\1/p' | \
awk '/^\//{print $1}')
ls .* # dummy command to close italics
kubectl run static-pod-nginx --image=nginx --dry-run=client -o yaml | \
egrep -v 'creationTimestamp:|resources:|status:|Policy:' \
>static-pod-nginx.yaml
Similar to deployments.
Probably not in the CKA exam but still of interest.
kubectl create deployment ${name?} --image=nginx:1.23.1-alpine --replicas=2 --dry-run=client -o yaml | \
sed "s/Deployment$/StatefulSet/;s/strategy:.*/service Name: $name/;/status:/d" | \
kubectl apply -f -
- advanced-scheduling-in-kubernetes
- how-does-the-kubernetes-scheduler-work
- how-does-kubernetes-scheduler-work
- use /etc/kubernetes/manifests/kube-scheduler.yaml as a source
- add --scheduler-name= option
- change --leader-elect to false
- change --port to your desired port
- update port in probes to the same as above
kubectl create -f my-scheduler.yaml # not as a static pod
echo -e ' schedulerName: my-scheduler' >> pod.yaml
kubectl create -f pod.yaml
kubectl get events
git clone https://github.com/kodekloudhub/kubernetes-metrics-server.git
kubectl create -f kubernetes-metrics-server
kubectl top node
kubectl top pod
- RollingUpdate - a few pods at a time
- Recreate - all destroyed in one go and then all recreated
- you might need this for pods attaching to a PVC in ReadWriteOnce mode
spec:
strategy:
type: Recreate
kubectl set image deployment/myapp nginx=nginx:1.9.1
# ^^^^^-The name of the container inside the pod
kubectl edit deployment/myapp
#OR#
kubectl rollout restart deployment/myapp
kubectl annotate deployment/myapp kubernetes.io/change-cause="foo" # replace deprecated --record
kubectl rollout status deployment/myapp
kubectl rollout history deployment/myapp
kubectl rollout undo deployment/myapp
kubectl get po -o=custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[].image
- kube command == docker entrypoint
- kube args == docker cmd
- with CMD command line params get replaced entirely
- with ENTRYPOINT command line params get appended
- ENTRYPOINT => command that runs on startup
- CMD => default params to command at startup
- always specify in json format
ENTRYPOINT ["sleep"]
CMD ["5"]
# default command will be "sleep 5"
command: ["printenv"]
args: ["HOSTNAME", "KUBERNETES_PORT"]
# or
command:
- printenv
args:
- HOSTNAME
- KUBERNETES_PORT
docker command to pass in env vars
docker run --rm -ti -p 8080:8080 -e APP_COLOR=blue kodekloud/webapp-color
kubernetes yaml equivalent
spec:
containers:
- env:
- name: APP_COLOR
value: green
kubectl create configmap app-config --from-literal=APP_COLOR=green \
--from-literal=APP_MOD=prod
kubectl create configmap app-config --from-file=app_config.properties
use the contents of an entire directory:
kubectl create configmap tomd-test-ssl-certs --from-file=path/to/dir
or configure in yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
data:
APP_COLOR: green
APP_MOD: prod
image: foo
envFrom:
- configMapRef:
    name: app-config
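To pull in a single key instead of the whole ConfigMap, a sketch using valueFrom (same app-config/APP_COLOR names as above; env sits on the container, like envFrom):
env:
- name: APP_COLOR
  valueFrom:
    configMapKeyRef:
      name: app-config
      key: APP_COLOR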
- encrypt-data
- Read about the protections and risks of using secrets here
- There are better ways of handling sensitive data like passwords in Kubernetes, such as tools like Helm Secrets or HashiCorp Vault.
kubectl create secret generic app-secret --from-literal=DB_PASSWD=green \
--from-literal=MYSQL_PASSWD=prod
kubectl create secret generic app-secret --from-file=app_secret.properties
kubectl describe secret app-secret
kubectl get secret app-secret -o yaml
create with
apiVersion: v1
kind: Secret
metadata:
name: app-secret
data:
# encode with: echo -n "password123" | base64
# decode with: echo -n "encodedstring" | base64 --decode
DB_PASSWD: Z3JlZW4= # base64 of "green" - values under data: must be base64-encoded (or use stringData: for plain text)
MYSQL_PASSWD: cHJvZA== # base64 of "prod"
use as env var with
image: foo
envFrom:
- secretRef:
    name: app-secret
or as a volume with
volumes:
- name: app-secret-volume
secret:
secretName: app-secret
#ls /opt/app-secret-volumes
#cat /opt/app-secret-volumes/DB_PASSWD
apiVersion: v1
kind: Pod
metadata:
labels:
run: yellow
name: yellow
spec:
containers:
- image: busybox
name: lemon
- image: redis
name: gold
kubectl describe pod blue # check the state field of the initContainer and reason: Completed
initContainers:
- command:
- sh
- -c
- sleep 600
image: busybox
name: red-init
# componentstatus (cs) is deprecated in v1.19+
kubectl get cs
kubectl drain node01 --ignore-daemonsets
kubectl drain node01 --ignore-daemonsets --delete-emptydir-data
kubectl cordon node01
kubectl uncordon node01
etcdctl member list # list the members in the cluster
etcdctl endpoint status # list this server's status
etcdctl endpoint status --cluster # list all server status
etcdctl endpoint health --cluster # all endpoint health
etcdsrv() { kubectl get -n kube-system pod $(kubectl get -n kube-system po|awk "/apiserver/{print \$1;exit}") -oyaml|sed -n '/etcd-servers/s/.*=//p'; }
# my company only (probably)
systemctl status etcd-member.service
systemctl status etcd-backup.service
journalctl -eu etcd-backup.service
#NUKE#systemctl stop etcd-member.service && rm -rf /var/lib/etcd/*
export ETCDCTL_API=3
etcdctl
kubeadm version -o short
kubectl drain controlplane --ignore-daemonsets
apt update && apt-cache madison kubeadm
apt-get install -y --allow-change-held-packages kubeadm=1.20.0-00
yum list --showduplicates kubeadm --disableexcludes=kubernetes
yum install -y kubeadm-1.21.x-0 --disableexcludes=kubernetes
kubeadm upgrade plan v1.20.0
kubeadm config images pull
kubeadm upgrade apply -y v1.20.0
apt-get install -y --allow-change-held-packages kubelet=1.20.0-00 kubectl=1.20.0-00
yum install -y kubelet-1.21.x-0 kubectl-1.21.x-0 --disableexcludes=kubernetes
sudo systemctl daemon-reload && sudo systemctl restart kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl uncordon controlplane
# one-liners (for unjoined node)
sudo yum makecache -y fast && yum list --showduplicates kubeadm --disableexcludes=kubernetes
ver=1.21.2
sudo yum install -y kubeadm-${ver?}-0 --disableexcludes=kubernetes
sudo yum install -y kubelet-${ver?}-0 kubectl-$ver-0 --disableexcludes=kubernetes && sudo systemctl daemon-reload && sudo systemctl restart kubelet
kubectl drain node01 --ignore-daemonsets --force
apt update && apt-cache madison kubeadm
apt-get install -y --allow-change-held-packages kubeadm=1.20.0-00
kubeadm upgrade node # this is quick as just a config upgrade
apt-get install -y --allow-change-held-packages kubelet=1.20.0-00 kubectl=1.20.0-00
# where does this bit get done?
# ===
yum list docker-ce --showduplicates
# ===
sudo systemctl daemon-reload && sudo systemctl restart kubelet
kubectl uncordon node01
kubectl get all --all-namespaces -o yaml \
> all-deploy-services.yaml
ETCDCTL_API=3 etcdctl snapshot save snapshot.db
ETCDCTL_API=3 etcdctl snapshot status snapshot.db
crt=$(kubectl describe -n kube-system po etcd-controlplane|awk -F= '/--cert-file/{print $2}')
ca=$( kubectl describe -n kube-system po etcd-controlplane|awk -F= '/--trusted-ca-file/{print $2}')
key=$(kubectl describe -n kube-system po etcd-controlplane|awk -F= '/--key-file/{print $2}')
ETCDCTL_API=3 etcdctl --cacert=${ca?} --cert=${crt?} --key=${key?} snapshot save /opt/snapshot-pre-boot.db
service kube-apiserver stop
ETCDCTL_API=3 etcdctl snapshot restore snapshot.db \
--data-dir /new/data/dir
# !!CARE!! edit data-dir to point to restore location in volumes section
vim /etc/kubernetes/manifests/etcd.yaml
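A sketch of the relevant volumes section in etcd.yaml after a restore, assuming the snapshot was restored to /var/lib/etcd-from-backup (the volumeMount/--data-dir inside the container can usually stay as-is):
volumes:
- name: etcd-data
  hostPath:
    path: /var/lib/etcd-from-backup   # <= point the hostPath at the restore location
    type: DirectoryOrCreate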
Nick Perry kubesec.io
kubectl run -oyaml --dry-run=client nginx --image=nginx > nginx.yaml
docker run -i kubesec/kubesec scan - < nginx.yaml
basic auth (deprecated)
curl -vk https://master_node_ip:6443/api/v1/pods -u "userid:passwd"
curl -vk https://master_node_ip:6443/api/v1/pods --header "Authorization: Bearer ${token?}"
view certificates
kubectl get po kube-apiserver-controlplane -o yaml -n kube-system|grep cert
kubectl get po etcd-controlplane -o yaml -n kube-system|grep cert
openssl x509 -text -noout -in /etc/kubernetes/pki/apiserver.crt
openssl x509 -text -noout -in /etc/kubernetes/pki/etcd/server.crt|grep CN # etcd
openssl x509 -text -noout -in /etc/kubernetes/pki/apiserver.crt|grep Not # validity
openssl x509 -text -noout -in /etc/kubernetes/pki/ca.crt|grep Not # CA validity
for f in $(grep pki /etc/kubernetes/manifests/etcd.yaml|egrep 'key|crt'|awk -F= '{print $2}'); do echo +++ $f;test -f $f && echo y || echo n;done
vim /etc/kubernetes/manifests/etcd.yaml
docker logs $(docker ps|grep -v pause:|awk '/etcd/{print $1}')
grep pki /etc/kubernetes/manifests/kube-apiserver.yaml|grep '\-ca'
certificates API
openssl genrsa -out jane.key 2048
openssl req -new -key jane.key -subj "/CN=jane" -out jane.csr
cat jane.csr | base64
csr in yaml
# v1.19 = v1
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
name: jane
spec:
groups:
- system:authenticated
usages:
- digital signature
- key encipherment
- client auth # must be "client auth" for signerName kubernetes.io/kube-apiserver-client
signerName: kubernetes.io/kube-apiserver-client
# cat jane.csr|base64
request:
base64text
approving csr
kubectl get csr
kubectl certificate approve jane
kubectl get csr -o yaml
cat jane.b64|base64 --decode
cat akshay.csr|base64|sed 's/^/ /'>>akshay.yaml
sed -i "/request:/s/$/ $(echo $csr|sed 's/ //g')/" a
curl https://kubecluster:6443/api/v1/nodes \
--key admin.key \
--cert admin.crt \
--cacert ca.crt
curl https://controlplane:6443/api/v1/nodes --key $PWD/dev-user.key --cert $PWD/dev-user.crt --cacert /etc/kubernetes/pki/ca.crt
kubectl get nodes \
--server controlplane:6443 \
--client-key admin.key \
--client-certificate admin.crt \
--certificate-authority ca.crt
kubectl get nodes \
--kubeconfig config
kubectl config view
kubectl config view --kubeconfig=my-custom-config
kubectl config use-context prod-user@production
kubectl config set-context --current --namespace=alpha
kubectl config -h
kubectl config --kubeconfig=my-kube-config current-context
kubectl config --kubeconfig=my-kube-config use-context research
contexts:
- name: kubernetes-admin@kubernetes
context:
cluster: kubernetes
namespace: default
user: kubernetes-admin
better to use full path to crt etc. or base64 encode it
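For reference, a minimal kubeconfig sketch (cluster/user names from the context above; paths are illustrative) showing where full paths or base64 data go:
apiVersion: v1
kind: Config
current-context: kubernetes-admin@kubernetes
clusters:
- name: kubernetes
  cluster:
    server: https://controlplane:6443
    certificate-authority: /etc/kubernetes/pki/ca.crt   # full path, or certificate-authority-data: <base64>
contexts:
- name: kubernetes-admin@kubernetes
  context:
    cluster: kubernetes
    user: kubernetes-admin
    namespace: default
users:
- name: kubernetes-admin
  user:
    client-certificate: /root/admin.crt                 # or client-certificate-data: <base64>
    client-key: /root/admin.key                         # or client-key-data: <base64>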
curl -k https://controlplane:6443/ --key $PWD/dev-user.key --cert $PWD/dev-user.crt --cacert /etc/kubernetes/pki/ca.crt
curl -k https://controlplane:6443/apis --key $PWD/dev-user.key --cert $PWD/dev-user.crt --cacert /etc/kubernetes/pki/ca.crt
# =OR=
kubectl proxy # starts on localhost:8001 and proxy uses creds from kubeconfig file
curl http://localhost:8001
kubectl describe -n kube-system po kube-apiserver-controlplane
Authorisation Mechanisms
- Node
- system:node:node01
- ABAC (Attribute)
- need to restart kube-apiserver
- RBAC (Role)
- Webhook
- for outsourcing authorisation
- e.g. Open Policy Agent
- AlwaysAllow
  - set via the kube-apiserver switch --authorization-mode (AlwaysAllow is the default); it takes a comma-separated list, e.g. --authorization-mode=Node,RBAC,Webhook
- AlwaysDeny
Roles are namespaced; if no namespace is specified, they are created in the default namespace
Role yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: developer
# namespace: tbc
rules:
# blank for core groups, names for anything else
- apiGroups: [""]
resources: ["pods"]
verbs: ["list,"get","create","update","delete"]
#resourceNames: ["red","blue"] # optional, specific resources e.g. only certain pods
RoleBinding yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: devuser-developer-binding
# namespace: tbc
subjects:
- kind: User
name: dev-user
apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
kubectl create -f developer-role.yaml
kubectl create -f devuser-developer-binding.yaml
kubectl get roles
kubectl get rolebindings
kubectl describe role developer
kubectl describe rolebinding devuser-developer-binding
Can I?
kubectl auth can-i create deployments
kubectl auth can-i delete nodes
kubectl auth can-i create deployments --as dev-user
kubectl auth can-i create pods --as dev-user
kubectl auth can-i create pods --as dev-user --namespace test
# can edit in-place
kubectl edit role developer -n blue
These apply to the cluster scoped resources, rather than namespaced resources.
- nodes
- PV
- CSR
- clusterroles
- clusterrolebindings
- namespaces
For a list, run:
kubectl api-resources --namespaced=true
kubectl api-resources --namespaced=false
kubectl get roles
kubectl get roles -n kube-system -o yaml
kubectl get role -n blue developer -o yaml
Cluster Role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cluster-administrator
# NB. no namespace:
rules:
- apiGroups: # [""] or ["apps","extensions"] for example
- ""
resources: # ["nodes"] or
# optional
- nodes
verbs: # ["get","list"] or
- list
- get
- create
- delete
Cluster RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cluster-admin-role-binding
# NB. no namespace:
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-administrator
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: cluster-admin
kubectl get rolebinding -n blue dev-user-binding -o yaml
kubectl create -f rb.yaml
Can I?
kubectl auth can-i create pods
kubectl auth can-i create pods --as dev-user
# don't need to delete & recreate - can edit in-place
kubectl edit role developer -n blue
Used by machines, e.g.
- Prometheus
- Jenkins
- to deploy applications on a cluster
kubectl create serviceaccount dashboard-sa
kubectl get sa
kubectl describe sa dashboard-sa | grep ^Token
# token is a secret object. to view it use
kubectl describe secret $(kubectl describe sa dashboard-sa | awk '/^Token/{print $2}')
# token is a bearer token
curl -kv https://x.x.x.x:6443/api \
--header "Authorization: Bearer ${token?}"
The service account token can be mounted as a volume into the pod. The default SA token in each namespace is automatically mounted into every pod that is created, at /var/run/secrets/kubernetes.io/serviceaccount, so you can access it from inside the container.
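For example, from inside any running container you can read the mounted token (a sketch; my-pod is a placeholder name):
kubectl exec -it my-pod -- ls /var/run/secrets/kubernetes.io/serviceaccount
kubectl exec -it my-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/token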
# inside template>pod spec, NOT deployment spec!
spec:
serviceAccountName: dashboard-sa
# must delete & recreate pod (deployment handles this)
# to prevent automount sa
automountServiceAccountToken: false
image: docker.io/nginx/nginx
# ^^^^^-- image/repository
# ^^^^^-------- user/account
# ^^^^^^^^^-------------- registry
image: gcr.io/kubernetes-e2e-test-images/dnsutils
docker login private-registry.io
docker run private-registry.io/apps/internal-app
creating a registry credentials secret
kubectl create secret docker-registry regcred \
--docker-server=private-registry.io \
--docker-username=registry-user \
--docker-password=password123 \
--docker-email=registry-user@example.com # dummy address
spec:
containers:
- image: nginx:latest
name: ignition-nginx
imagePullSecrets:
- name: regcred
security settings & capabilities
apiVersion: v1
kind: Pod
metadata:
name: web-pod
spec:
securityContext:
runAsUser: 1000
containers:
- name: nginx
image: nginx:1.20
command: ["sleep", "3600"]
# container settings override pod settings
securityContext:
# i.e. docker run --user=999 ubuntu sleep 3600
runAsUser: 999
# capabilities are only supported at the container level, not the pod level
# i.e. docker run --cap-add MAC_ADMIN ubuntu
capabilities:
add: ["MAC_ADMIN"]
docker pull kodekloud/webapp-conntest
- Ingress is to the pod
- Egress is from the pod
- only looking at direction in which the traffic originated
- the response is not in scope
In the from: or to: sections, each entry with a hyphen prefix is a separate rule; rules are OR'd together, i.e. traffic only needs to match one rule. Fields within a single rule (no hyphen prefix) are criteria that are AND'd together, i.e. traffic must match ALL criteria. See the sketch below.
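A hypothetical NetworkPolicy sketch illustrating this (the labels, CIDR and port are made up):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:            # rule 1, criteria 1...
        matchLabels:
          role: api
      namespaceSelector:      # ...AND criteria 2 (no hyphen: same rule)
        matchLabels:
          name: prod
    - ipBlock:                # OR rule 2 (hyphen: separate rule)
        cidr: 192.168.5.10/32
    ports:
    - protocol: TCP
      port: 3306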
/var/lib/docker ??? - what is here?
- aufs
- containers
- image
- volumes
- data_volume created here by
docker volume create data_volume
- datavolume2 auto-created on the fly if !exist
# -v is the old volume mount syntax
-v /data/mysql:/var/lib/mysql
# --mount is the new volume mount syntax
--mount type=bind,source=/data/mysql,target=/var/lib/mysql
depends on underlying OS e.g. aufs, zfs, btrfs, Device Mapper, Overlay, Overlay2
NB. volumes, plural!
spec:
containers:
- image: nginx:alpine
name: nginx
# NB. volumes, plural!
volumes:
- name: local-pvc
persistentVolumeClaim:
claimName: local-pvc
kubectl exec webapp -- cat /log/app.log
access mode must match the pv
NB. Mounts, plural!
spec:
containers:
- image: nginx:alpine
name: nginx
# NB. Mounts, plural!
volumeMounts:
- name: local-pvc
mountPath: "/var/www/html"
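For completeness, a minimal PV and matching PVC sketch (hostPath and sizes are illustrative; the claim name matches the local-pvc used above):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc
spec:
  accessModes:
  - ReadWriteOnce          # must match the PV
  resources:
    requests:
      storage: 500Mi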
kubectl get sc
kubectl describe sc ${x?} | grep '"provisioner":'
kubectl describe sc portworx-io-priority-high |awk -F= '/"provisioner":/{print $2}'|jq '.'
kubectl describe sc portworx-io-priority-high |awk -F= '/^Annotations/{print $2}'|jq '.'
kubectl describe sc ${x?} | grep 'no-provision'
A Pod needs to be created as a consumer of the PVC, or the PVC remains in "Pending"
kubectl describe pvc local-pvc
and look for events
The example Storage Class called local-storage makes use of VolumeBindingMode set to WaitForFirstConsumer. This will delay the binding and provisioning of a PersistentVolume until a Pod using the PersistentVolumeClaim is created.
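A sketch of such a StorageClass (the no-provisioner value matches the static/local-volume case grepped for above):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # no dynamic provisioning
volumeBindingMode: WaitForFirstConsumer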
Additional topics such as StatefulSets are out of scope for the exam. However, if you wish to learn them, they are covered in the Certified Kubernetes Application Developer (CKAD) course.
Switching & Routing
export dev=eth0
ip link
ip addr add 192.168.1.10/24 dev $dev
vim /etc/network/interfaces # <= make IPs permanent here
ip route add 192.168.2.0/24 via 192.168.1.1
ip route add default via 192.168.1.1 # or 0.0.0.0
echo 1 > /proc/sys/net/ipv4/ip_forward
DNS
cat /etc/resolv.conf # nameserver, search
grep ^hosts /etc/nsswitch.conf
CoreDNS
ip netns add red
ip netns add blue
ip netns exec red ip link # exec ip command inside a netns
ip -n red link # or with this
ip netns exec red arp
ip netns exec red route
a virtual ethernet pair/cable is often called a pipe
ip link add veth-red type veth peer name veth-blue # create pipe
ip link set veth-red netns red # attach pipe to interface
ip link set veth-blue netns blue # ...at each end
ip -n red addr add 192.168.15.1 dev veth-red # assign IP
ip -n blue addr add 192.168.15.2 dev veth-blue # ...to each namespace
ip -n red link set veth-red up # bring up interface
ip -n blue link set veth-blue up # ...in each namespace
ip netns exec red ping 192.168.15.2 # ping blue IP
ip netns exec red arp # identified its neighbour
ip netns exec blue arp # ...in each namespace
arp # but the host has no visibility
virtual switch
- linux bridge
- Open vSwitch
ip link add
ip link add v-net-0 type bridge # create vswitch (bridge)
ip link set dev v-net-0 up # bring up the vswitch
ip -n red link del veth-red # del link, other end auto-deleted
ip link add veth-red type veth peer name veth-red-br # create pipe for red
ip link add veth-blue type veth peer name veth-blue-br # ...and blue
ip link set veth-red netns red # attach pipe to red ns
ip link set veth-blue netns blue # ...and blue
ip link set veth-red-br master v-net-0 # attach pipe to red ns
ip link set veth-blue-br master v-net-0 # ...and blue
ip -n red addr add 192.168.15.1 dev veth-red # assign IP
ip -n blue addr add 192.168.15.2 dev veth-blue # ...to each namespace
ip -n red link set veth-red up # bring up interface
ip -n blue link set veth-blue up # ...in each namespace
ip addr add 192.168.15.5/24 dev v-net-0 # assign IP for host
ping -c3 192.168.15.1 # ping from host
ip netns exec blue ping 192.168.1.3 # destination unreachable
ip netns exec blue \
ip route add 192.168.1.0/24 via 192.168.15.5 # route to host network
ip netns exec blue ping 192.168.1.3 # no reply, need NAT
iptables -t nat -A POSTROUTING \
-s 192.168.15.0/24 -j MASQUERADE # add SNAT
ip netns exec blue ping 192.168.1.3 # now reachable
ip netns exec blue ping 8.8.8.8 # destination unreachable
ip netns exec blue \
ip route add default via 192.168.15.5 # route via host
ip netns exec blue ping 8.8.8.8 # Internet-a-go-go!
iptables -t nat -A PREROUTING \
-p tcp --dport 80 -j DNAT --to-destination 192.168.15.2:80 # port forward rule to blue ns
While testing the Network Namespaces, if you come across issues where you can't ping one namespace from the other, make sure you set the NETMASK while setting IP Address. i.e: 192.168.1.10/24
ip -n red addr add 192.168.1.10/24 dev veth-red
Another thing to check is FirewallD/IP Table rules. Either add rules to IP Tables to allow traffic from one namespace to another. Or disable IP Tables all together (Only in a learning environment).
docker run --network none nginx # cannot talk to each other or outside world
docker run --network host nginx # only on local host http://192.168.1.2:80
docker run nginx # bridge 172.17.0.0/16 by default
# creates a network namespace
# equivalent to
ip link add docker0 type bridge
ip addr # docker0 is an interface to the host so had an IP
docker run nginx:1.21.1 # creates a network namespace
ip netns # generated hex ID
docker inspect ${containerid?} # netns is end of SandboxID
docker network ls # name is bridge by default
ip link # but called docker0 by the host
Docker, rkt, Mesos and k8s all take the same bridging approach; in CNI it is implemented as the "bridge" plugin.
bridge is a plugin for CNI
Other CNI plugin examples:
- bridge, vlan, ipvlan, macvlan, windows
- dhcp, host-local
- weave, flannel, cilium, vmwarensx, calico, infoblox
Docker does NOT implement CNI but rather CNM (container network model) so you can't run docker run --network=cni-bridge nginx
but you could use:
docker run --network=none nginx
bridge add ${id?} /var/run/netns/${id?}
N.B. CNI and CKA Exam...
An important tip about deploying Network Addons in a Kubernetes cluster. In the upcoming labs, we will work with Network Addons. This includes installing a network plugin in the cluster. While we have used weave-net as an example, please bear in mind that you can use any of the plugins which are described here:
In the CKA exam, for a question that requires you to deploy a network addon, unless specifically directed, you may use any of the solutions described in the link above.
However, the documentation currently does not contain a direct reference to the exact command to be used to deploy a third party network addon.
The links above redirect to third-party/vendor sites or GitHub repositories which cannot be used in the exam. This has been intentionally done to keep the content in the Kubernetes documentation vendor-neutral.
At this moment in time, there is still one place within the documentation where you can find the exact command to deploy weave network addon:
Stacked control plane and etcd nodes (step 2)
- Must have unique:
- hostname
- mac
- Master must have ports open:
- 6443 (apiserver)
- 10250 (kubelet)
- 10251 (scheduler)
- 10252 (controller-mgr)
- Workers must have ports open:
- 10250 (kubelet)
- 30000-32767 (container services)
- Etcd
- 2379 (etcd server client API)
- 2380 (etcd peer communication)
Useful commands
ip link
ip link show eth0
ip addr
ip addr add 192.168.15.5/24 dev v-net-0
ip route add 192.168.1.0/24 via 192.168.2.1
cat /proc/sys/net/ipv4/ip_forward
arp
netstat -plnt
Every pod must
- have an IP
- connectivity to all pods on node
- connectivity to all pods on other nodes
net-script.sh <add|delete>
vim -c ":set syntax=sh" /etc/systemd/system/multi-user.target.wants/kubelet.service
ps -ef|grep kubelet|grep cni
sudo -p four: vim -c ":set syntax=sh" /var/lib/kubelet/config.yaml
ls /opt/cni/bin/
ls /etc/cni/net.d/
cat /etc/cni/net.d/10-*|jq '.'
An agent/service runs on each node and the agents communicate with each other; each agent stores the network topology. Weave creates a bridge called "weave" (separate from the bridge created by Docker etc.). A pod can be attached to multiple bridge networks. Weave ensures the pod has a route to the agent, and the agent then takes care of reaching pods on other nodes, performing encapsulation.
can be deployed as daemons on the node OS or as a DaemonSet (ideally)
kubectl apply -f "...url..."
kubectl get po -n kube-system
kubectl logs weave-net-... -n kube-system
IP address management (IPAM) is the responsibility of the CNI plugin; CNI ships two built-in IPAM plugins:
- host-local
- dhcp
weave uses 10.32.0.0/12 by default and assigns a portion (configurable) of it to each node
- ClusterIP - only accessible from within the cluster, runs on the cluster
- NodePort - exposes the service via a port on each node so it is accessible from outside the cluster
kube-proxy does this:
- monitors API for new services
- gets IP from predefined range
- creates forwarding rules on each node
kube-proxy can create this ip:port rule in a number of different ways
- userspace
- ipvs
- iptables [default]
kube-proxy --proxy-mode [userspace|ipvs|iptables] ...
ClusterIPs are allocated from 10.0.0.0/24 by default, often 10.96.0.0/12 is used. N.B. must not overlap with PodNetwork, typically 10.244.0.0/16
kube-apiserver --service-cluster-ip-range CIDR
kubectl get svc # list ClusterIP
kubectl get svc db-service
iptables -L -t nat | grep db-service
sudo grep 'new service' /var/log/kube-proxy.log # location varies
sudo grep 'new service' /var/log/pods/kube-system_kube-proxy-*/kube-proxy/*.log
kubectl logs -n kube-system kube-proxy-kxg8g|less
# if no logs, check process verbosity
Services
- when service created, kube dns record is created, can use service name, within same namespace e.g. web-service
- when in a different namespace, append the namespace as a domain e.g. web-service.apps
- all records of a type e.g. services are grouped together in a subdomain, svc e.g. web-service.apps.svc
- all services & pods are in a root domain, cluster.local by default e.g. web-service.apps.svc.cluster.local
Pods
- Pods dns not created by default but can be enabled
- ip has dots substituted by dashes e.g. 10-244-2-5
- namespace as before, or default
- type is pod e.g. 10-244-2-5.apps.pods.cluster.local
/etc/coredns/Corefile
- kubernetes plugin in Corefile is where the TLD for the cluster is set e.g. cluster.local
the pods insecure option enables creation of DNS records for pods
the Corefile is passed in as a ConfigMap:
kubectl get cm -n kube-system coredns -o yaml
For pods, resolv.conf is configured (by the kubelet) with the Service registered by CoreDNS:
kubectl get svc -n kube-system kube-dns
search domains are only possible for services; pods must use fqdns
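You can see this in a pod's /etc/resolv.conf (a sketch; my-pod is a placeholder, the nameserver is the kube-dns Service ClusterIP, commonly 10.96.0.10, and the search list assumes the default namespace):
kubectl exec -it my-pod -- cat /etc/resolv.conf
# nameserver 10.96.0.10
# search default.svc.cluster.local svc.cluster.local cluster.local
# options ndots:5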
ns=kube-system
kubectl logs ${ns:+-n $ns} $(
kubectl get po -A -l k8s-app=kube-dns -o name|head -1)
nslookup pod-i-p-addr.namespace.pod.cluster.local
nslookup service-name.namespace.svc.cluster.local
# e.g.
nslookup 10-244-69-111.test.pod.cluster.local
nslookup nginx-service.test.svc.cluster.local
nslookup nginx-service.prod.svc.cluster.local
# from prod
nslookup nginx-service.test
# from within the same test namespace
nslookup nginx-service
See DNS for Services and Pods for more details.
- Ingress Controller
- nginx
- GCE
- Contour
- haproxy
- traefik
- istio
- Ingress Resources
- to split by url
  - one rule, two paths
- to split by host
  - two rules, one path
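A sketch of the split-by-URL case (wear-service comes from the imperative example below; video-service and the paths are assumptions):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-wear-watch
spec:
  rules:
  - http:                       # one rule...
      paths:
      - path: /wear             # ...two paths
        pathType: Prefix
        backend:
          service:
            name: wear-service
            port:
              number: 80
      - path: /watch
        pathType: Prefix
        backend:
          service:
            name: video-service
            port:
              number: 80
# to split by host instead: two entries under rules, each with its own host: and a single path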
kubectl describe ingress foo
Now, in k8s version 1.20+ we can create an Ingress resource imperatively like this:
# kubectl create ingress <ingress-name> --rule="host/path=service:port"
kubectl create ingress ingress-test --rule="wear.my-online-store.com/wear*=wear-service:80"
Find more information and examples in the reference links below:
References:
- usage
- METALLB IN LAYER 2 MODE
- Using MetalLB to add the LoadBalancer Service to Kubernetes Environments which supports multiple networks
Maximums
- 5000 nodes
- 150,000 pods
- 300,000 total containers
- 100 pods per node
- API Server
- active-active
- one node addressed, through LB
- Controller Manager
- active-standby
- leader election by getting a lock on the kube-controller-manager endpoint
- lease for 15s, leader renews every 10s (by default)
- etcd
- stacked topology
- easier, less resilient/fault tolerant
- external etcd topology
- harder
- api server has a list of etcd servers
- since etcd is distributed, can read/write to any instance
- distributed consensus with RAFT protocol
- write complete if can be confirmed on majority of cluster nodes (quorum)
- quorum = floor(N/2) + 1, e.g. 3 nodes -> quorum 2, 5 nodes -> quorum 3 (use an odd number of nodes)
- stacked topology
--initial-cluster-peer="one...,two..." # list of peers
export ETCDCTL_API=3
etcdctl put name john
etcdctl get name
etcdctl get / --prefix --keys-only
Maybe not part of CKA - tbc...
ETCDCTL is the CLI tool used to interact with ETCD.
ETCDCTL can interact with ETCD Server using 2 API versions - Version 2 and Version 3. By default it's set to use Version 2. Each version has a different set of commands.
For example ETCDCTL version 2 supports the following commands:
etcdctl backup
etcdctl cluster-health
etcdctl mk
etcdctl mkdir
etcdctl set
Whereas the commands are different in version 3
etcdctl snapshot save
etcdctl endpoint health
etcdctl get
etcdctl put
To set the right API version, set the ETCDCTL_API environment variable:
export ETCDCTL_API=3
When the API version is not set, it is assumed to be version 2, and the version 3 commands listed above don't work. When it is set to version 3, the version 2 commands listed above don't work.
Apart from that, you must also specify the path to the certificate files so that ETCDCTL can authenticate to the ETCD API Server. The certificate files are available on the etcd-master at the following paths. We discuss more about certificates in the security section of this course, so don't worry if this looks complex:
--cacert /etc/kubernetes/pki/etcd/ca.crt
--cert /etc/kubernetes/pki/etcd/server.crt
--key /etc/kubernetes/pki/etcd/server.key
So for the commands I showed in the previous video to work you must specify the ETCDCTL API version and path to certificate files. Below is the final form:
# reformatted from one-liner for readability
kubectl exec etcd-master -n kube-system -- sh -c "
ETCDCTL_API=3 etcdctl get / --prefix --keys-only --limit=10 \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/server.crt \
--key /etc/kubernetes/pki/etcd/server.key
"
Installing Kubernetes the hard way can help you gain a better understanding of putting together the different components manually.
An optional series on this is available at our youtube channel here: Install Kubernetes Cluster from Scratch
The GIT Repo for this tutorial can be found here: kubernetes-the-hard-way
- provision nodes
- install container runtime (docker)
- install kubeadm
- initialise master
- configure pod network
- join worker nodes to master node
The vagrant file used in the next video is available here: certified-kubernetes-administrator-course
Here's the link to the documentation: install-kubeadm
Expand...
As per the CKA exam changes (effective September 2020), End to End tests are no longer part of the exam and hence have been removed from the course.
If you are still interested to learn this, please check out the complete tutorial and demos in our YouTube playlist:
Draw map of entire stack
- DB pod
- DB svc
- Web pod
- Web svc
Check top-down: client > web > db. Check the service first; check selectors and labels:
kubectl describe svc web-svc # grep for Selector:
kubectl describe pod web-pod # check matches in metadata>labels
Check pod is running ok
kubectl get po
kubectl describe po web
kubectl logs web -f
kubectl logs web -f --previous
Repeat for DB svc then pod
Further troubleshooting tips in kubernetes doc Troubleshooting Applications
Check status of nodes
kubectl get no
kubectl get po
kubectl get po -n kube-system
# if deployed as services
service kube-apiserver status
service kube-controller-manager status
service kube-scheduler status
service kubelet status
service kube-proxy status
kubectl logs kube-apiserver-master -n kube-system
# if deployed as services
sudo journalctl -u kube-apiserver
Further troubleshooting tips in kubernetes doc Troubleshooting Clusters
Check status of nodes
kubectl get no # look for NotReady
kubectl describe no node01
# if status Unknown, comms lost with master, possible node failure
# then check LastHeartbeatTime for when it happened
# if crashed, bring it back up
top # check for CPU/Mem issues
df -h # check for disk issues
service kubelet status # check kubelet status
sudo journalctl -u kubelet # check kubelet logs
openssl x509 -text -in /var/lib/kubelet/worker-1.crt # check certs
# check certs are not expired and have been issued by the correct CA
# Subject: ... O = system:nodes
tbc
Basics
Always start with a $ to represent the root element (dict with no name).
$[1] # the 2nd item in a root list/array
$.fruit # the dict named fruit
$.fruit.colour # the dict in a dict
Results are returned in an array i.e. square brackets
To limit the output, use a criteria
?() # denotes a check/criteria inside the list
@ # represents each item in the list
@ > 40 # items greater than 40
@ == 40
@ != 40
@ in [40,41,42]
@ nin [40,41,42]
$.car.wheels[?(@.location == "rear-right")].model
Wildcards
$.*.price # price of all cars
$[*].model # model of all cars in array/list
$.*.wheels[*].model # model of all wheels of all models
# literal
$.prizes[5].laureates[2]
# better but overly verbose
$.prizes[?(@.year == "2014")].laureates[?(@.firstname == "Malala")]
# optimal
$.prizes[*].laureates[?(@.firstname == "Malala")]
Lists
$[0:3] # start:end - elements 0,1,2; the end index (the 4th element) is NOT included
$[0:4] # get first 4 elements, INCLUDING the 4th
$[0:8:2] # in increments of 2
$[-1] # the last item in list. not in ALL implementations
$[-1:0] # this works to get the last element
$[-1:] # you can leave out the 0
$[-3:0] # last three elements
JSON PATH Documentation
JSON PATH in kubectl
# develop a JSONPATH query and replace "HERE" between the braces
kubectl get no -o jsonpath='{HERE}'
# example JSON PATH for a pod json
echo '$.status.containerStatuses[?(@.name=="redis-container")].restartCount'
# $ is not mandatory, kubectl adds it
kubectl get no -o jsonpath='
{.items[*].metadata.name}
{.items[*].status.nodeInfo.architecture}
{.items[*].status.capacity.cpu}'
# can use \n and \t etc.
kubectl get no -o jsonpath='{.items[*].metadata.name}{"\n"}{.items[*].status.capacity.cpu}{"\n"}'
# with ranges for pretty output
kubectl get no -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\n"}{end}'
# using custom columns
kubectl get no -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu
# sorting
kubectl get no --sort-by=.status.capacity.cpu
# Mock Exam 3
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'
My Homegrown Examples (before I understood JSON PATH)
# get the Pod CIDR
kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
# get images for the running pods
kubectl get po -o jsonpath="{.items[*].spec.containers[*].image}"
# get names & images for pods
kubectl get po -o=custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[].image
Not part of CKA but interesting.
Other exam resources that may be of use
Also not part of CKA but these are interesting articles I found on t'Internet:
- Kubernetes The Hard Way
- Network Service Mash
- clusterapi
- Quick Start
- Cluster API v1alpha3 original blog post
- kubespray now supports kube-vip
These below are generated from a OneTab export with ot2md():
ot2md() {
# OneTab to Markdown converter
# converts a range of lines from a OneTab export
# into a bullet point Markdown format and puts them on the Mac clipboard
# Usage: ot2md <start_string> <end_string>
local f=export5
sed -n "
/${1?}/,/${2?}/{
# remove whitespace from pipe separator
s/ | /|/
# title fixups
s/ - Stack Overflow//
s/ . Opensource.com//
s/ . Appvia.io//
s/ . Code-sparks//
s/ . The New Stack//
s/ - T&C DOC//
s/ - General Discussions.*//
s/ . GitHub//
s/ . Kubernetes//
s| . kubernetes/kubernetes||
s| . flannel-io/flannel||
s/ . by .* Medium//
s/ . by .* ITNEXT//
# special one-time fixups
s/: .Open/: Open/g
s/.best practice./\"best practice\"/g
s/, bare metal load-balancer for Kubernetes//
s|\. TL/DR . made . plugin to clean up your.||g
# replace strange unicode delimiter chars with hyphens
/ [IiAa] /!s/ . / - /g
p
}
" $f | \
awk -F\| '
{
# remove trailing spaces from link title or you get markdown lint warnings
sub(/ $/,"",$2)
# title fixups
sub(/kubernetes - /,"",$2)
printf("* [%s](%s)\n",$2,$1)
}
' | tee >(pbcopy)
}
- Living with Kubernetes: Cluster Upgrades
- GitHub - cncf/curriculum: Open Source Curriculum for CNCF Certification Courses
- Linux Foundation Certification Exams: Candidate Handbook
- Important Instructions: CKA and CKAD
- kubectl Cheat Sheet
- Tutorial: Deploy Your First Kubernetes Cluster
- Kubernetes Tutorial - Step by Step Guide to Basic Kubernetes Concepts
- MetalLB configuration
- MetalLB troubleshooting
- Load Balancer Services Always Show EXTERNAL-IP Pending
- Kubernetes and MetalLB: LoadBalancer for On-Prem Deployments
- Metallb LoadBalancer is stuck on pending
- external-ip status is pending - Issue #673 - metallb/metallb
- kubectl Cheat Sheet
- pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list v1.Service: Unauthorized
- Troubleshooting - NGINX Ingress Controller
- The worst so-called "best practice" for Docker
- Working with kubernetes configmaps, part 2: Watchers
- Kubernetes at home - Bringing the pilot to dinner
- Dirty Kubeconfig? Clean it up!
- kubeadm init
- kube-dns ContainerCreating /run/flannel/subnet.env no such file - Issue #36575
- pod cidr not assigned - Issue #728
- Kube-Flannel cant get CIDR although PodCIDR available on node
- How do I access a private Docker registry with a self signed certificate using Kubernetes?
- Test your Kubernetes experiments with an open source web interface
And more OneTab exports
- Implementing Chaos Engineering in K8s: Chaos Mesh Principle Analysis and Control Plane Development
- Siloscape: The Dark Side of Kubernetes - Container Journal
- Single Sign-On SSH With Zero Key Management
- Easy Monitoring of Container Status - Log
- Kubernetes at home - Bringing the pilot to dinner
- Kubernetes Volumes Guide - Examples for NFS and Persistent Volume Book
- Building Docker Images The Proper Way
- matchbox/deployment.md at master - poseidon/matchbox
- Kubernetes Proceeding with Deprecation of Dockershim in Upcoming 1.24 Release
- Kubernetes Nodes - The Complete Guide
Yet more OneTab exports
- Kubernetes ConfigMap Configuration and Reload Strategy
- Restart pods when configmap updates in Kubernetes?
- Chart Development Tips and Tricks
- kustomize/configGeneration.md at 12d1771bb349e1523bc546e314da63c684a7faf2 - kubernetes-sigs/kustomize
- Facilitate ConfigMap rollouts - management - Issue #22368
- stakater/Reloader: controller to watch changes in ConfigMap and Secrets and do rolling upgrades on Pods with their associated Deployment, StatefulSet, DaemonSet and DeploymentConfig
- kubectl expose
- k run httpd --image=httpd:alpine --port=80 --expose
- manual schedulers
- taints & node affinity
- affinity operators: In, Exists
Not Kubernetes but I put these here for some reason:
Another free CA as an alternative to Let's Encrypt (scotthelme.co.uk) - SSL certs, might be useful for Ingress stuff.