Hello.

Amazon EKS (hereafter "EKS") became available in the Tokyo region last year.
aws.amazon.com

Now that EKS has arrived in Tokyo, I suspect many of you are considering using it.

This time I worked through a workshop that teaches practical, hands-on EKS usage, so in this post I'll introduce what you can learn from it.
eksworkshop.com
- TL;DR
- Workshop details
- Basic Kubernetes terminology and architecture
- EKS overview
- Prerequisites
- Creating an EKS cluster
- Launching the dashboard
- Deploying a sample application
- Installing Helm, the Kubernetes package manager
- Building a log management platform with EFK
- Creating a CI/CD flow
- Scaling Kubernetes
- Monitoring with Prometheus and Grafana
- Summary
TL;DR
Here is an overview of this article.
- The EKS workshop teaches the following:
  - Basic Kubernetes terminology and architecture, and an overview of EKS
  - Creating/deleting an EKS cluster with a CLI tool
  - Installing a package manager and how to manage packages with it
  - An example logging platform configuration
  - An example CI/CD setup built from AWS services
  - An example scale-out configuration
  - An example monitoring setup
Workshop details
The workshop is structured so that essentially everything can be completed through command-line operations alone.
It is also fairly substantial in volume; as a rough guide, expect it to take about 2-3 hours.
Below, I'll summarize each chapter.
Basic Kubernetes terminology and architecture
The workshop introduces the following basic Kubernetes terms (objects); a small kubectl example follows the list.
- Nodes: the machines that make up the cluster. There are two kinds: master nodes and worker nodes.
- Pod: the smallest deployable computing unit that can be created and managed in Kubernetes.
- ReplicaSet: a mechanism that keeps a defined number of Pods running at all times.
- DaemonSet: runs a single instance of a Pod on each worker node.
- Deployment: a resource that manages multiple ReplicaSets, making rolling updates, rollbacks, and the like possible.
- Job: a resource that runs a one-off task using containers.
- Services: assigns a fixed IP to a logical group of Pods.
- Label: key/value pairs used for associating with and filtering objects in Kubernetes.
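Labels in particular drive most of the grouping above. As a minimal sketch of attaching a label and then filtering by it (the pod name and label key/value here are made up for illustration):

```
# Attach a label to an existing pod (pod name "web-1" is hypothetical)
kubectl label pod web-1 app=demo

# List only the pods carrying that label
kubectl get pods -l app=demo
```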
Overview of the Kubernetes architecture
The Kubernetes architecture consists of the following two planes; a quick way to inspect them from the command line follows the list.

- Control Plane
  - API Server: provides the endpoint for REST / kubectl.
  - etcd: a distributed key-value store.
  - Controller-manager: manages object status.
  - Scheduler: schedules Pods onto the worker nodes.
- Data Plane: made up of the worker nodes.
  - kubelet: acts as the conduit between the API server and a node.
  - kube-proxy: manages IP translation and routing.
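From the user's side, the control plane is mostly visible through its API. As a quick sketch of poking at it (note that on a managed control plane like EKS, componentstatuses may report little or nothing, since those components run on AWS's side):

```
# Show the API server endpoint the cluster is served from
kubectl cluster-info

# Health of scheduler / controller-manager / etcd (may be limited on managed control planes)
kubectl get componentstatuses
```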
Kubernetes management tools
The workshop also introduces several management tools for EKS and Kubernetes, and from my own research there are a number of other tools beyond those.
EKS overview
EKS is a managed service that makes it easy to run Kubernetes on AWS.
EKS consists of a control plane (master nodes) managed by AWS and a data plane (worker nodes) managed by the user.
Its distinguishing feature is that provisioning and scaling of the master nodes across three Availability Zones requires no management on the user's side.
It also automatically detects and replaces unhealthy control plane nodes and handles patching of the control plane.
Pricing
Pricing consists of the following two components (a rough worked example follows the list):

- $0.20 per hour for each EKS cluster.
- Charges for the EC2 instances, EBS volumes, and so on that run the worker nodes.
  - The billing model is the same as for regular EC2 and EBS.
Prerequisites
The preparation steps are as follows:

- Create an AWS Cloud9 (hereafter "Cloud9") environment.
- Install kubectl, aws-iam-authenticator, and jq.
- Clone the application source code.
- Attach an IAM role to the Cloud9 EC2 instance.
- Configure credentials for Cloud9.
The tools are installed with the following commands:

```
# Install kubectl
$ sudo curl --silent --location -o /usr/local/bin/kubectl "https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/kubectl"
$ sudo chmod +x /usr/local/bin/kubectl

# Install IAM Authenticator
$ go get -u -v github.com/kubernetes-sigs/aws-iam-authenticator/cmd/aws-iam-authenticator
$ sudo mv ~/go/bin/aws-iam-authenticator /usr/local/bin/aws-iam-authenticator

# Install jq
$ sudo yum -y install jq
```
In the Cloud9 credential settings, we set the default region used when creating EKS and other resources.
Since I'm using Cloud9 in the Singapore region, this resolves to ap-southeast-1.

```
$ export AWS_REGION=$(curl -s 169.254.169.254/latest/dynamic/instance-identity/document | jq -r .region)
$ echo "export AWS_REGION=${AWS_REGION}" >> ~/.bash_profile
$ aws configure set default.region ${AWS_REGION}
$ aws configure get default.region
ap-southeast-1
```
Creating an EKS cluster
Next, we create the EKS cluster.
The workshop uses a CLI tool called eksctl to create it. The version used here is 0.1.23.

```
$ eksctl version
[ℹ]  version.Info{BuiltAt:"", GitCommit:"", GitTag:"0.1.23"}
```

The cluster is created with the following command, which builds three nodes. Cluster startup takes roughly 15 minutes.
```
$ eksctl create cluster --name=eksworkshop-eksctl --nodes=3 --node-ami=auto --region=${AWS_REGION}
[ℹ]  using region ap-southeast-1
[ℹ]  setting availability zones to [ap-southeast-1a ap-southeast-1c ap-southeast-1b]
[ℹ]  subnets for ap-southeast-1a - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ]  subnets for ap-southeast-1c - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ]  subnets for ap-southeast-1b - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ]  nodegroup "ng-74de009b" will use "ami-038d55c26bf01998f" [AmazonLinux2/1.11]
[ℹ]  creating EKS cluster "eksworkshop-eksctl" in "ap-southeast-1" region
[ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=ap-southeast-1 --name=eksworkshop-eksctl'
[ℹ]  creating cluster stack "eksctl-eksworkshop-eksctl-cluster"
[ℹ]  creating nodegroup stack "eksctl-eksworkshop-eksctl-nodegroup-ng-74de009b"
[ℹ]  --nodes-min=3 was set automatically for nodegroup ng-74de009b
[ℹ]  --nodes-max=3 was set automatically for nodegroup ng-74de009b
[✔]  all EKS cluster resource for "eksworkshop-eksctl" had been created
[✔]  saved kubeconfig as "/home/ec2-user/.kube/config"
[ℹ]  nodegroup "ng-74de009b" has 0 node(s)
[ℹ]  waiting for at least 3 node(s) to become ready in "ng-74de009b"
[ℹ]  nodegroup "ng-74de009b" has 3 node(s)
[ℹ]  node "ip-192-168-16-181.ap-southeast-1.compute.internal" is ready
[ℹ]  node "ip-192-168-49-231.ap-southeast-1.compute.internal" is ready
[ℹ]  node "ip-192-168-75-222.ap-southeast-1.compute.internal" is ready
[ℹ]  kubectl command should work with "/home/ec2-user/.kube/config", try 'kubectl get nodes'
[✔]  EKS cluster "eksworkshop-eksctl" in "ap-southeast-1" region is ready
```
Once the cluster is up, kubectl get nodes showing the three nodes means we're good:

```
$ kubectl get nodes
NAME                                                STATUS    ROLES     AGE       VERSION
ip-192-168-16-181.ap-southeast-1.compute.internal   Ready     <none>    1m        v1.11.5
ip-192-168-49-231.ap-southeast-1.compute.internal   Ready     <none>    1m        v1.11.5
ip-192-168-75-222.ap-southeast-1.compute.internal   Ready     <none>    1m        v1.11.5
```
Launching the dashboard
Kubernetes comes with a standard dashboard, so we access it to check the state of the cluster.
However, the command as written in the workshop fails with an error:

```
$ kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
error: unable to read URL "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml", server reported 404 Not Found, status code=404
```

The URL it points at seems to be wrong; the following command launches it successfully (the same applies when deleting):

```
$ kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
secret "kubernetes-dashboard-certs" created
secret "kubernetes-dashboard-csrf" created
serviceaccount "kubernetes-dashboard" created
role.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created
rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created
deployment.apps "kubernetes-dashboard" created
service "kubernetes-dashboard" created

$ kubectl proxy --port=8080 --address='0.0.0.0' --disable-filter=true &
```
The dashboard also lets you log in with a token issued by aws-iam-authenticator, so we use that feature. Copy the string printed when you run the following command and use it to confirm you can log in:

```
$ aws-iam-authenticator token -i eksworkshop-eksctl --token-only
```

I was able to access the dashboard.
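For reference, with kubectl proxy listening on port 8080 as above, the dashboard is usually reached at a URL of the following shape (this is the standard proxy path to the Kubernetes Dashboard service; adjust the host and port to your environment):

```
# Standard proxy path to the kubernetes-dashboard service in kube-system
echo "http://localhost:8080/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/"
```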
Deploying a sample application
Next, we deploy a sample application to EKS.
What gets deployed are the three demo services that appear in the outputs below (ecsdemo-frontend, ecsdemo-nodejs, and ecsdemo-crystal).
The deploys use manifest files. A manifest file is a file that defines the desired processing for Kubernetes.
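To give a feel for what such a manifest contains, here is a minimal Deployment sketch written in the heredoc style used elsewhere in the workshop (the name and image are hypothetical, not the workshop's actual files):

```
cat <<EoF > deployment-example.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: hello-web
        image: nginx:1.15
        ports:
        - containerPort: 80
EoF
kubectl apply -f deployment-example.yaml
```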
For each service we apply two files, a Deployment and a Service:

```
$ kubectl apply -f kubernetes/deployment.yaml
deployment.apps "ecsdemo-nodejs" created

$ kubectl apply -f kubernetes/service.yaml
service "ecsdemo-nodejs" created
```
At the point the deploys complete, each service runs with a replica count of one:

```
$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
ecsdemo-crystal    1         1         1            1           9m
ecsdemo-frontend   1         1         1            1           4m
ecsdemo-nodejs     1         1         1            1           11m
```

Accessing the sample application shows a page like the following.
Scaling out the replicas
Now we scale out the pods. Starting with the backend NodeJS and Crystal deployments, we raise the replica count to three:

```
$ kubectl scale deployment ecsdemo-nodejs --replicas=3
deployment.extensions "ecsdemo-nodejs" scaled

$ kubectl scale deployment ecsdemo-crystal --replicas=3
deployment.extensions "ecsdemo-crystal" scaled

$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
ecsdemo-crystal    3         3         3            1           10m
ecsdemo-frontend   1         1         1            1           6m
ecsdemo-nodejs     3         3         3            1           13m
```

We scale the frontend side as well. Requests that only a single instance could handle at the initial deploy can now be served by multiple frontend APIs:

```
$ kubectl scale deployment ecsdemo-frontend --replicas=3
deployment.extensions "ecsdemo-frontend" scaled

$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
ecsdemo-crystal    3         3         3            3           16m
ecsdemo-frontend   3         3         3            3           12m
ecsdemo-nodejs     3         3         3            3           19m
```
Accessing the sample application page again, traffic is now routed randomly across the replicas.
Where a single container handled every request when we first checked, multiple containers now serve them.
Installing Helm, the Kubernetes package manager
We install Helm, the package management tool for Kubernetes.
First comes the initial setup:

```
$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7234  100  7234    0     0  17431      0 --:--:-- --:--:-- --:--:-- 17389

$ chmod +x get_helm.sh
$ ./get_helm.sh
Downloading https://kubernetes-helm.storage.googleapis.com/helm-v2.13.0-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.
```
Helm depends on a service called tiller, which requires special permissions against Kubernetes, so we need to create a service account for tiller to use.
To apply this to the cluster, we write a manifest file for the service account:

```
$ cat <<EoF > ~/environment/rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
EoF

$ kubectl apply -f ~/environment/rbac.yaml
serviceaccount "tiller" created
clusterrolebinding.rbac.authorization.k8s.io "tiller" created

$ helm init --service-account tiller
Creating /home/ec2-user/.helm
Creating /home/ec2-user/.helm/repository
Creating /home/ec2-user/.helm/repository/cache
Creating /home/ec2-user/.helm/repository/local
Creating /home/ec2-user/.helm/plugins
Creating /home/ec2-user/.helm/starters
Creating /home/ec2-user/.helm/cache/archive
Creating /home/ec2-user/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/ec2-user/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
```
With that configured, Helm can now manage resources inside the cluster.
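As a quick smoke test (my own addition, not a workshop step), helm version should now report both the client and the server side, which confirms Tiller is reachable:

```
# Should print both Client and Server (Tiller) versions if setup succeeded
helm version
```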
Deploying Jenkins with Helm
As a trial, let's deploy Jenkins with Helm:

```
helm install stable/jenkins --set rbac.install=true --name cicd
```

Running the command above prints the information needed to access the Jenkins admin UI, so we execute each step in turn:

```
$ printf $(kubectl get secret --namespace default cicd-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
xxxxxxxx    # the admin user's password

$ export SERVICE_IP=$(kubectl get svc --namespace default cicd-jenkins --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
$ echo http://$SERVICE_IP:8080/login
http://acc58d1ea435c11e9820806e6ac3f061-1693712465.ap-southeast-1.elb.amazonaws.com:8080/login    # the login URL
```

I visited the URL and confirmed I could log in.
Building a log management platform with EFK
This chapter builds a log management platform for EKS.
We use Fluentd / Elasticsearch / Kibana to collect, store, and visualize logs.
Together, these three are known as the "EFK stack".
Fluentd ships the EKS logs to CloudWatch Logs, Amazon Elasticsearch Service (hereafter "AES") ingests them, and Kibana visualizes them.
Adding a logging policy to the IAM role
We add a logging policy to the IAM role created along with the cluster (eksctl-eksworkshop-eksctl-nodegro-NodeInstanceRole-xxxx).
The IAM policy to add is as follows:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*",
      "Effect": "Allow"
    }
  ]
}
```
One caveat: the workshop has you run the commands below, but the stack name in them is wrong, so the steps that depend on them fail:

```
INSTANCE_PROFILE_PREFIX=$(aws cloudformation describe-stacks --stack-name eksctl-eksworkshop-eksctl-nodegroup-0 | jq -r '.Stacks[].Outputs[].ExportName' | sed 's/:.*//')
INSTANCE_PROFILE_NAME=$(aws iam list-instance-profiles | jq -r '.InstanceProfiles[].InstanceProfileName' | grep $INSTANCE_PROFILE_PREFIX)
ROLE_NAME=$(aws iam get-instance-profile --instance-profile-name $INSTANCE_PROFILE_NAME | jq -r '.InstanceProfile.Roles[] | .RoleName')
```

What we want to set here is the worker nodes' IAM role name, so I think it's best to look it up directly from the CloudFormation outputs:

```
ROLE_NAME=$(aws iam get-instance-profile --instance-profile-name eksctl-eksworkshop-eks-nodegroup-ng-d3bf1805-NodeInstanceProfile-1LHFYP99YV221 | jq -r '.InstanceProfile.Roles[] | .RoleName')
```
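With ROLE_NAME resolved, the policy above gets attached as an inline policy. A sketch of that step, assuming the JSON was saved as k8s-logs-policy.json (the file and policy names here are mine, not necessarily the workshop's):

```
aws iam put-role-policy \
  --role-name $ROLE_NAME \
  --policy-name Logs-Policy-For-Worker \
  --policy-document file://k8s-logs-policy.json
```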
Creating the AES domain
We create the AES domain:

```
aws es create-elasticsearch-domain \
  --domain-name kubernetes-logs \
  --elasticsearch-version 6.3 \
  --elasticsearch-cluster-config \
  InstanceType=m4.large.elasticsearch,InstanceCount=2 \
  --ebs-options EBSEnabled=true,VolumeType=standard,VolumeSize=100 \
  --access-policies '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":["*"]},"Action":["es:*"],"Resource":"*"}]}'
```

Creation is complete once the status turns false, as below:

```
$ aws es describe-elasticsearch-domain --domain-name kubernetes-logs --query 'DomainStatus.Processing'
false
```
Setting up Fluentd
Next, we set up Fluentd.
The Fluentd log agent configuration lives in a Kubernetes ConfigMap, and Fluentd itself is deployed as one Pod per worker node (a DaemonSet).
Since the workshop uses a three-node cluster, deploying it shows three Pods in the output:

```
$ kubectl get pods -w --namespace=kube-system
NAME                       READY     STATUS    RESTARTS   AGE
~snip~
fluentd-cloudwatch-98xvb   1/1       Running   0          1m
fluentd-cloudwatch-hqdzd   1/1       Running   0          1m
fluentd-cloudwatch-hrxfw   1/1       Running   0          1m
~snip~
```

The logs also show up in CloudWatch Logs.
Streaming the CloudWatch Logs into AES
CloudWatch Logs has a built-in feature for streaming log groups to AES.
For details on the feature itself, see the AWS documentation.
After setting up the integration, the logs became viewable in Kibana as well.
Creating a CI/CD flow
We build a CI/CD flow using AWS CodePipeline (hereafter "CodePipeline") and AWS CodeBuild (hereafter "CodeBuild").
Creating the IAM role for CodeBuild
So that CodeBuild can reach the EKS cluster through kubectl, we add an inline policy to the role it uses.
The IAM policy to add:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks:Describe*",
      "Resource": "*"
    }
  ]
}
```

And the trust relationship to add:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::xxxxxxxxxxxxx:role/eksworkshop-codepipeline-CodeBuildServiceRole-1R93P86RZPBW6"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::xxxxxxxxxxxxx:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
Once the ConfigMap is set up to include the role above, kubectl inside the pipeline's CodeBuild stage can access the EKS cluster via that IAM role.
Deploying the source code
We fork the sample source code on GitHub that the workshop points to.
A change to that source triggers CodePipeline and CodeBuild to build the code and then deploy it.
We change the spot that prints "Hello World!" to "Hello EKS World!!" and deploy.
Accessing the environment shows the edited message.
Scaling Kubernetes
Kubernetes scaling comes in the following two patterns, and we practice both:

- Horizontal Pod Autoscaler (HPA): scales the Pods in a replica set in and out.
- Cluster Autoscaler (CA): a standard Kubernetes component that scales the cluster's nodes so that pending Pods can be scheduled.
Configuring and exercising the Horizontal Pod Autoscaler
Metrics Server aggregates resource usage data across the entire cluster, which serves as the input for scale-out decisions.
We install it with Helm:

```
helm install stable/metrics-server \
    --name metrics-server \
    --version 2.0.2 \
    --namespace metrics
```
We deploy a sample app and configure it to scale once CPU exceeds 50%:

```
kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --expose --port=80
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
```

Then we generate load on purpose:

```
kubectl run -i --tty load-generator --image=busybox /bin/sh

# inside the container's shell:
while true; do wget -q -O - http://php-apache; done
```

As the load rises, the number of replicas goes up:

```
kubectl get hpa -w
NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   59%/50%    1         10        8          8m
```
Configuring and exercising the Cluster Autoscaler
The Cluster Autoscaler integrates with Auto Scaling groups and offers four options:

- One Auto Scaling group
- Multiple Auto Scaling groups
- Auto-Discovery
- Master Node setup

Both Auto Scaling and the Cluster Autoscaler need to be configured.
On the Auto Scaling side we edit the group's minimum and maximum sizes; on the Cluster Autoscaler side we edit the --nodes line and the env region in its configuration file:

```
command:
  - ./cluster-autoscaler
  - --v=4
  - --stderrthreshold=info
  - --cloud-provider=aws
  - --skip-nodes-with-local-storage=false
  - --nodes=2:8:eksctl-eksworkshop-eksctl-nodegroup-ng-74de009b-NodeGroup-26BV6BK2H39E
env:
  - name: AWS_REGION
    value: ap-southeast-1
```
Then we configure the worker nodes' IAM instance profile with the following policy:

```
{
  "RoleName": "eksctl-eksworkshop-eksctl-nodegro-NodeInstanceRole-67FXBWETY0OH",
  "PolicyDocument": {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": [
          "autoscaling:DescribeAutoScalingGroups",
          "autoscaling:DescribeAutoScalingInstances",
          "autoscaling:SetDesiredCapacity",
          "autoscaling:TerminateInstanceInAutoScalingGroup"
        ],
        "Resource": "*",
        "Effect": "Allow"
      }
    ]
  },
  "PolicyName": "ASG-Policy-For-Worker"
}
```
We deploy the application and try scaling its replicas:

```
$ kubectl apply -f ~/environment/cluster-autoscaler/cluster_autoscaler.yml

$ kubectl get deployment/nginx-to-scaleout
NAME                DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-to-scaleout   1         1         1            0           2s

$ kubectl scale --replicas=10 deployment/nginx-to-scaleout
deployment.extensions "nginx-to-scaleout" scaled
```

The ten replicas come up:

```
$ kubectl get pods -o wide --watch
nginx-to-scaleout-ff975c58d-7gh42   0/1       Pending   0         29s   <none>           <none>
nginx-to-scaleout-ff975c58d-9krg5   1/1       Running   0         29s   192.168.25.240   ip-192-168-16-181.ap-southeast-1.compute.internal
nginx-to-scaleout-ff975c58d-gqq9v   1/1       Running   0         29s   192.168.55.8     ip-192-168-49-231.ap-southeast-1.compute.internal
nginx-to-scaleout-ff975c58d-kpcxk   1/1       Running   0         29s   192.168.72.203   ip-192-168-75-222.ap-southeast-1.compute.internal
nginx-to-scaleout-ff975c58d-llk67   1/1       Running   0         29s   192.168.30.63    ip-192-168-16-181.ap-southeast-1.compute.internal
nginx-to-scaleout-ff975c58d-nttnc   1/1       Running   0         29s   192.168.53.56    ip-192-168-49-231.ap-southeast-1.compute.internal
nginx-to-scaleout-ff975c58d-sqh7l   1/1       Running   0         29s   192.168.56.31    ip-192-168-49-231.ap-southeast-1.compute.internal
nginx-to-scaleout-ff975c58d-t5nw5   1/1       Running   0         29s   192.168.12.128   ip-192-168-16-181.ap-southeast-1.compute.internal
nginx-to-scaleout-ff975c58d-wxfr6   1/1       Running   0         1m    192.168.88.205   ip-192-168-75-222.ap-southeast-1.compute.internal
nginx-to-scaleout-ff975c58d-z9sc4   1/1       Running   0         29s   192.168.77.198   ip-192-168-75-222.ap-southeast-1.compute.internal
```
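A Pending pod like the one above is what nudges the Cluster Autoscaler to add a node. If you want to watch that decision happen, tailing the autoscaler's logs is handy (the deployment name and namespace are assumed from the workshop manifest):

```
kubectl logs -f deployment/cluster-autoscaler -n kube-system
```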
Prometheus 㨠Grafana ã§ã®ç£è¦
æå¾ã«ãç£è¦è¨å®ã®ç« ã§ãã
ç£è¦ãã¼ã«ã¨ã㦠Prometheus ã¨ããã·ã¥ãã¼ãã®ãã¼ã«ã¨ã㦠Grafana 㧠Kubernetes ã¯ã©ã¹ã¿ã¼ãç£è¦ãã¾ãã
Installing Prometheus
We install Prometheus.
As preparation, edit the downloaded prometheus-values.yaml as follows:

```
# Uncomment and edit around lines 175 and 712
storageClass: "prometheus"

# Edit from line 767 onward
externalIPs: []
loadBalancerIP: ""
loadBalancerSourceRanges: []
servicePort: 80
nodePort: 30900
type: NodePort
```
We deploy Prometheus with Helm:

```
helm install -f prometheus-values.yaml stable/prometheus --name prometheus --namespace prometheus
```

Then we check that the deploy went as expected. If things look like the following, it succeeded:

```
$ kubectl get all -n prometheus
NAME                                                READY     STATUS    RESTARTS   AGE
pod/prometheus-alertmanager-5bfcddf64-8skm4         0/2       Pending   0          5m
pod/prometheus-kube-state-metrics-7c54bd8d8-t46c6   1/1       Running   0          5m
pod/prometheus-node-exporter-dq5sj                  1/1       Running   0          5m
pod/prometheus-node-exporter-m4mx9                  1/1       Running   0          5m
pod/prometheus-node-exporter-w2crn                  1/1       Running   0          5m
pod/prometheus-pushgateway-6985db8447-rr7pj         1/1       Running   0          5m
pod/prometheus-server-7f677fdd8d-m4vpq              0/2       Pending   0          5m

NAME                                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/prometheus-alertmanager         ClusterIP   10.100.110.221   <none>        80/TCP         5m
service/prometheus-kube-state-metrics   ClusterIP   None             <none>        80/TCP         5m
service/prometheus-node-exporter        ClusterIP   None             <none>        9100/TCP       5m
service/prometheus-pushgateway          ClusterIP   10.100.149.87    <none>        9091/TCP       5m
service/prometheus-server               NodePort    10.100.114.114   <none>        80:30900/TCP   5m

NAME                                      DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/prometheus-node-exporter   3         3         3         3            3           <none>          5m

NAME                                            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/prometheus-alertmanager         1         1         1            0           5m
deployment.apps/prometheus-kube-state-metrics   1         1         1            1           5m
deployment.apps/prometheus-pushgateway          1         1         1            1           5m
deployment.apps/prometheus-server               1         1         1            0           5m

NAME                                                      DESIRED   CURRENT   READY     AGE
replicaset.apps/prometheus-alertmanager-5bfcddf64         1         1         0         5m
replicaset.apps/prometheus-kube-state-metrics-7c54bd8d8   1         1         1         5m
replicaset.apps/prometheus-pushgateway-6985db8447         1         1         1         5m
replicaset.apps/prometheus-server-7f677fdd8d              1         1         0         5m
```
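To actually look at the Prometheus UI, one option is to port-forward to the server deployment from Cloud9 (a sketch under the assumption that prometheus-server listens on its default port 9090 behind the NodePort service shown above):

```
kubectl port-forward -n prometheus deploy/prometheus-server 8080:9090
```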
Installing Grafana
We install Grafana.
Edit grafana-values.yaml as follows:

```
# Change from line 74 as follows
service:
  type: LoadBalancer
  port: 80
  targetPort: 3000
    # targetPort: 4181 To be used with a proxy extraContainer
  annotations: {}
  labels: {}

# Change line 142 as follows
  storageClassName: prometheus

# Change line 152 as follows
adminPassword: EKS!sAWSome

# Change from line 204 onward as follows
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
    - name: Prometheus
      type: prometheus
      url: http://prometheus-server.prometheus.svc.cluster.local
      access: proxy
      isDefault: true
```
We deploy Grafana with Helm:

```
helm install -f grafana-values.yaml stable/grafana --name grafana --namespace grafana
```

After deploying, we verify it. If you get output like the following, the deploy succeeded:

```
$ kubectl get all -n grafana
NAME                           READY     STATUS    RESTARTS   AGE
pod/grafana-5b8c7b48c6-szlrt   1/1       Running   0          17s

NAME              TYPE           CLUSTER-IP     EXTERNAL-IP        PORT(S)        AGE
service/grafana   LoadBalancer   10.100.22.24   ae601dcc143e9...   80:31524/TCP   17s

NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/grafana   1         1         1            1           17s

NAME                                 DESIRED   CURRENT   READY     AGE
replicaset.apps/grafana-5b8c7b48c6   1         1         1         17s
```
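Since the service is of type LoadBalancer, the Grafana UI ends up behind an ELB. A quick way to print its URL, using the grafana service shown above:

```
export ELB=$(kubectl get svc -n grafana grafana -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "http://$ELB"
```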
Creating a dashboard
We create a dashboard in Grafana.
I imported dashboard ID 3131 and displayed the Kubernetes All Nodes metrics.
Summary
I've walked through the contents of the EKS online workshop.
It covers a wide range of practical topics: building the EKS cluster itself, logging, CI/CD, package management, scale-out, and monitoring.
Kubernetes is showing up in more and more container workloads.
If you're evaluating EKS, why not give this workshop, which lets you learn it systematically, a try?