Trying out cert-manager, aws-privateca-issuer, and Reloader

cert-manager manages the certificates that Nginx uses for TLS termination. cert-manager renews the certificates automatically, but Nginx still has to be restarted to pick up the new certificate. Reloader is said to help with this, so I want to verify how it behaves.

I also check aws-privateca-issuer along the way.

Creating the cluster

Create the cluster.

CLUSTER_NAME="reloader"
MY_ARN=$(aws sts get-caller-identity --output text --query Arn)
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
cat << EOF > cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${CLUSTER_NAME}
  region: ap-northeast-1
  version: "1.29"
vpc:
  cidr: "10.0.0.0/16"

availabilityZones:
  - ap-northeast-1a
  - ap-northeast-1c

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]

iam:
  withOIDC: true

accessConfig:
  bootstrapClusterCreatorAdminPermissions: false
  authenticationMode: API
  accessEntries:
    - principalARN: arn:aws:iam::${AWS_ACCOUNT_ID}:role/Admin
      accessPolicies:
        - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy
          accessScope:
            type: cluster
EOF
eksctl create cluster -f cluster.yaml

Create a node group.

cat << EOF > m1.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${CLUSTER_NAME}
  region: ap-northeast-1

managedNodeGroups:
  - name: m1
    instanceType: m6i.large
    minSize: 1
    maxSize: 10
    desiredCapacity: 2
    privateNetworking: true
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
EOF
eksctl create nodegroup -f m1.yaml

Check the nodes.

$ k get node
NAME                                             STATUS   ROLES    AGE    VERSION
ip-10-0-106-51.ap-northeast-1.compute.internal   Ready    <none>   106s   v1.29.10-eks-94953ac
ip-10-0-72-39.ap-northeast-1.compute.internal    Ready    <none>   113s   v1.29.10-eks-94953ac

Installing cert-manager

Add the chart repository.

helm repo add jetstack https://charts.jetstack.io --force-update

Install the chart.

helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.16.1 \
  --set crds.enabled=true
NAME: cert-manager
LAST DEPLOYED: Wed Nov 20 10:11:21 2024
NAMESPACE: cert-manager
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
cert-manager v1.16.1 has been deployed successfully!

In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).

More information on the different types of issuers and how to configure them
can be found in our documentation:

https://cert-manager.io/docs/configuration/

For information on how to configure cert-manager to automatically provision
Certificates for Ingress resources, take a look at the `ingress-shim`
documentation:

https://cert-manager.io/docs/usage/ingress/

Check the Pods.

$ k -n cert-manager get pods
NAME                                      READY   STATUS    RESTARTS   AGE
cert-manager-859bc755b6-r2h4q             1/1     Running   0          2m6s
cert-manager-cainjector-dc59548c5-dcmqp   1/1     Running   0          2m6s
cert-manager-webhook-d45c9fbd6-8r82w      1/1     Running   0          2m6s
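If the cmctl CLI happens to be installed, it can additionally confirm that the cert-manager API and webhook are ready (optional, shown here only as a convenience):

cmctl check api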

Installing aws-privateca-issuer

Beforehand, create a CA with AWS Private CA from the AWS Management Console.

Create an IAM policy that allows this CA to be used.

cat << EOF > privateca-issuer-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "awspcaissuer",
      "Action": [
        "acm-pca:DescribeCertificateAuthority",
        "acm-pca:GetCertificate",
        "acm-pca:IssueCertificate"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:acm-pca:ap-northeast-1:XXXXXXXXXXXX:certificate-authority/2aebb313-2f59-4cd1-98a8-97d39bf3c42a"
    }
  ]
}
EOF
aws iam create-policy \
  --policy-name privateca-issuer-policy \
  --policy-document file://privateca-issuer-policy.json

Create the IAM role and the ServiceAccount ahead of time. There is also a way to create only the role with --role-only (a sketch of that variant follows the command output below).

NAMESPACE="cert-manager"
SA_NAME="aws-privateca-issuer"
eksctl create iamserviceaccount \
  --cluster ${CLUSTER_NAME} --name ${SA_NAME} --namespace ${NAMESPACE} \
  --attach-policy-arn arn:aws:iam::${AWS_ACCOUNT_ID}:policy/privateca-issuer-policy \
  --approve
2024-11-20 10:23:53 [ℹ]  1 iamserviceaccount (cert-manager/aws-privateca-issuer) was included (based on the include/exclude rules)
2024-11-20 10:23:53 [!]  serviceaccounts that exist in Kubernetes will be excluded, use --override-existing-serviceaccounts to override
2024-11-20 10:23:53 [ℹ]  1 task: { 
    2 sequential sub-tasks: { 
        create IAM role for serviceaccount "cert-manager/aws-privateca-issuer",
        create serviceaccount "cert-manager/aws-privateca-issuer",
    } }
2024-11-20 10:23:53 [ℹ]  building iamserviceaccount stack "eksctl-reloader-addon-iamserviceaccount-cert-manager-aws-privateca-issuer"
2024-11-20 10:23:54 [ℹ]  deploying stack "eksctl-reloader-addon-iamserviceaccount-cert-manager-aws-privateca-issuer"
2024-11-20 10:23:54 [ℹ]  waiting for CloudFormation stack "eksctl-reloader-addon-iamserviceaccount-cert-manager-aws-privateca-issuer"
2024-11-20 10:24:24 [ℹ]  waiting for CloudFormation stack "eksctl-reloader-addon-iamserviceaccount-cert-manager-aws-privateca-issuer"
2024-11-20 10:24:24 [ℹ]  created serviceaccount "cert-manager/aws-privateca-issuer"
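For reference, the --role-only variant mentioned above would look roughly like the following (the role name is just an example); in that case eksctl does not create the ServiceAccount, so the chart would create it instead.

eksctl create iamserviceaccount \
  --cluster ${CLUSTER_NAME} --name ${SA_NAME} --namespace ${NAMESPACE} \
  --attach-policy-arn arn:aws:iam::${AWS_ACCOUNT_ID}:policy/privateca-issuer-policy \
  --role-only --role-name privateca-issuer-role \
  --approve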

Store the role ARN in a variable.

STACK_NAME="eksctl-${CLUSTER_NAME}-addon-iamserviceaccount-${NAMESPACE}-${SA_NAME}"
ROLE_NAME=$(aws cloudformation describe-stack-resources \
    --stack-name ${STACK_NAME} \
    --query "StackResources[?ResourceType=='AWS::IAM::Role'].PhysicalResourceId" \
    --output text)
echo ${ROLE_NAME}
ROLE_ARN=$(aws iam get-role \
    --role-name ${ROLE_NAME} \
    --query "Role.Arn" \
    --output text)
echo ${ROLE_ARN}
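The same ARN can also be looked up with eksctl directly, something like:

eksctl get iamserviceaccount \
  --cluster ${CLUSTER_NAME} --namespace ${NAMESPACE} --name ${SA_NAME}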

Create a values.yaml. (The chart's full default values.yaml is available in the chart repository.)

The ServiceAccount already exists and create: false is set, so the annotations should not strictly be necessary, but I include them just in case.

cat << EOF > privateca-issuer-values.yaml
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations:
    eks.amazonaws.com/role-arn: ${ROLE_ARN}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: "${SA_NAME}"
EOF
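To double-check that the eksctl-created ServiceAccount already carries the IRSA annotation:

k -n cert-manager get sa aws-privateca-issuer -oyaml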

Add the chart repository.

helm repo add awspca https://cert-manager.github.io/aws-privateca-issuer --force-update

Install the chart.

helm install \
  aws-privateca-issuer awspca/aws-privateca-issuer \
  --namespace cert-manager \
  --version v1.4.0 \
  -f privateca-issuer-values.yaml
NAME: aws-privateca-issuer
LAST DEPLOYED: Wed Nov 20 10:35:06 2024
NAMESPACE: cert-manager
STATUS: deployed
REVISION: 1
TEST SUITE: None

Check the Pods.

$ k -n cert-manager get po
NAME                                      READY   STATUS    RESTARTS   AGE
aws-privateca-issuer-6d8bcdbbb7-9vr6f     1/1     Running   0          33s
cert-manager-859bc755b6-r2h4q             1/1     Running   0          24m
cert-manager-cainjector-dc59548c5-dcmqp   1/1     Running   0          24m
cert-manager-webhook-d45c9fbd6-8r82w      1/1     Running   0          24m

Installing Reloader

Add the chart repository.

helm repo add stakater https://stakater.github.io/stakater-charts --force-update

Install the chart.

helm install \
  reloader stakater/reloader \
  --namespace reloader \
  --create-namespace \
  --version 1.1.0
NAME: reloader
LAST DEPLOYED: Wed Nov 20 10:39:25 2024
NAMESPACE: reloader
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
- For a `Deployment` called `foo` have a `ConfigMap` called `foo-configmap`. Then add this annotation to main metadata of your `Deployment`
  configmap.reloader.stakater.com/reload: "foo-configmap"

- For a `Deployment` called `foo` have a `Secret` called `foo-secret`. Then add this annotation to main metadata of your `Deployment`
  secret.reloader.stakater.com/reload: "foo-secret"

- After successful installation, your pods will get rolling updates when a change in data of configmap or secret will happen.

Check the Pod.

$ k -n reloader get pods
NAME                                READY   STATUS    RESTARTS   AGE
reloader-reloader-59f8898b8-z4jb8   1/1     Running   0          21s

Issuing a certificate

First, create an AWSPCAClusterIssuer.

cat << EOF > root-ca-issuer.yaml
apiVersion: awspca.cert-manager.io/v1beta1
kind: AWSPCAClusterIssuer
metadata:
  name: root-ca
spec:
  arn: arn:aws:acm-pca:ap-northeast-1:XXXXXXXXXXXX:certificate-authority/2aebb313-2f59-4cd1-98a8-97d39bf3c42a
  region: ap-northeast-1
EOF
k apply -f root-ca-issuer.yaml
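Before creating a Certificate, the issuer's status conditions can be checked to confirm it has become ready:

k get awspcaclusterissuer root-ca -oyaml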

Create a Certificate.

k create ns nginx
cat << EOF > nginx-cert.yaml
kind: Certificate
apiVersion: cert-manager.io/v1
metadata:
  name:  nginx-cert
  namespace: nginx
spec:
  commonName: nginx
  dnsNames:
    - www.example.com
  duration: 1h0m0s
  issuerRef:
    group: awspca.cert-manager.io
    kind: AWSPCAClusterIssuer
    name: root-ca
  renewBefore: 10m0s
  secretName: nginx-cert-tls
  usages:
    - server auth
  privateKey:
    algorithm: "RSA"
    size: 2048
EOF
k apply -f nginx-cert.yaml
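When automating this, the Ready condition can be waited on instead of polling manually, for example:

k -n nginx wait certificate nginx-cert --for=condition=Ready --timeout=120s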

Confirm that the certificate was issued.

$ k -n nginx get certificate
NAME         READY   SECRET           AGE
nginx-cert   True    nginx-cert-tls   6s
$ k -n nginx get certificaterequest
NAME           APPROVED   DENIED   READY   ISSUER    REQUESTER                                         AGE
nginx-cert-1   True                True    root-ca   system:serviceaccount:cert-manager:cert-manager   11s
$ k -n nginx get secret
NAME             TYPE                DATA   AGE
nginx-cert-tls   kubernetes.io/tls   3      23m

The certificate is present in the CertificateRequest status and in the Secret.

$ k -n nginx get certificaterequest nginx-cert-1 -oyaml
apiVersion: cert-manager.io/v1
kind: CertificateRequest
metadata:
  annotations:
    aws-privateca-issuer/certificate-arn: arn:aws:acm-pca:ap-northeast-1:XXXXXXXXXXXX:certificate-authority/2aebb313-2f59-4cd1-98a8-97d39bf3c42a/certificate/6ca13da9e494428deb37ba6e565d4e38
    cert-manager.io/certificate-name: nginx-cert
    cert-manager.io/certificate-revision: "1"
    cert-manager.io/private-key-secret-name: nginx-cert-skrds
  creationTimestamp: "2024-11-20T02:14:34Z"
  generation: 1
  name: nginx-cert-1
  namespace: nginx
  ownerReferences:
  - apiVersion: cert-manager.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: Certificate
    name: nginx-cert
    uid: ffd8a316-8bbe-40f6-bac9-bba52cdabe28
  resourceVersion: "20102"
  uid: 200b365d-4471-450d-8065-e1a2bfc6c90f
spec:
  duration: 1h0m0s
  extra:
    authentication.kubernetes.io/pod-name:
    - cert-manager-859bc755b6-r2h4q
    authentication.kubernetes.io/pod-uid:
    - a39fdcdd-415f-4831-8415-bd7796ee282b
  groups:
  - system:serviceaccounts
  - system:serviceaccounts:cert-manager
  - system:authenticated
  issuerRef:
    group: awspca.cert-manager.io
    kind: AWSPCAClusterIssuer
    name: root-ca
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2x6Q0NBWDhDQVFBd0VERU9NQXdHQTFVRUF4TUZibWRwYm5nd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQQpBNElCRHdBd2dnRUtBb0lCQVFDNy9mclRLcUZjZEtUTTF5anE0NVRwK05DVU9lL0FzUnF2eVRoRHlQNGRhVzI0ClFVTmV4K1RlVHV4dkJqcUlaOUdFNE9adUwwUHdkSFdaK1JGcHNJc2N4M1FkMFRyaVZaQTErRDl6dXFJUzExdzMKQ2RjU0wyTllOeStSek43Mm9Tc2R3K0R5cUJQQXEzVnBMSG9HdWJkTkhJMElsTW9razFpZEdyVGJvVVY4aTFLWAp3UzZGcWRST2tzYnFTNmRYdWtEQjFhcGNFdWVDSkhRb1g5VWlBSkhDaGpoT3BXTlB3Z3RORnIya08zNUh0N2hyCjlRWCtldEg2Vm01UkhBRWZCY1NTZlJoRkhnWEcvS1Y3UjhxV1l2WTFYbnBYWTdvL3hrMTRDaWNIUm1YaHJXK1EKWmJ4NjVJUDBubGNHRHJjQXBicCswcDhVVmt0R2pBTjNGL21qYUthdEFnTUJBQUdnUWpCQUJna3Foa2lHOXcwQgpDUTR4TXpBeE1Cb0dBMVVkRVFRVE1CR0NEM2QzZHk1bGVHRnRjR3hsTG1OdmJUQVRCZ05WSFNVRUREQUtCZ2dyCkJnRUZCUWNEQVRBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQVorU2xJMlptVWJhSWFhZ2ZrY0xTZ0I5VkkyT0IKZ1JyTGV6cTNVSkppZlJnNy9aVVZBaC9wcExxZ1paK05sajBmbWR6WFFtREdkWG1VQ3pZNURqVTlRQ2ZldUdPKwpIc05NYnpGMGI0VS9rSEt4VnVIQmRGVDVFRkNmTk9rejJJY04vZDV1Sm1nZVNyYlZTaDdBQnFkQk5qVzJDY0RWCm9MM0VicVVrTktFSjZmeWVOQzVpM1VTdjVNMjJwR2lFUHlNV050blBURXZPQkc1dHQ0Y0RmdWFJci9NR0g1UGEKTUUrWG1VbDBqcHpVQUdtSjV1L0Nmb01CSHQ3aTlYOTFNS21CMzVBc3lEVmpqamVMOVZwdjVCZTVjcEIwUW1mdAo0ZktQMlZDRUJBQ211cVMyZGRCWlJmVVhqL242KzMwd29wOWNKbGFRL0dhaG9zV1IramNnWVZRZ3dBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUgUkVRVUVTVC0tLS0tCg==
  uid: 6ae4a1ea-48e9-426b-82d5-78a095c1279f
  usages:
  - server auth
  username: system:serviceaccount:cert-manager:cert-manager
status:
  ca: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4RENDQWRpZ0F3SUJBZ0lRWmU2ODlEbGpyMnhFT0JWZ2pvMHlGekFOQmdrcWhraUc5dzBCQVFzRkFEQVMKTVJBd0RnWURWUVFEREFkU2IyOTBJRU5CTUI0WERUSTBNVEV5TURBd01Ua3hPRm9YRFRNME1URXlNREF4TVRreApORm93RWpFUU1BNEdBMVVFQXd3SFVtOXZkQ0JEUVRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDCkFRb0NnZ0VCQUxtTmo4dFN0dlk4LzgraWJjOHM5WXdwbEt4aHBUUWJsdEw4bk1OSUhGdVluZUg1WjhpSkdkUGMKVlpxVnVaT3p2NUJCUkRjY3ErajVpYVZsOUFwNkVCeVRsRGlpbk1lOUowUnJRYTB0cTFzdDRHaG5SMWZEb2Q5egpQNjNCbFFjT2VSeHBwZVpJaTBjN1FJNitVdmFRN24wald5VXJIQnRvRDF6b3pYbUY5cjdkcXpiSjhLRXFWOWd6CnlmNHNnZHc5MWtwdHhxcGNCVFRFamRuMFVicXdxV3EwUWpUZUwvUlR3UjFidm1VWG5RV1pPOEVSakxpTExFOHAKdnI2MXJNU09MRDZweVQxVER4VkRJbGJnMWJueWlWUDk3M29Md0lqZWNaZjFtaVgzc0J6WXJUdm5scGZYeStOTAo5ZEFYa3NYRGVBMXR1eHpiYlZIREF5VTJndG50aFowQ0F3RUFBYU5DTUVBd0R3WURWUjBUQVFIL0JBVXdBd0VCCi96QWRCZ05WSFE0RUZnUVVvNmp1aHFSSzB3WStzVFBhV2U3SENRNWlLOVF3RGdZRFZSMFBBUUgvQkFRREFnR0cKTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFBY3JreGxQclc2WW83azVIdHJSL21wdUNLNE82T3loV2lLYjdSSgprSjRPZ2xLc0p3Z3Q2U3R6L0VTRTRrZU1FWUVHMTVvempiYkVubTBNTHJ6RnRNWGlWbUVieEtKTGtBZ08zMzVICi9YSGFLWjI5OWN6Z2xHZUR3bU9GTCtvK1RneExqUUZSaFYyZWNDK2I1WUV6ZU0zSjJGNEx0RkF1TzRlS29jSUsKN1lCc082WFY5RHdoYzZYdzlBY3NSS1YzSk1vY0hrN1dHakUxdVZyTEEvV24yRTQzWTFJN3ZIZDc3cnRReVFFSgorUDc2MFFPTzlOTGVLd2FMNzVmVHVxZ1FXRVJNUVJSNHY0ZTVldG1HVU9EbEY0UllGaGxMQ3NRSlJIYzJLR0tFCjNoK25aVTU1dEEzY3BYeS82ODBwR01UbW11T0Q1cC9FQlVHNzJGSVB4TzBLOElxTQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  certificate: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURQRENDQWlTZ0F3SUJBZ0lRYktFOXFlU1VRbzNyTjdwdVZsMU9PREFOQmdrcWhraUc5dzBCQVFzRkFEQVMKTVJBd0RnWURWUVFEREFkU2IyOTBJRU5CTUI0WERUSTBNVEV5TURBeE1UUXpORm9YRFRJME1URXlNREF6TVRRegpORm93RURFT01Bd0dBMVVFQXhNRmJtZHBibmd3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUM3L2ZyVEtxRmNkS1RNMXlqcTQ1VHArTkNVT2UvQXNScXZ5VGhEeVA0ZGFXMjRRVU5leCtUZVR1eHYKQmpxSVo5R0U0T1p1TDBQd2RIV1orUkZwc0lzY3gzUWQwVHJpVlpBMStEOXp1cUlTMTF3M0NkY1NMMk5ZTnkrUgp6Tjcyb1NzZHcrRHlxQlBBcTNWcExIb0d1YmROSEkwSWxNb2trMWlkR3JUYm9VVjhpMUtYd1M2RnFkUk9rc2JxClM2ZFh1a0RCMWFwY0V1ZUNKSFFvWDlVaUFKSENoamhPcFdOUHdndE5GcjJrTzM1SHQ3aHI5UVgrZXRINlZtNVIKSEFFZkJjU1NmUmhGSGdYRy9LVjdSOHFXWXZZMVhucFhZN28veGsxNENpY0hSbVhoclcrUVpieDY1SVAwbmxjRwpEcmNBcGJwKzBwOFVWa3RHakFOM0YvbWphS2F0QWdNQkFBR2pnWTh3Z1l3d0dnWURWUjBSQkJNd0VZSVBkM2QzCkxtVjRZVzF3YkdVdVkyOXRNQWtHQTFVZEV3UUNNQUF3SHdZRFZSMGpCQmd3Rm9BVW82anVocVJLMHdZK3NUUGEKV2U3SENRNWlLOVF3SFFZRFZSME9CQllFRk00cDh4RDkvZUFJYjVlcXhYM0REOStJL1FZek1BNEdBMVVkRHdFQgovd1FFQXdJRm9EQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBVEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBCkFrNExMYStpOENIeXQ0K0xSVE5SYVQrUnc4M2Z2SFQyK0FpNE00SUc2R0Y1N2h0ZldzR3dhUTh0algvQVRETTkKUWUwS1JuKzk0Z0xDNkdQclBQRmdpeWVhNkdYcTIzaXl1WFMwejRNOFBlRUpYQTJUN25ZY3o3Q2lFcjMwc0hscwpocWZsbzdzU2d0Q1gyN1ZvMkZuaTNxMjdRT2lvZjVCWWVmNjVMRWJmbE92ODdCSWJWQWdURDFpZXlGcDh0b2YvCjlrSWZObXcrVUw3Y2xrS0VIaFRFSWdaNi8xWXpZUFZPSENQdXA3SUI4S0JGWGt5V284VHhYTVlYMGtGeDFDNVUKZVVjOUxpeCt1dVRoOElPb0VwbHc5Q2Q0RHc3eVlNSlkvZ3g5MzV2SHJPZmlTcXFkMkczdHF0cEhpYW5rYVdjVgpIWDJtMGJPUWdsekdLbkpFamZHTUp3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  conditions:
  - lastTransitionTime: "2024-11-20T02:14:34Z"
    message: Certificate request has been approved by cert-manager.io
    reason: cert-manager.io
    status: "True"
    type: Approved
  - lastTransitionTime: "2024-11-20T02:14:35Z"
    message: certificate issued
    reason: Issued
    status: "True"
    type: Ready
$ k -n nginx get secret nginx-cert-tls -oyaml
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4RENDQWRpZ0F3SUJBZ0lRWmU2ODlEbGpyMnhFT0JWZ2pvMHlGekFOQmdrcWhraUc5dzBCQVFzRkFEQVMKTVJBd0RnWURWUVFEREFkU2IyOTBJRU5CTUI0WERUSTBNVEV5TURBd01Ua3hPRm9YRFRNME1URXlNREF4TVRreApORm93RWpFUU1BNEdBMVVFQXd3SFVtOXZkQ0JEUVRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDCkFRb0NnZ0VCQUxtTmo4dFN0dlk4LzgraWJjOHM5WXdwbEt4aHBUUWJsdEw4bk1OSUhGdVluZUg1WjhpSkdkUGMKVlpxVnVaT3p2NUJCUkRjY3ErajVpYVZsOUFwNkVCeVRsRGlpbk1lOUowUnJRYTB0cTFzdDRHaG5SMWZEb2Q5egpQNjNCbFFjT2VSeHBwZVpJaTBjN1FJNitVdmFRN24wald5VXJIQnRvRDF6b3pYbUY5cjdkcXpiSjhLRXFWOWd6CnlmNHNnZHc5MWtwdHhxcGNCVFRFamRuMFVicXdxV3EwUWpUZUwvUlR3UjFidm1VWG5RV1pPOEVSakxpTExFOHAKdnI2MXJNU09MRDZweVQxVER4VkRJbGJnMWJueWlWUDk3M29Md0lqZWNaZjFtaVgzc0J6WXJUdm5scGZYeStOTAo5ZEFYa3NYRGVBMXR1eHpiYlZIREF5VTJndG50aFowQ0F3RUFBYU5DTUVBd0R3WURWUjBUQVFIL0JBVXdBd0VCCi96QWRCZ05WSFE0RUZnUVVvNmp1aHFSSzB3WStzVFBhV2U3SENRNWlLOVF3RGdZRFZSMFBBUUgvQkFRREFnR0cKTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFBY3JreGxQclc2WW83azVIdHJSL21wdUNLNE82T3loV2lLYjdSSgprSjRPZ2xLc0p3Z3Q2U3R6L0VTRTRrZU1FWUVHMTVvempiYkVubTBNTHJ6RnRNWGlWbUVieEtKTGtBZ08zMzVICi9YSGFLWjI5OWN6Z2xHZUR3bU9GTCtvK1RneExqUUZSaFYyZWNDK2I1WUV6ZU0zSjJGNEx0RkF1TzRlS29jSUsKN1lCc082WFY5RHdoYzZYdzlBY3NSS1YzSk1vY0hrN1dHakUxdVZyTEEvV24yRTQzWTFJN3ZIZDc3cnRReVFFSgorUDc2MFFPTzlOTGVLd2FMNzVmVHVxZ1FXRVJNUVJSNHY0ZTVldG1HVU9EbEY0UllGaGxMQ3NRSlJIYzJLR0tFCjNoK25aVTU1dEEzY3BYeS82ODBwR01UbW11T0Q1cC9FQlVHNzJGSVB4TzBLOElxTQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURQRENDQWlTZ0F3SUJBZ0lRYktFOXFlU1VRbzNyTjdwdVZsMU9PREFOQmdrcWhraUc5dzBCQVFzRkFEQVMKTVJBd0RnWURWUVFEREFkU2IyOTBJRU5CTUI0WERUSTBNVEV5TURBeE1UUXpORm9YRFRJME1URXlNREF6TVRRegpORm93RURFT01Bd0dBMVVFQXhNRmJtZHBibmd3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUM3L2ZyVEtxRmNkS1RNMXlqcTQ1VHArTkNVT2UvQXNScXZ5VGhEeVA0ZGFXMjRRVU5leCtUZVR1eHYKQmpxSVo5R0U0T1p1TDBQd2RIV1orUkZwc0lzY3gzUWQwVHJpVlpBMStEOXp1cUlTMTF3M0NkY1NMMk5ZTnkrUgp6Tjcyb1NzZHcrRHlxQlBBcTNWcExIb0d1YmROSEkwSWxNb2trMWlkR3JUYm9VVjhpMUtYd1M2RnFkUk9rc2JxClM2ZFh1a0RCMWFwY0V1ZUNKSFFvWDlVaUFKSENoamhPcFdOUHdndE5GcjJrTzM1SHQ3aHI5UVgrZXRINlZtNVIKSEFFZkJjU1NmUmhGSGdYRy9LVjdSOHFXWXZZMVhucFhZN28veGsxNENpY0hSbVhoclcrUVpieDY1SVAwbmxjRwpEcmNBcGJwKzBwOFVWa3RHakFOM0YvbWphS2F0QWdNQkFBR2pnWTh3Z1l3d0dnWURWUjBSQkJNd0VZSVBkM2QzCkxtVjRZVzF3YkdVdVkyOXRNQWtHQTFVZEV3UUNNQUF3SHdZRFZSMGpCQmd3Rm9BVW82anVocVJLMHdZK3NUUGEKV2U3SENRNWlLOVF3SFFZRFZSME9CQllFRk00cDh4RDkvZUFJYjVlcXhYM0REOStJL1FZek1BNEdBMVVkRHdFQgovd1FFQXdJRm9EQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBVEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBCkFrNExMYStpOENIeXQ0K0xSVE5SYVQrUnc4M2Z2SFQyK0FpNE00SUc2R0Y1N2h0ZldzR3dhUTh0algvQVRETTkKUWUwS1JuKzk0Z0xDNkdQclBQRmdpeWVhNkdYcTIzaXl1WFMwejRNOFBlRUpYQTJUN25ZY3o3Q2lFcjMwc0hscwpocWZsbzdzU2d0Q1gyN1ZvMkZuaTNxMjdRT2lvZjVCWWVmNjVMRWJmbE92ODdCSWJWQWdURDFpZXlGcDh0b2YvCjlrSWZObXcrVUw3Y2xrS0VIaFRFSWdaNi8xWXpZUFZPSENQdXA3SUI4S0JGWGt5V284VHhYTVlYMGtGeDFDNVUKZVVjOUxpeCt1dVRoOElPb0VwbHc5Q2Q0RHc3eVlNSlkvZ3g5MzV2SHJPZmlTcXFkMkczdHF0cEhpYW5rYVdjVgpIWDJtMGJPUWdsekdLbkpFamZHTUp3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  tls.key: (snip)
kind: Secret
metadata:
  annotations:
    cert-manager.io/alt-names: www.example.com
    cert-manager.io/certificate-name: nginx-cert
    cert-manager.io/common-name: nginx
    cert-manager.io/ip-sans: ""
    cert-manager.io/issuer-group: awspca.cert-manager.io
    cert-manager.io/issuer-kind: AWSPCAClusterIssuer
    cert-manager.io/issuer-name: root-ca
    cert-manager.io/uri-sans: ""
  creationTimestamp: "2024-11-20T01:50:57Z"
  labels:
    controller.cert-manager.io/fao: "true"
  name: nginx-cert-tls
  namespace: nginx
  resourceVersion: "20104"
  uid: fa9fc42c-c7b8-4623-b611-284221dbfa61
type: kubernetes.io/tls

Installing Nginx

Start an Nginx that mounts this certificate and terminates TLS.

cat << EOF > nginx-deployment.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config-ssl
  namespace: nginx
data:
  default.conf: |
    server {
      listen 443 ssl;
      server_name www.example.com;
      ssl_certificate /etc/nginx/ssl/tls.crt;
      ssl_certificate_key /etc/nginx/ssl/tls.key;
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: public.ecr.aws/docker/library/nginx:latest
        volumeMounts:
        - name: config
          mountPath: /etc/nginx/conf.d
        - name: ssl-certs
          mountPath: /etc/nginx/ssl
      volumes:
      - name: config
        configMap:
          name: nginx-config-ssl
      - name: ssl-certs
        secret:
          secretName: nginx-cert-tls
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: nginx
spec:
  selector:
    app: nginx
  ports:
    - port: 443
      targetPort: 443
  type: ClusterIP
EOF
k apply -f nginx-deployment.yaml
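Before port-forwarding, it may be worth confirming that the Deployment actually rolled out:

k -n nginx rollout status deployment nginx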

Add the following entry to /etc/hosts on the local machine.

127.0.0.1       www.example.com

Port-forward so that the service can be reached from a local browser.

$ kubectl -n nginx port-forward svc/nginx 8443:443
Forwarding from 127.0.0.1:8443 -> 443
Forwarding from [::1]:8443 -> 443

Access it from the browser and inspect the certificate. The name matches, but a warning appears because the client does not have the CA certificate. Although the certificate was requested with a 1-hour duration, the validity period appears to be 2 hours. (This is presumably because AWS Private CA backdates notBefore by one hour: the certificate still expires one hour after issuance, but is valid for two hours in total.)

Decoding the certificate in the Secret confirms that this is indeed the case.

$ k -n nginx get secret nginx-cert-tls -ojson | jq -r '.data."tls.crt"' | base64 --decode | openssl x509 -text -noout -
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            6c:a1:3d:a9:e4:94:42:8d:eb:37:ba:6e:56:5d:4e:38
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=Root CA
        Validity
            Not Before: Nov 20 01:14:34 2024 GMT
            Not After : Nov 20 03:14:34 2024 GMT
        Subject: CN=nginx
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:bb:fd:fa:d3:2a:a1:5c:74:a4:cc:d7:28:ea:e3:
                    94:e9:f8:d0:94:39:ef:c0:b1:1a:af:c9:38:43:c8:
                    fe:1d:69:6d:b8:41:43:5e:c7:e4:de:4e:ec:6f:06:
                    3a:88:67:d1:84:e0:e6:6e:2f:43:f0:74:75:99:f9:
                    11:69:b0:8b:1c:c7:74:1d:d1:3a:e2:55:90:35:f8:
                    3f:73:ba:a2:12:d7:5c:37:09:d7:12:2f:63:58:37:
                    2f:91:cc:de:f6:a1:2b:1d:c3:e0:f2:a8:13:c0:ab:
                    75:69:2c:7a:06:b9:b7:4d:1c:8d:08:94:ca:24:93:
                    58:9d:1a:b4:db:a1:45:7c:8b:52:97:c1:2e:85:a9:
                    d4:4e:92:c6:ea:4b:a7:57:ba:40:c1:d5:aa:5c:12:
                    e7:82:24:74:28:5f:d5:22:00:91:c2:86:38:4e:a5:
                    63:4f:c2:0b:4d:16:bd:a4:3b:7e:47:b7:b8:6b:f5:
                    05:fe:7a:d1:fa:56:6e:51:1c:01:1f:05:c4:92:7d:
                    18:45:1e:05:c6:fc:a5:7b:47:ca:96:62:f6:35:5e:
                    7a:57:63:ba:3f:c6:4d:78:0a:27:07:46:65:e1:ad:
                    6f:90:65:bc:7a:e4:83:f4:9e:57:06:0e:b7:00:a5:
                    ba:7e:d2:9f:14:56:4b:46:8c:03:77:17:f9:a3:68:
                    a6:ad
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Subject Alternative Name: 
                DNS:www.example.com
            X509v3 Basic Constraints: 
                CA:FALSE
            X509v3 Authority Key Identifier: 
                A3:A8:EE:86:A4:4A:D3:06:3E:B1:33:DA:59:EE:C7:09:0E:62:2B:D4
            X509v3 Subject Key Identifier: 
                CE:29:F3:10:FD:FD:E0:08:6F:97:AA:C5:7D:C3:0F:DF:88:FD:06:33
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication
    Signature Algorithm: sha256WithRSAEncryption
    Signature Value:
        02:4e:0b:2d:af:a2:f0:21:f2:b7:8f:8b:45:33:51:69:3f:91:
        c3:cd:df:bc:74:f6:f8:08:b8:33:82:06:e8:61:79:ee:1b:5f:
        5a:c1:b0:69:0f:2d:8d:7f:c0:4c:33:3d:41:ed:0a:46:7f:bd:
        e2:02:c2:e8:63:eb:3c:f1:60:8b:27:9a:e8:65:ea:db:78:b2:
        b9:74:b4:cf:83:3c:3d:e1:09:5c:0d:93:ee:76:1c:cf:b0:a2:
        12:bd:f4:b0:79:6c:86:a7:e5:a3:bb:12:82:d0:97:db:b5:68:
        d8:59:e2:de:ad:bb:40:e8:a8:7f:90:58:79:fe:b9:2c:46:df:
        94:eb:fc:ec:12:1b:54:08:13:0f:58:9e:c8:5a:7c:b6:87:ff:
        f6:42:1f:36:6c:3e:50:be:dc:96:42:84:1e:14:c4:22:06:7a:
        ff:56:33:60:f5:4e:1c:23:ee:a7:b2:01:f0:a0:45:5e:4c:96:
        a3:c4:f1:5c:c6:17:d2:41:71:d4:2e:54:79:47:3d:2e:2c:7e:
        ba:e4:e1:f0:83:a8:12:99:70:f4:27:78:0f:0e:f2:60:c2:58:
        fe:0c:7d:df:9b:c7:ac:e7:e2:4a:aa:9d:d8:6d:ed:aa:da:47:
        89:a9:e4:69:67:15:1d:7d:a6:d1:b3:90:82:5c:c6:2a:72:44:
        8d:f1:8c:27

Check by connecting with openssl as well.

$ openssl s_client -connect localhost:8443 -showcerts
Connecting to ::1
CONNECTED(00000005)
Can't use SSL_get_servername
depth=0 CN=nginx
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 CN=nginx
verify error:num=21:unable to verify the first certificate
verify return:1
depth=0 CN=nginx
verify return:1
---
Certificate chain
 0 s:CN=nginx
   i:CN=Root CA
   a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256
   v:NotBefore: Nov 20 01:14:34 2024 GMT; NotAfter: Nov 20 03:14:34 2024 GMT
-----BEGIN CERTIFICATE-----
MIIDPDCCAiSgAwIBAgIQbKE9qeSUQo3rN7puVl1OODANBgkqhkiG9w0BAQsFADAS
MRAwDgYDVQQDDAdSb290IENBMB4XDTI0MTEyMDAxMTQzNFoXDTI0MTEyMDAzMTQz
NFowEDEOMAwGA1UEAxMFbmdpbngwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
AoIBAQC7/frTKqFcdKTM1yjq45Tp+NCUOe/AsRqvyThDyP4daW24QUNex+TeTuxv
BjqIZ9GE4OZuL0PwdHWZ+RFpsIscx3Qd0TriVZA1+D9zuqIS11w3CdcSL2NYNy+R
zN72oSsdw+DyqBPAq3VpLHoGubdNHI0IlMokk1idGrTboUV8i1KXwS6FqdROksbq
S6dXukDB1apcEueCJHQoX9UiAJHChjhOpWNPwgtNFr2kO35Ht7hr9QX+etH6Vm5R
HAEfBcSSfRhFHgXG/KV7R8qWYvY1XnpXY7o/xk14CicHRmXhrW+QZbx65IP0nlcG
DrcApbp+0p8UVktGjAN3F/mjaKatAgMBAAGjgY8wgYwwGgYDVR0RBBMwEYIPd3d3
LmV4YW1wbGUuY29tMAkGA1UdEwQCMAAwHwYDVR0jBBgwFoAUo6juhqRK0wY+sTPa
We7HCQ5iK9QwHQYDVR0OBBYEFM4p8xD9/eAIb5eqxX3DD9+I/QYzMA4GA1UdDwEB
/wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDATANBgkqhkiG9w0BAQsFAAOCAQEA
Ak4LLa+i8CHyt4+LRTNRaT+Rw83fvHT2+Ai4M4IG6GF57htfWsGwaQ8tjX/ATDM9
Qe0KRn+94gLC6GPrPPFgiyea6GXq23iyuXS0z4M8PeEJXA2T7nYcz7CiEr30sHls
hqflo7sSgtCX27Vo2Fni3q27QOiof5BYef65LEbflOv87BIbVAgTD1ieyFp8tof/
9kIfNmw+UL7clkKEHhTEIgZ6/1YzYPVOHCPup7IB8KBFXkyWo8TxXMYX0kFx1C5U
eUc9Lix+uuTh8IOoEplw9Cd4Dw7yYMJY/gx935vHrOfiSqqd2G3tqtpHiankaWcV
HX2m0bOQglzGKnJEjfGMJw==
-----END CERTIFICATE-----
---
Server certificate
subject=CN=nginx
issuer=CN=Root CA
---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: RSA-PSS
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 1388 bytes and written 382 bytes
Verification error: unable to verify the first certificate
---
New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384
Protocol: TLSv1.3
Server public key is 2048 bit
This TLS version forbids renegotiation.
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 21 (unable to verify the first certificate)
---
---
Post-Handshake New Session Ticket arrived:
SSL-Session:
    Protocol  : TLSv1.3
    Cipher    : TLS_AES_256_GCM_SHA384
    Session-ID: 081865F75E84C5526B8329631C8A719465450BF69726E357DE322F06D4B766EA
    Session-ID-ctx:
    Resumption PSK: 02BA0EE819A11816BD4739A06CD500B9C484BAFE5E473430549D1AA1AF6357D2948165C215AC9B1602C1D3325B5626C1
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket lifetime hint: 300 (seconds)
    TLS session ticket:
    0000 - 76 c5 fe b4 da 6a 4e 2a-71 a3 75 1a 5c 96 74 4e   v....jN*q.u.\.tN
    0010 - ca 82 c2 1c 9b be 41 e1-94 07 64 36 89 e2 6d b9   ......A...d6..m.
    0020 - 17 f8 39 a9 52 e1 ab a4-d9 2c 2b 8f 2f 70 73 7f   ..9.R....,+./ps.
    0030 - e1 f2 46 de 89 11 f6 db-0c 09 06 4a 31 b6 d2 2f   ..F........J1../
    0040 - 68 5b af d6 9b 42 71 01-e8 46 de 36 9d c9 7f 41   h[...Bq..F.6...A
    0050 - fa 3b 8c 11 f6 06 1c 70-79 71 e5 02 65 19 8c 63   .;.....pyq..e..c
    0060 - e9 d4 38 c5 1e ad d9 6b-09 f8 03 28 3f 35 60 ca   ..8....k...(?5`.
    0070 - 25 6e 82 b9 bf 45 54 ca-ad 5f 70 44 01 db a9 26   %n...ET.._pD...&
    0080 - e7 15 19 f2 d6 ba 7d b7-03 95 0e fd 1f 85 8c 62   ......}........b
    0090 - 0b 28 bf 05 ef 3e f4 d6-65 71 24 ca 77 b2 00 11   .(...>..eq$.w...
    00a0 - 5e 97 df 49 53 da a1 66-3b c4 6c e7 61 ae 96 55   ^..IS..f;.l.a..U
    00b0 - de bb 69 4f af 54 e0 09-e4 3a 28 ab 72 79 08 da   ..iO.T...:(.ry..
    00c0 - 42 3e 8b 6c 17 3d e3 2c-a1 84 1c 2f c9 f0 d9 fb   B>.l.=.,.../....
    00d0 - 07 7d fa 81 dc c6 e2 53-9a d0 49 c4 9f 70 52 89   .}.....S..I..pR.

    Start Time: 1732069480
    Timeout   : 7200 (sec)
    Verify return code: 21 (unable to verify the first certificate)
    Extended master secret: no
    Max Early Data: 0
---
read R BLOCK
---
Post-Handshake New Session Ticket arrived:
SSL-Session:
    Protocol  : TLSv1.3
    Cipher    : TLS_AES_256_GCM_SHA384
    Session-ID: 779A52D6D297D50E423E512D7046E8C40C663822AC5274FDD0D18FC51207782F
    Session-ID-ctx:
    Resumption PSK: 4D2F35113182F1DE64F7550E4BD041A662F85E4C0C4593B60AE6CAF2D4B5D766E737E087E985860EB08E7CCD77646035
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket lifetime hint: 300 (seconds)
    TLS session ticket:
    0000 - 76 c5 fe b4 da 6a 4e 2a-71 a3 75 1a 5c 96 74 4e   v....jN*q.u.\.tN
    0010 - cd 56 f8 9c 07 05 c0 b5-3b fe bf 67 ea 46 b9 67   .V......;..g.F.g
    0020 - 41 c1 e1 11 ab 13 b8 e3-b2 c3 2e b9 48 c3 a0 70   A...........H..p
    0030 - 81 37 9c b6 d3 3d 7b 7d-ea 5a 58 c2 0e 1b b0 e1   .7...={}.ZX.....
    0040 - 32 ee 32 ff c2 6d 95 71-9d 58 6f 8f 97 63 b5 c8   2.2..m.q.Xo..c..
    0050 - c2 88 fc 6b 73 32 1e b4-e8 3b 60 19 fd 41 a0 2b   ...ks2...;`..A.+
    0060 - bd 4f 16 62 03 08 bd 5c-c7 02 06 fa 55 8c d3 82   .O.b...\....U...
    0070 - db 32 83 87 2a f7 b6 be-22 18 78 0d 2c e7 14 6f   .2..*...".x.,..o
    0080 - d6 dc 6e 1b 81 14 b2 9e-84 15 04 4f 42 52 70 a4   ..n........OBRp.
    0090 - 5e 64 b7 39 89 47 94 63-2d 00 99 92 31 8b d5 f5   ^d.9.G.c-...1...
    00a0 - 52 f6 63 0a 50 b7 57 c8-a6 86 db 7e f9 fc 75 a7   R.c.P.W....~..u.
    00b0 - 10 eb 24 08 88 ec ff cc-b5 cd f7 71 f2 b3 9d 9e   ..$........q....
    00c0 - 9c b6 e3 a8 aa cb 8a cc-3f 21 ee 96 03 a3 42 26   ........?!....B&
    00d0 - 9b 58 15 9b af 3b b0 92-f9 69 3a 24 1c 8e b0 77   .X...;...i:$...w

    Start Time: 1732069480
    Timeout   : 7200 (sec)
    Verify return code: 21 (unable to verify the first certificate)
    Extended master secret: no
    Max Early Data: 0
---
read R BLOCK
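Incidentally, the verification error goes away if the CA certificate stored in the Secret is passed to openssl, e.g. using bash process substitution:

openssl s_client -connect localhost:8443 \
  -CAfile <(k -n nginx get secret nginx-cert-tls -ojson | jq -r '.data."ca.crt"' | base64 --decode)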

Confirming certificate renewal behavior

I intended to simply wait the full 2 hours, but the renewal was carried out after 50 minutes. (This is consistent with the settings: notAfter is one hour after issuance, and with renewBefore of 10 minutes the renewal is scheduled 50 minutes in.)

The cert-manager logs contain messages that look like errors, so there might be some problem (although they are logged at the info level).

I1120 03:04:34.000561       1 trigger_controller.go:223] "Certificate must be re-issued" logger="cert-manager.controller" key="nginx/nginx-cert" reason="Renewing" message="Renewing certificate as renewal was scheduled at 2024-11-20 03:04:34 +0000 UTC"
I1120 03:04:34.000597       1 conditions.go:203] Setting lastTransitionTime for Certificate "nginx-cert" condition "Issuing" to 2024-11-20 03:04:34.000589942 +0000 UTC m=+6782.076443858
I1120 03:04:34.083756       1 conditions.go:263] Setting lastTransitionTime for CertificateRequest "nginx-cert-2" condition "Approved" to 2024-11-20 03:04:34.083747695 +0000 UTC m=+6782.159601623
I1120 03:04:35.330079       1 controller.go:152] "re-queuing item due to optimistic locking on resource" logger="cert-manager.controller" error="Operation cannot be fulfilled on certificates.cert-manager.io \"nginx-cert\": the object has been modified; please apply your changes to the latest version and try again"
I1120 03:04:35.348350       1 controller.go:152] "re-queuing item due to optimistic locking on resource" logger="cert-manager.controller" error="Operation cannot be fulfilled on certificates.cert-manager.io \"nginx-cert\": the object has been modified; please apply your changes to the latest version and try again"

The aws-privateca-issuer logs, on the other hand, look fine.

{"level":"info","ts":"2024-11-20T02:14:34Z","logger":"controllers.CertificateRequest","msg":"Issued certificate with arn: arn:aws:acm-pca:ap-northeast-1:XXXXXXXXXXXX:certificate-authority/2aebb313-2f59-4cd1-98a8-97d39bf3c42a/certificate/6ca13da9e494428deb37ba6e565d4e38","certificaterequest":{"name":"nginx-cert-1","namespace":"nginx"}}
{"level":"info","ts":"2024-11-20T02:14:35Z","logger":"controllers.CertificateRequest","msg":"Created certificate with arn: ","certificaterequest":{"name":"nginx-cert-1","namespace":"nginx"}}
{"level":"info","ts":"2024-11-20T03:04:34Z","logger":"controllers.CertificateRequest","msg":"Issued certificate with arn: arn:aws:acm-pca:ap-northeast-1:XXXXXXXXXXXX:certificate-authority/2aebb313-2f59-4cd1-98a8-97d39bf3c42a/certificate/07d4187f2dd22a9e9d13264c40c50164","certificaterequest":{"name":"nginx-cert-2","namespace":"nginx"}}
{"level":"info","ts":"2024-11-20T03:04:35Z","logger":"controllers.CertificateRequest","msg":"Created certificate with arn: ","certificaterequest":{"name":"nginx-cert-2","namespace":"nginx"}}
$ k -n nginx get certificate
NAME         READY   SECRET           AGE
nginx-cert   True    nginx-cert-tls   52m
$ k -n nginx get certificaterequest
NAME           APPROVED   DENIED   READY   ISSUER    REQUESTER                                         AGE
nginx-cert-1   True                True    root-ca   system:serviceaccount:cert-manager:cert-manager   52m
nginx-cert-2   True                True    root-ca   system:serviceaccount:cert-manager:cert-manager   2m41s
$ k -n nginx get secret
NAME             TYPE                DATA   AGE
nginx-cert-tls   kubernetes.io/tls   3      76m
$ k -n nginx get certificate nginx-cert -oyaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"cert-manager.io/v1","kind":"Certificate","metadata":{"annotations":{},"name":"nginx-cert","namespace":"nginx"},"spec":{"commonName":"nginx","dnsNames":["www.example.com"],"duration":"1h0m0s","issuerRef":{"group":"awspca.cert-manager.io","kind":"AWSPCAClusterIssuer","name":"root-ca"},"privateKey":{"algorithm":"RSA","size":2048},"renewBefore":"10m0s","secretName":"nginx-cert-tls","usages":["server auth"]}}
  creationTimestamp: "2024-11-20T02:14:34Z"
  generation: 1
  name: nginx-cert
  namespace: nginx
  resourceVersion: "30130"
  uid: ffd8a316-8bbe-40f6-bac9-bba52cdabe28
spec:
  commonName: nginx
  dnsNames:
  - www.example.com
  duration: 1h0m0s
  issuerRef:
    group: awspca.cert-manager.io
    kind: AWSPCAClusterIssuer
    name: root-ca
  privateKey:
    algorithm: RSA
    size: 2048
  renewBefore: 10m0s
  secretName: nginx-cert-tls
  usages:
  - server auth
status:
  conditions:
  - lastTransitionTime: "2024-11-20T02:14:35Z"
    message: Certificate is up to date and has not expired
    observedGeneration: 1
    reason: Ready
    status: "True"
    type: Ready
  notAfter: "2024-11-20T04:04:34Z"
  notBefore: "2024-11-20T02:04:34Z"
  renewalTime: "2024-11-20T03:54:34Z"
  revision: 2
$ k -n nginx get certificaterequest nginx-cert-2 -oyaml
apiVersion: cert-manager.io/v1
kind: CertificateRequest
metadata:
  annotations:
    aws-privateca-issuer/certificate-arn: arn:aws:acm-pca:ap-northeast-1:XXXXXXXXXXXX:certificate-authority/2aebb313-2f59-4cd1-98a8-97d39bf3c42a/certificate/07d4187f2dd22a9e9d13264c40c50164
    cert-manager.io/certificate-name: nginx-cert
    cert-manager.io/certificate-revision: "2"
    cert-manager.io/private-key-secret-name: nginx-cert-mzbzt
  creationTimestamp: "2024-11-20T03:04:34Z"
  generation: 1
  name: nginx-cert-2
  namespace: nginx
  ownerReferences:
  - apiVersion: cert-manager.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: Certificate
    name: nginx-cert
    uid: ffd8a316-8bbe-40f6-bac9-bba52cdabe28
  resourceVersion: "30123"
  uid: e68773c3-5d6f-498b-bb7c-1ae538c915ae
spec:
  duration: 1h0m0s
  extra:
    authentication.kubernetes.io/pod-name:
    - cert-manager-859bc755b6-r2h4q
    authentication.kubernetes.io/pod-uid:
    - a39fdcdd-415f-4831-8415-bd7796ee282b
  groups:
  - system:serviceaccounts
  - system:serviceaccounts:cert-manager
  - system:authenticated
  issuerRef:
    group: awspca.cert-manager.io
    kind: AWSPCAClusterIssuer
    name: root-ca
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2x6Q0NBWDhDQVFBd0VERU9NQXdHQTFVRUF4TUZibWRwYm5nd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQQpBNElCRHdBd2dnRUtBb0lCQVFDNy9mclRLcUZjZEtUTTF5anE0NVRwK05DVU9lL0FzUnF2eVRoRHlQNGRhVzI0ClFVTmV4K1RlVHV4dkJqcUlaOUdFNE9adUwwUHdkSFdaK1JGcHNJc2N4M1FkMFRyaVZaQTErRDl6dXFJUzExdzMKQ2RjU0wyTllOeStSek43Mm9Tc2R3K0R5cUJQQXEzVnBMSG9HdWJkTkhJMElsTW9razFpZEdyVGJvVVY4aTFLWAp3UzZGcWRST2tzYnFTNmRYdWtEQjFhcGNFdWVDSkhRb1g5VWlBSkhDaGpoT3BXTlB3Z3RORnIya08zNUh0N2hyCjlRWCtldEg2Vm01UkhBRWZCY1NTZlJoRkhnWEcvS1Y3UjhxV1l2WTFYbnBYWTdvL3hrMTRDaWNIUm1YaHJXK1EKWmJ4NjVJUDBubGNHRHJjQXBicCswcDhVVmt0R2pBTjNGL21qYUthdEFnTUJBQUdnUWpCQUJna3Foa2lHOXcwQgpDUTR4TXpBeE1Cb0dBMVVkRVFRVE1CR0NEM2QzZHk1bGVHRnRjR3hsTG1OdmJUQVRCZ05WSFNVRUREQUtCZ2dyCkJnRUZCUWNEQVRBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQVorU2xJMlptVWJhSWFhZ2ZrY0xTZ0I5VkkyT0IKZ1JyTGV6cTNVSkppZlJnNy9aVVZBaC9wcExxZ1paK05sajBmbWR6WFFtREdkWG1VQ3pZNURqVTlRQ2ZldUdPKwpIc05NYnpGMGI0VS9rSEt4VnVIQmRGVDVFRkNmTk9rejJJY04vZDV1Sm1nZVNyYlZTaDdBQnFkQk5qVzJDY0RWCm9MM0VicVVrTktFSjZmeWVOQzVpM1VTdjVNMjJwR2lFUHlNV050blBURXZPQkc1dHQ0Y0RmdWFJci9NR0g1UGEKTUUrWG1VbDBqcHpVQUdtSjV1L0Nmb01CSHQ3aTlYOTFNS21CMzVBc3lEVmpqamVMOVZwdjVCZTVjcEIwUW1mdAo0ZktQMlZDRUJBQ211cVMyZGRCWlJmVVhqL242KzMwd29wOWNKbGFRL0dhaG9zV1IramNnWVZRZ3dBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUgUkVRVUVTVC0tLS0tCg==
  uid: 6ae4a1ea-48e9-426b-82d5-78a095c1279f
  usages:
  - server auth
  username: system:serviceaccount:cert-manager:cert-manager
status:
  ca: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4RENDQWRpZ0F3SUJBZ0lRWmU2ODlEbGpyMnhFT0JWZ2pvMHlGekFOQmdrcWhraUc5dzBCQVFzRkFEQVMKTVJBd0RnWURWUVFEREFkU2IyOTBJRU5CTUI0WERUSTBNVEV5TURBd01Ua3hPRm9YRFRNME1URXlNREF4TVRreApORm93RWpFUU1BNEdBMVVFQXd3SFVtOXZkQ0JEUVRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDCkFRb0NnZ0VCQUxtTmo4dFN0dlk4LzgraWJjOHM5WXdwbEt4aHBUUWJsdEw4bk1OSUhGdVluZUg1WjhpSkdkUGMKVlpxVnVaT3p2NUJCUkRjY3ErajVpYVZsOUFwNkVCeVRsRGlpbk1lOUowUnJRYTB0cTFzdDRHaG5SMWZEb2Q5egpQNjNCbFFjT2VSeHBwZVpJaTBjN1FJNitVdmFRN24wald5VXJIQnRvRDF6b3pYbUY5cjdkcXpiSjhLRXFWOWd6CnlmNHNnZHc5MWtwdHhxcGNCVFRFamRuMFVicXdxV3EwUWpUZUwvUlR3UjFidm1VWG5RV1pPOEVSakxpTExFOHAKdnI2MXJNU09MRDZweVQxVER4VkRJbGJnMWJueWlWUDk3M29Md0lqZWNaZjFtaVgzc0J6WXJUdm5scGZYeStOTAo5ZEFYa3NYRGVBMXR1eHpiYlZIREF5VTJndG50aFowQ0F3RUFBYU5DTUVBd0R3WURWUjBUQVFIL0JBVXdBd0VCCi96QWRCZ05WSFE0RUZnUVVvNmp1aHFSSzB3WStzVFBhV2U3SENRNWlLOVF3RGdZRFZSMFBBUUgvQkFRREFnR0cKTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFBY3JreGxQclc2WW83azVIdHJSL21wdUNLNE82T3loV2lLYjdSSgprSjRPZ2xLc0p3Z3Q2U3R6L0VTRTRrZU1FWUVHMTVvempiYkVubTBNTHJ6RnRNWGlWbUVieEtKTGtBZ08zMzVICi9YSGFLWjI5OWN6Z2xHZUR3bU9GTCtvK1RneExqUUZSaFYyZWNDK2I1WUV6ZU0zSjJGNEx0RkF1TzRlS29jSUsKN1lCc082WFY5RHdoYzZYdzlBY3NSS1YzSk1vY0hrN1dHakUxdVZyTEEvV24yRTQzWTFJN3ZIZDc3cnRReVFFSgorUDc2MFFPTzlOTGVLd2FMNzVmVHVxZ1FXRVJNUVJSNHY0ZTVldG1HVU9EbEY0UllGaGxMQ3NRSlJIYzJLR0tFCjNoK25aVTU1dEEzY3BYeS82ODBwR01UbW11T0Q1cC9FQlVHNzJGSVB4TzBLOElxTQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  certificate: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURQRENDQWlTZ0F3SUJBZ0lRQjlRWWZ5M1NLcDZkRXlaTVFNVUJaREFOQmdrcWhraUc5dzBCQVFzRkFEQVMKTVJBd0RnWURWUVFEREFkU2IyOTBJRU5CTUI0WERUSTBNVEV5TURBeU1EUXpORm9YRFRJME1URXlNREEwTURRegpORm93RURFT01Bd0dBMVVFQXhNRmJtZHBibmd3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUM3L2ZyVEtxRmNkS1RNMXlqcTQ1VHArTkNVT2UvQXNScXZ5VGhEeVA0ZGFXMjRRVU5leCtUZVR1eHYKQmpxSVo5R0U0T1p1TDBQd2RIV1orUkZwc0lzY3gzUWQwVHJpVlpBMStEOXp1cUlTMTF3M0NkY1NMMk5ZTnkrUgp6Tjcyb1NzZHcrRHlxQlBBcTNWcExIb0d1YmROSEkwSWxNb2trMWlkR3JUYm9VVjhpMUtYd1M2RnFkUk9rc2JxClM2ZFh1a0RCMWFwY0V1ZUNKSFFvWDlVaUFKSENoamhPcFdOUHdndE5GcjJrTzM1SHQ3aHI5UVgrZXRINlZtNVIKSEFFZkJjU1NmUmhGSGdYRy9LVjdSOHFXWXZZMVhucFhZN28veGsxNENpY0hSbVhoclcrUVpieDY1SVAwbmxjRwpEcmNBcGJwKzBwOFVWa3RHakFOM0YvbWphS2F0QWdNQkFBR2pnWTh3Z1l3d0dnWURWUjBSQkJNd0VZSVBkM2QzCkxtVjRZVzF3YkdVdVkyOXRNQWtHQTFVZEV3UUNNQUF3SHdZRFZSMGpCQmd3Rm9BVW82anVocVJLMHdZK3NUUGEKV2U3SENRNWlLOVF3SFFZRFZSME9CQllFRk00cDh4RDkvZUFJYjVlcXhYM0REOStJL1FZek1BNEdBMVVkRHdFQgovd1FFQXdJRm9EQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBVEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBCmNLSDlBTlZybWVVa1pIdXF2Vmd6ZmlEUmRoVEZ5Wm51WjZpM1EzSXE2aUZjMDFWK0FOaFJmb2x2T0hjK1JyQk4KVWNkY211YldVcjFsN2xXbVVLUDJLNlFua05ubDhDMTN3YnJlNXhGNmdzMjBvVlRtaTg3ZDhhUklkR1pWMWxIVQoyUlV5bEUvZEtuR3VBc282dkNoUVM0R1BjcWlBd3gyN3QyeEs3aHBtbCs2N0pMMTlZbU8vOGFaV2N3dGRGUzYwCmZGb0VUWmNPekoxRFQzZjBjeUxBUGYvOWFmQU1ndG1BbFdKa0RPTzBZUmhWMFRRQk1aYnVVYmptbGdadzBKRHgKamNrRVY2bnRCaTNuNVhNVUJ3eXpCRE9SamozZjVicG5nK29peTYrVjdmZzl6L3NXb2RUWk5SN3pINjBROGVmSgowMEVRZjJ3M3F4WkZhV3dGTmNuVVhBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  conditions:
  - lastTransitionTime: "2024-11-20T03:04:34Z"
    message: Certificate request has been approved by cert-manager.io
    reason: cert-manager.io
    status: "True"
    type: Approved
  - lastTransitionTime: "2024-11-20T03:04:35Z"
    message: certificate issued
    reason: Issued
    status: "True"
    type: Ready
$ k -n nginx get secret nginx-cert-tls -oyaml
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4RENDQWRpZ0F3SUJBZ0lRWmU2ODlEbGpyMnhFT0JWZ2pvMHlGekFOQmdrcWhraUc5dzBCQVFzRkFEQVMKTVJBd0RnWURWUVFEREFkU2IyOTBJRU5CTUI0WERUSTBNVEV5TURBd01Ua3hPRm9YRFRNME1URXlNREF4TVRreApORm93RWpFUU1BNEdBMVVFQXd3SFVtOXZkQ0JEUVRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDCkFRb0NnZ0VCQUxtTmo4dFN0dlk4LzgraWJjOHM5WXdwbEt4aHBUUWJsdEw4bk1OSUhGdVluZUg1WjhpSkdkUGMKVlpxVnVaT3p2NUJCUkRjY3ErajVpYVZsOUFwNkVCeVRsRGlpbk1lOUowUnJRYTB0cTFzdDRHaG5SMWZEb2Q5egpQNjNCbFFjT2VSeHBwZVpJaTBjN1FJNitVdmFRN24wald5VXJIQnRvRDF6b3pYbUY5cjdkcXpiSjhLRXFWOWd6CnlmNHNnZHc5MWtwdHhxcGNCVFRFamRuMFVicXdxV3EwUWpUZUwvUlR3UjFidm1VWG5RV1pPOEVSakxpTExFOHAKdnI2MXJNU09MRDZweVQxVER4VkRJbGJnMWJueWlWUDk3M29Md0lqZWNaZjFtaVgzc0J6WXJUdm5scGZYeStOTAo5ZEFYa3NYRGVBMXR1eHpiYlZIREF5VTJndG50aFowQ0F3RUFBYU5DTUVBd0R3WURWUjBUQVFIL0JBVXdBd0VCCi96QWRCZ05WSFE0RUZnUVVvNmp1aHFSSzB3WStzVFBhV2U3SENRNWlLOVF3RGdZRFZSMFBBUUgvQkFRREFnR0cKTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFBY3JreGxQclc2WW83azVIdHJSL21wdUNLNE82T3loV2lLYjdSSgprSjRPZ2xLc0p3Z3Q2U3R6L0VTRTRrZU1FWUVHMTVvempiYkVubTBNTHJ6RnRNWGlWbUVieEtKTGtBZ08zMzVICi9YSGFLWjI5OWN6Z2xHZUR3bU9GTCtvK1RneExqUUZSaFYyZWNDK2I1WUV6ZU0zSjJGNEx0RkF1TzRlS29jSUsKN1lCc082WFY5RHdoYzZYdzlBY3NSS1YzSk1vY0hrN1dHakUxdVZyTEEvV24yRTQzWTFJN3ZIZDc3cnRReVFFSgorUDc2MFFPTzlOTGVLd2FMNzVmVHVxZ1FXRVJNUVJSNHY0ZTVldG1HVU9EbEY0UllGaGxMQ3NRSlJIYzJLR0tFCjNoK25aVTU1dEEzY3BYeS82ODBwR01UbW11T0Q1cC9FQlVHNzJGSVB4TzBLOElxTQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURQRENDQWlTZ0F3SUJBZ0lRQjlRWWZ5M1NLcDZkRXlaTVFNVUJaREFOQmdrcWhraUc5dzBCQVFzRkFEQVMKTVJBd0RnWURWUVFEREFkU2IyOTBJRU5CTUI0WERUSTBNVEV5TURBeU1EUXpORm9YRFRJME1URXlNREEwTURRegpORm93RURFT01Bd0dBMVVFQXhNRmJtZHBibmd3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUM3L2ZyVEtxRmNkS1RNMXlqcTQ1VHArTkNVT2UvQXNScXZ5VGhEeVA0ZGFXMjRRVU5leCtUZVR1eHYKQmpxSVo5R0U0T1p1TDBQd2RIV1orUkZwc0lzY3gzUWQwVHJpVlpBMStEOXp1cUlTMTF3M0NkY1NMMk5ZTnkrUgp6Tjcyb1NzZHcrRHlxQlBBcTNWcExIb0d1YmROSEkwSWxNb2trMWlkR3JUYm9VVjhpMUtYd1M2RnFkUk9rc2JxClM2ZFh1a0RCMWFwY0V1ZUNKSFFvWDlVaUFKSENoamhPcFdOUHdndE5GcjJrTzM1SHQ3aHI5UVgrZXRINlZtNVIKSEFFZkJjU1NmUmhGSGdYRy9LVjdSOHFXWXZZMVhucFhZN28veGsxNENpY0hSbVhoclcrUVpieDY1SVAwbmxjRwpEcmNBcGJwKzBwOFVWa3RHakFOM0YvbWphS2F0QWdNQkFBR2pnWTh3Z1l3d0dnWURWUjBSQkJNd0VZSVBkM2QzCkxtVjRZVzF3YkdVdVkyOXRNQWtHQTFVZEV3UUNNQUF3SHdZRFZSMGpCQmd3Rm9BVW82anVocVJLMHdZK3NUUGEKV2U3SENRNWlLOVF3SFFZRFZSME9CQllFRk00cDh4RDkvZUFJYjVlcXhYM0REOStJL1FZek1BNEdBMVVkRHdFQgovd1FFQXdJRm9EQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBVEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBCmNLSDlBTlZybWVVa1pIdXF2Vmd6ZmlEUmRoVEZ5Wm51WjZpM1EzSXE2aUZjMDFWK0FOaFJmb2x2T0hjK1JyQk4KVWNkY211YldVcjFsN2xXbVVLUDJLNlFua05ubDhDMTN3YnJlNXhGNmdzMjBvVlRtaTg3ZDhhUklkR1pWMWxIVQoyUlV5bEUvZEtuR3VBc282dkNoUVM0R1BjcWlBd3gyN3QyeEs3aHBtbCs2N0pMMTlZbU8vOGFaV2N3dGRGUzYwCmZGb0VUWmNPekoxRFQzZjBjeUxBUGYvOWFmQU1ndG1BbFdKa0RPTzBZUmhWMFRRQk1aYnVVYmptbGdadzBKRHgKamNrRVY2bnRCaTNuNVhNVUJ3eXpCRE9SamozZjVicG5nK29peTYrVjdmZzl6L3NXb2RUWk5SN3pINjBROGVmSgowMEVRZjJ3M3F4WkZhV3dGTmNuVVhBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  tls.key: (snip)
kind: Secret
metadata:
  annotations:
    cert-manager.io/alt-names: www.example.com
    cert-manager.io/certificate-name: nginx-cert
    cert-manager.io/common-name: nginx
    cert-manager.io/ip-sans: ""
    cert-manager.io/issuer-group: awspca.cert-manager.io
    cert-manager.io/issuer-kind: AWSPCAClusterIssuer
    cert-manager.io/issuer-name: root-ca
    cert-manager.io/uri-sans: ""
  creationTimestamp: "2024-11-20T01:50:57Z"
  labels:
    controller.cert-manager.io/fao: "true"
  name: nginx-cert-tls
  namespace: nginx
  resourceVersion: "30125"
  uid: fa9fc42c-c7b8-4623-b611-284221dbfa61
type: kubernetes.io/tls

At a glance it is hard to tell whether the Secret has been updated, but looking at its contents shows that it has.

$ k -n nginx get secret nginx-cert-tls -ojson | jq -r '.data."tls.crt"' | base64 --decode | openssl x509 -text -noout -
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            07:d4:18:7f:2d:d2:2a:9e:9d:13:26:4c:40:c5:01:64
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=Root CA
        Validity
            Not Before: Nov 20 02:04:34 2024 GMT
            Not After : Nov 20 04:04:34 2024 GMT
        Subject: CN=nginx
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:bb:fd:fa:d3:2a:a1:5c:74:a4:cc:d7:28:ea:e3:
                    94:e9:f8:d0:94:39:ef:c0:b1:1a:af:c9:38:43:c8:
                    fe:1d:69:6d:b8:41:43:5e:c7:e4:de:4e:ec:6f:06:
                    3a:88:67:d1:84:e0:e6:6e:2f:43:f0:74:75:99:f9:
                    11:69:b0:8b:1c:c7:74:1d:d1:3a:e2:55:90:35:f8:
                    3f:73:ba:a2:12:d7:5c:37:09:d7:12:2f:63:58:37:
                    2f:91:cc:de:f6:a1:2b:1d:c3:e0:f2:a8:13:c0:ab:
                    75:69:2c:7a:06:b9:b7:4d:1c:8d:08:94:ca:24:93:
                    58:9d:1a:b4:db:a1:45:7c:8b:52:97:c1:2e:85:a9:
                    d4:4e:92:c6:ea:4b:a7:57:ba:40:c1:d5:aa:5c:12:
                    e7:82:24:74:28:5f:d5:22:00:91:c2:86:38:4e:a5:
                    63:4f:c2:0b:4d:16:bd:a4:3b:7e:47:b7:b8:6b:f5:
                    05:fe:7a:d1:fa:56:6e:51:1c:01:1f:05:c4:92:7d:
                    18:45:1e:05:c6:fc:a5:7b:47:ca:96:62:f6:35:5e:
                    7a:57:63:ba:3f:c6:4d:78:0a:27:07:46:65:e1:ad:
                    6f:90:65:bc:7a:e4:83:f4:9e:57:06:0e:b7:00:a5:
                    ba:7e:d2:9f:14:56:4b:46:8c:03:77:17:f9:a3:68:
                    a6:ad
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Subject Alternative Name:
                DNS:www.example.com
            X509v3 Basic Constraints:
                CA:FALSE
            X509v3 Authority Key Identifier:
                A3:A8:EE:86:A4:4A:D3:06:3E:B1:33:DA:59:EE:C7:09:0E:62:2B:D4
            X509v3 Subject Key Identifier:
                CE:29:F3:10:FD:FD:E0:08:6F:97:AA:C5:7D:C3:0F:DF:88:FD:06:33
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
    Signature Algorithm: sha256WithRSAEncryption
    Signature Value:
        70:a1:fd:00:d5:6b:99:e5:24:64:7b:aa:bd:58:33:7e:20:d1:
        76:14:c5:c9:99:ee:67:a8:b7:43:72:2a:ea:21:5c:d3:55:7e:
        00:d8:51:7e:89:6f:38:77:3e:46:b0:4d:51:c7:5c:9a:e6:d6:
        52:bd:65:ee:55:a6:50:a3:f6:2b:a4:27:90:d9:e5:f0:2d:77:
        c1:ba:de:e7:11:7a:82:cd:b4:a1:54:e6:8b:ce:dd:f1:a4:48:
        74:66:55:d6:51:d4:d9:15:32:94:4f:dd:2a:71:ae:02:ca:3a:
        bc:28:50:4b:81:8f:72:a8:80:c3:1d:bb:b7:6c:4a:ee:1a:66:
        97:ee:bb:24:bd:7d:62:63:bf:f1:a6:56:73:0b:5d:15:2e:b4:
        7c:5a:04:4d:97:0e:cc:9d:43:4f:77:f4:73:22:c0:3d:ff:fd:
        69:f0:0c:82:d9:80:95:62:64:0c:e3:b4:61:18:55:d1:34:01:
        31:96:ee:51:b8:e6:96:06:70:d0:90:f1:8d:c9:04:57:a9:ed:
        06:2d:e7:e5:73:14:07:0c:b3:04:33:91:8e:3d:df:e5:ba:67:
        83:ea:22:cb:af:95:ed:f8:3d:cf:fb:16:a1:d4:d9:35:1e:f3:
        1f:ad:10:f1:e7:c9:d3:41:10:7f:6c:37:ab:16:45:69:6c:05:
        35:c9:d4:5c
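As a quicker check than dumping the whole certificate, printing just the serial number and expiry also shows that the Secret now holds the renewed certificate:

k -n nginx get secret nginx-cert-tls -ojson | jq -r '.data."tls.crt"' | base64 --decode | openssl x509 -noout -serial -enddate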

However, as expected, the certificate served to the browser has not been updated.

Check with openssl as well.

$ openssl s_client -connect localhost:8443 -showcerts
Connecting to ::1
CONNECTED(00000005)
Can't use SSL_get_servername
depth=0 CN=nginx
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 CN=nginx
verify error:num=21:unable to verify the first certificate
verify return:1
depth=0 CN=nginx
verify error:num=10:certificate has expired
notAfter=Nov 20 03:14:34 2024 GMT
verify return:1
depth=0 CN=nginx
notAfter=Nov 20 03:14:34 2024 GMT
verify return:1
---
Certificate chain
 0 s:CN=nginx
   i:CN=Root CA
   a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256
   v:NotBefore: Nov 20 01:14:34 2024 GMT; NotAfter: Nov 20 03:14:34 2024 GMT
-----BEGIN CERTIFICATE-----
MIIDPDCCAiSgAwIBAgIQbKE9qeSUQo3rN7puVl1OODANBgkqhkiG9w0BAQsFADAS
MRAwDgYDVQQDDAdSb290IENBMB4XDTI0MTEyMDAxMTQzNFoXDTI0MTEyMDAzMTQz
NFowEDEOMAwGA1UEAxMFbmdpbngwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
AoIBAQC7/frTKqFcdKTM1yjq45Tp+NCUOe/AsRqvyThDyP4daW24QUNex+TeTuxv
BjqIZ9GE4OZuL0PwdHWZ+RFpsIscx3Qd0TriVZA1+D9zuqIS11w3CdcSL2NYNy+R
zN72oSsdw+DyqBPAq3VpLHoGubdNHI0IlMokk1idGrTboUV8i1KXwS6FqdROksbq
S6dXukDB1apcEueCJHQoX9UiAJHChjhOpWNPwgtNFr2kO35Ht7hr9QX+etH6Vm5R
HAEfBcSSfRhFHgXG/KV7R8qWYvY1XnpXY7o/xk14CicHRmXhrW+QZbx65IP0nlcG
DrcApbp+0p8UVktGjAN3F/mjaKatAgMBAAGjgY8wgYwwGgYDVR0RBBMwEYIPd3d3
LmV4YW1wbGUuY29tMAkGA1UdEwQCMAAwHwYDVR0jBBgwFoAUo6juhqRK0wY+sTPa
We7HCQ5iK9QwHQYDVR0OBBYEFM4p8xD9/eAIb5eqxX3DD9+I/QYzMA4GA1UdDwEB
/wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDATANBgkqhkiG9w0BAQsFAAOCAQEA
Ak4LLa+i8CHyt4+LRTNRaT+Rw83fvHT2+Ai4M4IG6GF57htfWsGwaQ8tjX/ATDM9
Qe0KRn+94gLC6GPrPPFgiyea6GXq23iyuXS0z4M8PeEJXA2T7nYcz7CiEr30sHls
hqflo7sSgtCX27Vo2Fni3q27QOiof5BYef65LEbflOv87BIbVAgTD1ieyFp8tof/
9kIfNmw+UL7clkKEHhTEIgZ6/1YzYPVOHCPup7IB8KBFXkyWo8TxXMYX0kFx1C5U
eUc9Lix+uuTh8IOoEplw9Cd4Dw7yYMJY/gx935vHrOfiSqqd2G3tqtpHiankaWcV
HX2m0bOQglzGKnJEjfGMJw==
-----END CERTIFICATE-----
---
Server certificate
subject=CN=nginx
issuer=CN=Root CA
---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: RSA-PSS
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 1388 bytes and written 382 bytes
Verification error: certificate has expired
---
New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384
Protocol: TLSv1.3
Server public key is 2048 bit
This TLS version forbids renegotiation.
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 10 (certificate has expired)
---
---
Post-Handshake New Session Ticket arrived:
SSL-Session:
    Protocol  : TLSv1.3
    Cipher    : TLS_AES_256_GCM_SHA384
    Session-ID: 43BF3E46EC23FFF3D3AEF306BB931243B9F5FCF64E2FD3BCB869108270D13AE1
    Session-ID-ctx:
    Resumption PSK: 6827140C0AA27746E712B7210545B3C69321C903D24986EE2BBFBB79044C282150FD5AA6C85727C71873A9C1F2938D9F
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket lifetime hint: 300 (seconds)
    TLS session ticket:
    0000 - 76 c5 fe b4 da 6a 4e 2a-71 a3 75 1a 5c 96 74 4e   v....jN*q.u.\.tN
    0010 - 4d 18 4f a9 1f 5f 97 cf-fb 09 c6 66 27 8a 3a 37   M.O.._.....f'.:7
    0020 - a2 83 b9 b6 b4 da f2 1c-b0 a1 bf 35 04 6d 59 bd   ...........5.mY.
    0030 - 11 bf be fc 6b 91 90 b5-84 90 85 52 83 da ac 47   ....k......R...G
    0040 - 63 cb bd dd 33 68 26 93-81 ea 53 f8 1e 18 8d 1f   c...3h&...S.....
    0050 - 2e f7 51 47 cc 30 d2 da-b2 b6 85 91 a8 ef 09 d9   ..QG.0..........
    0060 - cb 39 4d 5b e4 21 9a f2-2b de 73 b6 e3 85 62 29   .9M[.!..+.s...b)
    0070 - 5c e6 be 26 30 24 65 bd-e5 ba 54 f9 0d d2 54 2f   \..&0$e...T...T/
    0080 - 05 af 75 f5 d3 9c e9 cd-3c f5 bb a1 9b 24 aa f4   ..u.....<....$..
    0090 - f1 09 71 eb 96 02 e6 b2-45 1e f7 67 cd 97 71 79   ..q.....E..g..qy
    00a0 - 4d 19 0e 91 a6 1c fc d6-98 7d c9 39 df c3 ee 41   M........}.9...A
    00b0 - 94 25 fa a6 ed 94 32 fa-ea 54 0e 5b 94 30 fe f6   .%....2..T.[.0..
    00c0 - 4d ad 70 f2 90 f6 ff 6f-ce f4 f2 4b b9 d4 91 5b   M.p....o...K...[
    00d0 - b8 e8 94 21 be 89 61 c4-6d 71 50 b5 dc 18 8c f9   ...!..a.mqP.....

    Start Time: 1732072738
    Timeout   : 7200 (sec)
    Verify return code: 10 (certificate has expired)
    Extended master secret: no
    Max Early Data: 0
---
read R BLOCK
---
Post-Handshake New Session Ticket arrived:
SSL-Session:
    Protocol  : TLSv1.3
    Cipher    : TLS_AES_256_GCM_SHA384
    Session-ID: A3D2C90F3B721F950A638A18482504F498EF54D30FAC5642EE4BF307C9E06A99
    Session-ID-ctx:
    Resumption PSK: 766B18982AFAEF910F575B88DE61F7F6B6E3728D4534885432405FF7AAAF34C18A017B549F33206A22FFFD4F507F6756
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket lifetime hint: 300 (seconds)
    TLS session ticket:
    0000 - 76 c5 fe b4 da 6a 4e 2a-71 a3 75 1a 5c 96 74 4e   v....jN*q.u.\.tN
    0010 - e9 01 40 e9 f5 45 4e 6c-c0 16 22 f0 29 df c4 91   [email protected]..".)...
    0020 - 83 e5 1d f3 07 06 88 21-76 a8 ed dd 9a 53 19 99   .......!v....S..
    0030 - 4d 6f 66 b5 71 92 f8 a2-6f ac 24 8b 1b a0 4b 13   Mof.q...o.$...K.
    0040 - 8d 7a b8 2b 47 a5 15 10-70 ba 0a a7 70 aa 09 f0   .z.+G...p...p...
    0050 - f5 a6 c9 7f 4c bb 0c 0a-bf c4 59 ed 34 2c 06 04   ....L.....Y.4,..
    0060 - 82 4e a3 19 a5 61 e0 76-2d cf 2a ac 66 78 ab 2d   .N...a.v-.*.fx.-
    0070 - ff 99 cd d3 10 09 6f fc-34 95 86 14 56 d1 90 5e   ......o.4...V..^
    0080 - c0 ca 3a 10 20 dd 20 d5-25 ca 91 fe d2 f1 b7 cb   ..:. . .%.......
    0090 - 85 5a 67 9a c9 cd 5d d1-ea 97 99 0d b1 40 17 80   .Zg...]......@..
    00a0 - a7 6c dd 4c ed 03 68 c2-8a cf 78 6e b5 ef 4e 39   .l.L..h...xn..N9
    00b0 - 10 53 68 1b ad 06 4f b6-ed af 00 e0 ed c4 c2 b9   .Sh...O.........
    00c0 - bb 47 9c 17 25 8d f3 f4-07 54 ed 74 56 5e ac 58   .G..%....T.tV^.X
    00d0 - 7d 93 99 44 e2 e2 63 d7-27 73 27 a4 f7 42 43 48   }..D..c.'s'..BCH

    Start Time: 1732072738
    Timeout   : 7200 (sec)
    Verify return code: 10 (certificate has expired)
    Extended master secret: no
    Max Early Data: 0
---
read R BLOCK

Restart the Nginx Pod and confirm that the certificate change is reflected.

$ k -n nginx rollout restart deployment nginx
deployment.apps/nginx restarted
$ k -n nginx rollout status deployment nginx
deployment "nginx" successfully rolled out
$ k -n nginx get po
NAME                     READY   STATUS    RESTARTS   AGE
nginx-64d594d6f8-xhjn5   1/1     Running   0          21s

The validity period has been updated.

$ openssl s_client -connect localhost:8443 -showcerts
Connecting to ::1
CONNECTED(00000005)
Can't use SSL_get_servername
depth=0 CN=nginx
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 CN=nginx
verify error:num=21:unable to verify the first certificate
verify return:1
depth=0 CN=nginx
verify return:1
---
Certificate chain
 0 s:CN=nginx
   i:CN=Root CA
   a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256
   v:NotBefore: Nov 20 02:04:34 2024 GMT; NotAfter: Nov 20 04:04:34 2024 GMT
-----BEGIN CERTIFICATE-----
MIIDPDCCAiSgAwIBAgIQB9QYfy3SKp6dEyZMQMUBZDANBgkqhkiG9w0BAQsFADAS
MRAwDgYDVQQDDAdSb290IENBMB4XDTI0MTEyMDAyMDQzNFoXDTI0MTEyMDA0MDQz
NFowEDEOMAwGA1UEAxMFbmdpbngwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
AoIBAQC7/frTKqFcdKTM1yjq45Tp+NCUOe/AsRqvyThDyP4daW24QUNex+TeTuxv
BjqIZ9GE4OZuL0PwdHWZ+RFpsIscx3Qd0TriVZA1+D9zuqIS11w3CdcSL2NYNy+R
zN72oSsdw+DyqBPAq3VpLHoGubdNHI0IlMokk1idGrTboUV8i1KXwS6FqdROksbq
S6dXukDB1apcEueCJHQoX9UiAJHChjhOpWNPwgtNFr2kO35Ht7hr9QX+etH6Vm5R
HAEfBcSSfRhFHgXG/KV7R8qWYvY1XnpXY7o/xk14CicHRmXhrW+QZbx65IP0nlcG
DrcApbp+0p8UVktGjAN3F/mjaKatAgMBAAGjgY8wgYwwGgYDVR0RBBMwEYIPd3d3
LmV4YW1wbGUuY29tMAkGA1UdEwQCMAAwHwYDVR0jBBgwFoAUo6juhqRK0wY+sTPa
We7HCQ5iK9QwHQYDVR0OBBYEFM4p8xD9/eAIb5eqxX3DD9+I/QYzMA4GA1UdDwEB
/wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDATANBgkqhkiG9w0BAQsFAAOCAQEA
cKH9ANVrmeUkZHuqvVgzfiDRdhTFyZnuZ6i3Q3Iq6iFc01V+ANhRfolvOHc+RrBN
UcdcmubWUr1l7lWmUKP2K6QnkNnl8C13wbre5xF6gs20oVTmi87d8aRIdGZV1lHU
2RUylE/dKnGuAso6vChQS4GPcqiAwx27t2xK7hpml+67JL19YmO/8aZWcwtdFS60
fFoETZcOzJ1DT3f0cyLAPf/9afAMgtmAlWJkDOO0YRhV0TQBMZbuUbjmlgZw0JDx
jckEV6ntBi3n5XMUBwyzBDORjj3f5bpng+oiy6+V7fg9z/sWodTZNR7zH60Q8efJ
00EQf2w3qxZFaWwFNcnUXA==
-----END CERTIFICATE-----
---
Server certificate
subject=CN=nginx
issuer=CN=Root CA
---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: RSA-PSS
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 1388 bytes and written 382 bytes
Verification error: unable to verify the first certificate
---
New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384
Protocol: TLSv1.3
Server public key is 2048 bit
This TLS version forbids renegotiation.
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 21 (unable to verify the first certificate)
---
---
Post-Handshake New Session Ticket arrived:
SSL-Session:
    Protocol  : TLSv1.3
    Cipher    : TLS_AES_256_GCM_SHA384
    Session-ID: 880E61D7AFEFA00530491AC7CCD97FA332678F470DD4A025D017CA8CDEE14888
    Session-ID-ctx:
    Resumption PSK: FCD6B9C771EFDEEA826E363D73D4438A6CB2C1A00AF2A48DB9E507699AC03ED090EC42272A3660321542491E9CDAE808
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket lifetime hint: 300 (seconds)
    TLS session ticket:
    0000 - 33 a7 0b 89 fc af 6f ed-6c c9 25 1f ce 6d b3 50   3.....o.l.%..m.P
    0010 - 43 e5 0c 98 e1 de 33 a7-7c 99 a8 00 6e 0c f3 75   C.....3.|...n..u
    0020 - 37 78 49 38 60 d4 ab 32-29 40 ee a9 a8 f5 45 c2   7xI8`..2)@....E.
    0030 - 66 78 39 15 02 e0 8c 7e-18 3f db 82 19 f5 f8 3e   fx9....~.?.....>
    0040 - e0 57 22 91 57 53 d5 d5-23 6b 6a 79 2d 3a e1 51   .W".WS..#kjy-:.Q
    0050 - 1b a3 d2 01 d0 69 96 0f-a7 ae 38 fe cc b7 26 22   .....i....8...&"
    0060 - d5 45 a5 a5 64 8c dd 5a-2b 09 14 78 0b 93 1a 09   .E..d..Z+..x....
    0070 - 1c ca 28 a1 d3 be a6 60-b4 7b dd b4 8f 92 57 53   ..(....`.{....WS
    0080 - 73 92 9f 29 d9 61 51 da-94 18 a5 0c 7e 40 92 23   s..).aQ.....~@.#
    0090 - 51 93 0b e6 24 4f 82 b3-0c 06 f4 16 7d 4a 08 c8   Q...$O......}J..
    00a0 - bc 54 92 c5 2a 82 e5 2a-a7 09 e8 e1 d2 a2 ca 94   .T..*..*........
    00b0 - c4 75 10 89 9d 1a 6c 5e-aa c9 6e 88 a1 01 b5 bf   .u....l^..n.....
    00c0 - 4c fd 4d 2e a9 0b 02 a9-1e 00 5c a8 f5 e2 73 1b   L.M.......\...s.
    00d0 - 7e ff 02 c0 49 e5 c5 9e-ad 78 95 cf 50 22 b0 7a   ~...I....x..P".z

    Start Time: 1732072959
    Timeout   : 7200 (sec)
    Verify return code: 21 (unable to verify the first certificate)
    Extended master secret: no
    Max Early Data: 0
---
read R BLOCK
---
Post-Handshake New Session Ticket arrived:
SSL-Session:
    Protocol  : TLSv1.3
    Cipher    : TLS_AES_256_GCM_SHA384
    Session-ID: 3450842A43C543FE9E0A44E871A81941DF3E002A281EA97CD25A08F47F9E13E1
    Session-ID-ctx:
    Resumption PSK: 5E3037E963869FB67355D78B7B9EC3F5C7C1C05DAF34E7BEBFCE0F2AFAC2E6D7C6119D9A25C81786F3F55CFF037CF75B
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket lifetime hint: 300 (seconds)
    TLS session ticket:
    0000 - 33 a7 0b 89 fc af 6f ed-6c c9 25 1f ce 6d b3 50   3.....o.l.%..m.P
    0010 - 18 ad 5f b0 4a 52 c3 5f-66 3d fc 7f 67 93 ad a5   .._.JR._f=..g...
    0020 - 95 03 d3 e2 3b 44 1b 0b-c1 9b e8 a7 9b 1d af 8a   ....;D..........
    0030 - b3 54 db 6e 95 15 16 1f-41 c7 7d 62 4f 23 b7 34   .T.n....A.}bO#.4
    0040 - 17 07 e9 fc 1e 99 56 80-96 c6 a0 70 1e 1d 45 bc   ......V....p..E.
    0050 - 17 06 25 ca b3 5f 5a 39-4e e5 16 b6 3f ca 99 8c   ..%.._Z9N...?...
    0060 - ee ed 76 ef 88 4b 55 8b-69 aa d7 9e 13 8e 9f e0   ..v..KU.i.......
    0070 - 47 b4 e6 1f 7a a2 d1 29-0b 1a 67 73 38 cb e1 62   G...z..)..gs8..b
    0080 - 66 be 8d 80 90 b6 56 9f-5e 32 34 88 07 36 19 9f   f.....V.^24..6..
    0090 - 88 cd 22 79 e5 bf 9e fd-13 2a 11 f1 b8 ba 55 33   .."y.....*....U3
    00a0 - 4c 74 5e c0 c2 fa d1 6c-5e 3c af 2f 09 fb 6d 4d   Lt^....l^<./..mM
    00b0 - e0 f1 68 84 da 7a df 45-6a 9a ce c8 98 7e 23 3d   ..h..z.Ej....~#=
    00c0 - 75 ca ff b5 c7 e9 40 32-25 72 80 af 90 fc 07 9f   u.....@2%r......
    00d0 - 7a e3 89 a2 ae ce 55 50-80 5d be 2b c9 27 7b f5   z.....UP.].+.'{.

    Start Time: 1732072959
    Timeout   : 7200 (sec)
    Verify return code: 21 (unable to verify the first certificate)
    Extended master secret: no
    Max Early Data: 0
---
read R BLOCK

Rollout on certificate renewal with Reloader

Edit the Deployment and add the annotation.

$ k -n nginx edit deployment nginx

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    reloader.stakater.com/auto: "true"  # add this line
...
  name: nginx
  namespace: nginx
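
The annotation can also be added non-interactively instead of using kubectl edit, for example (equivalent to the edit above):

k -n nginx annotate deployment nginx reloader.stakater.com/auto="true"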

Adding the annotation alone does not restart the Pod, so restart it once just in case. Since what Reloader watches is presumably the Deployment rather than the Pod, this may not actually matter.

$ k -n nginx rollout restart deployment nginx
deployment.apps/nginx restarted

Now wait for the certificate to be renewed again about 50 minutes later.
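
For reference, the Certificate resource itself is not shown in this section, so the following is only a hypothetical spec consistent with the names and the roughly 50-minute renewal interval observed here; the duration, renewBefore, and issuerRef kind are all assumptions:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: nginx-cert
  namespace: nginx
spec:
  secretName: nginx-cert-tls
  commonName: nginx
  dnsNames:
    - www.example.com
  duration: 2h        # matches the 2-hour NotBefore/NotAfter window seen above (assumption)
  renewBefore: 70m    # renew when 70 minutes remain, i.e. about 50 minutes after issuance (assumption)
  issuerRef:
    name: root-ca
    kind: ClusterIssuer  # assumption; could be a namespaced Issuer instead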

I was a little slow to check, but it can be confirmed that the Nginx Pod was restarted at the time the certificate was renewed.

$ k -n nginx get certificate
NAME         READY   SECRET           AGE
nginx-cert   True    nginx-cert-tls   117m
$ k -n nginx get certificaterequest
NAME           APPROVED   DENIED   READY   ISSUER    REQUESTER                                         AGE
nginx-cert-1   True                True    root-ca   system:serviceaccount:cert-manager:cert-manager   117m
nginx-cert-2   True                True    root-ca   system:serviceaccount:cert-manager:cert-manager   67m
nginx-cert-3   True                True    root-ca   system:serviceaccount:cert-manager:cert-manager   17m
$ k -n nginx get secret
NAME             TYPE                DATA   AGE
nginx-cert-tls   kubernetes.io/tls   3      141m
$ k -n nginx get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6f696674bc-gwbbb   1/1     Running   0          18m

The certificate's validity period has been updated.

$ openssl s_client -connect localhost:8443 -showcerts
Connecting to ::1
CONNECTED(00000005)
Can't use SSL_get_servername
depth=0 CN=nginx
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 CN=nginx
verify error:num=21:unable to verify the first certificate
verify return:1
depth=0 CN=nginx
verify return:1
---
Certificate chain
 0 s:CN=nginx
   i:CN=Root CA
   a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256
   v:NotBefore: Nov 20 02:54:34 2024 GMT; NotAfter: Nov 20 04:54:34 2024 GMT
-----BEGIN CERTIFICATE-----
MIIDPTCCAiWgAwIBAgIRALHDdqBoX3d3RyT9kf5GYIUwDQYJKoZIhvcNAQELBQAw
EjEQMA4GA1UEAwwHUm9vdCBDQTAeFw0yNDExMjAwMjU0MzRaFw0yNDExMjAwNDU0
MzRaMBAxDjAMBgNVBAMTBW5naW54MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB
CgKCAQEAu/360yqhXHSkzNco6uOU6fjQlDnvwLEar8k4Q8j+HWltuEFDXsfk3k7s
bwY6iGfRhODmbi9D8HR1mfkRabCLHMd0HdE64lWQNfg/c7qiEtdcNwnXEi9jWDcv
kcze9qErHcPg8qgTwKt1aSx6Brm3TRyNCJTKJJNYnRq026FFfItSl8EuhanUTpLG
6kunV7pAwdWqXBLngiR0KF/VIgCRwoY4TqVjT8ILTRa9pDt+R7e4a/UF/nrR+lZu
URwBHwXEkn0YRR4Fxvyle0fKlmL2NV56V2O6P8ZNeAonB0Zl4a1vkGW8euSD9J5X
Bg63AKW6ftKfFFZLRowDdxf5o2imrQIDAQABo4GPMIGMMBoGA1UdEQQTMBGCD3d3
dy5leGFtcGxlLmNvbTAJBgNVHRMEAjAAMB8GA1UdIwQYMBaAFKOo7oakStMGPrEz
2lnuxwkOYivUMB0GA1UdDgQWBBTOKfMQ/f3gCG+XqsV9ww/fiP0GMzAOBgNVHQ8B
Af8EBAMCBaAwEwYDVR0lBAwwCgYIKwYBBQUHAwEwDQYJKoZIhvcNAQELBQADggEB
ADWJELzemZsgTooGoRYTq6wuCAU2fAIPiwhi0H61XFzlb54P/Ep1cOuxApEnD+gE
oOy82bK5fdEMd72a9fGulgip6aEs6sgEHHB02wI7qEoSTANfvuhmCBFe0M7gOOVi
QdcRXYm74/2ly4zb/Bfbg7xFtyChp7iRHY55A2+ctIOO7chN7hTuaIokkIrcPHex
/c0+qkKOxBWSLzKU2hSXgoUyg46qLrMJ3sOL9eECWbMHwJFm1U5k6iXmSM7/MXBp
IMR67cpZJYL+/N3BiEQiQzd3Dy2/ZEjlLXNnT+52m89/jeK9plr0fpvFQCj1AR5Z
4wFrd2npceW3uJR62JX5tlk=
-----END CERTIFICATE-----
---
Server certificate
subject=CN=nginx
issuer=CN=Root CA
---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: RSA-PSS
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 1389 bytes and written 382 bytes
Verification error: unable to verify the first certificate
---
New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384
Protocol: TLSv1.3
Server public key is 2048 bit
This TLS version forbids renegotiation.
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 21 (unable to verify the first certificate)
---
---
Post-Handshake New Session Ticket arrived:
SSL-Session:
    Protocol  : TLSv1.3
    Cipher    : TLS_AES_256_GCM_SHA384
    Session-ID: 1589D017F81D8DE0FE8EA51F3C604A509284699FF2A8CB5355DB4CCF4FA7C3FC
    Session-ID-ctx:
    Resumption PSK: 9F018E3BDDC523284F5F1FB5D413CB3E2A9FA14B488A753E1783CA6C33EF08A4AD089CA67729EFA0F11B0DBB4311F57C
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket lifetime hint: 300 (seconds)
    TLS session ticket:
    0000 - a3 45 3a f0 9b 58 4e 35-7e fe 2f 91 b7 7c 75 ed   .E:..XN5~./..|u.
    0010 - 04 f5 5e d6 16 50 14 91-d2 1f 92 db 92 b1 8d d9   ..^..P..........
    0020 - 6e 98 9b ee 3a ac e7 e2-da 1d 25 4c 39 f3 42 9f   n...:.....%L9.B.
    0030 - a3 95 01 07 6e 39 a6 d4-e3 86 58 3b 93 71 07 3a   ....n9....X;.q.:
    0040 - f6 e4 4b f5 f2 be 98 c0-08 00 ac b5 eb da 03 52   ..K............R
    0050 - 89 66 5d 50 2c 45 cb 4c-c2 42 6a 87 93 47 f6 d3   .f]P,E.L.Bj..G..
    0060 - 82 c5 55 7b 6c c4 b7 49-e8 27 e3 da 71 1e a5 6b   ..U{l..I.'..q..k
    0070 - 32 40 46 bc 4b b3 08 ea-8e 18 d9 42 84 44 9f 84   2@F.K......B.D..
    0080 - 10 9a 8f 6e f3 88 5c bc-39 21 5e 0b 48 b6 64 78   ...n..\.9!^.H.dx
    0090 - 76 fc 28 1b ac 7f 17 9e-a4 ad 79 43 d9 5c 46 40   v.(.......yC.\F@
    00a0 - 5b e2 6f de 74 d2 fc b0-5d 0d e2 11 2b 81 b5 9b   [.o.t...]...+...
    00b0 - 4d e4 5e e0 a1 40 cf 11-60 35 e9 f2 16 a3 bf 00   M.^..@..`5......
    00c0 - 83 52 42 04 ed 13 2e 91-2d 84 6c 7d 3e cd 82 18   .RB.....-.l}>...
    00d0 - 81 90 e6 65 ad 7f b9 35-2a aa 84 2b 47 ea 5a 73   ...e...5*..+G.Zs

    Start Time: 1732076825
    Timeout   : 7200 (sec)
    Verify return code: 21 (unable to verify the first certificate)
    Extended master secret: no
    Max Early Data: 0
---
read R BLOCK
---
Post-Handshake New Session Ticket arrived:
SSL-Session:
    Protocol  : TLSv1.3
    Cipher    : TLS_AES_256_GCM_SHA384
    Session-ID: E9E15B46DEFE6C6A7E19099F20FB2AFB1F815928FBCEF113AD77001FA69C4CEF
    Session-ID-ctx:
    Resumption PSK: 374713A9C141C61217265CDDC1E49867DFFBB87F613376F6EC61FFD744E05A9C3F08CFFC386889AF8327D888EB8F30AE
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket lifetime hint: 300 (seconds)
    TLS session ticket:
    0000 - a3 45 3a f0 9b 58 4e 35-7e fe 2f 91 b7 7c 75 ed   .E:..XN5~./..|u.
    0010 - e1 01 10 40 2f f7 05 5c-0d 70 6b 5b 51 83 5d f7   ...@/..\.pk[Q.].
    0020 - c6 66 4a 96 3d 09 e0 12-20 de 0d 79 92 28 86 84   .fJ.=... ..y.(..
    0030 - fa 44 3e bc c5 1e c5 33-23 1e 56 9f 59 24 b5 4e   .D>....3#.V.Y$.N
    0040 - d1 6e eb 49 39 57 0c 8f-1f 76 fd a5 5e 6d d2 fd   .n.I9W...v..^m..
    0050 - 91 0b 2e 61 8d 2d 75 b0-36 96 52 8b ce 23 4a 0e   ...a.-u.6.R..#J.
    0060 - f9 ff e9 d6 99 91 95 f1-ad 41 18 c1 6e 60 3a 5b   .........A..n`:[
    0070 - 28 53 6d db 9b 23 7c e8-30 d9 0d 79 be 33 74 69   (Sm..#|.0..y.3ti
    0080 - 21 6a ec b7 21 32 84 83-fb 71 b3 07 ff 5c af 7d   !j..!2...q...\.}
    0090 - 6a 98 d4 6d b2 00 4b 55-49 5e 8a 9a 98 67 b0 15   j..m..KUI^...g..
    00a0 - 6a 7c e0 68 b9 7a d2 af-ed 62 94 66 db eb 03 2a   j|.h.z...b.f...*
    00b0 - 79 ed 25 67 96 88 68 d6-3f 5a 13 c5 e7 dc 20 7d   y.%g..h.?Z.... }
    00c0 - 1b 25 3d 34 bb 6b d1 18-2e 44 1a 76 b6 1d 53 cf   .%=4.k...D.v..S.
    00d0 - b8 3c a1 08 a5 2b 28 fb-63 b7 ce be 58 4a dd 4d   .<...+(.c...XJ.M

    Start Time: 1732076825
    Timeout   : 7200 (sec)
    Verify return code: 21 (unable to verify the first certificate)
    Extended master secret: no
    Max Early Data: 0
---
read R BLOCK

Pod garbage collection threshold

Pods in the Succeeded or Failed phase are garbage-collected by the controller-manager. This is a memo on checking how that threshold behaves.

The default value of --terminated-pod-gc-threshold is 12500. The question is: when the threshold is exceeded, are all terminated Pods deleted, or are Pods deleted only down to the threshold? Let's check.

Creating the cluster

Since control plane components cannot be customized on EKS, kind is used this time.

Prepare a configuration file, referring to the following.

cat << EOF > mycluster.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.29.10
- role: worker
  image: kindest/node:v1.29.10
kubeadmConfigPatches:
- |
  kind: ClusterConfiguration
  apiVersion: kubeadm.k8s.io/v1beta3
  controllerManager:
    extraArgs:
      terminated-pod-gc-threshold: "10"
EOF

Create the cluster.

$ kind create cluster --config=mycluster.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.29.10) 🖼
 ✓ Preparing nodes 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Thanks for using kind! 😊
$

Note that I am using an M1 Mac and normally have the following environment variable set.

export DOCKER_DEFAULT_PLATFORM=linux/amd64

With this set, the cluster would not start on the arm64 machine, so I unset the environment variable, deleted the node images, and re-ran the command, which worked.
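
Concretely, the recovery steps were roughly along these lines (a sketch; kind delete cluster is only needed if a half-created cluster was left behind):

unset DOCKER_DEFAULT_PLATFORM
kind delete cluster
docker rmi kindest/node:v1.29.10
kind create cluster --config=mycluster.yaml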

Check the nodes.

$ k get nodes
NAME                 STATUS   ROLES           AGE     VERSION
kind-control-plane   Ready    control-plane   2m55s   v1.29.10
kind-worker          Ready    <none>          2m32s   v1.29.10

Confirm that the --terminated-pod-gc-threshold argument has actually been applied. It is set correctly.

$ k -n kube-system get pods kube-controller-manager-kind-control-plane -oyaml
apiVersion: v1
kind: Pod
metadata:
...
  name: kube-controller-manager-kind-control-plane
  namespace: kube-system
...
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=10.244.0.0/16
    - --cluster-name=kind
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --enable-hostpath-provisioner=true
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/16
    - --terminated-pod-gc-threshold=10
    - --use-service-account-credentials=true
    image: registry.k8s.io/kube-controller-manager:v1.29.10
    imagePullPolicy: IfNotPresent
...
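
The same check can be done in one line:

k -n kube-system get pod kube-controller-manager-kind-control-plane -oyaml | grep terminated-pod-gc-threshold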

Verification

Create 9 Pods.

$ k run test1 --image=busybox --restart=Never
pod/test1 created

...

$ k run test9 --image=busybox --restart=Never
pod/test9 created
$ k get pods
NAME    READY   STATUS      RESTARTS   AGE
test1   0/1     Completed   0          57s
test2   0/1     Completed   0          39s
test3   0/1     Completed   0          34s
test4   0/1     Completed   0          30s
test5   0/1     Completed   0          24s
test6   0/1     Completed   0          20s
test7   0/1     Completed   0          14s
test8   0/1     Completed   0          9s
test9   0/1     Completed   0          5s
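
For reference, instead of running k run nine times by hand, a loop does the same thing:

for i in $(seq 1 9); do
  k run test${i} --image=busybox --restart=Never
done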

Create the 10th one.

$ k run test10 --image=busybox --restart=Never
pod/test10 created

Wait a while, but none of them are deleted.

$ k get pods
NAME     READY   STATUS      RESTARTS   AGE
test1    0/1     Completed   0          3m31s
test10   0/1     Completed   0          2m23s
test2    0/1     Completed   0          3m13s
test3    0/1     Completed   0          3m8s
test4    0/1     Completed   0          3m4s
test5    0/1     Completed   0          2m58s
test6    0/1     Completed   0          2m54s
test7    0/1     Completed   0          2m48s
test8    0/1     Completed   0          2m43s
test9    0/1     Completed   0          2m39s

Create the 11th one.

$ k run test11 --image=busybox --restart=Never
pod/test11 created

While watching, test1 was deleted.

$ k get pods -w
NAME     READY   STATUS      RESTARTS   AGE
test1    0/1     Completed   0          4m51s
test10   0/1     Completed   0          3m43s
test11   0/1     Completed   0          6s
test2    0/1     Completed   0          4m33s
test3    0/1     Completed   0          4m28s
test4    0/1     Completed   0          4m24s
test5    0/1     Completed   0          4m18s
test6    0/1     Completed   0          4m14s
test7    0/1     Completed   0          4m8s
test8    0/1     Completed   0          4m3s
test9    0/1     Completed   0          3m59s
test1    0/1     Terminating   0          5m5s
test1    0/1     Terminating   0          5m5s

Create the 12th one.

$ k run test12 --image=busybox --restart=Never
pod/test12 created

test2 was deleted.

$ k get pods -w
NAME     READY   STATUS              RESTARTS   AGE
test10   0/1     Completed           0          5m25s
test11   0/1     Completed           0          108s
test12   0/1     ContainerCreating   0          2s
test2    0/1     Completed           0          6m15s
test3    0/1     Completed           0          6m10s
test4    0/1     Completed           0          6m6s
test5    0/1     Completed           0          6m
test6    0/1     Completed           0          5m56s
test7    0/1     Completed           0          5m50s
test8    0/1     Completed           0          5m45s
test9    0/1     Completed           0          5m41s
test12   0/1     Completed           0          6s
test12   0/1     Completed           0          7s
test2    0/1     Terminating         0          6m27s
test2    0/1     Terminating         0          6m27s

This confirms that exceeding the threshold does not delete all terminated Pods; they are deleted only down to the threshold.
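
A quick way to count terminated Pods during this kind of check (Succeeded Pods in this case):

k get pods -A --field-selector=status.phase=Succeeded --no-headers | wc -l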

Trying the IBM Coredump Handler

A memo on trying the IBM Coredump Handler.

Creating the cluster

Create an EKS cluster. Ideally I want to try this on Ubuntu, but start with AL2 first.

CLUSTER_NAME="coredump"
MY_ARN=$(aws sts get-caller-identity --output text --query Arn)
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
cat << EOF > cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${CLUSTER_NAME}
  region: ap-northeast-1
  version: "1.29"
vpc:
  cidr: "10.0.0.0/16"

availabilityZones:
  - ap-northeast-1a
  - ap-northeast-1c

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]

iam:
  withOIDC: true

accessConfig:
  bootstrapClusterCreatorAdminPermissions: false
  authenticationMode: API
  accessEntries:
    - principalARN: arn:aws:iam::${AWS_ACCOUNT_ID}:role/Admin
      accessPolicies:
        - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy
          accessScope:
            type: cluster
EOF
eksctl create cluster -f cluster.yaml

Create a node group.

cat << EOF > m1.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${CLUSTER_NAME}
  region: ap-northeast-1

managedNodeGroups:
  - name: m1
    instanceType: m6i.large
    minSize: 1
    maxSize: 10
    desiredCapacity: 2
    privateNetworking: true
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
EOF
eksctl create nodegroup -f m1.yaml

Check the nodes.

$ k get node
NAME                                            STATUS   ROLES    AGE   VERSION
ip-10-0-85-36.ap-northeast-1.compute.internal   Ready    <none>   18m   v1.29.8-eks-a737599
ip-10-0-97-32.ap-northeast-1.compute.internal   Ready    <none>   18m   v1.29.8-eks-a737599

Creating the bucket and IAM role

Looking at the installation instructions, IRSA is supported.

First, create an S3 bucket.

aws s3 mb s3://coredump-${AWS_ACCOUNT_ID}

Create an IAM policy that grants access to this bucket.

cat << EOF > coredump-handler-policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::coredump-${AWS_ACCOUNT_ID}",
                "arn:aws:s3:::coredump-${AWS_ACCOUNT_ID}/*"
            ]
        }
    ]
}
EOF
aws iam create-policy \
  --policy-name coredump-handler-policy \
  --policy-document file://coredump-handler-policy.json

Create the Service Account and IAM role.

NAMESPACE="observe"
SA_NAME="core-dump-admin"
kubectl create ns ${NAMESPACE}
eksctl create iamserviceaccount \
  --cluster ${CLUSTER_NAME} --name ${SA_NAME} --namespace ${NAMESPACE} \
  --attach-policy-arn arn:aws:iam::${AWS_ACCOUNT_ID}:policy/coredump-handler-policy \
  --approve
2024-11-05 16:40:27 [ℹ]  1 iamserviceaccount (observe/core-dump-admin) was included (based on the include/exclude rules)
2024-11-05 16:40:27 [!]  serviceaccounts that exist in Kubernetes will be excluded, use --override-existing-serviceaccounts to override
2024-11-05 16:40:27 [ℹ]  1 task: { 
    2 sequential sub-tasks: { 
        create IAM role for serviceaccount "observe/core-dump-admin",
        create serviceaccount "observe/core-dump-admin",
    } }2024-11-05 16:40:27 [ℹ]  building iamserviceaccount stack "eksctl-coredump-addon-iamserviceaccount-observe-core-dump-admin"
2024-11-05 16:40:27 [ℹ]  deploying stack "eksctl-coredump-addon-iamserviceaccount-observe-core-dump-admin"
2024-11-05 16:40:27 [ℹ]  waiting for CloudFormation stack "eksctl-coredump-addon-iamserviceaccount-observe-core-dump-admin"
2024-11-05 16:40:57 [ℹ]  waiting for CloudFormation stack "eksctl-coredump-addon-iamserviceaccount-observe-core-dump-admin"
2024-11-05 16:40:57 [ℹ]  created serviceaccount "observe/core-dump-admin"

Store the role ARN in a variable.

STACK_NAME="eksctl-${CLUSTER_NAME}-addon-iamserviceaccount-${NAMESPACE}-${SA_NAME}"
ROLE_NAME=$(aws cloudformation describe-stack-resources \
    --stack-name ${STACK_NAME} \
    --query "StackResources[?ResourceType=='AWS::IAM::Role'].PhysicalResourceId" \
    --output text)
echo ${ROLE_NAME}
ROLE_ARN=$(aws iam get-role \
    --role-name ${ROLE_NAME} \
    --query "Role.Arn" \
    --output text)
echo ${ROLE_ARN}
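
As an aside, eksctl can also print the role for the service account directly, which may be simpler than going through the CloudFormation stack (a sketch; not verified here):

eksctl get iamserviceaccount --cluster ${CLUSTER_NAME} --namespace ${NAMESPACE} --name ${SA_NAME}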

Installing the Coredump Handler

Create values.yaml.

The full values.yaml is here, and the sample for using IRSA on EKS is here.

cat << EOF > values.yaml
# AWS requires a crio client to be copied to the server
daemonset:
  includeCrioExe: true
  vendor: rhel7 # EKS EC2 images have an old libc=2.26
  s3BucketName: coredump-${AWS_ACCOUNT_ID}
  s3Region: ap-northeast-1

serviceAccount:
  create: false
  annotations:
    # See https://docs.aws.amazon.com/eks/latest/userguide/specify-service-account-role.html
    eks.amazonaws.com/role-arn: ${ROLE_ARN}
EOF

Add the chart repository.

helm repo add core-dump-handler https://ibm.github.io/core-dump-handler/

Check the chart.

$ helm search repo core-dump-handler
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION
core-dump-handler/core-dump-handler     v8.10.0         v8.10.0         A Helm chart for deploying a core dump manageme...

Install it.

helm -n ${NAMESPACE} install core-dump-handler core-dump-handler/core-dump-handler --values values.yaml
NAME: core-dump-handler
LAST DEPLOYED: Tue Nov  5 17:25:39 2024
NAMESPACE: observe
STATUS: deployed
REVISION: 1
NOTES:
Verifying the chart

Run a crashing container - this container writes a value to a null pointer

1. kubectl run -i -t segfaulter --image=quay.io/icdh/segfaulter --restart=Never

2. Validate the core dump has been uploaded to your object store instance.

Confirm that it is running.

$ k -n observe get pods
NAME                      READY   STATUS    RESTARTS   AGE
core-dump-handler-7hbmh   1/1     Running   0          14s
core-dump-handler-zc9lj   1/1     Running   0          14s

Check the logs as well.

$ k -n observe logs core-dump-handler-7hbmh 
[2024-11-05T08:25:41Z INFO  core_dump_agent] no .env file found 
     That's ok if running in kubernetes
[2024-11-05T08:25:41Z INFO  core_dump_agent] Setting host location to: /var/mnt/core-dump-handler
[2024-11-05T08:25:41Z INFO  core_dump_agent] Current Directory for setup is /app
[2024-11-05T08:25:41Z INFO  core_dump_agent] Copying the crictl from ./crictl to /var/mnt/core-dump-handler/crictl
[2024-11-05T08:25:41Z INFO  core_dump_agent] Copying the composer from ./vendor/rhel7/cdc to /var/mnt/core-dump-handler/cdc
[2024-11-05T08:25:41Z INFO  core_dump_agent] Starting sysctl for kernel.core_pattern /var/mnt/core-dump-handler/core_pattern.bak with |/var/mnt/core-dump-handler/cdc -c=%c -e=%e -p=%p -s=%s -t=%t -d=/var/mnt/core-dump-handler/cores -h=%h -E=%E
[2024-11-05T08:25:41Z INFO  core_dump_agent] Getting sysctl for kernel.core_pattern
[2024-11-05T08:25:41Z INFO  core_dump_agent] Created Backup of /var/mnt/core-dump-handler/core_pattern.bak
kernel.core_pattern = |/var/mnt/core-dump-handler/cdc -c=%c -e=%e -p=%p -s=%s -t=%t -d=/var/mnt/core-dump-handler/cores -h=%h -E=%E
[2024-11-05T08:25:41Z INFO  core_dump_agent] Starting sysctl for kernel.core_pipe_limit /var/mnt/core-dump-handler/core_pipe_limit.bak with 128
[2024-11-05T08:25:41Z INFO  core_dump_agent] Getting sysctl for kernel.core_pipe_limit
[2024-11-05T08:25:41Z INFO  core_dump_agent] Created Backup of /var/mnt/core-dump-handler/core_pipe_limit.bak
kernel.core_pipe_limit = 128
[2024-11-05T08:25:41Z INFO  core_dump_agent] Starting sysctl for fs.suid_dumpable /var/mnt/core-dump-handler/suid_dumpable.bak with 2
[2024-11-05T08:25:41Z INFO  core_dump_agent] Getting sysctl for fs.suid_dumpable
[2024-11-05T08:25:41Z INFO  core_dump_agent] Created Backup of /var/mnt/core-dump-handler/suid_dumpable.bak
fs.suid_dumpable = 2
[2024-11-05T08:25:41Z INFO  core_dump_agent] Creating /var/mnt/core-dump-handler/.env file with LOG_LEVEL=Warn
[2024-11-05T08:25:41Z INFO  core_dump_agent] Writing composer .env 
    LOG_LEVEL=Warn
    IGNORE_CRIO=false
    CRIO_IMAGE_CMD=img
    USE_CRIO_CONF=false
    FILENAME_TEMPLATE={uuid}-dump-{timestamp}-{hostname}-{exe_name}-{pid}-{signal}
    LOG_LENGTH=500
    POD_SELECTOR_LABEL=
    TIMEOUT=600
    COMPRESSION=true
    CORE_EVENTS=false
    EVENT_DIRECTORY=/var/mnt/core-dump-handler/events
    
[2024-11-05T08:25:41Z INFO  core_dump_agent] Executing Agent with location : /var/mnt/core-dump-handler/cores
[2024-11-05T08:25:43Z INFO  core_dump_agent] Dir Content []
[2024-11-05T08:25:43Z INFO  core_dump_agent] INotify Starting...
[2024-11-05T08:25:43Z INFO  core_dump_agent] INotify Initialised...
[2024-11-05T08:25:43Z INFO  core_dump_agent] INotify watching : /var/mnt/core-dump-handler/cores

Testing

Test using the command shown in the chart installation output.

$ kubectl run -i -t segfaulter --image=quay.io/icdh/segfaulter --restart=Never
Logging a message 1 from segfaulter
Logging a message 2 from segfaulter
Logging a message 3 from segfaulter
Logging a message 4 from segfaulter
Logging a message 5 from segfaulter
Logging a message 6 from segfaulter
Logging a message 7 from segfaulter
Logging a message 8 from segfaulter
Logging a message 9 from segfaulter
Logging a message 10 from segfaulter

...

Logging a message 991 from segfaulter
Logging a message 992 from segfaulter
Logging a message 993 from segfaulter
Logging a message 994 from segfaulter
Logging a message 995 from segfaulter
Logging a message 996 from segfaulter
Logging a message 997 from segfaulter
Logging a message 998 from segfaulter
Logging a message 999 from segfaulter
pod default/segfaulter terminated (Error)
$

Check the errored Pod.

$ k get pods -owide
NAME         READY   STATUS   RESTARTS   AGE   IP            NODE                                            NOMINATED NODE   READINESS GATES
segfaulter   0/1     Error    0          14s   10.0.83.141   ip-10-0-85-36.ap-northeast-1.compute.internal   <none>           <none>

Check the Coredump Handler logs. The dump has been transferred.

$ k -n observe logs core-dump-handler-7hbmh
[2024-11-05T08:25:41Z INFO  core_dump_agent] no .env file found 
     That's ok if running in kubernetes
[2024-11-05T08:25:41Z INFO  core_dump_agent] Setting host location to: /var/mnt/core-dump-handler
[2024-11-05T08:25:41Z INFO  core_dump_agent] Current Directory for setup is /app
[2024-11-05T08:25:41Z INFO  core_dump_agent] Copying the crictl from ./crictl to /var/mnt/core-dump-handler/crictl
[2024-11-05T08:25:41Z INFO  core_dump_agent] Copying the composer from ./vendor/rhel7/cdc to /var/mnt/core-dump-handler/cdc
[2024-11-05T08:25:41Z INFO  core_dump_agent] Starting sysctl for kernel.core_pattern /var/mnt/core-dump-handler/core_pattern.bak with |/var/mnt/core-dump-handler/cdc -c=%c -e=%e -p=%p -s=%s -t=%t -d=/var/mnt/core-dump-handler/cores -h=%h -E=%E
[2024-11-05T08:25:41Z INFO  core_dump_agent] Getting sysctl for kernel.core_pattern
[2024-11-05T08:25:41Z INFO  core_dump_agent] Created Backup of /var/mnt/core-dump-handler/core_pattern.bak
kernel.core_pattern = |/var/mnt/core-dump-handler/cdc -c=%c -e=%e -p=%p -s=%s -t=%t -d=/var/mnt/core-dump-handler/cores -h=%h -E=%E
[2024-11-05T08:25:41Z INFO  core_dump_agent] Starting sysctl for kernel.core_pipe_limit /var/mnt/core-dump-handler/core_pipe_limit.bak with 128
[2024-11-05T08:25:41Z INFO  core_dump_agent] Getting sysctl for kernel.core_pipe_limit
[2024-11-05T08:25:41Z INFO  core_dump_agent] Created Backup of /var/mnt/core-dump-handler/core_pipe_limit.bak
kernel.core_pipe_limit = 128
[2024-11-05T08:25:41Z INFO  core_dump_agent] Starting sysctl for fs.suid_dumpable /var/mnt/core-dump-handler/suid_dumpable.bak with 2
[2024-11-05T08:25:41Z INFO  core_dump_agent] Getting sysctl for fs.suid_dumpable
[2024-11-05T08:25:41Z INFO  core_dump_agent] Created Backup of /var/mnt/core-dump-handler/suid_dumpable.bak
fs.suid_dumpable = 2
[2024-11-05T08:25:41Z INFO  core_dump_agent] Creating /var/mnt/core-dump-handler/.env file with LOG_LEVEL=Warn
[2024-11-05T08:25:41Z INFO  core_dump_agent] Writing composer .env 
    LOG_LEVEL=Warn
    IGNORE_CRIO=false
    CRIO_IMAGE_CMD=img
    USE_CRIO_CONF=false
    FILENAME_TEMPLATE={uuid}-dump-{timestamp}-{hostname}-{exe_name}-{pid}-{signal}
    LOG_LENGTH=500
    POD_SELECTOR_LABEL=
    TIMEOUT=600
    COMPRESSION=true
    CORE_EVENTS=false
    EVENT_DIRECTORY=/var/mnt/core-dump-handler/events
    
[2024-11-05T08:25:41Z INFO  core_dump_agent] Executing Agent with location : /var/mnt/core-dump-handler/cores
[2024-11-05T08:25:43Z INFO  core_dump_agent] Dir Content []
[2024-11-05T08:25:43Z INFO  core_dump_agent] INotify Starting...
[2024-11-05T08:25:43Z INFO  core_dump_agent] INotify Initialised...
[2024-11-05T08:25:43Z INFO  core_dump_agent] INotify watching : /var/mnt/core-dump-handler/cores
[2024-11-05T08:27:04Z INFO  core_dump_agent] Uploading: /var/mnt/core-dump-handler/cores/f0e41b21-0da3-4275-ad82-e0d1a531ea8b-dump-1730795223-segfaulter-segfaulter-1-4.zip
[2024-11-05T08:27:04Z INFO  core_dump_agent] zip size is 22190
[2024-11-05T08:27:04Z INFO  core_dump_agent] S3 Returned: 200

It has been uploaded to the bucket.

$ aws s3 ls s3://coredump-${AWS_ACCOUNT_ID}
2024-11-05 17:27:05      22190 f0e41b21-0da3-4275-ad82-e0d1a531ea8b-dump-1730795223-segfaulter-segfaulter-1-4.zip
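
To inspect the dump locally, the object can simply be copied down and unzipped (the key is the one listed above):

aws s3 cp s3://coredump-${AWS_ACCOUNT_ID}/f0e41b21-0da3-4275-ad82-e0d1a531ea8b-dump-1730795223-segfaulter-segfaulter-1-4.zip .
unzip f0e41b21-0da3-4275-ad82-e0d1a531ea8b-dump-1730795223-segfaulter-segfaulter-1-4.zip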

Confirmation

To check how core_pattern is configured at this point, start a Pod.

$ k run nginx --image nginx
pod/nginx created
$ k get pods
NAME         READY   STATUS    RESTARTS   AGE
nginx        1/1     Running   0          13s
segfaulter   0/1     Error     0          10m
$ k exec -it nginx -- bash
root@nginx:/# cat /proc/sys/kernel/core_pattern
|/var/mnt/core-dump-handler/cdc -c=%c -e=%e -p=%p -s=%s -t=%t -d=/var/mnt/core-dump-handler/cores -h=%h -E=%E
root@nginx:/# ls -l /var/mnt/core-dump-handler/cdc
ls: cannot access '/var/mnt/core-dump-handler/cdc': No such file or directory
root@nginx:/# exit
exit
command terminated with exit code 2

Checking on the host shows the same core_pattern (though the cdc binary does exist there).

[root@ip-10-0-85-36 ~]# cat /proc/sys/kernel/core_pattern
|/var/mnt/core-dump-handler/cdc -c=%c -e=%e -p=%p -s=%s -t=%t -d=/var/mnt/core-dump-handler/cores -h=%h -E=%E
[root@ip-10-0-85-36 ~]# ls -l /var/mnt/core-dump-handler/cdc
-rwxr-xr-x 1 root root 53999632 Nov  5 08:25 /var/mnt/core-dump-handler/cdc
[root@ip-10-0-85-36 ~]#

/var/mnt/core-dump-handler/cdc is a file that exists on the host; the coredump-handler mounts the hostPath into its Pod via a PV/PVC.

$ k -n observe get ds core-dump-handler -oyaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "1"
    meta.helm.sh/release-name: core-dump-handler
    meta.helm.sh/release-namespace: observe
  creationTimestamp: "2024-11-05T08:25:40Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
  name: core-dump-handler
  namespace: observe
  resourceVersion: "13602"
  uid: cdb7f174-d5ee-4fa2-a4a1-8e19e2f67430
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      name: core-dump-ds
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: core-dump-ds
    spec:
      containers:
      - command:
        - /app/core-dump-agent
        env:
        - name: COMP_FILENAME_TEMPLATE
          value: '{uuid}-dump-{timestamp}-{hostname}-{exe_name}-{pid}-{signal}'
        - name: COMP_LOG_LENGTH
          value: "500"
        - name: COMP_LOG_LEVEL
          value: Warn
        - name: COMP_IGNORE_CRIO
          value: "false"
        - name: COMP_CRIO_IMAGE_CMD
          value: img
        - name: COMP_POD_SELECTOR_LABEL
        - name: COMP_TIMEOUT
          value: "600"
        - name: COMP_COMPRESSION
          value: "true"
        - name: COMP_CORE_EVENTS
          value: "false"
        - name: COMP_CORE_EVENT_DIR
          value: /var/mnt/core-dump-handler/events
        - name: DEPLOY_CRIO_CONFIG
          value: "false"
        - name: CRIO_ENDPOINT
          value: unix:///run/containerd/containerd.sock
        - name: HOST_DIR
          value: /var/mnt/core-dump-handler
        - name: CORE_DIR
          value: /var/mnt/core-dump-handler/cores
        - name: EVENT_DIR
          value: /var/mnt/core-dump-handler/events
        - name: SUID_DUMPABLE
          value: "2"
        - name: DEPLOY_CRIO_EXE
          value: "true"
        - name: S3_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              key: s3AccessKey
              name: s3config
              optional: true
        - name: S3_SECRET
          valueFrom:
            secretKeyRef:
              key: s3Secret
              name: s3config
              optional: true
        - name: S3_BUCKET_NAME
          valueFrom:
            secretKeyRef:
              key: s3BucketName
              name: s3config
        - name: S3_REGION
          valueFrom:
            secretKeyRef:
              key: s3Region
              name: s3config
        - name: VENDOR
          value: rhel7
        - name: INTERVAL
        - name: SCHEDULE
        - name: USE_INOTIFY
          value: "true"
        image: quay.io/icdh/core-dump-handler:v8.10.0
        imagePullPolicy: Always
        lifecycle:
          preStop:
            exec:
              command:
              - /app/core-dump-agent
              - remove
        name: coredump-container
        resources:
          limits:
            cpu: 500m
            memory: 128Mi
          requests:
            cpu: 250m
            memory: 64Mi
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/mnt/core-dump-handler
          mountPropagation: Bidirectional
          name: host-volume
        - mountPath: /var/mnt/core-dump-handler/cores
          mountPropagation: Bidirectional
          name: core-volume
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: core-dump-admin
      serviceAccountName: core-dump-admin
      terminationGracePeriodSeconds: 30
      volumes:
      - name: host-volume
        persistentVolumeClaim:
          claimName: host-storage-pvc
      - name: core-volume
        persistentVolumeClaim:
          claimName: core-storage-pvc
  updateStrategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 2
  desiredNumberScheduled: 2
  numberAvailable: 2
  numberMisscheduled: 0
  numberReady: 2
  observedGeneration: 1
  updatedNumberScheduled: 2
$ k get pvc -A
NAMESPACE   NAME               STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
observe     core-storage-pvc   Bound    core-volume   10Gi       RWO            hostclass      <unset>                 21m
observe     host-storage-pvc   Bound    host-volume   1Gi        RWO            hostclass      <unset>                 21m
$ k get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
core-volume   10Gi       RWO            Retain           Bound    observe/core-storage-pvc   hostclass      <unset>                          21m
host-volume   1Gi        RWO            Retain           Bound    observe/host-storage-pvc   hostclass      <unset>                          21m
$ k get pv host-volume -oyaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    meta.helm.sh/release-name: core-dump-handler
    meta.helm.sh/release-namespace: observe
  creationTimestamp: "2024-11-05T08:25:40Z"
  finalizers:
  - kubernetes.io/pv-protection
  labels:
    app.kubernetes.io/managed-by: Helm
    type: local
  name: host-volume
  resourceVersion: "13570"
  uid: 50e6f774-fb3f-48aa-bb7f-b41c5e140806
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: host-storage-pvc
    namespace: observe
    resourceVersion: "13562"
    uid: 87d4cc9a-5bd5-418a-a06a-a23f6b30ee94
  hostPath:
    path: /var/mnt/core-dump-handler
    type: ""
  persistentVolumeReclaimPolicy: Retain
  storageClassName: hostclass
  volumeMode: Filesystem
status:
  lastPhaseTransitionTime: "2024-11-05T08:25:40Z"
  phase: Bound
$ k get pv core-volume -oyaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    meta.helm.sh/release-name: core-dump-handler
    meta.helm.sh/release-namespace: observe
  creationTimestamp: "2024-11-05T08:25:40Z"
  finalizers:
  - kubernetes.io/pv-protection
  labels:
    app.kubernetes.io/managed-by: Helm
    type: local
  name: core-volume
  resourceVersion: "13564"
  uid: 81d1efbd-10e2-4198-a8dc-bb08c109f2b0
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: core-storage-pvc
    namespace: observe
    resourceVersion: "13561"
    uid: 10fd5435-77fe-4210-a4ce-fc87d8894699
  hostPath:
    path: /var/mnt/core-dump-handler/cores
    type: ""
  persistentVolumeReclaimPolicy: Retain
  storageClassName: hostclass
  volumeMode: Filesystem
status:
  lastPhaseTransitionTime: "2024-11-05T08:25:40Z"
  phase: Bound

Adding an Ubuntu node group

Create a node group. I don't remember Ubuntu2204 being available before, but it can be used now.

cat << EOF > ubuntu.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${CLUSTER_NAME}
  region: ap-northeast-1

managedNodeGroups:
  - name: ubuntu
    amiFamily: Ubuntu2204
    instanceType: m6i.large
    minSize: 1
    maxSize: 10
    desiredCapacity: 2
    privateNetworking: true
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
EOF
eksctl create nodegroup -f ubuntu.yaml

Confirm that the Ubuntu nodes have been added.

$ k get nodes -owide
NAME                                              STATUS   ROLES    AGE   VERSION               INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION                  CONTAINER-RUNTIME
ip-10-0-115-197.ap-northeast-1.compute.internal   Ready    <none>   77s   v1.29.6               10.0.115.197   <none>        Ubuntu 22.04.5 LTS   6.8.0-1015-aws                  containerd://1.7.12
ip-10-0-77-210.ap-northeast-1.compute.internal    Ready    <none>   78s   v1.29.6               10.0.77.210    <none>        Ubuntu 22.04.5 LTS   6.8.0-1015-aws                  containerd://1.7.12
ip-10-0-85-36.ap-northeast-1.compute.internal     Ready    <none>   96m   v1.29.8-eks-a737599   10.0.85.36     <none>        Amazon Linux 2       5.10.226-214.880.amzn2.x86_64   containerd://1.7.22
ip-10-0-97-32.ap-northeast-1.compute.internal     Ready    <none>   96m   v1.29.8-eks-a737599   10.0.97.32     <none>        Amazon Linux 2       5.10.226-214.880.amzn2.x86_64   containerd://1.7.22

In this case, I wonder whether the following settings specified in values.yaml are still appropriate.

daemonset:
  includeCrioExe: true
  vendor: rhel7 # EKS EC2 images have an old libc=2.26

Look at the coredump-handler logs on an Ubuntu node.

$ k -n observe logs core-dump-handler-8hrbg
[2024-11-05T08:57:47Z INFO  core_dump_agent] no .env file found 
     That's ok if running in kubernetes
[2024-11-05T08:57:47Z INFO  core_dump_agent] Setting host location to: /var/mnt/core-dump-handler
[2024-11-05T08:57:47Z INFO  core_dump_agent] Current Directory for setup is /app
[2024-11-05T08:57:47Z INFO  core_dump_agent] Copying the crictl from ./crictl to /var/mnt/core-dump-handler/crictl
[2024-11-05T08:57:48Z INFO  core_dump_agent] Copying the composer from ./vendor/rhel7/cdc to /var/mnt/core-dump-handler/cdc
[2024-11-05T08:57:51Z INFO  core_dump_agent] Starting sysctl for kernel.core_pattern /var/mnt/core-dump-handler/core_pattern.bak with |/var/mnt/core-dump-handler/cdc -c=%c -e=%e -p=%p -s=%s -t=%t -d=/var/mnt/core-dump-handler/cores -h=%h -E=%E
[2024-11-05T08:57:51Z INFO  core_dump_agent] Getting sysctl for kernel.core_pattern
[2024-11-05T08:57:51Z INFO  core_dump_agent] Created Backup of /var/mnt/core-dump-handler/core_pattern.bak
kernel.core_pattern = |/var/mnt/core-dump-handler/cdc -c=%c -e=%e -p=%p -s=%s -t=%t -d=/var/mnt/core-dump-handler/cores -h=%h -E=%E
[2024-11-05T08:57:51Z INFO  core_dump_agent] Starting sysctl for kernel.core_pipe_limit /var/mnt/core-dump-handler/core_pipe_limit.bak with 128
[2024-11-05T08:57:51Z INFO  core_dump_agent] Getting sysctl for kernel.core_pipe_limit
[2024-11-05T08:57:51Z INFO  core_dump_agent] Created Backup of /var/mnt/core-dump-handler/core_pipe_limit.bak
kernel.core_pipe_limit = 128
[2024-11-05T08:57:51Z INFO  core_dump_agent] Starting sysctl for fs.suid_dumpable /var/mnt/core-dump-handler/suid_dumpable.bak with 2
[2024-11-05T08:57:51Z INFO  core_dump_agent] Getting sysctl for fs.suid_dumpable
[2024-11-05T08:57:51Z INFO  core_dump_agent] fs.suid_dumpable with value 2 is already applied
[2024-11-05T08:57:51Z INFO  core_dump_agent] Creating /var/mnt/core-dump-handler/.env file with LOG_LEVEL=Warn
[2024-11-05T08:57:51Z INFO  core_dump_agent] Writing composer .env 
    LOG_LEVEL=Warn
    IGNORE_CRIO=false
    CRIO_IMAGE_CMD=img
    USE_CRIO_CONF=false
    FILENAME_TEMPLATE={uuid}-dump-{timestamp}-{hostname}-{exe_name}-{pid}-{signal}
    LOG_LENGTH=500
    POD_SELECTOR_LABEL=
    TIMEOUT=600
    COMPRESSION=true
    CORE_EVENTS=false
    EVENT_DIRECTORY=/var/mnt/core-dump-handler/events
    
[2024-11-05T08:57:51Z INFO  core_dump_agent] Executing Agent with location : /var/mnt/core-dump-handler/cores
[2024-11-05T08:57:52Z INFO  core_dump_agent] Dir Content []
[2024-11-05T08:57:52Z INFO  core_dump_agent] INotify Starting...
[2024-11-05T08:57:52Z INFO  core_dump_agent] INotify Initialised...
[2024-11-05T08:57:52Z INFO  core_dump_agent] INotify watching : /var/mnt/core-dump-handler/cores

Run a test Pod on this node.

$ k delete po --all
pod "nginx" deleted
pod "segfaulter" deleted
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: segfaulter
  name: segfaulter
spec:
  nodeName: ip-10-0-115-197.ap-northeast-1.compute.internal
  containers:
  - image: quay.io/icdh/segfaulter
    name: segfaulter
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
EOF

Confirm that it goes into Error.

$ k get pods
NAME         READY   STATUS   RESTARTS   AGE
segfaulter   0/1     Error    0          24s

Check the coredump-handler logs.

$ k -n observe logs core-dump-handler-8hrbg
[2024-11-05T08:57:47Z INFO  core_dump_agent] no .env file found 
     That's ok if running in kubernetes
[2024-11-05T08:57:47Z INFO  core_dump_agent] Setting host location to: /var/mnt/core-dump-handler
[2024-11-05T08:57:47Z INFO  core_dump_agent] Current Directory for setup is /app
[2024-11-05T08:57:47Z INFO  core_dump_agent] Copying the crictl from ./crictl to /var/mnt/core-dump-handler/crictl
[2024-11-05T08:57:48Z INFO  core_dump_agent] Copying the composer from ./vendor/rhel7/cdc to /var/mnt/core-dump-handler/cdc
[2024-11-05T08:57:51Z INFO  core_dump_agent] Starting sysctl for kernel.core_pattern /var/mnt/core-dump-handler/core_pattern.bak with |/var/mnt/core-dump-handler/cdc -c=%c -e=%e -p=%p -s=%s -t=%t -d=/var/mnt/core-dump-handler/cores -h=%h -E=%E
[2024-11-05T08:57:51Z INFO  core_dump_agent] Getting sysctl for kernel.core_pattern
[2024-11-05T08:57:51Z INFO  core_dump_agent] Created Backup of /var/mnt/core-dump-handler/core_pattern.bak
kernel.core_pattern = |/var/mnt/core-dump-handler/cdc -c=%c -e=%e -p=%p -s=%s -t=%t -d=/var/mnt/core-dump-handler/cores -h=%h -E=%E
[2024-11-05T08:57:51Z INFO  core_dump_agent] Starting sysctl for kernel.core_pipe_limit /var/mnt/core-dump-handler/core_pipe_limit.bak with 128
[2024-11-05T08:57:51Z INFO  core_dump_agent] Getting sysctl for kernel.core_pipe_limit
[2024-11-05T08:57:51Z INFO  core_dump_agent] Created Backup of /var/mnt/core-dump-handler/core_pipe_limit.bak
kernel.core_pipe_limit = 128
[2024-11-05T08:57:51Z INFO  core_dump_agent] Starting sysctl for fs.suid_dumpable /var/mnt/core-dump-handler/suid_dumpable.bak with 2
[2024-11-05T08:57:51Z INFO  core_dump_agent] Getting sysctl for fs.suid_dumpable
[2024-11-05T08:57:51Z INFO  core_dump_agent] fs.suid_dumpable with value 2 is already applied
[2024-11-05T08:57:51Z INFO  core_dump_agent] Creating /var/mnt/core-dump-handler/.env file with LOG_LEVEL=Warn
[2024-11-05T08:57:51Z INFO  core_dump_agent] Writing composer .env 
    LOG_LEVEL=Warn
    IGNORE_CRIO=false
    CRIO_IMAGE_CMD=img
    USE_CRIO_CONF=false
    FILENAME_TEMPLATE={uuid}-dump-{timestamp}-{hostname}-{exe_name}-{pid}-{signal}
    LOG_LENGTH=500
    POD_SELECTOR_LABEL=
    TIMEOUT=600
    COMPRESSION=true
    CORE_EVENTS=false
    EVENT_DIRECTORY=/var/mnt/core-dump-handler/events
    
[2024-11-05T08:57:51Z INFO  core_dump_agent] Executing Agent with location : /var/mnt/core-dump-handler/cores
[2024-11-05T08:57:52Z INFO  core_dump_agent] Dir Content []
[2024-11-05T08:57:52Z INFO  core_dump_agent] INotify Starting...
[2024-11-05T08:57:52Z INFO  core_dump_agent] INotify Initialised...
[2024-11-05T08:57:52Z INFO  core_dump_agent] INotify watching : /var/mnt/core-dump-handler/cores
[2024-11-05T09:03:46Z INFO  core_dump_agent] Uploading: /var/mnt/core-dump-handler/cores/f9919507-fb92-444a-81a6-fa58fe35e6c2-dump-1730797424-segfaulter-segfaulter-1-4.zip
[2024-11-05T09:03:46Z INFO  core_dump_agent] zip size is 22673
[2024-11-05T09:03:46Z INFO  core_dump_agent] S3 Returned: 200

It works. The dump was uploaded.

$ aws s3 ls s3://coredump-${AWS_ACCOUNT_ID}
2024-11-05 17:27:05      22190 f0e41b21-0da3-4275-ad82-e0d1a531ea8b-dump-1730795223-segfaulter-segfaulter-1-4.zip
2024-11-05 18:03:47      22673 f9919507-fb92-444a-81a6-fa58fe35e6c2-dump-1730797424-segfaulter-segfaulter-1-4.zip

Specifying the AWS profile used for CodeCommit connections

When using CodeCommit via GRC (git-remote-codecommit), specifying the AWS profile name before the repository name as shown below is a bit more convenient, because you no longer need to set the profile (e.g. export AWS_PROFILE=hoge-profile) before running git commands.

$ git remote -v
origin  codecommit::ap-northeast-1://hoge-profile@hoge-repository (fetch)
origin  codecommit::ap-northeast-1://hoge-profile@hoge-repository (push)
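
Setting this up for a new clone or an existing checkout looks like this (the profile and repository names are placeholders, as above):

git clone codecommit::ap-northeast-1://hoge-profile@hoge-repository
git remote set-url origin codecommit::ap-northeast-1://hoge-profile@hoge-repository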

Investigating Istio Envoy proxy memory usage, part 2

A follow-up to the earlier investigation of Istio Envoy proxy memory usage.

In the previous post I was not able to increase memory consumption very much, so this time I used a configuration modeled on the problematic application, which did drive memory usage up significantly. In that application, every Deployment has exactly 1 replica, each Deployment has 1-2 Services, and a single Namespace contains roughly 50 such Deployments.

I also measured the effect of namespace isolation using a Sidecar resource.

Preparation

Scale out to 100 m6i.large (2-core) nodes.
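
One way to get there is to scale an existing node group with eksctl (a sketch; the cluster and node group names are assumptions):

eksctl scale nodegroup --cluster=<cluster-name> --name=m1 --nodes=100 --nodes-max=100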

$ k get nodes | head
NAME                                              STATUS   ROLES    AGE   VERSION
ip-10-0-101-103.ap-northeast-1.compute.internal   Ready    <none>   77m   v1.29.6-eks-1552ad0
ip-10-0-101-58.ap-northeast-1.compute.internal    Ready    <none>   77m   v1.29.6-eks-1552ad0
ip-10-0-102-166.ap-northeast-1.compute.internal   Ready    <none>   77m   v1.29.6-eks-1552ad0
ip-10-0-102-32.ap-northeast-1.compute.internal    Ready    <none>   77m   v1.29.6-eks-1552ad0
ip-10-0-104-145.ap-northeast-1.compute.internal   Ready    <none>   77m   v1.29.6-eks-1552ad0
ip-10-0-104-163.ap-northeast-1.compute.internal   Ready    <none>   77m   v1.29.6-eks-1552ad0
ip-10-0-104-37.ap-northeast-1.compute.internal    Ready    <none>   77m   v1.29.6-eks-1552ad0
ip-10-0-104-43.ap-northeast-1.compute.internal    Ready    <none>   77m   v1.29.6-eks-1552ad0
ip-10-0-106-38.ap-northeast-1.compute.internal    Ready    <none>   20h   v1.29.6-eks-1552ad0
$ k get nodes | wc -l
     101

Prepare two Namespaces in which to place the measurement Pods. In one of them, create a Sidecar resource that allows traffic only to its own Namespace and the istio-system Namespace.

k create ns measure1
k label namespace measure1 istio-injection=enabled
k -n measure1 create deployment test1 --image=public.ecr.aws/docker/library/nginx --replicas=1
k create ns measure2
k label namespace measure2 istio-injection=enabled
k -n measure2 create deployment test1 --image=public.ecr.aws/docker/library/nginx --replicas=1
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1
kind: Sidecar
metadata:
  name: default
  namespace: measure2
spec:
  egress:
  - hosts:
    - "./*"
    - "istio-system/*"
EOF
$ istioctl proxy-status
NAME                                CLUSTER        CDS              LDS              EDS              RDS              ECDS        ISTIOD                     VERSION
test1-6658f86b76-mppvq.measure1     Kubernetes     SYNCED (4s)      SYNCED (4s)      SYNCED (4s)      SYNCED (4s)      IGNORED     istiod-dd95d7bdc-ss9ws     1.23.0
test1-6658f86b76-vqkhm.measure2     Kubernetes     SYNCED (10s)     SYNCED (10s)     SYNCED (10s)     SYNCED (10s)     IGNORED     istiod-dd95d7bdc-jw984     1.23.0

Running the test

The test runs a script like the one below. It creates 10 instances of the application (one Namespace per iteration) and measures how memory usage changes as they are created.

for i in {1..10}
do
  date
  echo "create ns${i}"
  k create ns ns${i}
  k label namespace ns${i} istio-injection=enabled
  for j in {1..50}
  do
    k -n ns${i} create deployment test${j} --image=public.ecr.aws/docker/library/nginx --replicas=1
    k -n ns${i} expose deployment test${j} --port=80 --target-port=80
    k -n ns${i} expose deployment test${j} --port=80 --target-port=80 --cluster-ip=None --name=test${j}-headless
  done
  echo "sleep 30sec"
  sleep 30
  echo "####################"
  echo "number of node"
  k get no -A --no-headers | wc -l
  echo "number of pod"
  k get po -A --no-headers | wc -l
  echo "number of service"
  k get svc -A --no-headers | wc -l
  echo "k top pod"
  k top pod --containers -n measure1 | grep istio-proxy
  k top pod --containers -n measure2 | grep istio-proxy
  echo "number of cluster"
  istioctl proxy-config cluster test1-6658f86b76-mppvq.measure1 | wc -l
  istioctl proxy-config cluster test1-6658f86b76-vqkhm.measure2 | wc -l
  echo "number of listener"
  istioctl proxy-config listener test1-6658f86b76-mppvq.measure1 | wc -l
  istioctl proxy-config listener test1-6658f86b76-vqkhm.measure2 | wc -l
  echo "number of route"
  istioctl proxy-config route test1-6658f86b76-mppvq.measure1 | wc -l
  istioctl proxy-config route test1-6658f86b76-vqkhm.measure2 | wc -l
  echo "number of endpoint"
  istioctl proxy-config endpoint test1-6658f86b76-mppvq.measure1 | wc -l
  istioctl proxy-config endpoint test1-6658f86b76-vqkhm.measure2 | wc -l
  echo "####################"
done

Execution log

2024年 9月19日 木曜日 18時45分32秒 JST
create ns1
namespace/ns1 created
namespace/ns1 labeled
deployment.apps/test1 created
service/test1 exposed
service/test1-headless exposed
deployment.apps/test2 created
service/test2 exposed
service/test2-headless exposed
deployment.apps/test3 created
service/test3 exposed
service/test3-headless exposed
deployment.apps/test4 created
service/test4 exposed
service/test4-headless exposed
deployment.apps/test5 created
service/test5 exposed
service/test5-headless exposed
deployment.apps/test6 created
service/test6 exposed
service/test6-headless exposed
deployment.apps/test7 created
service/test7 exposed
service/test7-headless exposed
deployment.apps/test8 created
service/test8 exposed
service/test8-headless exposed
deployment.apps/test9 created
service/test9 exposed
service/test9-headless exposed
deployment.apps/test10 created
service/test10 exposed
service/test10-headless exposed
deployment.apps/test11 created
service/test11 exposed
service/test11-headless exposed
deployment.apps/test12 created
service/test12 exposed
service/test12-headless exposed
deployment.apps/test13 created
service/test13 exposed
service/test13-headless exposed
deployment.apps/test14 created
service/test14 exposed
service/test14-headless exposed
deployment.apps/test15 created
service/test15 exposed
service/test15-headless exposed
deployment.apps/test16 created
service/test16 exposed
service/test16-headless exposed
deployment.apps/test17 created
service/test17 exposed
service/test17-headless exposed
deployment.apps/test18 created
service/test18 exposed
service/test18-headless exposed
deployment.apps/test19 created
service/test19 exposed
service/test19-headless exposed
deployment.apps/test20 created
service/test20 exposed
service/test20-headless exposed
deployment.apps/test21 created
service/test21 exposed
service/test21-headless exposed
deployment.apps/test22 created
service/test22 exposed
service/test22-headless exposed
deployment.apps/test23 created
service/test23 exposed
service/test23-headless exposed
deployment.apps/test24 created
service/test24 exposed
service/test24-headless exposed
deployment.apps/test25 created
service/test25 exposed
service/test25-headless exposed
deployment.apps/test26 created
service/test26 exposed
service/test26-headless exposed
deployment.apps/test27 created
service/test27 exposed
service/test27-headless exposed
deployment.apps/test28 created
service/test28 exposed
service/test28-headless exposed
deployment.apps/test29 created
service/test29 exposed
service/test29-headless exposed
deployment.apps/test30 created
service/test30 exposed
service/test30-headless exposed
deployment.apps/test31 created
service/test31 exposed
service/test31-headless exposed
deployment.apps/test32 created
service/test32 exposed
service/test32-headless exposed
deployment.apps/test33 created
service/test33 exposed
service/test33-headless exposed
deployment.apps/test34 created
service/test34 exposed
service/test34-headless exposed
deployment.apps/test35 created
service/test35 exposed
service/test35-headless exposed
deployment.apps/test36 created
service/test36 exposed
service/test36-headless exposed
deployment.apps/test37 created
service/test37 exposed
service/test37-headless exposed
deployment.apps/test38 created
service/test38 exposed
service/test38-headless exposed
deployment.apps/test39 created
service/test39 exposed
service/test39-headless exposed
deployment.apps/test40 created
service/test40 exposed
service/test40-headless exposed
deployment.apps/test41 created
service/test41 exposed
service/test41-headless exposed
deployment.apps/test42 created
service/test42 exposed
service/test42-headless exposed
deployment.apps/test43 created
service/test43 exposed
service/test43-headless exposed
deployment.apps/test44 created
service/test44 exposed
service/test44-headless exposed
deployment.apps/test45 created
service/test45 exposed
service/test45-headless exposed
deployment.apps/test46 created
service/test46 exposed
service/test46-headless exposed
deployment.apps/test47 created
service/test47 exposed
service/test47-headless exposed
deployment.apps/test48 created
service/test48 exposed
service/test48-headless exposed
deployment.apps/test49 created
service/test49 exposed
service/test49-headless exposed
deployment.apps/test50 created
service/test50 exposed
service/test50-headless exposed
sleep 30sec
####################
number of node
     100
number of pod
     257
number of service
     104
k top pod
test1-6658f86b76-mppvq   istio-proxy   6m           42Mi            
test1-6658f86b76-vqkhm   istio-proxy   1m           23Mi            
number of cluster
     116
      12
number of listener
     222
      17
number of route
     108
       7
number of endpoint
      70
      13
####################
2024年 9月19日 木曜日 18時48分27秒 JST
create ns2
namespace/ns2 created
namespace/ns2 labeled
deployment.apps/test1 created
service/test1 exposed
service/test1-headless exposed
(omitted)
deployment.apps/test50 created
service/test50 exposed
service/test50-headless exposed
sleep 30sec
####################
number of node
     100
number of pod
     310
number of service
     204
k top pod
test1-6658f86b76-mppvq   istio-proxy   2m           56Mi            
test1-6658f86b76-vqkhm   istio-proxy   2m           24Mi            
number of cluster
     216
      12
number of listener
     422
      17
number of route
     208
       7
number of endpoint
     132
      25
####################
2024年 9月19日 木曜日 18時51分22秒 JST
create ns3
namespace/ns3 created
namespace/ns3 labeled
deployment.apps/test1 created
service/test1 exposed
service/test1-headless exposed
(omitted)
deployment.apps/test50 created
service/test50 exposed
service/test50-headless exposed
sleep 30sec
####################
number of node
     100
number of pod
     360
number of service
     304
k top pod
test1-6658f86b76-mppvq   istio-proxy   29m          73Mi            
test1-6658f86b76-vqkhm   istio-proxy   1m           23Mi            
number of cluster
     316
      12
number of listener
     622
      17
number of route
     308
       7
number of endpoint
     182
      25
####################
2024年 9月19日 木曜日 18時54分19秒 JST
create ns4
namespace/ns4 created
namespace/ns4 labeled
deployment.apps/test1 created
service/test1 exposed
service/test1-headless exposed
(omitted)
deployment.apps/test50 created
service/test50 exposed
service/test50-headless exposed
sleep 30sec
####################
number of node
     100
number of pod
     410
number of service
     404
k top pod
test1-6658f86b76-mppvq   istio-proxy   22m          89Mi            
test1-6658f86b76-vqkhm   istio-proxy   1m           23Mi            
number of cluster
     416
      12
number of listener
     822
      17
number of route
     408
       7
number of endpoint
     232
      25
####################
2024年 9月19日 木曜日 18時57分18秒 JST
create ns5
namespace/ns5 created
namespace/ns5 labeled
deployment.apps/test1 created
service/test1 exposed
service/test1-headless exposed
(omitted)
deployment.apps/test50 created
service/test50 exposed
service/test50-headless exposed
sleep 30sec
####################
number of node
     100
number of pod
     460
number of service
     504
k top pod
test1-6658f86b76-mppvq   istio-proxy   3m           105Mi           
test1-6658f86b76-vqkhm   istio-proxy   1m           23Mi            
number of cluster
     516
      12
number of listener
    1022
      17
number of route
     508
       7
number of endpoint
     282
      25
####################
2024年 9月19日 木曜日 19時00分19秒 JST
create ns6
namespace/ns6 created
namespace/ns6 labeled
deployment.apps/test1 created
service/test1 exposed
service/test1-headless exposed
(omitted)
deployment.apps/test50 created
service/test50 exposed
service/test50-headless exposed
sleep 30sec
####################
number of node
     100
number of pod
     510
number of service
     604
k top pod
test1-6658f86b76-mppvq   istio-proxy   3m           123Mi           
test1-6658f86b76-vqkhm   istio-proxy   1m           23Mi            
number of cluster
     616
      12
number of listener
    1222
      17
number of route
     608
       7
number of endpoint
     332
      25
####################
2024年 9月19日 木曜日 19時03分17秒 JST
create ns7
namespace/ns7 created
namespace/ns7 labeled
deployment.apps/test1 created
service/test1 exposed
service/test1-headless exposed
(omitted)
deployment.apps/test50 created
service/test50 exposed
service/test50-headless exposed
sleep 30sec
####################
number of node
     100
number of pod
     560
number of service
     704
k top pod
test1-6658f86b76-mppvq   istio-proxy   9m           136Mi           
test1-6658f86b76-vqkhm   istio-proxy   1m           23Mi            
number of cluster
     716
      12
number of listener
    1422
      17
number of route
     708
       7
number of endpoint
     382
      25
####################
2024年 9月19日 木曜日 19時06分17秒 JST
create ns8
namespace/ns8 created
namespace/ns8 labeled
deployment.apps/test1 created
service/test1 exposed
service/test1-headless exposed
(omitted)
deployment.apps/test50 created
service/test50 exposed
service/test50-headless exposed
sleep 30sec
####################
number of node
     100
number of pod
     610
number of service
     804
k top pod
test1-6658f86b76-mppvq   istio-proxy   19m          151Mi           
test1-6658f86b76-vqkhm   istio-proxy   2m           23Mi            
number of cluster
     816
      12
number of listener
    1622
      17
number of route
     808
       7
number of endpoint
     432
      25
####################
2024年 9月19日 木曜日 19時09分22秒 JST
create ns9
namespace/ns9 created
namespace/ns9 labeled
deployment.apps/test1 created
service/test1 exposed
service/test1-headless exposed
(omitted)
deployment.apps/test50 created
service/test50 exposed
service/test50-headless exposed
sleep 30sec
####################
number of node
     100
number of pod
     660
number of service
     904
k top pod
test1-6658f86b76-mppvq   istio-proxy   3m           169Mi           
test1-6658f86b76-vqkhm   istio-proxy   1m           23Mi            
number of cluster
     916
      12
number of listener
    1822
      17
number of route
     908
       7
number of endpoint
     482
      25
####################
2024年 9月19日 木曜日 19時12分27秒 JST
create ns10
namespace/ns10 created
namespace/ns10 labeled
deployment.apps/test1 created
service/test1 exposed
service/test1-headless exposed
(omitted)
deployment.apps/test50 created
service/test50 exposed
service/test50-headless exposed
sleep 30sec
####################
number of node
     100
number of pod
     710
number of service
    1004
k top pod
test1-6658f86b76-mppvq   istio-proxy   12m          179Mi           
test1-6658f86b76-vqkhm   istio-proxy   2m           23Mi            
number of cluster
    1016
      12
number of listener
    2022
      17
number of route
    1008
       7
number of endpoint
     532
      25
####################

Graph

Memory usage grows linearly with the number of application instances. It can also be seen that when a Sidecar resource is in place, the proxy is unaffected by the other Namespaces and its usage does not increase at all.
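
As a rough check of the linearity, the per-Namespace deltas in the log above come out to roughly 10-18 MiB per additional 100 Services. The pairs can also be pulled out of the captured output with a few lines of awk; this is only a sketch, assuming the script output was saved to scale-test.log (a hypothetical filename), and it picks the pod whose usage grows across the run (test1-6658f86b76-mppvq).

# Sketch: print "<number of services> <istio-proxy memory in MiB>" for each
# measurement block, for the pod whose memory grows across the run.
awk '
  /^number of service/ { getline; svc = $1 }
  /test1-6658f86b76-mppvq/ { mem = $4; sub(/Mi$/, "", mem); print svc, mem }
' scale-test.log

Feeding these pairs into gnuplot or a spreadsheet reproduces the graph.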

Investigating the memory usage of Istio's Envoy proxy

Investigate the memory usage of Istio's Envoy proxy.

Creating the cluster

Create the cluster.

CLUSTER_NAME="istio"
MY_ARN=$(aws sts get-caller-identity --output text --query Arn)
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
cat << EOF > cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${CLUSTER_NAME}
  region: ap-northeast-1
  version: "1.29"
vpc:
  cidr: "10.0.0.0/16"

availabilityZones:
  - ap-northeast-1a
  - ap-northeast-1c

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]

iam:
  withOIDC: true

accessConfig:
  bootstrapClusterCreatorAdminPermissions: false
  authenticationMode: API
  accessEntries:
    - principalARN: arn:aws:iam::${AWS_ACCOUNT_ID}:role/Admin
      accessPolicies:
        - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy
          accessScope:
            type: cluster
EOF
eksctl create cluster -f cluster.yaml

Create a node group consisting of a single node with a large instance type (m6i.32xlarge, 128 vCPUs).

cat << EOF > m2.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${CLUSTER_NAME}
  region: ap-northeast-1

managedNodeGroups:
  - name: m2
    instanceType: m6i.32xlarge
    minSize: 1
    maxSize: 20
    desiredCapacity: 1
    privateNetworking: true
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
EOF
eksctl create nodegroup -f m2.yaml

Check the node.

$ k get nodes
NAME                                            STATUS   ROLES    AGE   VERSION
ip-10-0-80-97.ap-northeast-1.compute.internal   Ready    <none>   10m   v1.29.6-eks-1552ad0

Installing metrics-server

Install metrics-server to measure memory usage.

$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
$ k -n kube-system get pods
NAME                              READY   STATUS    RESTARTS   AGE
aws-node-fwtlr                    2/2     Running   0          8m46s
coredns-676bf68468-f56zh          1/1     Running   0          41m
coredns-676bf68468-pmkwl          1/1     Running   0          15m
kube-proxy-99shl                  1/1     Running   0          8m46s
metrics-server-75bf97fcc9-9thcf   1/1     Running   0          33s

Installing Istio

For various reasons, install it with Helm.

helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update

For various reasons, install only the core components: the base chart and istiod.

$ helm install istio-base -n istio-system istio/base --version 1.23.0 --create-namespace
NAME: istio-base
LAST DEPLOYED: Wed Sep 18 20:47:17 2024
NAMESPACE: istio-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Istio base successfully installed!

To learn more about the release, try:
  $ helm status istio-base -n istio-system
  $ helm get all istio-base -n istio-system
$ helm install istiod -n istio-system istio/istiod --version 1.23.0
NAME: istiod
LAST DEPLOYED: Wed Sep 18 20:47:43 2024
NAMESPACE: istio-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
"istiod" successfully installed!

To learn more about the release, try:
  $ helm status istiod -n istio-system
  $ helm get all istiod -n istio-system

Next steps:
  * Deploy a Gateway: https://istio.io/latest/docs/setup/additional-setup/gateway/
  * Try out our tasks to get started on common configurations:
    * https://istio.io/latest/docs/tasks/traffic-management
    * https://istio.io/latest/docs/tasks/security/
    * https://istio.io/latest/docs/tasks/policy-enforcement/
  * Review the list of actively supported releases, CVE publications and our hardening guide:
    * https://istio.io/latest/docs/releases/supported-releases/
    * https://istio.io/latest/news/security/
    * https://istio.io/latest/docs/ops/best-practices/security/

For further documentation see https://istio.io website

Check the Pod.

$ k -n istio-system get po
NAME                     READY   STATUS    RESTARTS   AGE
istiod-dd95d7bdc-hxv47   1/1     Running   0          3m57s

1 Pod

Create a Namespace and add the label that enables automatic sidecar injection.

$ k create ns ns1
namespace/ns1 created
$ k label namespace ns1 istio-injection=enabled
namespace/ns1 labeled

Create an nginx Deployment.

$ k -n ns1 create deployment test --image=nginx
deployment.apps/test created
$ k -n ns1 get po
NAME                    READY   STATUS    RESTARTS   AGE
test-7955cf7657-8zbn8   2/2     Running   0          8s

Check the memory usage in this state: about 25 MiB.

$ k -n ns1 top pod --containers
POD                     NAME          CPU(cores)   MEMORY(bytes)   
test-7955cf7657-8zbn8   istio-proxy   27m          25Mi            
test-7955cf7657-8zbn8   nginx         33m          95Mi 

Check the number of objects.

$ k get no -A --no-headers | wc -l
       1
$ k get po -A --no-headers | wc -l
       7
$ k get svc -A --no-headers | wc -l
       4

Check the istioctl version.

$ istioctl version
client version: 1.23.1
control plane version: 1.23.0
data plane version: 1.23.0 (1 proxies)

Check the state of the mesh.

$ istioctl proxy-status
NAME                          CLUSTER        CDS              LDS              EDS              RDS              ECDS        ISTIOD                     VERSION
test-7955cf7657-8zbn8.ns1     Kubernetes     SYNCED (41s)     SYNCED (41s)     SYNCED (41s)     SYNCED (41s)     IGNORED     istiod-dd95d7bdc-lbk7q     1.23.0

Check the proxy configuration and the number of entries.

$ istioctl proxy-config cluster test-7955cf7657-8zbn8.ns1 | wc -l
      16
$ istioctl proxy-config listener test-7955cf7657-8zbn8.ns1 | wc -l
      22
$ istioctl proxy-config route test-7955cf7657-8zbn8.ns1 | wc -l
       8
$ istioctl proxy-config endpoint test-7955cf7657-8zbn8.ns1 | wc -l
      16
$ istioctl proxy-config cluster test-7955cf7657-8zbn8.ns1
SERVICE FQDN                                     PORT      SUBSET     DIRECTION     TYPE             DESTINATION RULE
BlackHoleCluster                                 -         -          -             STATIC           
InboundPassthroughCluster                        -         -          -             ORIGINAL_DST     
PassthroughCluster                               -         -          -             ORIGINAL_DST     
agent                                            -         -          -             STATIC           
istiod.istio-system.svc.cluster.local            443       -          outbound      EDS              
istiod.istio-system.svc.cluster.local            15010     -          outbound      EDS              
istiod.istio-system.svc.cluster.local            15012     -          outbound      EDS              
istiod.istio-system.svc.cluster.local            15014     -          outbound      EDS              
kube-dns.kube-system.svc.cluster.local           53        -          outbound      EDS              
kube-dns.kube-system.svc.cluster.local           9153      -          outbound      EDS              
kubernetes.default.svc.cluster.local             443       -          outbound      EDS              
metrics-server.kube-system.svc.cluster.local     443       -          outbound      EDS              
prometheus_stats                                 -         -          -             STATIC           
sds-grpc                                         -         -          -             STATIC 
$ istioctl proxy-config listener test-7955cf7657-8zbn8.ns1
ADDRESSES      PORT  MATCH                                                   DESTINATION
172.20.0.10    53    ALL                                                     Cluster: outbound|53||kube-dns.kube-system.svc.cluster.local
172.20.0.1     443   ALL                                                     Cluster: outbound|443||kubernetes.default.svc.cluster.local
172.20.143.212 443   ALL                                                     Cluster: outbound|443||metrics-server.kube-system.svc.cluster.local
172.20.34.15   443   ALL                                                     Cluster: outbound|443||istiod.istio-system.svc.cluster.local
172.20.0.10    9153  Trans: raw_buffer; App: http/1.1,h2c                    Route: kube-dns.kube-system.svc.cluster.local:9153
172.20.0.10    9153  ALL                                                     Cluster: outbound|9153||kube-dns.kube-system.svc.cluster.local
0.0.0.0        15001 ALL                                                     PassthroughCluster
0.0.0.0        15001 Addr: *:15001                                           Non-HTTP/Non-TCP
0.0.0.0        15006 Addr: *:15006                                           Non-HTTP/Non-TCP
0.0.0.0        15006 Trans: tls; App: istio-http/1.0,istio-http/1.1,istio-h2 InboundPassthroughCluster
0.0.0.0        15006 Trans: raw_buffer; App: http/1.1,h2c                    InboundPassthroughCluster
0.0.0.0        15006 Trans: tls; App: TCP TLS                                InboundPassthroughCluster
0.0.0.0        15006 Trans: raw_buffer                                       InboundPassthroughCluster
0.0.0.0        15006 Trans: tls                                              InboundPassthroughCluster
0.0.0.0        15010 Trans: raw_buffer; App: http/1.1,h2c                    Route: 15010
0.0.0.0        15010 ALL                                                     PassthroughCluster
172.20.34.15   15012 ALL                                                     Cluster: outbound|15012||istiod.istio-system.svc.cluster.local
0.0.0.0        15014 Trans: raw_buffer; App: http/1.1,h2c                    Route: 15014
0.0.0.0        15014 ALL                                                     PassthroughCluster
0.0.0.0        15021 ALL                                                     Inline Route: /healthz/ready*
0.0.0.0        15090 ALL                                                     Inline Route: /stats/prometheus*
$ istioctl proxy-config route test-7955cf7657-8zbn8.ns1
NAME                                            VHOST NAME                                      DOMAINS                               MATCH                  VIRTUAL SERVICE
15010                                           istiod.istio-system.svc.cluster.local:15010     istiod.istio-system, 172.20.34.15     /*                     
kube-dns.kube-system.svc.cluster.local:9153     kube-dns.kube-system.svc.cluster.local:9153     *                                     /*                     
15014                                           istiod.istio-system.svc.cluster.local:15014     istiod.istio-system, 172.20.34.15     /*                     
InboundPassthroughCluster                       inbound|http|0                                  *                                     /*                     
                                                backend                                         *                                     /healthz/ready*        
                                                backend                                         *                                     /stats/prometheus*     
InboundPassthroughCluster                       inbound|http|0                                  *                                     /*     
$ istioctl proxy-config endpoint test-7955cf7657-8zbn8.ns1
ENDPOINT                                                STATUS      OUTLIER CHECK     CLUSTER
10.0.100.189:443                                        HEALTHY     OK                outbound|443||kubernetes.default.svc.cluster.local
10.0.75.112:10250                                       HEALTHY     OK                outbound|443||metrics-server.kube-system.svc.cluster.local
10.0.75.6:15010                                         HEALTHY     OK                outbound|15010||istiod.istio-system.svc.cluster.local
10.0.75.6:15012                                         HEALTHY     OK                outbound|15012||istiod.istio-system.svc.cluster.local
10.0.75.6:15014                                         HEALTHY     OK                outbound|15014||istiod.istio-system.svc.cluster.local
10.0.75.6:15017                                         HEALTHY     OK                outbound|443||istiod.istio-system.svc.cluster.local
10.0.81.87:53                                           HEALTHY     OK                outbound|53||kube-dns.kube-system.svc.cluster.local
10.0.81.87:9153                                         HEALTHY     OK                outbound|9153||kube-dns.kube-system.svc.cluster.local
10.0.85.135:443                                         HEALTHY     OK                outbound|443||kubernetes.default.svc.cluster.local
10.0.87.136:53                                          HEALTHY     OK                outbound|53||kube-dns.kube-system.svc.cluster.local
10.0.87.136:9153                                        HEALTHY     OK                outbound|9153||kube-dns.kube-system.svc.cluster.local
127.0.0.1:15000                                         HEALTHY     OK                prometheus_stats
127.0.0.1:15020                                         HEALTHY     OK                agent
unix://./etc/istio/proxy/XDS                            HEALTHY     OK                xds-grpc
unix://./var/run/secrets/workload-spiffe-uds/socket     HEALTHY     OK                sds-grpc
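
The same configuration can also be dumped in one shot to get a feel for its total size, which is what ultimately drives the proxy's memory usage. This is only a sketch, not part of the original run, using istioctl's all subcommand with JSON output.

# Sketch: dump the full xDS configuration of the proxy and measure its size in bytes.
istioctl proxy-config all test-7955cf7657-8zbn8.ns1 -o json | wc -c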

100 Pods

Scale the Deployment to 100 Pods.

$ k -n ns1 scale deployment test --replicas=100
deployment.apps/test scaled

Confirm that all Pods are Running.

$ k -n ns1 get pods
NAME                    READY   STATUS    RESTARTS   AGE
test-7955cf7657-2dq7j   2/2     Running   0          109s
test-7955cf7657-2kl8f   2/2     Running   0          109s
test-7955cf7657-2pwf7   2/2     Running   0          106s
test-7955cf7657-2szkw   2/2     Running   0          108s

(omitted)

test-7955cf7657-zhm5p   2/2     Running   0          108s
test-7955cf7657-zm4hp   2/2     Running   0          107s
test-7955cf7657-zs7n7   2/2     Running   0          108s
test-7955cf7657-zwswj   2/2     Running   0          108s

Checking memory usage, it is around 22-24 MiB and has not increased. This is presumably because simply adding Pods does not add any proxy configuration.

$ k -n ns1 top pod --containers | head
POD                     NAME          CPU(cores)   MEMORY(bytes)   
test-7955cf7657-2dq7j   istio-proxy   4m           23Mi            
test-7955cf7657-2dq7j   nginx         0m           92Mi            
test-7955cf7657-2kl8f   istio-proxy   3m           22Mi            
test-7955cf7657-2kl8f   nginx         0m           90Mi            
test-7955cf7657-2pwf7   istio-proxy   4m           22Mi            
test-7955cf7657-2pwf7   nginx         0m           91Mi            
test-7955cf7657-2szkw   istio-proxy   4m           24Mi            
test-7955cf7657-2szkw   nginx         0m           90Mi            
test-7955cf7657-4wgqj   istio-proxy   4m           23Mi     
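
Envoy's own view of its heap can be cross-checked against metrics-server. This is a sketch, not part of the original run; it assumes the Envoy admin /memory endpoint is reachable through pilot-agent inside the istio-proxy container.

# Sketch (assumption): query the Envoy admin /memory endpoint from inside the
# sidecar and compare allocated/heap_size with the values shown by k top pod.
kubectl -n ns1 exec test-7955cf7657-2dq7j -c istio-proxy -- pilot-agent request GET memory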

The number of configuration entries has not increased.

$ istioctl proxy-config cluster test-7955cf7657-8zbn8.ns1 | wc -l
      16
$ istioctl proxy-config listener test-7955cf7657-8zbn8.ns1 | wc -l
      22
$ istioctl proxy-config route test-7955cf7657-8zbn8.ns1 | wc -l
       8
$ istioctl proxy-config endpoint test-7955cf7657-8zbn8.ns1 | wc -l
      16

Creating a Service

Now create a Service.

$ k -n ns1 expose deployment test --port=80 --target-port=80
service/test exposed

Memory usage is around 24-25 MiB, only a slight increase.

$ k -n ns1 top pod --containers | head                            
POD                     NAME          CPU(cores)   MEMORY(bytes)   
test-7955cf7657-2dq7j   istio-proxy   5m           24Mi            
test-7955cf7657-2dq7j   nginx         0m           92Mi            
test-7955cf7657-2kl8f   istio-proxy   5m           25Mi            
test-7955cf7657-2kl8f   nginx         0m           90Mi            
test-7955cf7657-2pwf7   istio-proxy   5m           24Mi            
test-7955cf7657-2pwf7   nginx         0m           91Mi            
test-7955cf7657-2szkw   istio-proxy   5m           25Mi            
test-7955cf7657-2szkw   nginx         0m           90Mi            
test-7955cf7657-4wgqj   istio-proxy   5m           24Mi  

The configuration has also increased slightly. In this case, adding the Service adds outbound entries, and since the Pod itself backs that Service, the inbound entries increase as well. The endpoint count grew by the number of Pods.

$ istioctl proxy-config cluster test-7955cf7657-8zbn8.ns1 | wc -l
      18
$ istioctl proxy-config listener test-7955cf7657-8zbn8.ns1 | wc -l
      29
$ istioctl proxy-config route test-7955cf7657-8zbn8.ns1 | wc -l
      11
$ istioctl proxy-config endpoint test-7955cf7657-8zbn8.ns1 | wc -l
     116
$ istioctl proxy-config cluster test-7955cf7657-8zbn8.ns1
SERVICE FQDN                                     PORT      SUBSET     DIRECTION     TYPE             DESTINATION RULE
                                                 80        -          inbound       ORIGINAL_DST     
BlackHoleCluster                                 -         -          -             STATIC           
InboundPassthroughCluster                        -         -          -             ORIGINAL_DST     
PassthroughCluster                               -         -          -             ORIGINAL_DST     
agent                                            -         -          -             STATIC           
istiod.istio-system.svc.cluster.local            443       -          outbound      EDS              
istiod.istio-system.svc.cluster.local            15010     -          outbound      EDS              
istiod.istio-system.svc.cluster.local            15012     -          outbound      EDS              
istiod.istio-system.svc.cluster.local            15014     -          outbound      EDS              
kube-dns.kube-system.svc.cluster.local           53        -          outbound      EDS              
kube-dns.kube-system.svc.cluster.local           9153      -          outbound      EDS              
kubernetes.default.svc.cluster.local             443       -          outbound      EDS              
metrics-server.kube-system.svc.cluster.local     443       -          outbound      EDS              
prometheus_stats                                 -         -          -             STATIC           
sds-grpc                                         -         -          -             STATIC           
test.ns1.svc.cluster.local                       80        -          outbound      EDS              
xds-grpc                                         -         -          -             STATIC   
$ istioctl proxy-config listener test-7955cf7657-8zbn8.ns1
ADDRESSES      PORT  MATCH                                                               DESTINATION
172.20.0.10    53    ALL                                                                 Cluster: outbound|53||kube-dns.kube-system.svc.cluster.local
172.20.160.18  80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
172.20.160.18  80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
172.20.0.1     443   ALL                                                                 Cluster: outbound|443||kubernetes.default.svc.cluster.local
172.20.143.212 443   ALL                                                                 Cluster: outbound|443||metrics-server.kube-system.svc.cluster.local
172.20.34.15   443   ALL                                                                 Cluster: outbound|443||istiod.istio-system.svc.cluster.local
172.20.0.10    9153  Trans: raw_buffer; App: http/1.1,h2c                                Route: kube-dns.kube-system.svc.cluster.local:9153
172.20.0.10    9153  ALL                                                                 Cluster: outbound|9153||kube-dns.kube-system.svc.cluster.local
0.0.0.0        15001 ALL                                                                 PassthroughCluster
0.0.0.0        15001 Addr: *:15001                                                       Non-HTTP/Non-TCP
0.0.0.0        15006 Addr: *:15006                                                       Non-HTTP/Non-TCP
0.0.0.0        15006 Trans: tls; App: istio-http/1.0,istio-http/1.1,istio-h2             InboundPassthroughCluster
0.0.0.0        15006 Trans: raw_buffer; App: http/1.1,h2c                                InboundPassthroughCluster
0.0.0.0        15006 Trans: tls; App: TCP TLS                                            InboundPassthroughCluster
0.0.0.0        15006 Trans: raw_buffer                                                   InboundPassthroughCluster
0.0.0.0        15006 Trans: tls                                                          InboundPassthroughCluster
0.0.0.0        15006 Trans: tls; App: istio-http/1.0,istio-http/1.1,istio-h2; Addr: *:80 Cluster: inbound|80||
0.0.0.0        15006 Trans: raw_buffer; App: http/1.1,h2c; Addr: *:80                    Cluster: inbound|80||
0.0.0.0        15006 Trans: tls; App: TCP TLS; Addr: *:80                                Cluster: inbound|80||
0.0.0.0        15006 Trans: raw_buffer; Addr: *:80                                       Cluster: inbound|80||
0.0.0.0        15006 Trans: tls; Addr: *:80                                              Cluster: inbound|80||
0.0.0.0        15010 Trans: raw_buffer; App: http/1.1,h2c                                Route: 15010
0.0.0.0        15010 ALL                                                                 PassthroughCluster
172.20.34.15   15012 ALL                                                                 Cluster: outbound|15012||istiod.istio-system.svc.cluster.local
0.0.0.0        15014 Trans: raw_buffer; App: http/1.1,h2c                                Route: 15014
0.0.0.0        15014 ALL                                                                 PassthroughCluster
0.0.0.0        15021 ALL                                                                 Inline Route: /healthz/ready*
0.0.0.0        15090 ALL                                                                 Inline Route: /stats/prometheus*
$ istioctl proxy-config route test-7955cf7657-8zbn8.ns1
NAME                                            VHOST NAME                                      DOMAINS                               MATCH                  VIRTUAL SERVICE
15014                                           istiod.istio-system.svc.cluster.local:15014     istiod.istio-system, 172.20.34.15     /*                     
test.ns1.svc.cluster.local:80                   test.ns1.svc.cluster.local:80                   *                                     /*                     
15010                                           istiod.istio-system.svc.cluster.local:15010     istiod.istio-system, 172.20.34.15     /*                     
kube-dns.kube-system.svc.cluster.local:9153     kube-dns.kube-system.svc.cluster.local:9153     *                                     /*                     
InboundPassthroughCluster                       inbound|http|0                                  *                                     /*                     
inbound|80||                                    inbound|http|80                                 *                                     /*                     
                                                backend                                         *                                     /healthz/ready*        
                                                backend                                         *                                     /stats/prometheus*     
InboundPassthroughCluster                       inbound|http|0                                  *                                     /*                     
inbound|80||                                    inbound|http|80                                 *                                     /*     
$ istioctl proxy-config endpoint test-7955cf7657-8zbn8.ns1
ENDPOINT                                                STATUS      OUTLIER CHECK     CLUSTER
10.0.100.189:443                                        HEALTHY     OK                outbound|443||kubernetes.default.svc.cluster.local
10.0.64.108:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.64.140:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.64.147:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.64.35:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.64.97:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.65.151:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.65.50:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.65.60:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.65.99:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.66.110:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.66.125:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.66.137:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.66.21:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.66.70:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.67.199:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.67.58:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.69.119:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.69.180:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.69.84:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.70.189:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.70.19:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.70.243:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.70.247:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.70.9:80                                            HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.71.18:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.71.200:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.71.27:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.71.63:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.71.93:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.72.13:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.72.242:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.73.178:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.73.94:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.74.117:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.74.14:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.74.159:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.75.108:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.75.112:10250                                       HEALTHY     OK                outbound|443||metrics-server.kube-system.svc.cluster.local
10.0.75.146:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.75.200:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.75.248:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.75.51:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.75.6:15010                                         HEALTHY     OK                outbound|15010||istiod.istio-system.svc.cluster.local
10.0.75.6:15012                                         HEALTHY     OK                outbound|15012||istiod.istio-system.svc.cluster.local
10.0.75.6:15014                                         HEALTHY     OK                outbound|15014||istiod.istio-system.svc.cluster.local
10.0.75.6:15017                                         HEALTHY     OK                outbound|443||istiod.istio-system.svc.cluster.local
10.0.76.216:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.76.229:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.76.80:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.76.83:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.77.219:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.77.59:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.78.160:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.78.19:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.78.215:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.79.181:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.79.43:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.79.57:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.80.127:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.80.252:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.81.201:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.81.23:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.81.87:53                                           HEALTHY     OK                outbound|53||kube-dns.kube-system.svc.cluster.local
10.0.81.87:9153                                         HEALTHY     OK                outbound|9153||kube-dns.kube-system.svc.cluster.local
10.0.82.119:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.82.208:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.82.24:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.82.40:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.83.218:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.84.174:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.84.212:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.84.58:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.85.135:443                                         HEALTHY     OK                outbound|443||kubernetes.default.svc.cluster.local
10.0.85.229:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.85.230:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.85.55:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.86.118:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.86.171:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.86.237:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.86.91:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.87.126:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.87.136:53                                          HEALTHY     OK                outbound|53||kube-dns.kube-system.svc.cluster.local
10.0.87.136:9153                                        HEALTHY     OK                outbound|9153||kube-dns.kube-system.svc.cluster.local
10.0.87.21:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.87.97:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.88.169:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.88.189:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.88.53:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.88.71:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.88.73:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.89.118:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.89.147:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.89.46:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.90.10:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.90.50:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.91.125:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.91.250:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.91.253:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.92.180:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.92.25:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.93.102:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.93.206:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.93.212:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.93.243:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.93.25:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.93.78:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.94.255:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.95.111:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.95.225:80                                          HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
10.0.95.64:80                                           HEALTHY     OK                outbound|80||test.ns1.svc.cluster.local
127.0.0.1:15000                                         HEALTHY     OK                prometheus_stats
127.0.0.1:15020                                         HEALTHY     OK                agent
unix://./etc/istio/proxy/XDS                            HEALTHY     OK                xds-grpc
unix://./var/run/secrets/workload-spiffe-uds/socket     HEALTHY     OK                sds-grpc
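
To isolate only the endpoints contributed by the new Service, the listing can be filtered by cluster name. A sketch, assuming the --cluster filter; the resulting count should roughly match the replica count:

# Count only the endpoints belonging to the test Service's outbound cluster.
istioctl proxy-config endpoint test-7955cf7657-8zbn8.ns1 \
  --cluster "outbound|80||test.ns1.svc.cluster.local" | wc -l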

Creating a Headless Service

Delete the Service once and recreate it as a Headless Service.

$ k -n ns1 delete svc test
service "test" deleted
$ k -n ns1 expose deployment test --port=80 --target-port=80 --cluster-ip=None
service/test exposed
$ k -n ns1 get svc
NAME   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
test   ClusterIP   None         <none>        80/TCP    14s

Memory usage increased a little, to around 31 MiB.

$ k -n ns1 top pod --containers | head
POD                     NAME          CPU(cores)   MEMORY(bytes)   
test-7955cf7657-2dq7j   istio-proxy   3m           31Mi            
test-7955cf7657-2dq7j   nginx         0m           92Mi            
test-7955cf7657-2kl8f   istio-proxy   4m           31Mi            
test-7955cf7657-2kl8f   nginx         0m           90Mi            
test-7955cf7657-2pwf7   istio-proxy   4m           31Mi            
test-7955cf7657-2pwf7   nginx         0m           91Mi            
test-7955cf7657-2szkw   istio-proxy   3m           31Mi            
test-7955cf7657-2szkw   nginx         0m           90Mi            
test-7955cf7657-4wgqj   istio-proxy   3m           31Mi  

Even if the configuration has shrunk, memory usage probably does not drop right away, so as a precaution, roll out the Deployment to recreate the Pods.

$ k -n ns1 rollout restart deployment test
deployment.apps/test restarted
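
To make sure the measurements below are taken after the rollout has finished, it may help to wait for it to complete first:

# Block until all new Pods are ready and the old ReplicaSet has been scaled down.
kubectl -n ns1 rollout status deployment test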

If anything, the rollout caused a slight increase. The configuration may grow because the number of objects temporarily increases during the rollout.

$ k -n ns1 top pod --containers | head
POD                     NAME          CPU(cores)   MEMORY(bytes)   
test-74c897698f-22sjb   istio-proxy   10m          37Mi            
test-74c897698f-22sjb   nginx         0m           92Mi            
test-74c897698f-2h2x9   istio-proxy   8m           36Mi            
test-74c897698f-2h2x9   nginx         0m           90Mi            
test-74c897698f-2scl7   istio-proxy   10m          38Mi            
test-74c897698f-2scl7   nginx         0m           90Mi            
test-74c897698f-4258d   istio-proxy   11m          37Mi            
test-74c897698f-4258d   nginx         0m           91Mi            
test-74c897698f-45fnj   istio-proxy   9m           37Mi   
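
A rough way to check this hypothesis is to watch the Pod count in the namespace while the rollout runs. A sketch, assuming the default RollingUpdate strategy, where surge Pods make old and new Pods coexist for a while:

# Watch the number of Pods in ns1; during a surge rollout the count temporarily
# exceeds 100, and that is the object set each sidecar has to track.
watch -n 2 'kubectl -n ns1 get pods --no-headers | wc -l'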

Look at the configuration.

$ istioctl proxy-status | head
NAME                          CLUSTER        CDS              LDS              EDS              RDS              ECDS        ISTIOD                     VERSION
test-74c897698f-22sjb.ns1     Kubernetes     SYNCED (85s)     SYNCED (85s)     SYNCED (55s)     SYNCED (85s)     IGNORED     istiod-dd95d7bdc-lbk7q     1.23.0
test-74c897698f-2h2x9.ns1     Kubernetes     SYNCED (85s)     SYNCED (85s)     SYNCED (55s)     SYNCED (85s)     IGNORED     istiod-dd95d7bdc-lbk7q     1.23.0
test-74c897698f-2scl7.ns1     Kubernetes     SYNCED (85s)     SYNCED (85s)     SYNCED (55s)     SYNCED (85s)     IGNORED     istiod-dd95d7bdc-lbk7q     1.23.0
test-74c897698f-4258d.ns1     Kubernetes     SYNCED (85s)     SYNCED (85s)     SYNCED (55s)     SYNCED (85s)     IGNORED     istiod-dd95d7bdc-lbk7q     1.23.0
test-74c897698f-45fnj.ns1     Kubernetes     SYNCED (85s)     SYNCED (85s)     SYNCED (55s)     SYNCED (85s)     IGNORED     istiod-dd95d7bdc-lbk7q     1.23.0
test-74c897698f-4zxz7.ns1     Kubernetes     SYNCED (85s)     SYNCED (85s)     SYNCED (55s)     SYNCED (85s)     IGNORED     istiod-dd95d7bdc-lbk7q     1.23.0
test-74c897698f-56xsq.ns1     Kubernetes     SYNCED (85s)     SYNCED (85s)     SYNCED (55s)     SYNCED (85s)     IGNORED     istiod-dd95d7bdc-lbk7q     1.23.0
test-74c897698f-5rmvq.ns1     Kubernetes     SYNCED (85s)     SYNCED (85s)     SYNCED (55s)     SYNCED (85s)     IGNORED     istiod-dd95d7bdc-lbk7q     1.23.0
test-74c897698f-5z6kd.ns1     Kubernetes     SYNCED (85s)     SYNCED (85s)     SYNCED (55s)     SYNCED (85s)     IGNORED     istiod-dd95d7bdc-lbk7q     1.23.0

With the Headless Service, the number of listeners increased, but the number of endpoints decreased.
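
Before going through the full dumps below, the port-80 listeners that the Headless Service generates per Pod IP can be counted separately. A sketch, assuming the --port filter of istioctl proxy-config listener:

# Count only the listeners on port 80; with a Headless Service these are created
# per Pod IP, which is where the jump in listener rows comes from.
istioctl proxy-config listener test-74c897698f-22sjb.ns1 --port 80 | wc -l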

$ istioctl proxy-config cluster test-74c897698f-22sjb.ns1 | wc -l
      18
$ istioctl proxy-config listener test-74c897698f-22sjb.ns1 | wc -l
     225
$ istioctl proxy-config route test-74c897698f-22sjb.ns1 | wc -l
      11
$ istioctl proxy-config endpoint test-74c897698f-22sjb.ns1 | wc -l
      16
$ istioctl proxy-config cluster test-74c897698f-22sjb.ns1
SERVICE FQDN                                     PORT      SUBSET     DIRECTION     TYPE             DESTINATION RULE
                                                 80        -          inbound       ORIGINAL_DST     
BlackHoleCluster                                 -         -          -             STATIC           
InboundPassthroughCluster                        -         -          -             ORIGINAL_DST     
PassthroughCluster                               -         -          -             ORIGINAL_DST     
agent                                            -         -          -             STATIC           
istiod.istio-system.svc.cluster.local            443       -          outbound      EDS              
istiod.istio-system.svc.cluster.local            15010     -          outbound      EDS              
istiod.istio-system.svc.cluster.local            15012     -          outbound      EDS              
istiod.istio-system.svc.cluster.local            15014     -          outbound      EDS              
kube-dns.kube-system.svc.cluster.local           53        -          outbound      EDS              
kube-dns.kube-system.svc.cluster.local           9153      -          outbound      EDS              
kubernetes.default.svc.cluster.local             443       -          outbound      EDS              
metrics-server.kube-system.svc.cluster.local     443       -          outbound      EDS              
prometheus_stats                                 -         -          -             STATIC           
sds-grpc                                         -         -          -             STATIC           
test.ns1.svc.cluster.local                       80        -          outbound      ORIGINAL_DST     
xds-grpc                                         -         -          -             STATIC          
$ istioctl proxy-config listener test-74c897698f-22sjb.ns1
ADDRESSES      PORT  MATCH                                                               DESTINATION
172.20.0.10    53    ALL                                                                 Cluster: outbound|53||kube-dns.kube-system.svc.cluster.local
10.0.64.197    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.64.197    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.64.218    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.64.218    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.64.91     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.64.91     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.64.92     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.64.92     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.65.14     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.65.14     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.65.15     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.65.15     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.65.153    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.65.153    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.65.76     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.65.76     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.65.95     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.65.95     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.66.200    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.66.200    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.66.207    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.66.207    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.67.113    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.67.113    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.67.132    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.67.132    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.67.208    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.67.208    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.67.249    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.67.249    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.67.60     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.67.60     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.68.175    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.68.175    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.69.160    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.69.160    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.69.194    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.69.194    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.69.52     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.69.52     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.70.140    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.70.140    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.70.242    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.70.242    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.71.152    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.71.152    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.71.190    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.71.190    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.71.221    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.71.221    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.71.29     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.71.29     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.71.58     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.71.58     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.72.127    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.72.127    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.73.141    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.73.141    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.73.188    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.73.188    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.73.32     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.73.32     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.73.73     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.73.73     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.74.216    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.74.216    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.74.73     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.74.73     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.75.147    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.75.147    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.75.178    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.75.178    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.75.197    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.75.197    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.75.215    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.75.215    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.76.34     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.76.34     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.77.106    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.77.106    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.77.114    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.77.114    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.107    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.78.107    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.112    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.78.112    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.119    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.78.119    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.125    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.78.125    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.230    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.78.230    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.244    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.78.244    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.31     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.78.31     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.63     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.78.63     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.78.83     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.78.83     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.79.153    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.79.153    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.79.161    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.79.161    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.79.21     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.79.21     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.79.238    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.79.238    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.79.239    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.79.239    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.80.166    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.80.166    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.80.223    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.80.223    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.80.29     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.80.29     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.81.133    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.81.133    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.81.192    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.81.192    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.81.231    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.81.231    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.82.127    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.82.127    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.82.141    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.82.141    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.82.220    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.82.220    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.82.235    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.82.235    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.83.105    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.83.105    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.83.26     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.83.26     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.83.30     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.83.30     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.84.208    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.84.208    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.85.138    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.85.138    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.85.228    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.85.228    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.85.69     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.85.69     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.86.125    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.86.125    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.86.130    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.86.130    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.87.144    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.87.144    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.87.254    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.87.254    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.87.90     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.87.90     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.89.183    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.89.183    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.89.82     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.89.82     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.90.139    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.90.139    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.91.17     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.91.17     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.91.226    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.91.226    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.91.233    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.91.233    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.91.4      80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.91.4      80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.92.126    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.92.126    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.93.125    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.93.125    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.93.131    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.93.131    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.93.142    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.93.142    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.93.204    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.93.204    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.94.112    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.94.112    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.94.118    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.94.118    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.94.236    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.94.236    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.94.33     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.94.33     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.94.44     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.94.44     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.94.54     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.94.54     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.95.238    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.95.238    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.95.244    80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.95.244    80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.95.4      80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.95.4      80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
10.0.95.58     80    Trans: raw_buffer; App: http/1.1,h2c                                Route: test.ns1.svc.cluster.local:80
10.0.95.58     80    ALL                                                                 Cluster: outbound|80||test.ns1.svc.cluster.local
172.20.0.1     443   ALL                                                                 Cluster: outbound|443||kubernetes.default.svc.cluster.local
172.20.143.212 443   ALL                                                                 Cluster: outbound|443||metrics-server.kube-system.svc.cluster.local
172.20.34.15   443   ALL                                                                 Cluster: outbound|443||istiod.istio-system.svc.cluster.local
172.20.0.10    9153  Trans: raw_buffer; App: http/1.1,h2c                                Route: kube-dns.kube-system.svc.cluster.local:9153
172.20.0.10    9153  ALL                                                                 Cluster: outbound|9153||kube-dns.kube-system.svc.cluster.local
0.0.0.0        15001 ALL                                                                 PassthroughCluster
0.0.0.0        15001 Addr: *:15001                                                       Non-HTTP/Non-TCP
0.0.0.0        15006 Addr: *:15006                                                       Non-HTTP/Non-TCP
0.0.0.0        15006 Trans: tls; App: istio-http/1.0,istio-http/1.1,istio-h2             InboundPassthroughCluster
0.0.0.0        15006 Trans: raw_buffer; App: http/1.1,h2c                                InboundPassthroughCluster
0.0.0.0        15006 Trans: tls; App: TCP TLS                                            InboundPassthroughCluster
0.0.0.0        15006 Trans: raw_buffer                                                   InboundPassthroughCluster
0.0.0.0        15006 Trans: tls                                                          InboundPassthroughCluster
0.0.0.0        15006 Trans: tls; App: istio-http/1.0,istio-http/1.1,istio-h2; Addr: *:80 Cluster: inbound|80||
0.0.0.0        15006 Trans: raw_buffer; App: http/1.1,h2c; Addr: *:80                    Cluster: inbound|80||
0.0.0.0        15006 Trans: tls; App: TCP TLS; Addr: *:80                                Cluster: inbound|80||
0.0.0.0        15006 Trans: raw_buffer; Addr: *:80                                       Cluster: inbound|80||
0.0.0.0        15006 Trans: tls; Addr: *:80                                              Cluster: inbound|80||
0.0.0.0        15010 Trans: raw_buffer; App: http/1.1,h2c                                Route: 15010
0.0.0.0        15010 ALL                                                                 PassthroughCluster
172.20.34.15   15012 ALL                                                                 Cluster: outbound|15012||istiod.istio-system.svc.cluster.local
0.0.0.0        15014 Trans: raw_buffer; App: http/1.1,h2c                                Route: 15014
0.0.0.0        15014 ALL                                                                 PassthroughCluster
0.0.0.0        15021 ALL                                                                 Inline Route: /healthz/ready*
0.0.0.0        15090 ALL                                                                 Inline Route: /stats/prometheus*
$ istioctl proxy-config route test-74c897698f-22sjb.ns1
NAME                                            VHOST NAME                                      DOMAINS                               MATCH                  VIRTUAL SERVICE
test.ns1.svc.cluster.local:80                   test.ns1.svc.cluster.local:80                   *                                     /*                     
15010                                           istiod.istio-system.svc.cluster.local:15010     istiod.istio-system, 172.20.34.15     /*                     
kube-dns.kube-system.svc.cluster.local:9153     kube-dns.kube-system.svc.cluster.local:9153     *                                     /*                     
15014                                           istiod.istio-system.svc.cluster.local:15014     istiod.istio-system, 172.20.34.15     /*                     
InboundPassthroughCluster                       inbound|http|0                                  *                                     /*                     
inbound|80||                                    inbound|http|80                                 *                                     /*                     
InboundPassthroughCluster                       inbound|http|0                                  *                                     /*                     
inbound|80||                                    inbound|http|80                                 *                                     /*                     
                                                backend                                         *                                     /healthz/ready*        
                                                backend                                         *                                     /stats/prometheus*     
$ istioctl proxy-config endpoint test-74c897698f-22sjb.ns1
ENDPOINT                                                STATUS      OUTLIER CHECK     CLUSTER
10.0.100.189:443                                        HEALTHY     OK                outbound|443||kubernetes.default.svc.cluster.local
10.0.75.112:10250                                       HEALTHY     OK                outbound|443||metrics-server.kube-system.svc.cluster.local
10.0.75.6:15010                                         HEALTHY     OK                outbound|15010||istiod.istio-system.svc.cluster.local
10.0.75.6:15012                                         HEALTHY     OK                outbound|15012||istiod.istio-system.svc.cluster.local
10.0.75.6:15014                                         HEALTHY     OK                outbound|15014||istiod.istio-system.svc.cluster.local
10.0.75.6:15017                                         HEALTHY     OK                outbound|443||istiod.istio-system.svc.cluster.local
10.0.81.87:53                                           HEALTHY     OK                outbound|53||kube-dns.kube-system.svc.cluster.local
10.0.81.87:9153                                         HEALTHY     OK                outbound|9153||kube-dns.kube-system.svc.cluster.local
10.0.85.135:443                                         HEALTHY     OK                outbound|443||kubernetes.default.svc.cluster.local
10.0.87.136:53                                          HEALTHY     OK                outbound|53||kube-dns.kube-system.svc.cluster.local
10.0.87.136:9153                                        HEALTHY     OK                outbound|9153||kube-dns.kube-system.svc.cluster.local
127.0.0.1:15000                                         HEALTHY     OK                prometheus_stats
127.0.0.1:15020                                         HEALTHY     OK                agent
unix://./etc/istio/proxy/XDS                            HEALTHY     OK                xds-grpc
unix://./var/run/secrets/workload-spiffe-uds/socket     HEALTHY     OK                sds-grpc

Instead of a rollout, scale in, wait a moment, and scale back out.

$ k -n ns1 scale deployment test --replicas=1
deployment.apps/test scaled
$ k -n ns1 scale deployment test --replicas=100
deployment.apps/test scaled

With this, usage is a little lower than before.

$ k -n ns1 top pod --containers | head
POD                     NAME          CPU(cores)   MEMORY(bytes)   
test-74c897698f-24x2k   istio-proxy   3m           33Mi            
test-74c897698f-24x2k   nginx         0m           91Mi            
test-74c897698f-2kcck   istio-proxy   4m           33Mi            
test-74c897698f-2kcck   nginx         0m           95Mi            
test-74c897698f-2kgdx   istio-proxy   3m           33Mi            
test-74c897698f-2kgdx   nginx         0m           94Mi            
test-74c897698f-462wj   istio-proxy   3m           33Mi            
test-74c897698f-462wj   nginx         0m           91Mi            
test-74c897698f-48rhq   istio-proxy   3m           34Mi     

Adding nodes

Instead of one big node, spread the Pods across small nodes (20 of them).

Create a node group of small instances (m6i.large, 2 vCPU).

cat << EOF > m3.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${CLUSTER_NAME}
  region: ap-northeast-1

managedNodeGroups:
  - name: m3
    instanceType: m6i.large
    minSize: 1
    maxSize: 20
    desiredCapacity: 20
    privateNetworking: true
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
EOF
eksctl create nodegroup -f m3.yaml

Delete the node group with the large instances.

eksctl delete nodegroup m2 --cluster ${CLUSTER_NAME}

Check the nodes.

$ k get nodes
NAME                                              STATUS                        ROLES    AGE     VERSION
ip-10-0-106-38.ap-northeast-1.compute.internal    Ready                         <none>   4m21s   v1.29.6-eks-1552ad0
ip-10-0-107-27.ap-northeast-1.compute.internal    Ready                         <none>   4m19s   v1.29.6-eks-1552ad0
ip-10-0-107-55.ap-northeast-1.compute.internal    Ready                         <none>   4m23s   v1.29.6-eks-1552ad0
ip-10-0-108-95.ap-northeast-1.compute.internal    Ready                         <none>   4m17s   v1.29.6-eks-1552ad0
ip-10-0-109-108.ap-northeast-1.compute.internal   Ready                         <none>   4m22s   v1.29.6-eks-1552ad0
ip-10-0-114-75.ap-northeast-1.compute.internal    Ready                         <none>   4m10s   v1.29.6-eks-1552ad0
ip-10-0-117-226.ap-northeast-1.compute.internal   Ready                         <none>   4m22s   v1.29.6-eks-1552ad0
ip-10-0-121-37.ap-northeast-1.compute.internal    Ready                         <none>   4m11s   v1.29.6-eks-1552ad0
ip-10-0-122-44.ap-northeast-1.compute.internal    Ready                         <none>   4m21s   v1.29.6-eks-1552ad0
ip-10-0-64-210.ap-northeast-1.compute.internal    Ready                         <none>   4m17s   v1.29.6-eks-1552ad0
ip-10-0-65-152.ap-northeast-1.compute.internal    Ready                         <none>   4m19s   v1.29.6-eks-1552ad0
ip-10-0-71-158.ap-northeast-1.compute.internal    Ready                         <none>   4m18s   v1.29.6-eks-1552ad0
ip-10-0-71-188.ap-northeast-1.compute.internal    Ready                         <none>   4m17s   v1.29.6-eks-1552ad0
ip-10-0-73-100.ap-northeast-1.compute.internal    Ready                         <none>   4m15s   v1.29.6-eks-1552ad0
ip-10-0-73-13.ap-northeast-1.compute.internal     Ready                         <none>   4m17s   v1.29.6-eks-1552ad0
ip-10-0-80-97.ap-northeast-1.compute.internal     NotReady,SchedulingDisabled   <none>   66m     v1.29.6-eks-1552ad0
ip-10-0-81-103.ap-northeast-1.compute.internal    Ready                         <none>   4m18s   v1.29.6-eks-1552ad0
ip-10-0-88-105.ap-northeast-1.compute.internal    Ready                         <none>   4m16s   v1.29.6-eks-1552ad0
ip-10-0-94-113.ap-northeast-1.compute.internal    Ready                         <none>   4m18s   v1.29.6-eks-1552ad0
ip-10-0-95-162.ap-northeast-1.compute.internal    Ready                         <none>   4m17s   v1.29.6-eks-1552ad0
ip-10-0-97-3.ap-northeast-1.compute.internal      Ready                         <none>   4m21s   v1.29.6-eks-1552ad0

Just to be sure, scale in and then scale out again.

$ k -n ns1 scale deployment test --replicas=1
deployment.apps/test scaled
$ k -n ns1 scale deployment test --replicas=100
deployment.apps/test scaled

Memory usage is almost unchanged.

$ k -n ns1 top pod --containers | head
POD                     NAME          CPU(cores)   MEMORY(bytes)   
test-74c897698f-2ctvb   istio-proxy   2m           33Mi            
test-74c897698f-2ctvb   nginx         0m           2Mi             
test-74c897698f-2fvbs   istio-proxy   1m           33Mi            
test-74c897698f-2fvbs   nginx         0m           2Mi             
test-74c897698f-2vgdl   istio-proxy   2m           33Mi            
test-74c897698f-2vgdl   nginx         0m           2Mi             
test-74c897698f-4fb2g   istio-proxy   1m           33Mi            
test-74c897698f-4fb2g   nginx         0m           2Mi             
test-74c897698f-4lh9r   istio-proxy   2m           33Mi   

The configuration hasn't grown either, which shows that adding nodes by itself changes nothing.

$ istioctl proxy-config cluster test-74c897698f-2ctvb.ns1 | wc -l
      18
$ istioctl proxy-config listener test-74c897698f-2ctvb.ns1 | wc -l
     225
$ istioctl proxy-config route test-74c897698f-2ctvb.ns1 | wc -l
      11
$ istioctl proxy-config endpoint test-74c897698f-2ctvb.ns1 | wc -l
      16

Adding a namespace

Build a similar setup in ns2, this time with a regular (non-headless) Service.

$ k create ns ns2
namespace/ns2 created
$ k label namespace ns2 istio-injection=enabled
namespace/ns2 labeled
$ k -n ns2 create deployment test --image=nginx
deployment.apps/test created
$ k -n ns2 scale deployment test --replicas=100
deployment.apps/test scaled
$ k -n ns2 expose deployment test --port=80 --target-port=80
service/test exposed

Confirm that all Pods are Running.

$ k get po -A
NAMESPACE      NAME                              READY   STATUS    RESTARTS   AGE
istio-system   istiod-dd95d7bdc-jw984            1/1     Running   0          18m
kube-system    aws-node-2nhft                    2/2     Running   0          20m
kube-system    aws-node-2wzwq                    2/2     Running   0          20m
kube-system    aws-node-4jqdn                    2/2     Running   0          20m
kube-system    aws-node-5h9gd                    2/2     Running   0          20m
kube-system    aws-node-6q9kv                    2/2     Running   0          20m
kube-system    aws-node-d4z89                    2/2     Running   0          20m
kube-system    aws-node-dmpzs                    2/2     Running   0          20m
kube-system    aws-node-jbrt8                    2/2     Running   0          20m
kube-system    aws-node-k5v7d                    2/2     Running   0          20m
kube-system    aws-node-lphnm                    2/2     Running   0          20m
kube-system    aws-node-lz5xq                    2/2     Running   0          20m
kube-system    aws-node-p46mp                    2/2     Running   0          20m
kube-system    aws-node-p4llc                    2/2     Running   0          20m
kube-system    aws-node-q2n84                    2/2     Running   0          20m
kube-system    aws-node-rg87t                    2/2     Running   0          20m
kube-system    aws-node-tkwdd                    2/2     Running   0          20m
kube-system    aws-node-vt67z                    2/2     Running   0          20m
kube-system    aws-node-wbd9v                    2/2     Running   0          20m
kube-system    aws-node-wtq4m                    2/2     Running   0          20m
kube-system    aws-node-z6mft                    2/2     Running   0          20m
kube-system    coredns-676bf68468-8kg66          1/1     Running   0          18m
kube-system    coredns-676bf68468-tjl4f          1/1     Running   0          19m
kube-system    kube-proxy-2mzvv                  1/1     Running   0          20m
kube-system    kube-proxy-47fms                  1/1     Running   0          20m
kube-system    kube-proxy-4vhzw                  1/1     Running   0          20m
kube-system    kube-proxy-67z7x                  1/1     Running   0          20m
kube-system    kube-proxy-788vj                  1/1     Running   0          20m
kube-system    kube-proxy-d7pns                  1/1     Running   0          20m
kube-system    kube-proxy-g6xvm                  1/1     Running   0          20m
kube-system    kube-proxy-h5vtq                  1/1     Running   0          20m
kube-system    kube-proxy-h7kjq                  1/1     Running   0          20m
kube-system    kube-proxy-kmrsz                  1/1     Running   0          20m
kube-system    kube-proxy-lbfwz                  1/1     Running   0          20m
kube-system    kube-proxy-mz7cj                  1/1     Running   0          20m
kube-system    kube-proxy-nr6wn                  1/1     Running   0          20m
kube-system    kube-proxy-qtsbk                  1/1     Running   0          20m
kube-system    kube-proxy-tcjf5                  1/1     Running   0          20m
kube-system    kube-proxy-vjc64                  1/1     Running   0          20m
kube-system    kube-proxy-wrh2h                  1/1     Running   0          20m
kube-system    kube-proxy-x492q                  1/1     Running   0          20m
kube-system    kube-proxy-zngh4                  1/1     Running   0          20m
kube-system    kube-proxy-zrh4c                  1/1     Running   0          20m
kube-system    metrics-server-75bf97fcc9-fhwmj   1/1     Running   0          19m
ns1            test-74c897698f-2ctvb             2/2     Running   0          16m
ns1            test-74c897698f-2fvbs             2/2     Running   0          16m
ns1            test-74c897698f-2vgdl             2/2     Running   0          16m
ns1            test-74c897698f-4fb2g             2/2     Running   0          16m

(omitted)

ns2            test-7955cf7657-z58s8             2/2     Running   0          106s
ns2            test-7955cf7657-zhz67             2/2     Running   0          106s
ns2            test-7955cf7657-zplrx             2/2     Running   0          105s
ns2            test-7955cf7657-zx6zd             2/2     Running   0          107s

The increase in memory usage is about the same as when 100 Pods and a Service were added to ns1.

$ k -n ns1 top pod --containers | head -5
POD                     NAME          CPU(cores)   MEMORY(bytes)   
test-74c897698f-2ctvb   istio-proxy   2m           45Mi            
test-74c897698f-2ctvb   nginx         0m           2Mi             
test-74c897698f-2fvbs   istio-proxy   1m           38Mi            
test-74c897698f-2fvbs   nginx         0m           2Mi             
$ k -n ns2 top pod --containers | head -5
POD                     NAME          CPU(cores)   MEMORY(bytes)   
test-7955cf7657-27xkv   istio-proxy   2m           38Mi            
test-7955cf7657-27xkv   nginx         0m           2Mi             
test-7955cf7657-29jhg   istio-proxy   2m           38Mi            
test-7955cf7657-29jhg   nginx         0m           2Mi  

In the configuration, endpoints increase but little else changes.

$ istioctl proxy-config cluster test-74c897698f-2ctvb.ns1 | wc -l
      19
$ istioctl proxy-config listener test-74c897698f-2ctvb.ns1 | wc -l
     227
$ istioctl proxy-config route test-74c897698f-2ctvb.ns1 | wc -l
      12
$ istioctl proxy-config endpoint test-74c897698f-2ctvb.ns1 | wc -l
     116

Adding Services

Add nine Services in each of ns1 and ns2, all pointing at the same 100 Pods.

$ k -n ns1 expose deployment test --port=81 --target-port=80 --name test81
service/test81 exposed
$ k -n ns1 expose deployment test --port=82 --target-port=80 --name test82
service/test82 exposed
$ k -n ns1 expose deployment test --port=83 --target-port=80 --name test83
service/test83 exposed
$ k -n ns1 expose deployment test --port=84 --target-port=80 --name test84
service/test84 exposed
$ k -n ns1 expose deployment test --port=85 --target-port=80 --name test85
service/test85 exposed
$ k -n ns1 expose deployment test --port=86 --target-port=80 --name test86
service/test86 exposed
$ k -n ns1 expose deployment test --port=87 --target-port=80 --name test87
service/test87 exposed
$ k -n ns1 expose deployment test --port=88 --target-port=80 --name test88
service/test88 exposed
$ k -n ns1 expose deployment test --port=89 --target-port=80 --name test89
service/test89 exposed
$ k -n ns2 expose deployment test --port=81 --target-port=80 --name test81
service/test81 exposed
$ k -n ns2 expose deployment test --port=82 --target-port=80 --name test82
service/test82 exposed
$ k -n ns2 expose deployment test --port=83 --target-port=80 --name test83
service/test83 exposed
$ k -n ns2 expose deployment test --port=84 --target-port=80 --name test84
service/test84 exposed
$ k -n ns2 expose deployment test --port=85 --target-port=80 --name test85
service/test85 exposed
$ k -n ns2 expose deployment test --port=86 --target-port=80 --name test86
service/test86 exposed
$ k -n ns2 expose deployment test --port=87 --target-port=80 --name test87
service/test87 exposed
$ k -n ns2 expose deployment test --port=88 --target-port=80 --name test88
service/test88 exposed
$ k -n ns2 expose deployment test --port=89 --target-port=80 --name test89
service/test89 exposed
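
The eighteen expose commands above could also be written as a loop; a minimal sketch, assuming kubectl points at the same cluster and namespaces:

for ns in ns1 ns2; do
  for port in $(seq 81 89); do
    # same Services as above: testNN on port NN, all targeting the nginx Pods on port 80
    kubectl -n "${ns}" expose deployment test --port="${port}" --target-port=80 --name="test${port}"
  done
done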

This doesn't increase memory usage much either.

$ k -n ns1 top pod --containers | head -5                                 
POD                     NAME          CPU(cores)   MEMORY(bytes)   
test-74c897698f-2ctvb   istio-proxy   2m           47Mi            
test-74c897698f-2ctvb   nginx         0m           2Mi             
test-74c897698f-2fvbs   istio-proxy   1m           40Mi            
test-74c897698f-2fvbs   nginx         0m           2Mi             
$ k -n ns2 top pod --containers | head -5                                 
POD                     NAME          CPU(cores)   MEMORY(bytes)   
test-7955cf7657-27xkv   istio-proxy   2m           41Mi            
test-7955cf7657-27xkv   nginx         0m           2Mi             
test-7955cf7657-29jhg   istio-proxy   2m           40Mi            
test-7955cf7657-29jhg   nginx         0m           2Mi    

The number of endpoints, on the other hand, has grown considerably.

$ istioctl proxy-config cluster test-74c897698f-2ctvb.ns1 | wc -l
      37
$ istioctl proxy-config listener test-74c897698f-2ctvb.ns1 | wc -l
     263
$ istioctl proxy-config route test-74c897698f-2ctvb.ns1 | wc -l
      30
$ istioctl proxy-config endpoint test-74c897698f-2ctvb.ns1 | wc -l
    1916

Check the number of objects at this point.

$ k get no -A --no-headers | wc -l
      20
$ k get po -A --no-headers | wc -l
     244
$ k get svc -A --no-headers | wc -l
      24

Summary

  • From 25 MiB at 1 node, 7 Pods, 4 Services, memory only rose to around 47 MiB even at 20 nodes, 244 Pods, 24 Services
  • Adding Pods alone does not increase it; a Service is required
  • Creating a Service adds endpoints (one for each Pod behind that Service)
  • For a headless Service, endpoints do not increase; listeners do
  • Adding nodes alone does not increase it
  • If there is a DaemonSet backed by a Service, memory usage can be expected to grow as nodes are added

In the end it depends on the number of objects, such as Pods and Services, deployed in the cluster and on the complexity of the routing; it is hard to state a general relationship with the number of Pods, nodes, or Services alone.

To be continued.

Additional note

In large clusters Envoy's memory usage can balloon, and it is said to be important to use the Sidecar resource to narrow the range of traffic each proxy has to know about.
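
As a rough illustration of that advice, a namespace-wide Sidecar resource that limits egress configuration to the workload's own namespace plus istio-system might look like the following. This is only a sketch: the target namespace ns1 is just an example, and the hosts list would have to match the services the workloads actually call.

cat << EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: ns1
spec:
  egress:
    - hosts:
        - "./*"            # services in the same namespace only
        - "istio-system/*" # plus istiod and other control-plane services
EOF

With something like this in place, proxies in ns1 only receive configuration for those hosts, so their listener/cluster/endpoint counts (and hence Envoy memory) stop tracking the whole mesh.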

Fetching Ubuntu packages for offline installation

Notes on fetching Ubuntu packages in advance so they can be installed offline.

Procedure

Update the package information.

apt-get update

Clear the cache first, so that /var/cache/apt/archives will contain only the packages downloaded in the next step.

apt clean

Download the required packages.

root@ip-172-31-40-206:~# apt --download-only --yes install apache2
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  apache2-bin apache2-data apache2-utils bzip2 libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap liblua5.3-0 mailcap
  mime-support ssl-cert
Suggested packages:
  apache2-doc apache2-suexec-pristine | apache2-suexec-custom www-browser bzip2-doc
The following NEW packages will be installed:
  apache2 apache2-bin apache2-data apache2-utils bzip2 libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap liblua5.3-0
  mailcap mime-support ssl-cert
0 upgraded, 13 newly installed, 0 to remove and 28 not upgraded.
Need to get 2139 kB of archives.
After this operation, 8521 kB of additional disk space will be used.
Get:1 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 libapr1 amd64 1.7.0-8ubuntu0.22.04.1 [108 kB]
Get:2 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 libaprutil1 amd64 1.6.1-5ubuntu4.22.04.2 [92.8 kB]
Get:3 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 libaprutil1-dbd-sqlite3 amd64 1.6.1-5ubuntu4.22.04.2 [11.3 kB]
Get:4 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 libaprutil1-ldap amd64 1.6.1-5ubuntu4.22.04.2 [9170 B]
Get:5 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-0 amd64 5.3.6-1build1 [140 kB]
Get:6 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 apache2-bin amd64 2.4.52-1ubuntu4.9 [1347 kB]
Get:7 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 apache2-data all 2.4.52-1ubuntu4.9 [165 kB]
Get:8 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 apache2-utils amd64 2.4.52-1ubuntu4.9 [88.7 kB]
Get:9 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/main amd64 mailcap all 3.70+nmu1ubuntu1 [23.8 kB]
Get:10 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/main amd64 mime-support all 3.66 [3696 B]
Get:11 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 apache2 amd64 2.4.52-1ubuntu4.9 [97.9 kB]
Get:12 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/main amd64 bzip2 amd64 1.0.8-5build1 [34.8 kB]
Get:13 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/main amd64 ssl-cert all 1.1.2 [17.4 kB]
Fetched 2139 kB in 0s (25.6 MB/s)
Download complete and in download only mode
root@ip-172-31-40-206:~#

The downloaded .deb packages are here:

root@ip-172-31-40-206:~# ls -l /var/cache/apt/archives/*.deb
-rw-r--r-- 1 root root 1346534 Apr 11 16:19 /var/cache/apt/archives/apache2-bin_2.4.52-1ubuntu4.9_amd64.deb
-rw-r--r-- 1 root root  164870 Apr 11 16:19 /var/cache/apt/archives/apache2-data_2.4.52-1ubuntu4.9_all.deb
-rw-r--r-- 1 root root   88746 Apr 11 16:19 /var/cache/apt/archives/apache2-utils_2.4.52-1ubuntu4.9_amd64.deb
-rw-r--r-- 1 root root   97878 Apr 11 16:19 /var/cache/apt/archives/apache2_2.4.52-1ubuntu4.9_amd64.deb
-rw-r--r-- 1 root root   34822 Mar 24  2022 /var/cache/apt/archives/bzip2_1.0.8-5build1_amd64.deb
-rw-r--r-- 1 root root  108002 Feb 27  2023 /var/cache/apt/archives/libapr1_1.7.0-8ubuntu0.22.04.1_amd64.deb
-rw-r--r-- 1 root root   11344 Sep  4  2023 /var/cache/apt/archives/libaprutil1-dbd-sqlite3_1.6.1-5ubuntu4.22.04.2_amd64.deb
-rw-r--r-- 1 root root    9170 Sep  4  2023 /var/cache/apt/archives/libaprutil1-ldap_1.6.1-5ubuntu4.22.04.2_amd64.deb
-rw-r--r-- 1 root root   92758 Sep  4  2023 /var/cache/apt/archives/libaprutil1_1.6.1-5ubuntu4.22.04.2_amd64.deb
-rw-r--r-- 1 root root  140026 Mar 25  2022 /var/cache/apt/archives/liblua5.3-0_5.3.6-1build1_amd64.deb
-rw-r--r-- 1 root root   23828 Dec 10  2021 /var/cache/apt/archives/mailcap_3.70+nmu1ubuntu1_all.deb
-rw-r--r-- 1 root root    3696 Nov 20  2020 /var/cache/apt/archives/mime-support_3.66_all.deb
-rw-r--r-- 1 root root   17364 Jan 26  2022 /var/cache/apt/archives/ssl-cert_1.1.2_all.deb
root@ip-172-31-40-206:~#

Take these along in one bundle and, in the offline environment, just run:

apt -y install ./*.deb
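
For example, one way to bundle everything up for the offline host (a sketch; the paths and the transfer step are assumptions, use whatever copy mechanism is available):

# on the online host: collect the downloaded .deb files into one archive
mkdir -p debs
cp /var/cache/apt/archives/*.deb debs/
tar czf debs.tar.gz debs

# on the offline host, after copying debs.tar.gz over:
tar xzf debs.tar.gz
apt -y install ./debs/*.deb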

If you need the URLs the packages are fetched from, check the package names in the output below,

The following NEW packages will be installed:
  apache2 apache2-bin apache2-data apache2-utils bzip2 libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap liblua5.3-0
  mailcap mime-support ssl-cert

and look each one up with the apt-cache show command.

root@ip-172-31-40-206:~# apt-cache show apache2 | grep Filename
Filename: pool/main/a/apache2/apache2_2.4.52-1ubuntu4.9_amd64.deb
Filename: pool/main/a/apache2/apache2_2.4.52-1ubuntu4_amd64.deb

Join this with http://archive.ubuntu.com/ubuntu/ to get

http://archive.ubuntu.com/ubuntu/pool/main/a/apache2/apache2_2.4.52-1ubuntu4.9_amd64.deb

which is the URL the package can be downloaded from.
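
A small loop can do this join for every package at once (a sketch; it assumes the first Filename line printed by apt-cache show is the version that was actually downloaded):

BASE=http://archive.ubuntu.com/ubuntu
for p in apache2 apache2-bin apache2-data apache2-utils bzip2; do
  # take the first Filename: entry and prepend the mirror base URL
  path=$(apt-cache show "${p}" | grep -m1 '^Filename:' | awk '{print $2}')
  echo "${BASE}/${path}"
done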

References