### What happened?
kubeadm (v1.28.1) failed to start the kubelet (v1.28.1). The kubelet attempted to connect to "/run/containerd/containerd.sock" instead of /var/run/crio/crio.sock. cri-o is up and healthy, and it is configured by default in CNI bridge mode.
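As a quick check of which CRI endpoint kubeadm handed to the kubelet, the flags file kubeadm writes during init can be inspected (a diagnostic sketch; the expected value below is an assumption based on the --cri-socket flag passed to kubeadm init):

```bash
# kubeadm writes the kubelet's CRI endpoint into this env file during init
cat /var/lib/kubelet/kubeadm-flags.env
# expected to contain:
#   --container-runtime-endpoint=unix:///var/run/crio/crio.sock
```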
crictl version
Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.28.0
RuntimeApiVersion: v1
crictl info
{
"status": {
"conditions": [
{
"type": "RuntimeReady",
"status": true,
"reason": "",
"message": ""
},
{
"type": "NetworkReady",
"status": true,
"reason": "",
"message": ""
}
]
},
"config": {
"sandboxImage": "registry.k8s.io/pause:3.9"
}
}
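For reference, crictl resolves its endpoint from /etc/crictl.yaml; a minimal sketch that points it explicitly at cri-o (socket path assumed from a stock cri-o install):

```bash
# point crictl at cri-o explicitly, then re-check runtime status
cat <<'EOF' | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
EOF
sudo crictl info
```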
K8s cluster init (default bridge-mode pod CIDR):
sudo kubeadm init --apiserver-advertise-address=100.77.34.221 --pod-network-cidr=10.85.0.0/16 --kubernetes-version=1.28.1 --cri-socket=unix:///var/run/crio/crio.sock --v=5
I0918 11:50:57.230465 4364 kubelet.go:196] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
[init] Using Kubernetes version: v1.28.1
[preflight] Running pre-flight checks
I0918 11:50:57.245753 4364 checks.go:563] validating Kubernetes and kubeadm version
I0918 11:50:57.245822 4364 checks.go:168] validating if the firewall is enabled and active
I0918 11:50:57.268689 4364 checks.go:203] validating availability of port 6443
I0918 11:50:57.269120 4364 checks.go:203] validating availability of port 10259
I0918 11:50:57.269221 4364 checks.go:203] validating availability of port 10257
I0918 11:50:57.269521 4364 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0918 11:50:57.269641 4364 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0918 11:50:57.269658 4364 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0918 11:50:57.269671 4364 checks.go:280] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0918 11:50:57.269719 4364 checks.go:430] validating if the connectivity type is via proxy or direct
I0918 11:50:57.269789 4364 checks.go:469] validating http connectivity to first IP address in the CIDR
I0918 11:50:57.269941 4364 checks.go:469] validating http connectivity to first IP address in the CIDR
I0918 11:50:57.269994 4364 checks.go:104] validating the container runtime
I0918 11:50:57.306565 4364 checks.go:639] validating whether swap is enabled or not
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
I0918 11:50:57.306827 4364 checks.go:370] validating the presence of executable crictl
I0918 11:50:57.306864 4364 checks.go:370] validating the presence of executable conntrack
I0918 11:50:57.306921 4364 checks.go:370] validating the presence of executable ip
I0918 11:50:57.307098 4364 checks.go:370] validating the presence of executable iptables
I0918 11:50:57.307164 4364 checks.go:370] validating the presence of executable mount
I0918 11:50:57.307226 4364 checks.go:370] validating the presence of executable nsenter
I0918 11:50:57.307250 4364 checks.go:370] validating the presence of executable ebtables
I0918 11:50:57.307368 4364 checks.go:370] validating the presence of executable ethtool
I0918 11:50:57.307388 4364 checks.go:370] validating the presence of executable socat
I0918 11:50:57.307411 4364 checks.go:370] validating the presence of executable tc
I0918 11:50:57.307430 4364 checks.go:370] validating the presence of executable touch
I0918 11:50:57.307488 4364 checks.go:516] running all checks
I0918 11:50:57.319586 4364 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
I0918 11:50:57.319923 4364 checks.go:605] validating kubelet version
I0918 11:50:57.412855 4364 checks.go:130] validating if the "kubelet" service is enabled and active
I0918 11:50:57.436830 4364 checks.go:203] validating availability of port 10250
I0918 11:50:57.436987 4364 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0918 11:50:57.437152 4364 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0918 11:50:57.437224 4364 checks.go:203] validating availability of port 2379
I0918 11:50:57.437352 4364 checks.go:203] validating availability of port 2380
I0918 11:50:57.437445 4364 checks.go:243] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0918 11:50:57.437770 4364 checks.go:828] using image pull policy: IfNotPresent
I0918 11:50:57.478374 4364 checks.go:846] image exists: registry.k8s.io/kube-apiserver:v1.28.1
I0918 11:50:57.517793 4364 checks.go:846] image exists: registry.k8s.io/kube-controller-manager:v1.28.1
I0918 11:50:57.555967 4364 checks.go:846] image exists: registry.k8s.io/kube-scheduler:v1.28.1
I0918 11:50:57.593534 4364 checks.go:846] image exists: registry.k8s.io/kube-proxy:v1.28.1
I0918 11:50:57.668573 4364 checks.go:846] image exists: registry.k8s.io/pause:3.9
I0918 11:50:57.707658 4364 checks.go:846] image exists: registry.k8s.io/etcd:3.5.9-0
I0918 11:50:57.746599 4364 checks.go:846] image exists: registry.k8s.io/coredns/coredns:v1.10.1
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0918 11:50:57.746724 4364 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0918 11:50:58.221967 4364 certs.go:519] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local osmk8-amir.snphxprshared1.gbucdsint02phx.oraclevcn.com] and IPs [10.96.0.1 100.77.34.221]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0918 11:50:58.653225 4364 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0918 11:50:59.191502 4364 certs.go:519] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0918 11:50:59.340763 4364 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0918 11:50:59.537792 4364 certs.go:519] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost osmk8-amir.snphxprshared1.gbucdsint02phx.oraclevcn.com] and IPs [100.77.34.221 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost osmk8-amir.snphxprshared1.gbucdsint02phx.oraclevcn.com] and IPs [100.77.34.221 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0918 11:51:01.348498 4364 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0918 11:51:02.327700 4364 kubeconfig.go:103] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0918 11:51:02.757849 4364 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0918 11:51:03.374172 4364 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0918 11:51:03.728184 4364 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0918 11:51:04.225179 4364 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0918 11:51:04.225276 4364 manifests.go:102] [control-plane] getting StaticPodSpecs
I0918 11:51:04.225731 4364 certs.go:519] validating certificate period for CA certificate
I0918 11:51:04.225859 4364 manifests.go:128] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0918 11:51:04.225902 4364 manifests.go:128] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I0918 11:51:04.225914 4364 manifests.go:128] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0918 11:51:04.226888 4364 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0918 11:51:04.226960 4364 manifests.go:102] [control-plane] getting StaticPodSpecs
I0918 11:51:04.227192 4364 manifests.go:128] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0918 11:51:04.227257 4364 manifests.go:128] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I0918 11:51:04.227269 4364 manifests.go:128] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0918 11:51:04.227279 4364 manifests.go:128] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0918 11:51:04.227358 4364 manifests.go:128] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0918 11:51:04.228142 4364 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0918 11:51:04.228219 4364 manifests.go:102] [control-plane] getting StaticPodSpecs
I0918 11:51:04.228510 4364 manifests.go:128] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0918 11:51:04.229104 4364 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I0918 11:51:04.229174 4364 kubelet.go:67] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
I0918 11:51:04.613890 4364 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
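A quick way to pull the endpoint the kubelet is actually dialing out of its journal (diagnostic sketch):

```bash
# show which CRI socket the kubelet tried to reach
journalctl -u kubelet --no-pager | grep -E 'containerd\.sock|crio\.sock'
```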
kubelet log:
journalctl -xeu kubelet
-- The start-up result is done.
Sep 18 11:51:25 osmk8-amir.snphxprshared1.gbucdsint02phx.oraclevcn.com kubelet[4528]: I0918 11:51:25.786690 4528 server.go:467] "Kubelet version" kubeletVersion="v1.28.1"
Sep 18 11:51:25 osmk8-amir.snphxprshared1.gbucdsint02phx.oraclevcn.com kubelet[4528]: I0918 11:51:25.786856 4528 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 18 11:51:25 osmk8-amir.snphxprshared1.gbucdsint02phx.oraclevcn.com kubelet[4528]: I0918 11:51:25.787114 4528 server.go:630] "Standalone mode, no API client"
Sep 18 11:51:25 osmk8-amir.snphxprshared1.gbucdsint02phx.oraclevcn.com kubelet[4528]: W0918 11:51:25.787783 4528 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to con>
Sep 18 11:51:25 osmk8-amir.snphxprshared1.gbucdsint02phx.oraclevcn.com kubelet[4528]: "Addr": "/run/containerd/containerd.sock",
Sep 18 11:51:25 osmk8-amir.snphxprshared1.gbucdsint02phx.oraclevcn.com kubelet[4528]: "ServerName": "/run/containerd/containerd.sock",
Sep 18 11:51:25 osmk8-amir.snphxprshared1.gbucdsint02phx.oraclevcn.com kubelet[4528]: "Attributes": null,
Sep 18 11:51:25 osmk8-amir.snphxprshared1.gbucdsint02phx.oraclevcn.com kubelet[4528]: "BalancerAttributes": null,
Sep 18 11:51:25 osmk8-amir.snphxprshared1.gbucdsint02phx.oraclevcn.com kubelet[4528]: "Type": 0,
Sep 18 11:51:25 osmk8-amir.snphxprshared1.gbucdsint02phx.oraclevcn.com kubelet[4528]: "Metadata": null
Sep 18 11:51:25 osmk8-amir.snphxprshared1.gbucdsint02phx.oraclevcn.com kubelet[4528]: }. Err: connection error: desc = "transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no su>
Sep 18 11:51:25 osmk8-amir.snphxprshared1.gbucdsint02phx.oraclevcn.com kubelet[4528]: E0918 11:51:25.788636 4528 run.go:74] "command failed" err="failed to run Kubelet: validate service connection: validate>
Sep 18 11:51:25 osmk8-amir.snphxprshared1.gbucdsint02phx.oraclevcn.com systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 18 11:51:25 osmk8-amir.snphxprshared1.gbucdsint02phx.oraclevcn.com systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://support.oracle.com
-- The unit kubelet.service has entered the 'failed' state with result 'exit-code'.
### What did you expect to happen?
The kubelet was expected to use the cri-o socket unix:///var/run/crio/crio.sock.
### How can we reproduce it (as minimally and precisely as possible)?
sudo kubeadm init --apiserver-advertise-address=100.77.34.221 --pod-network-cidr=10.85.0.0/16 --kubernetes-version=1.28.1 --cri-socket=unix:///var/run/crio/crio.sock --v=5
Running the command produces the same log output as shown under "What happened?" above.
### Anything else we need to know?
kubeadm generates the kubelet configuration as shown below. Unexpectedly, it is generated to use /run/containerd/containerd.sock instead of unix:///var/run/crio/crio.sock (a workaround sketch follows the output):
sudo kubeadm config print init-defaults --component-configs KubeletConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: ""
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
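Since the printed defaults leave containerRuntimeEndpoint empty and default criSocket to containerd, a possible workaround sketch (untested here; the file name and the duplicated socket setting are assumptions) is to pin the cri-o socket explicitly through a kubeadm config file instead of the --cri-socket flag:

```bash
# sketch: pin the cri-o socket for both kubeadm and the kubelet, then init
cat <<'EOF' > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 100.77.34.221
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.1
networking:
  podSubnet: 10.85.0.0/16
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
EOF
sudo kubeadm init --config kubeadm-config.yaml --v=5
```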
### Kubernetes version
<details>
$ kubectl version
Client Version: v1.28.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
The connection to the server localhost:8080 was refused - did you specify the right host or port?
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.1", GitCommit:"8dc49c4b984b897d423aab4971090e1879eb4f23", GitTreeState:"clean", BuildDate:"2023-08-24T11:21:51Z", GoVersion:"go1.20.7", Compiler:"gc", Platform:"linux/amd64"}
</details>
### Cloud provider
<details>
CentOS-based Linux VM
</details>
### OS version
<details>
NAME="Oracle Linux Server"
VERSION="8.8"
ID="ol"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="8.8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Oracle Linux Server 8.8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:8:8:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://github.com/oracle/oracle-linux"
ORACLE_BUGZILLA_PRODUCT="Oracle Linux 8"
ORACLE_BUGZILLA_PRODUCT_VERSION=8.8
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=8.8
</details>
### Install tools
<details>
sudo kubeadm init --apiserver-advertise-address=$ip_addr --pod-network-cidr=10.85.0.0/16 --kubernetes-version=1.28.1 --cri-socket=unix:///var/run/crio/crio.sock --v=5 --dry-run
</details>
### Container runtime (CRI) and version (if applicable)
<details>
cri-o
> crictl version
Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.28.0
RuntimeApiVersion: v1
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
Default bridge mode: --pod-network-cidr=10.85.0.0/16
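For completeness, cri-o ships a default bridge CNI configuration whose subnet should match the --pod-network-cidr above; it can be inspected with (path assumed from a stock cri-o install):

```bash
# list CNI configs and show the cri-o bridge one (file name may vary)
ls /etc/cni/net.d/
sudo cat /etc/cni/net.d/*bridge*
```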
</details>