Description
How to reproduce the problem:
1. Set up a new demo cluster with kubeadm 1.13.1.
2. Create the default configuration with `kubeadm config print init-defaults`.
3. Initialize the cluster as usual with `kubeadm init`.
4. Change the `--etcd-servers` list in the kube-apiserver manifest to `--etcd-servers=https://127.0.0.2:2379,https://127.0.0.1:2379`, so that the first etcd endpoint is unavailable ("connection refused"). A command sketch follows this list.
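A minimal sketch of these steps, assuming the default kubeadm paths (`/etc/kubernetes/manifests/kube-apiserver.yaml` is the standard static pod manifest location; the `sed` rewrite of the flag is illustrative, editing the file by hand works just as well):

```bash
# Generate the default kubeadm configuration and initialize the cluster.
kubeadm config print init-defaults > kubeadm-config.yaml
kubeadm init --config kubeadm-config.yaml

# Prepend an etcd endpoint that refuses connections; the kubelet
# restarts the static pod automatically once the manifest changes.
sed -i 's|--etcd-servers=https://127.0.0.1:2379|--etcd-servers=https://127.0.0.2:2379,https://127.0.0.1:2379|' \
  /etc/kubernetes/manifests/kube-apiserver.yaml
```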
The kube-apiserver is then unable to connect to etcd at all and does not start, even though the second endpoint (https://127.0.0.1:2379) is still reachable. Last message:

```
Unable to create storage backend: config (&{ /registry [https://127.0.0.2:2379 https://127.0.0.1:2379] /etc/kubernetes/pki/apiserver-etcd-client.key /etc/kubernetes/pki/apiserver-etcd-client.crt /etc/kubernetes/pki/etcd/ca.crt true 0xc000381dd0 <nil> 5m0s 1m0s}), err (dial tcp 127.0.0.2:2379: connect: connection refused)
```
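Since the kube-apiserver runs as a static pod and fails before the API is reachable, its log has to be read directly on the node. A sketch, assuming the Docker container runtime with the default json-file logging driver (which is where the JSON-wrapped log line above came from):

```bash
# Find the exited kube-apiserver container and dump its log.
docker ps -a | grep kube-apiserver
docker logs <container-id>

# The same output is also in Docker's raw JSON log files.
cat /var/lib/docker/containers/<container-id>/<container-id>-json.log
```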
If I upgrade etcd to version 3.3.10, the kube-apiserver instead reports the error `remote error: tls: bad certificate", ServerName ""`.
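One way to narrow down whether the client certificate itself is at fault is to query etcd with the same certificate and key the apiserver uses. A sketch, assuming the standard kubeadm PKI paths (the same ones shown in the storage-backend config above):

```bash
# Check etcd health using the apiserver's etcd client certificate.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/apiserver-etcd-client.crt \
  --key=/etc/kubernetes/pki/apiserver-etcd-client.key \
  endpoint health
```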
Environment:
- Kubernetes version 1.13.1
- kubeadm running in a Vagrant box
I also experience this bug in an environment with a real etcd cluster.
/kind bug