This repository was archived by the owner on Sep 7, 2022. It is now read-only.
After powering off an ESXi host, the disk cannot be mounted to a new node. #116
Closed
Description
The Shanghai team reported this issue while testing with MongoDB.
Each node runs multiple pods, and each pod has its own persistent volume.
After one of the ESXi hosts is powered off, the affected node is restarted on another host.
However, the disks cannot be mounted to the new node.
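For reference, the per-pod volumes are dynamically provisioned through the vSphere cloud provider. A minimal sketch of that setup, assuming the standard `kubernetes.io/vsphere-volume` provisioner (the class name, claim size, and exact API versions here are illustrative, not taken from the cluster under test):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-storage               # illustrative class name
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin                    # thin-provisioned VMDK on the datastore
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: vsphere-storage-mongoc01      # one of the volume names from the kubelet error below
spec:
  storageClassName: vsphere-storage
  accessModes:
    - ReadWriteOnce                   # a VMDK can be attached to only one node at a time
  resources:
    requests:
      storage: 10Gi                   # size is illustrative
```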
Error from vCenter:
File system specific implementation of LookupAndOpen[file] failed
Failed to add disk 'scsi1:3'.
Cannot open the disk '/vmfs/volumes/vsan:5231ae302a9fb63a-fe0fadf567dd24cb/562ba158-d8a2-046f-6ba7-0050569ce285/k8s-test-cluster-1-dynamic-pvc-19e1e6d7-07cd-11e7-a07b-005056b281c6.vmdk' or one of the snapshot disks it depends on.
From Kubernetes:
Failed to attach volume "pvc-1a79af4e-07cd-11e7-a07b-005056b281c6" on node "node6" with: Failed to add disk 'scsi1:0'.
Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "default"/"mongodb-shard-node01-472414307-qvl77". list of unattached/unmounted volumes=[vsphere-storage-mongoc01 vsphere-storage-db-rsp01 vsphere-storage-db-rss04 vsphere-storage-db-rsa0
Divyen tried to reproduce this error with a single WordPress pod, and everything looked good:
the same error showed up for a while but disappeared once the pod was rescheduled.
As a next step, we may want to try the same YAML files to see whether mounting succeeds when multiple disks need to be unmounted and remounted; a sketch of such a test pod follows.
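A pod that mounts several vSphere-backed claims would force multiple detach/attach operations at once during a host failover. A minimal sketch, assuming the pod and claim names are hypothetical and the claims were provisioned from the StorageClass above:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: multi-disk-repro              # hypothetical pod name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]   # keep the pod alive with the disks attached
      volumeMounts:
        - name: disk-1
          mountPath: /data1
        - name: disk-2
          mountPath: /data2
        - name: disk-3
          mountPath: /data3
  volumes:
    - name: disk-1
      persistentVolumeClaim:
        claimName: repro-pvc-1        # three separate vSphere-backed claims,
    - name: disk-2                    # so failover must detach/attach several
      persistentVolumeClaim:          # VMDKs for the same pod
        claimName: repro-pvc-2
    - name: disk-3
      persistentVolumeClaim:
        claimName: repro-pvc-3
```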