e2e: share /dev with host in hostpath driver deployment #84501
Conversation
This is needed for raw block volumes. It mirrors a change made in the upstream deployment in kubernetes-csi/csi-driver-host-path#109.

Raw block volumes use loop devices under the hood. "losetup --find --show" uses LOOP_CTL_GET_FREE to get a free loop device. It then expects to have the corresponding /dev/loopX already available. When /dev inside the container is a static tmpfs which doesn't already have those /dev/loop* devices (*), the new device fails to show up, resulting in:

I1028 13:25:19.937846 1 server.go:117] GRPC call: /csi.v1.Controller/CreateVolume
I1028 13:25:19.938083 1 server.go:118] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.hostpath.csi/node":"pmem-csi-pmem-govm-worker3"}}],"requisite":[{"segments":{"topology.hostpath.csi/node":"pmem-csi-pmem-govm-worker3"}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-24985a49-5638-4bf6-b789-bb99a28d1073","volume_capabilities":[{"AccessType":{"Block":{}},"access_mode":{"mode":1}}]}
I1028 13:25:19.961124 1 volume_path_handler_linux.go:41] Creating device for path: /csi-data-dir/635c6569-f986-11e9-baa6-0242ac110004
I1028 13:25:20.391472 1 volume_path_handler_linux.go:75] Failed device create command for path: /csi-data-dir/635c6569-f986-11e9-baa6-0242ac110004 exit status 1
losetup: /csi-data-dir/635c6569-f986-11e9-baa6-0242ac110004: failed to set up loop device: No such file or directory
E1028 13:25:20.392916 1 server.go:121] GRPC error: rpc error: code = Internal desc = failed to create volume 635c6569-f986-11e9-baa6-0242ac110004: failed to attach device /csi-data-dir/635c6569-f986-11e9-baa6-0242ac110004: exit status 1

(*) It seems that the static tmpfs gets populated by Docker based on what's currently on the host when the container starts. That would explain why it worked in the Kubernetes Prow testing - the host must have had enough loop devices already defined.
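The deployment change being discussed is small; a minimal sketch of what sharing the host's /dev via a hostPath volume looks like (container and volume names below are illustrative, not the exact manifest):

```yaml
# Illustrative fragment only; names are examples, not the actual deployment.
# Bind-mounting the host's /dev makes /dev/loop* nodes created at runtime
# visible inside the plugin container, which a static tmpfs /dev would not.
spec:
  containers:
    - name: hostpath-plugin
      securityContext:
        privileged: true        # the driver already requires this
      volumeMounts:
        - name: dev-dir
          mountPath: /dev
  volumes:
    - name: dev-dir
      hostPath:
        path: /dev
        type: Directory
```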
/sig storage
/retest
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: msau42, pohly

The full list of commands accepted by this bot can be found here. The pull request process is described here.
@BenTheElder I think that this could fix one of the e2e tests that we are skipping in kind presubmits
@aojea I'd be a little nervous about this probably being the host /dev shared to each node. We need to be careful about touching this sort of thing.
Benjamin Elder <[email protected]> writes:
@aojea I'd be a little nervous about this probably being the host /dev
shared to each node. We need to be careful about touching this sort of
thing.
The CSI driver is already running with
securityContext:
  privileged: true
So it already has the potential to cause havoc on the host, even without
the host's /dev being directly available.
But I agree, the effect of bugs might be smaller if it weren't needed. I
haven't looked into the code; perhaps the dependency can be
removed. That could be the long-term solution.
@aojea I don't think this change will fix the test you mentioned. This change addresses the hostpath CSI driver, but the test you linked to refers to a local PV.
@pohly we're not running this test on kind. I mostly mean with regard to kind: it's reasonable for Kubernetes tests / components to leverage this, but normally these are run on throwaway clusters on throwaway VMs, as opposed to /dev getting passed all the way down from a persistent prow host. I haven't specifically looked into what the impact of this with kind on prow would be.
hm @BenTheElder we may want to investigate the kubernetes-csi/csi-driver-host-path prow tests then, because those are directly bringing up a kind cluster in the prow job: https://github.com/kubernetes-csi/csi-driver-host-path/blob/master/release-tools/prow.sh
Michelle Au <[email protected]> writes:
hm @BenTheElder we may want to investigate the
kubernetes-csi/csi-driver-host-path prow tests then, because those are
directly bringing up a kind cluster in the prow job:
https://github.com/kubernetes-csi/csi-driver-host-path/blob/master/release-tools/prow.sh
It's not just the hostpath driver itself which relies on /dev/loop from
the host for provisioning a raw block device: kubelet itself also uses
a loop device when making a raw block device available to containers. As
far as I remember, that's to ensure that the device remains available as
long as the container runs.
So even without passing /dev into the hostpath driver container the end
result is the same, there is a risk that testing in a kind cluster
permanently leaks a /dev/loop* device on the host.
Can /dev/loop be namespaced? I tried with "docker run -ti --rm
--privileged" and at least that shares (and leaks) loop devices created
inside the container.
only if you ask for a block device though right? I don't think typical kind workloads are doing this (except k8s e2e creating them...).
I don't think so sadly |