kube-controller-manager crash looping with "nil pointer dereference" in volume/cinder/cinder.go #49418
@saad-ali
CC @kubernetes/sig-storage-bugs
Suspected cause: PR #39732 introduces code that populates the attach/detach controller's "actual state of the world" cache with a volume that has a nil spec. This means the error is not due to any particular volume plugin (cinder, azure, gce pd, etc.), but can potentially happen for any of the attached volumes.

Details: Let's follow the stack trace:
This indicates that
This indicates that the actual_state_of_the_world must have been populated with a volume with an empty spec. Let's see if
This code was added in PR #39732, merged on April 20, 2017, and is part of the v1.7.0 release. There is a comment in that PR apparently calling out this exact issue: #39732 (comment). So I suspect that the assumptions that code makes are not always true (TODO: figure out which cases those are), resulting in the volume spec being nil and causing the controller crash.

Suggested temporary workaround: The suspect code, which populates the ASW with a volume with a nil spec, does not have a feature gate, so it cannot be disabled. However, the code that is crashing because of the nil spec does have a flag gate. To prevent the crash looping you can set the
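To make the failure mode concrete, here is a minimal, self-contained Go sketch of the crash, using simplified, hypothetical types (this is not the actual cinder plugin or `volume.Spec` code): a `CanSupport`-style check that dereferences its spec argument panics when handed a nil spec.

```go
package main

import "fmt"

// Simplified, hypothetical stand-ins for volume.Spec and a volume plugin;
// not the real Kubernetes types.
type CinderSource struct{ VolumeID string }

type Spec struct {
    Cinder *CinderSource
}

type cinderPlugin struct{}

// CanSupport mirrors the shape of a typical plugin check: it dereferences
// spec without guarding against nil.
func (p *cinderPlugin) CanSupport(spec *Spec) bool {
    return spec.Cinder != nil // panics if spec == nil
}

func main() {
    defer func() {
        if r := recover(); r != nil {
            // Prints: runtime error: invalid memory address or nil pointer dereference
            fmt.Println("recovered:", r)
        }
    }()

    p := &cinderPlugin{}
    var nilSpec *Spec // a volume cached in the ASW whose spec was never filled in
    p.CanSupport(nilSpec)
}
```

The sketch recovers from the panic only to print it; in the controller there is no recovery at that point, so the process exits and crash loops.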
I think what is happening is: we added this check - https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/util/operationexecutor/operation_executor.go#L571 - thinking it would prevent processing of volumes with a nil spec, but that check isn't enough, because if there is another volume on the node which has a proper spec we break out of that loop and verify the volumes one by one.

So to fix this, I am thinking we can introduce an additional check here - https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/util/operationexecutor/operation_generator.go#L119 - and skip the volume if it has a nil spec (see the sketch below).

Also, I think relying on the reconciler to fill in all those specs is racy: it depends on whether syncStates runs after reconcile completes, and since a bunch of the reconciler's work runs in its own goroutine we can make no such guarantee. cc @tsmetana
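A rough, self-contained sketch of the kind of guard being proposed, assuming simplified, hypothetical types and names (the real change would live in operation_generator.go): skip and log any attached volume whose spec is nil instead of handing it to plugin lookups that would dereference it.

```go
package main

import "log"

// Hypothetical, simplified stand-ins for the controller's types; the real
// fields live under pkg/volume/util/operationexecutor.
type Spec struct{ Name string }

type AttachedVolume struct {
    VolumeName string
    NodeName   string
    VolumeSpec *Spec // may be nil once the pod is gone and the spec could not be reconstructed
}

// verifyVolumesAreAttached sketches the proposed guard: log and skip
// nil-spec volumes rather than letting a plugin CanSupport call panic.
func verifyVolumesAreAttached(volumes []AttachedVolume) {
    for _, v := range volumes {
        if v.VolumeSpec == nil {
            log.Printf("skipping volume %q on node %q: nil spec", v.VolumeName, v.NodeName)
            continue
        }
        // ...FindPluginBySpec / per-volume verification would go here...
    }
}

func main() {
    verifyVolumesAreAttached([]AttachedVolume{
        {VolumeName: "vol-ok", NodeName: "node-1", VolumeSpec: &Spec{Name: "vol-ok"}},
        {VolumeName: "vol-orphan", NodeName: "node-1", VolumeSpec: nil},
    })
}
```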
@gnufied Ya, we just reached the same conclusion. That said, the extra nil check will mask the real issue, i.e. why was there a nil spec to begin with (but we should still do it).
Exactly. @msau42 has a solid theory for what might be happening:
So basically @msau42's theory is that a detach triggered by the populator introduced in #39732 races with the
To expand on the first part of my comment above, regarding PR #49420: it's fine to add this check. But I question whether the caller logic even makes sense. Why is

```go
for _, volumeAttached := range nodeAttachedVolumes {
    ...
    // If node doesn't support Bulk volume polling it is best to poll individually
    nodeError := oe.VerifyVolumesAreAttachedPerNode(nodeAttachedVolumes, node, actualStateOfWorld)
    ...
}
```

This means that
No, it doesn't get called

In an ideal world people wouldn't be using more than one volume type, but even if they do and they have volume types that support bulk volume verification and ones that don't, the code takes care of both.
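For readers following along, here is a rough, self-contained sketch of the control flow being described, with hypothetical names and types (not the actual operation_executor code): the per-volume loop only decides whether per-node verification is needed and stops after triggering it once, so per-node verification runs at most once per node even when volume types are mixed.

```go
package main

import "fmt"

// Hypothetical, simplified types; not the real controller code.
type attachedVolume struct {
    Name         string
    SupportsBulk bool
}

func verifyVolumesAreAttachedPerNode(node string, volumes []attachedVolume) {
    fmt.Printf("per-node verification on %s for %d volume(s)\n", node, len(volumes))
}

func verifyNode(node string, volumes []attachedVolume) {
    for _, v := range volumes {
        if !v.SupportsBulk {
            // If the node doesn't support bulk volume polling it is best
            // to poll individually -- once for the whole node, then stop.
            verifyVolumesAreAttachedPerNode(node, volumes)
            break
        }
    }
}

func main() {
    verifyNode("node-1", []attachedVolume{
        {Name: "cinder-vol", SupportsBulk: false},
        {Name: "gce-pd-vol", SupportsBulk: true},
    })
}
```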
@gnufied Ack, we'll revisit that code later. More from @msau42's debugging:
So repro steps should be:
Huge props to @msau42 for helping debug this! And thanks to @gnufied for jumping in to help patch it on a Friday evening. We think the above is the root cause of this issue.

Workaround for folks hitting this issue:
Permanent fixes:
@saad-ali Sure, I will test my patch above with the repro steps and update here.
Automatic merge from submit-queue (batch tested with PRs 49420, 49296, 49299, 49371, 46514).

Fix controller crash because of nil volume spec

For volumes that don't support bulk volume verification, a nil volume spec can cause a crash of the controller.

Fixes #49418

**Release note**:
```release-note
Fixes #49418 where kube-controller-manager can panic on volume.CanSupport methods and enter a crash loop.
```
Tested my fix against both patched and unpatched kube versions and it seems to be working fine.
I'm afraid the fix for PR #47855 will not fix the race I introduced with the fix for PR #39732: it may remove the hack with splitting of the uniqueName. I actually think the best option really is to account for the possibility that the volume spec might be nil. Once the pod is gone there is no sure way to reconstruct the spec. Detaching the volumes with no pods separately from the other volumes (outside the reconciliation loop) might lead to other kinds of problems (it would have to be another loop since detach may fail, it would have to be synced with the controller's states of the world, etc.).
Just a few things we might consider related to this issue
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

E2E test to make sure controller does not crash because of nil volume spec

Fixes #49521. Tests the fix of the issue referenced in #49418.
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
We have at least one report of the v1.7.1 kube-controller-manager binary on a master machine crash looping.
Stack trace:
What you expected to happen:
Not crash loop.
How to reproduce it (as minimally and precisely as possible):
Uncertain
Anything else we need to know?:
No
Environment:
- Kubernetes version (use `kubectl version`): master 1.7.1, node 1.5.3
- Kernel (e.g. `uname -a`):