reset resultRun on pod restart #46371
Conversation
@sjenning - added release note text, poke when you have a test.
Force-pushed from 2087aa0 to e87ed0a
Force-pushed from e87ed0a to 2c866a7
@derekwaynecarr test is ready
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: derekwaynecarr, sjenning
Needs approval from an approver in each of these OWNERS files:
You can indicate your approval by writing /approve in a comment.
@k8s-bot pull-kubernetes-e2e-gce-etcd3 test this
2 similar comments
@k8s-bot pull-kubernetes-e2e-gce-etcd3 test this
@k8s-bot pull-kubernetes-e2e-gce-etcd3 test this
@k8s-bot pull-kubernetes-unit test this
flake for unit test failure #46446
@sjenning Have been running into this in 1.4 and 1.5 clusters also. Looks like this issue has been around for a long time. Thanks for the fix. Waiting for this to be released in 1.6.5.
@k8s-bot pull-kubernetes-e2e-gce-etcd3 test this
@k8s-bot pull-kubernetes-unit test this
Automatic merge from submit-queue
@sjenning Can we cherry-pick this to 1.6, please?
@ravilr I don't have the authority. The cherrypick-candidate label has already been applied. I assume that label puts this on the radar of the person who has the authority to pick it to 1.6.
@enisoc @yujuhong @derekwaynecarr what needs to be done here to get this cherry-picked to the 1.6 release?
I'm the one who approves 1.6 cherrypicks, but I don't actually monitor the cherrypick-candidate label; a cherry-pick PR against the release-1.6 branch still needs to be opened.
@enisoc good to know. I'll open a PR then.
@yujuhong great, thanks!
Hi all. I think something may have gone wrong with the cherry-pick conversation here. @yujuhong referred to #48099, but that has nothing to do with this issue as far as I can tell; it relates to cherry-picking of #46246. So 1.6.11 still doesn't have the required fix: https://github.com/kubernetes/kubernetes/blob/v1.6.11/pkg/kubelet/prober/worker.go#L226 but 1.7.8 does have it: https://github.com/kubernetes/kubernetes/blob/v1.7.8/pkg/kubelet/prober/worker.go#L227 Have I missed something here? Do we need a new PR that covers this cherry-pick?
@joelittlejohn Thanks for pointing this out! It does look like a miscommunication of some sort. @yujuhong can you confirm? In the meantime, I went ahead and created a cherrypick of this: #53544.
@enisoc oops... I probably replied to the wrong issue. Go ahead and cherrypick this.
This is now merged into the 1.6 branch, for real. Thanks again for pointing it out, @joelittlejohn!
Np, thanks all!
xref https://bugzilla.redhat.com/show_bug.cgi?id=1455056
There is currently an issue where, if a pod is restarted because liveness probe failures exceeded failureThreshold, the failure count (resultRun) is not reset on the probe worker. After the restart, a single liveness probe failure restarts the pod again, so failureThreshold is not honored for the restarted container.
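For context, here is a minimal, self-contained sketch of the idea behind the fix. It models the prober worker's state with the resultRun and onHold fields (those names come from pkg/kubelet/prober/worker.go, linked above); the probeWorker type and the observeFailure/onNewContainer helpers are hypothetical simplifications for illustration, not the actual diff:

```go
package main

import "fmt"

// probeWorker is a trimmed-down model of the kubelet prober worker's state.
// Field names mirror pkg/kubelet/prober/worker.go; everything else is a sketch.
type probeWorker struct {
	resultRun        int  // consecutive probes with the same result
	onHold           bool // stop probing until a new container ID is seen
	failureThreshold int
}

// observeFailure records one failed liveness probe and reports whether the
// container should be restarted.
func (w *probeWorker) observeFailure() (restart bool) {
	if w.onHold {
		return false
	}
	w.resultRun++
	if w.resultRun < w.failureThreshold {
		return false // below threshold; leave probe state unchanged
	}
	// The container will be restarted; hold probing until a new container ID.
	w.onHold = true
	// The fix: reset the run counter so the restarted container again gets
	// failureThreshold consecutive failures before being restarted.
	w.resultRun = 0
	return true
}

// onNewContainer resumes probing once the restarted container is observed.
func (w *probeWorker) onNewContainer() { w.onHold = false }

func main() {
	w := &probeWorker{failureThreshold: 3}
	for i := 1; i <= 3; i++ {
		fmt.Println("restart:", w.observeFailure()) // false, false, true
	}
	w.onNewContainer()
	// Without the reset, this single failure would restart the pod again.
	fmt.Println("restart:", w.observeFailure()) // false
}
```

Without the `w.resultRun = 0` line, the counter stays at the threshold across restarts, which is exactly the bug described above.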
Before this PR:
After this PR:
Restarts now happen at even intervals.
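As a worked illustration (hypothetical numbers, not taken from the PR): with periodSeconds: 10 and failureThreshold: 3 on a container that always fails its liveness probe, the first restart takes 3 probes (~30s) either way. Before this PR, the stale counter already sat at the threshold, so each subsequent restart was triggered by a single failed probe (~10s apart); after this PR, every restart again requires 3 consecutive failures, so restarts occur at even ~30s intervals.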
@derekwaynecarr