First part of improved rolling update, allow dynamic next replication controller generation. #7268
Conversation
@deads2k @smarterclayton for good measure.

Hey @brendandburns can you recommend a reviewer for me to assign this to?
pkg/kubectl/cmd/rollingupdate.go
Outdated
Is this TODO still relevant?
Seems like the answer is "no"
Generally lgtm. Should run the e2e kubectl test to make sure existing behavior still works. Also need a test for the generation path; unit would be fine, though e2e is probably easiest (see test/e2e/kubectl.go).
Comments addressed (and PR expanded somewhat, both to fix a bug and to add a …). If this looks good enough to merge, I'll add some testing.
pkg/kubectl/cmd/rollingupdate.go
Outdated
Can you pull the static validation into a separate method?
done.
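For context, a minimal sketch of what such an extracted validation method might look like; the function name and message below are hypothetical, not the actual rollingupdate.go code:

```go
package cmd

import "fmt"

// Hypothetical sketch: the client-side ("static") argument checks pulled
// out of the command's Run function into their own method, so they can be
// unit-tested without touching the API server.
func validateArguments(filename, oldName string) error {
	if len(filename) == 0 && len(oldName) == 0 {
		return fmt.Errorf("must specify --filename or the name of an existing replication controller")
	}
	return nil
}
```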
Comments addressed, please re-check. Thanks!
pkg/kubectl/cmd/rollingupdate.go
Outdated
static validation.
done.
Just a couple nits. lgtm.

Sorry for the delay. Taking a look now.

cc @ironcladlou

Comments addressed, and initial unit test added. Please take another look. Thanks!
pkg/kubectl/cmd/rollingupdate.go
Outdated
Replicas will just be set back to oldRc.Spec.Replicas, below, so this seems pointless.
done.
pkg/kubectl/cmd/rollingupdate.go
Outdated
Ideally, this would use the label update command, which we'd factor out into a library package, at least for the pod updates. We also need to update the selector on the RC, but there's not currently a generic way to do that (though I suspect we'll need one), so I'm more sympathetic to custom code for that.
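For illustration, a minimal sketch of what a factored-out label-update helper could look like; the client interface and stripped-down pod type are hypothetical stand-ins for the real API types, not the actual library code:

```go
package labelutil

import "fmt"

// Pod is a stripped-down stand-in for the real API pod type (assumption,
// for illustration only).
type Pod struct {
	Name   string
	Labels map[string]string
}

// PodClient is a hypothetical minimal client surface for listing and
// updating pods.
type PodClient interface {
	ListPods(selector map[string]string) ([]Pod, error)
	UpdatePod(pod Pod) error
}

// AddLabelToPods adds key=value to every pod matching selector. This is
// the piece that could be shared between the label command and the
// rolling updater.
func AddLabelToPods(c PodClient, selector map[string]string, key, value string) error {
	pods, err := c.ListPods(selector)
	if err != nil {
		return fmt.Errorf("listing pods: %v", err)
	}
	for _, pod := range pods {
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels[key] = value
		if err := c.UpdatePod(pod); err != nil {
			return fmt.Errorf("updating pod %s: %v", pod.Name, err)
		}
	}
	return nil
}
```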
TODO'd
Comment addressed. Added additional tests. Please take another look. --brendan
This is racy. Really, the RC needs to be updated first to add the label to its template, then the pods need to be updated, then the selector needs to be changed. Or, delete the RC before updating pods and recreate it after.
What you propose is actually worse, because the minute you update the RC, it orphans the old pods, which causes it to notice that all its pods are missing, freak out, and create N new pods, so you end up with N orphans and N new pods. Even if you delete the orphans, it's going to cause a bunch of restart churn in your system.
Pro-actively labeling, updating the pods, and then deleting any orphans that might sneak in is the best you can do, I think.
No, adding a new label key to the template doesn't orphan pods. That's why the selector update should be a separate step.
ah, I see your point wrt splitting the updates. Will send a PR to do that.
--brendan
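For illustration, a sketch of the split-update ordering agreed on above; the client interface is entirely hypothetical, and the actual change was done in a follow-up PR:

```go
package cmd

import "fmt"

// Client is a hypothetical minimal surface for the three update calls the
// sketch needs; it is not a real kubectl or client API.
type Client interface {
	AddLabelToTemplate(rcName, key, value string) error
	AddLabelToPods(selector map[string]string, key, value string) error
	UpdateSelector(rcName, key, value string) error
}

// splitLabelUpdate applies the non-racy ordering discussed above: first
// add the new label key to the RC's pod template (which does not orphan
// existing pods), then label the pods themselves, and only then tighten
// the RC's selector to include the new key.
func splitLabelUpdate(c Client, rcName string, selector map[string]string, key, value string) error {
	if err := c.AddLabelToTemplate(rcName, key, value); err != nil {
		return fmt.Errorf("updating template: %v", err)
	}
	if err := c.AddLabelToPods(selector, key, value); err != nil {
		return fmt.Errorf("labeling pods: %v", err)
	}
	if err := c.UpdateSelector(rcName, key, value); err != nil {
		return fmt.Errorf("updating selector: %v", err)
	}
	return nil
}
```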
This is more or less ok. Just some implementation details to iron out.
Comments addressed (or at least responded to), ptal. Thanks
LGTM |
Wow, these changes are very interesting; however, things do not work as expected for me: […] Is there anything I didn't grasp?
Fix was merged: #7540 |
@ghodss @jlowdermilk @bgrant0607