Add cleanup policy to RollingUpdater #6996
Conversation
Thanks for your pull request. It looks like this may be your first contribution to a Google open source project, in which case you'll need to sign a Contributor License Agreement (CLA). 📝 Please visit https://cla.developers.google.com/ to sign. Once you've signed, please reply here (e.g.
Release the bees! @googlebot 🐝🐝🐝🐝🐝
@ironcladlou Is this a step towards unification with OpenShift's DeploymentConfig? Have you followed the discussion on #1743 at all? I'll definitely take a look. I totally agree the parameters shouldn't come directly from flags. Ideally, I'd like the parameter block to be suitable for inclusion in an API object (Deployment).
Perhaps indirectly. We need to support a rolling deployment strategy in the short term, and it would be nice if OpenShift could reuse the upstream code. This PR addresses 1. To really reuse the code, we'd also need to address 2. If there's no interest in the modifications required to address 2, accepting this PR may not make sense in the context of the broader goal.
There will inevitably be such an API in OpenShift to represent many of the fields on the
I think the work in the short term leads naturally into prequel work for Deployment - this will force our two models closer together and expose any gaps we can highlight in the working code.
@bgrant0607 Given the above comments, how do you feel about this change (and the followup I'd make to support deploying when there's no "old" RC)?
Sorry. Will take a look in about 30 minutes.
```go
const (
	// DeleteRollingUpdateCleanupPolicy means delete the old controller.
	DeleteRollingUpdateCleanupPolicy RollingUpdaterCleanupPolicy = "DeleteRollingUpdateCleanupPolicy"
)
```
Should just be "Delete".
There are a lot of files in this package. Any concerns about having sort of ambiguous constants like "Delete" and "Preserve" given the scope?
I was just referring to the string values, not the variable names.
This is the only thing I was waiting for, but I'm fine with this going in as is, so long as you don't mind this getting changed later.
Ah! Changed.
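Per the review, the change was to shorten only the string values while keeping the fully qualified identifiers, so the constants stay unambiguous across the package. A minimal sketch of what that looks like (the `Preserve` policy and the exact post-change values are assumptions drawn from this discussion, not a quote of the merged code):

```go
package main

import "fmt"

// RollingUpdaterCleanupPolicy controls what happens to the old
// replication controller after a rolling update completes.
type RollingUpdaterCleanupPolicy string

const (
	// DeleteRollingUpdateCleanupPolicy means delete the old controller.
	// The identifier stays fully qualified; only the string value is short.
	DeleteRollingUpdateCleanupPolicy RollingUpdaterCleanupPolicy = "Delete"
	// PreserveRollingUpdateCleanupPolicy means leave the old controller
	// in place after the update (assumed counterpart policy).
	PreserveRollingUpdateCleanupPolicy RollingUpdaterCleanupPolicy = "Preserve"
)

func main() {
	fmt.Println(DeleteRollingUpdateCleanupPolicy)   // Delete
	fmt.Println(PreserveRollingUpdateCleanupPolicy) // Preserve
}
```

Keeping the long Go identifiers addresses the scoping concern raised above, while the short serialized values ("Delete", "Preserve") are what would eventually appear in an API object.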
This change seems ok. FYI, future changes coming down the road:
In what scenario do you imagine there would be no old RC? Why wouldn't create rc be used?
This is the event sequence in OpenShift:
The separation of concerns we've established means the rolling updater's create functionality is irrelevant in this context; our strategies don't create deployments. During the very first deployment process, there is no old deployment, only the new one created by OpenShift and handed to the strategy to activate. I'm not saying one way or the other is preferable at this point, just trying to establish context for the use case to illustrate the conceptual gaps which are limiting RollingUpdater's reuse potential. If changing RollingUpdater to accommodate our needs in this way would be special-casing for a mechanism that would never make sense upstream given the plans for Kube deployments, maybe it makes more sense for us to DIY a rolling update strategy in the very short term.
If someone deleted the old RC during a rolling update, if the user had to quit out and try to come back in, they may have no previous context, but would want to continue the scale up. I guess with the RC gone you'd probably lose the old label selector, so you couldn't scale anything old down, but you could finish the scale up.

When we have deployment as a first class kube object, the first deployment would have no previous version, but because of races you can't be sure of that. So the code that does the roll-out (based on whatever strategy inputs are provided) would at least need to be aware of the fact that the old RC might not exist anymore. I'd prefer to call into one code path (roll from X to Y) and have X not exist either be a short circuit or at least handled.
Makes sense. I'd like to see support for 0 or more "old" RCs.
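The "roll from X to Y where X may not exist" shape described above can be sketched as a single code path with a nil short circuit. This is purely illustrative; the `ReplicationController` stand-in and `rollout` helper are hypothetical, not the PR's actual code:

```go
package main

import "fmt"

// ReplicationController is a minimal stand-in for the real API type.
type ReplicationController struct {
	Name     string
	Replicas int
}

// rollout scales newRC up to the desired count and, when an old
// controller exists, scales it down in step. A nil oldRC (first
// deployment, or the old RC was deleted mid-update) short-circuits
// straight to the scale-up, so callers always use one code path.
func rollout(oldRC, newRC *ReplicationController, desired int) {
	for newRC.Replicas < desired {
		newRC.Replicas++
		if oldRC != nil && oldRC.Replicas > 0 {
			oldRC.Replicas--
		}
	}
}

func main() {
	// First deployment: no old RC exists, scale-up still completes.
	newRC := &ReplicationController{Name: "app-v2"}
	rollout(nil, newRC, 3)
	fmt.Println(newRC.Replicas) // 3
}
```

Handling the nil case inside the single `rollout` path, rather than forcing callers to branch, matches the preference expressed above for "roll from X to Y" with a missing X being a short circuit.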
LGTM |
* Support configurable cleanup policies in RollingUpdater. Downstream library consumers don't necessarily have the same rules for post deployment cleanup; making the behavior policy driven is more flexible.
* Refactor RollingUpdater to accept a config object during Update instead of a long argument list.
* Add test coverage for cleanup policy.
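The second bullet, replacing Update's long argument list with a config object, might look roughly like the sketch below. The struct and field names here are assumptions for illustration and may not match the merged code exactly:

```go
package main

import (
	"fmt"
	"time"
)

// ReplicationController is a minimal stand-in for the real API type.
type ReplicationController struct {
	Name string
}

// RollingUpdaterCleanupPolicy names the post-update cleanup behavior.
type RollingUpdaterCleanupPolicy string

// RollingUpdaterConfig bundles the parameters formerly passed to
// Update as individual arguments, so new knobs (like CleanupPolicy)
// can be added without breaking every caller.
type RollingUpdaterConfig struct {
	OldRc         *ReplicationController // controller being replaced (may be nil)
	NewRc         *ReplicationController // controller being rolled out
	UpdatePeriod  time.Duration          // wait between scaling steps
	Interval      time.Duration          // poll interval while waiting
	Timeout       time.Duration          // give up after this long
	CleanupPolicy RollingUpdaterCleanupPolicy
}

func main() {
	cfg := RollingUpdaterConfig{
		NewRc:         &ReplicationController{Name: "app-v2"},
		UpdatePeriod:  3 * time.Second,
		Interval:      time.Second,
		Timeout:       5 * time.Minute,
		CleanupPolicy: "Delete",
	}
	fmt.Println(cfg.NewRc.Name, cfg.CleanupPolicy)
}
```

A struct also lets a future Deployment API object embed or map onto the same parameter block, which is the direction discussed earlier in the thread.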
Force-pushed from 8510f67 to 093b7c2
Restarted Travis.
Add cleanup policy to RollingUpdater