Old version pods are created when deploying a new version with more replicas - Reopen old issue? #120157
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

/sig apps

The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

The bot later applied /lifecycle rotten under the same rules, and finally closed the issue:

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/close not-planned
What happened?
This is the same issue as #105395, which I can't reopen.
We hit this issue when we wanted to change the version of the deployed software while also scaling out the deployment. We have some data-store configuration and database schema update scripts implemented as init containers. In our case we needed to do a non-backward-compatible configuration upgrade, so the old version of the service could not work with the new config, and the new version of the service could not work with the old config.
The following happened:
What did you expect to happen?
The old ReplicaSet should not be scaled out, but should stay at its previous size. The current behaviour tries to saturate the new replica count, even with old pods, while doing the rolling upgrade.
Because the old ReplicaSet is tied to the old intent of the Deployment, there is no guarantee that it can support scaling out to the size described in the new Deployment spec.
How can we reproduce it (as minimally and precisely as possible)?
Taken from #105395
#1. Provision a simple nginx deployment with 2 replicas using the below manifest.
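The manifest itself was not preserved in this copy of the issue. A minimal sketch of such a Deployment, with the object name and image tag chosen purely for illustration:

```yaml
# Hypothetical reconstruction - the original manifest is not preserved here.
# The name "nginx-deployment" and the tag "nginx:1.14.2" are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2   # assumed initial image tag
        ports:
        - containerPort: 80
```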
The deployment works as expected.
#2. Update the deployment by changing just the image version and also increasing the replicas to 3, as shown below.
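The updated manifest was also not preserved; a sketch with only the two changed fields, using assumed names and tags (the original values are unknown):

```yaml
# Hypothetical reconstruction - only the replica count and image tag differ
# from the initial deployment; names and tags are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3               # increased from 2
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1   # assumed new image tag
        ports:
        - containerPort: 80
```

Per the report, watching the ReplicaSets during this rollout (for example with `kubectl get rs -w`) shows the old ReplicaSet being scaled up toward the new replica count before the new ReplicaSet is fully available.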
Anything else we need to know?
No response
Kubernetes version
Cloud provider
OS version
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)