hpa: Prevent scaling below MinReplicas if desiredReplicas is zero #48997
Conversation
Hi @johanneswuerbach. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
```go
	desiredReplicas = *hpa.Spec.MinReplicas
} else {
	desiredReplicas = 1
}
```
Not sure whether `fallthrough` would be more in line with the coding style, happy to change if preferred.
How about this:

```go
case desiredReplicas == 0:
	if hpa.Spec.MinReplicas != nil {
		// never scale down to 0, reserved for disabling autoscaling
		setCondition(hpa, autoscalingv2.ScalingLimited, v1.ConditionTrue, "TooFewReplicas", "the desired replica count was zero and less than the minimum replica count")
		desiredReplicas = *hpa.Spec.MinReplicas
	} else {
		// never scale down to 0, reserved for disabling autoscaling
		setCondition(hpa, autoscalingv2.ScalingLimited, v1.ConditionTrue, "TooFewReplicas", "the desired replica count was zero")
		desiredReplicas = 1
	}
```
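The clamping discussed here can be sketched as a standalone function. This is a simplified sketch only: `setCondition` and the HPA API types are omitted, and the function name is illustrative, not the controller's own.

```go
package main

import "fmt"

// clampZeroReplicas sketches the branch above: a desired count of zero
// is reserved for disabling autoscaling, so it is raised to the
// configured minimum, or to 1 when no minimum is set. The real
// controller also records a ScalingLimited condition, omitted here.
func clampZeroReplicas(desired int32, minReplicas *int32) int32 {
	if desired != 0 {
		return desired
	}
	if minReplicas != nil {
		return *minReplicas
	}
	return 1
}

func main() {
	min := int32(3)
	fmt.Println(clampZeroReplicas(0, &min)) // raised to the minimum: 3
	fmt.Println(clampZeroReplicas(0, nil))  // no minimum set: 1
	fmt.Println(clampZeroReplicas(5, &min)) // non-zero passes through: 5
}
```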
Instead of duplicating the condition I now aligned the order of cases to kubernetes/pkg/controller/podautoscaler/horizontal.go (lines 413 to 418 in e8eb858):

```go
} else if hpa.Spec.MinReplicas != nil && currentReplicas < *hpa.Spec.MinReplicas {
	rescaleReason = "Current number of replicas below Spec.MinReplicas"
	desiredReplicas = *hpa.Spec.MinReplicas
} else if currentReplicas == 0 {
	rescaleReason = "Current number of replicas must be greater than 0"
	desiredReplicas = 1
```
+1 for this, thanks @johanneswuerbach
```go
desiredReplicas = 1
// make sure we aren't below our minimum if one is defined
if hpa.Spec.MinReplicas != nil {
	desiredReplicas = *hpa.Spec.MinReplicas
```
How about logging a warning message here instead of setting the `desiredReplicas` silently?
Sorry, I'm not sure I understand. In the case of near-zero load (which causes the desired replicas to be calculated as 0), the HPA should still try to scale down to a possible minimum. This PR only changes that behaviour to scale to the defined minimum instead of the hardcoded 1, which previously caused all but one running pod to be killed and the HPA to then scale back up to the defined minimum during the next iteration with `Current number of replicas below Spec.MinReplicas`, causing unnecessary pod churn and increased load on the cluster.
Sorry, I mean both adding a warning message and setting `desiredReplicas` as your current PR does here. Does that make sense?
Not sure, isn't it desired behaviour that low usage defaults to scaling to the minimum replicas? If I read https://kubernetes.io/docs/api-reference/v1.7/#horizontalpodautoscalerspec-v1-autoscaling correctly, `minReplicas` will default to `1` and can't be unset. Maybe the `desiredReplicas == 0` case could be dropped completely, and we could instead just ensure that `minReplicas` is set. In that case the `hpa.Spec.MinReplicas != nil && desiredReplicas < *hpa.Spec.MinReplicas:` case should also catch the `0` case and handle it correctly.
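This point can be illustrated with a minimal sketch, assuming `minReplicas` is always set (the API defaults it to 1): the below-minimum branch then already covers a desired count of zero, so no separate zero case is needed. The function and parameter names here are hypothetical, not the controller's own.

```go
package main

import "fmt"

// normalizeReplicas sketches the case ordering under discussion: when
// minReplicas is guaranteed non-nil and at least 1, the below-minimum
// branch also catches desiredReplicas == 0 without a dedicated case.
func normalizeReplicas(desired int32, minReplicas *int32, maxReplicas int32) int32 {
	switch {
	case minReplicas != nil && desired < *minReplicas:
		return *minReplicas
	case desired > maxReplicas:
		return maxReplicas
	}
	return desired
}

func main() {
	min := int32(2)
	fmt.Println(normalizeReplicas(0, &min, 10)) // zero caught by the min branch: 2
	fmt.Println(normalizeReplicas(7, &min, 5))  // capped at the maximum: 5
	fmt.Println(normalizeReplicas(3, &min, 10)) // within bounds: 3
}
```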
Force-pushed from 66b347a to a99d988 (compare).
/ok-to-test
Semi-unrelated question: Is this something that would be back-ported to 1.7 if approved, or is there anything I would need to do for that?
@johanneswuerbach I am strongly convinced we need to cherry-pick it to 1.7 and make sure it gets included in the next patch release, as it is a serious regression in the HPA. Let's wait for @DirectXMan12 (I think it's still pretty early in his timezone) to review this and then we can try to get it cherry-picked.
/lgtm
This is supposed to have an associated issue. Please create one and add it to the PR body.
Agreed, please do cherry-pick this to any appropriate versions.
Created and referenced #49028
/approve no-issue
/lgtm
/approve no-issue
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: DirectXMan12, johanneswuerbach, mikedanese, piosz
Associated issue: 49028
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these OWNERS Files:
You can indicate your approval by writing
Automatic merge from submit-queue (batch tested with PRs 48997, 48595, 48898, 48711, 48972)
Commit found in the "release-1.7" branch appears to be this PR. Removing the "cherrypick-candidate" label. If this is an error find help to get your PR picked.
What this PR does / why we need it:
Prevent an HPA scaling below `minReplicas` if `desiredReplicas` is calculated as `0`.

Example events of an HPA continuously scaling between `1` and `MinReplicas`:

Which issue this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged): fixes #49028

Special notes for your reviewer:

Release note: