WIP: Allow kubemark to conditionally disrupt kubelet runtime #73399
Conversation
Kubemark is a very useful tool for exercising solutions that require running many nodes on a small set of physical nodes, e.g. for development and testing of the cluster autoscaler, where some use cases might require tens or hundreds of nodes to be scaled up and down. Using kubemark saves computational resources. As part of the integration of the cluster-api project into the cluster-autoscaler project, the autoscaler uses the ProviderID field to index nodes through an informer. Having Kubemark set the ProviderID field as well makes it possible to autoscale a cluster made of hollow nodes.
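For illustration, a minimal sketch (not the PR's actual code) of what setting ProviderID on a hollow node could look like with client-go types; the `kubemark://` scheme and the helper name are assumptions:

```go
package kubemark

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hollowNode builds a Node whose Spec.ProviderID is populated so that the
// autoscaler's ProviderID-keyed informer index can resolve it.
func hollowNode(name string) *v1.Node {
	return &v1.Node{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.NodeSpec{
			// The "kubemark://" scheme is an assumption for illustration;
			// vanilla hollow nodes leave this field empty.
			ProviderID: "kubemark://" + name,
		},
	}
}
```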
The main purpose of Kubemark is to allow registering more virtual nodes than there are physical ones, presenting all the virtual nodes as real ones by running kubelet with a fake container runtime. However, real nodes can misbehave in many ways (e.g. going into Unready status, reporting disk pressure) that Kubemark currently cannot simulate. Providing a way to configure a hollow node to, for example, go Unready would help cover more testing scenarios. In the case of cluster-api, nodes are backed by machine objects. One of the goals of the cluster-api project is to improve the self-healing ability of a cluster; allowing a node to conditionally go unready makes it possible to test the self-healing machine case, which is not easy to reproduce in a cluster with only real nodes. This PR gives kubelet the ability to inject an external runtime health checker. In the case of Kubemark, this means injecting a runtime disruptor that lets the Kubelet runtime conditionally report an error, causing the node to report unreadiness.
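A hedged sketch of what an injectable runtime health check could look like, assuming kubelet's runtime-state loop consults a supplied function; the names `RuntimeHealthChecker` and `NewOneShotDisruptor` are hypothetical, not the PR's API:

```go
package kubemark

import (
	"errors"
	"time"
)

// RuntimeHealthChecker is consulted by the runtime-state loop; a non-nil
// error marks the container runtime (and hence the node) as not ready.
type RuntimeHealthChecker func() error

// NewOneShotDisruptor reports a healthy runtime for healthyDuration after
// start, then an error forever (the --turn-unhealthy-after behavior).
func NewOneShotDisruptor(start time.Time, healthyDuration time.Duration) RuntimeHealthChecker {
	return func() error {
		if time.Since(start) > healthyDuration {
			return errors.New("injected failure: container runtime is down")
		}
		return nil
	}
}
```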
@ingvagabund: Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected; please follow our release note process to remove it. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: ingvagabund. If they are not already assigned, you can assign the PR to them by writing /assign @ingvagabund in a comment. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
Force-pushed from 02d24d2 to 59f2028.
@ingvagabund: The following tests failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@ingvagabund: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The main purpose of Kubemark is to allow registering more virtual nodes than there are physical ones, presenting all the virtual nodes as real ones by running kubelet with a fake container runtime.
However, real nodes can misbehave in many ways (e.g. going into Unready status, reporting disk pressure) that Kubemark currently cannot simulate. Providing a way to configure a hollow node to, for example, go Unready would help cover more testing scenarios.
In the case of cluster-api, nodes are backed by machine objects. One of the goals of the cluster-api project is to improve the self-healing ability of a cluster; allowing a node to conditionally go unready makes it possible
to test the self-healing machine case, which is not easy to reproduce in a cluster with only real nodes.
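For concreteness, this is what "going Unready" means at the API level, shown with the standard core/v1 types (background, not PR code): the node's Ready condition flips to False in its status.

```go
package kubemark

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// unreadyCondition builds the Ready=False condition a node reports when
// its container runtime is considered down.
func unreadyCondition(now metav1.Time) v1.NodeCondition {
	return v1.NodeCondition{
		Type:               v1.NodeReady,
		Status:             v1.ConditionFalse,
		Reason:             "KubeletNotReady",
		Message:            "container runtime is down",
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
	}
}
```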
Depends on:
What type of PR is this?
/kind feature
What this PR does / why we need it:
This PR gives kubemark the ability to:
- configure kubelet to force the node status to report unreadiness (either indefinitely or periodically)
- set a node taint to NoSchedule
- change the node status update frequency
Examples:
- Start reporting the Node status as Unready 40s after kubelet startup:
$ ./kubemark ... --turn-unhealthy-after=true --healthy-duration=40s
- After 40s in Ready status, periodically switch the node status to Unready for 5s and back to Ready:
$ ./kubemark ... --turn-unhealthy-periodically=true --unhealthy-duration=5s --healthy-duration=40s
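A minimal sketch of the periodic toggle the second example implies, assuming a simple repeating healthy/unhealthy window; the function name is hypothetical:

```go
package kubemark

import "time"

// isUnhealthy reports whether the hollow node should currently claim
// unreadiness: each cycle is healthyDuration of Ready followed by
// unhealthyDuration of Unready, repeating from start.
func isUnhealthy(now, start time.Time, healthyDuration, unhealthyDuration time.Duration) bool {
	elapsed := now.Sub(start) % (healthyDuration + unhealthyDuration)
	return elapsed >= healthyDuration
}
```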
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?: