
WIP: Allow kubemark to conditionally disrupt kubelet runtime #73399

Closed

Conversation

ingvagabund
Contributor

The main purpose of Kubemark is to allow registering more virtual nodes than there are physical ones, while presenting all the virtual nodes as real ones by running the kubelet with a fake container runtime.

However, real nodes can behave in many ways (e.g. going into Unready status, reporting disk pressure) that Kubemark cannot currently simulate. Providing a way to configure a hollow node to, for example, go Unready would help cover more testing scenarios.

In the case of cluster-api, nodes are backed by machine objects. One of the goals of the cluster-api project is to improve the self-healing ability of a cluster. Allowing a node to conditionally go unready makes it possible to test the self-healing machine case, which is not easy to reproduce in a cluster with only real nodes.

Depends on:

What type of PR is this?

Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespaces from that line:

/kind feature

What this PR does / why we need it:

This PR gives kubemark the ability to:

  • configure the kubelet to force the node status to report unreadiness (either indefinitely or periodically)
  • set a NoSchedule taint on the node
  • change the node status update frequency

Examples:

  • Start reporting the node status as Unready 40s after kubelet startup:
    $ ./kubemark ... --turn-unhealthy-after=true --healthy-duration=40s
  • Periodically, after 40s in Ready status, switch the node status to Unready for 5s and then back to Ready:
    $ ./kubemark ... --turn-unhealthy-periodically=true --unhealthy-duration=5s --healthy-duration=40s
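
For illustration, here is a minimal Go sketch of what such a periodic disruptor could look like. This is not the PR's actual code; the disruptor type and all names are hypothetical. After each healthy interval it flips the reported health off for the unhealthy interval, and a fake container runtime consulting Healthy() would then report an error, driving the node Unready.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// disruptor periodically toggles the health state a fake container
// runtime would report: healthy for healthyDuration, then unhealthy
// for unhealthyDuration, and so on.
type disruptor struct {
	mu                sync.Mutex
	healthy           bool
	healthyDuration   time.Duration
	unhealthyDuration time.Duration
}

func newPeriodicDisruptor(healthy, unhealthy time.Duration) *disruptor {
	d := &disruptor{healthy: true, healthyDuration: healthy, unhealthyDuration: unhealthy}
	go d.run()
	return d
}

func (d *disruptor) run() {
	for {
		time.Sleep(d.healthyDuration)
		d.set(false)
		time.Sleep(d.unhealthyDuration)
		d.set(true)
	}
}

func (d *disruptor) set(v bool) {
	d.mu.Lock()
	defer d.mu.Unlock()
	d.healthy = v
}

// Healthy is what the fake runtime would consult when reporting its
// status; false would surface as a runtime error, and the kubelet
// would in turn mark the node NotReady.
func (d *disruptor) Healthy() bool {
	d.mu.Lock()
	defer d.mu.Unlock()
	return d.healthy
}

func main() {
	// Short intervals so the toggling is visible in a quick demo run.
	d := newPeriodicDisruptor(4*time.Second, 1*time.Second)
	for i := 0; i < 10; i++ {
		fmt.Printf("t=%2ds healthy=%v\n", i, d.Healthy())
		time.Sleep(time.Second)
	}
}
```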

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?:


Kubemark is a very useful tool for exercising solutions that require
running many nodes on a small set of physical nodes, e.g. for development
and testing of the cluster autoscaler, where some use cases might require
tens or hundreds of nodes to be scaled up and down. Using kubemark saves
computational resources.

As part of the integration of the cluster-api project into the
cluster-autoscaler project, the autoscaler uses the ProviderID field
to index nodes through an informer. Having Kubemark set the ProviderID
field as well makes it possible to autoscale a cluster made of hollow nodes.
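
For context, the ProviderID the autoscaler indexes on lives in the Node spec. A minimal sketch using the client-go API types follows; the node name and the kubemark:// scheme here are illustrative assumptions, not taken from the PR.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A hollow node whose ProviderID is populated so the cluster
	// autoscaler's node informer can index it. The "kubemark://"
	// scheme and the node name are illustrative only.
	node := corev1.Node{
		ObjectMeta: metav1.ObjectMeta{Name: "hollow-node-1"},
		Spec: corev1.NodeSpec{
			ProviderID: "kubemark://hollow-node-1",
		},
	}
	fmt.Println(node.Name, "->", node.Spec.ProviderID)
}
```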

This PR gives the kubelet the ability to inject an external runtime
health checker. In the case of Kubemark, this allows injecting a runtime
disruptor that makes the kubelet runtime conditionally report an error,
causing the node to report unreadiness.
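
Sketching how such an injection point might look (hypothetical interface and names, not the PR's actual change): the fake runtime delegates readiness to a pluggable checker, and any error the checker returns would bubble up into the node's Ready condition.

```go
package main

import (
	"errors"
	"fmt"
)

// RuntimeHealthChecker is a hypothetical hook a hollow kubelet could
// inject into its fake container runtime.
type RuntimeHealthChecker func() error

// fakeRuntime consults the injected checker when asked for its status;
// an error here is what would ultimately make the node report Unready.
type fakeRuntime struct {
	check RuntimeHealthChecker
}

func (r *fakeRuntime) Status() error {
	if r.check == nil {
		return nil // no disruptor injected: always healthy
	}
	if err := r.check(); err != nil {
		return fmt.Errorf("runtime unhealthy: %w", err)
	}
	return nil
}

func main() {
	disrupted := false
	rt := &fakeRuntime{check: func() error {
		if disrupted {
			return errors.New("injected disruption")
		}
		return nil
	}}
	fmt.Println("status:", rt.Status()) // <nil>: node stays Ready
	disrupted = true
	fmt.Println("status:", rt.Status()) // error: node goes Unready
}
```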
@k8s-ci-robot
Contributor

@ingvagabund: Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected, please follow our release note process to remove it.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. kind/feature Categorizes issue or PR as related to a new feature. do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Jan 28, 2019
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: ingvagabund
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approvers: dchen1107, gmarek

If they are not already assigned, you can assign the PR to them by writing /assign @dchen1107 @gmarek in a comment when ready.

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added area/kubelet sig/node Categorizes an issue or PR as relevant to SIG Node. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jan 28, 2019
@ingvagabund ingvagabund force-pushed the kubelet-runtime-disruptor branch from 02d24d2 to 59f2028 Compare January 28, 2019 12:26
@k8s-ci-robot
Contributor

k8s-ci-robot commented Jan 28, 2019

@ingvagabund: The following tests failed, say /retest to rerun them all:

Test name                                Commit   Rerun command
pull-kubernetes-bazel-test               59f2028  /test pull-kubernetes-bazel-test
pull-kubernetes-node-e2e                 59f2028  /test pull-kubernetes-node-e2e
pull-kubernetes-local-e2e-containerized  59f2028  /test pull-kubernetes-local-e2e-containerized
pull-kubernetes-verify                   59f2028  /test pull-kubernetes-verify
pull-kubernetes-e2e-gce                  59f2028  /test pull-kubernetes-e2e-gce

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-ci-robot
Contributor

@ingvagabund: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Feb 5, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 6, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 5, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closed this PR.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
