
Delete the federation namespace from fcp instead of individual objects #51768

Merged
merged 1 commit on Sep 15, 2017

Conversation

@shashidharatd commented Sep 1, 2017

What this PR does / why we need it:
This PR simplifies cleanup by deleting the entire namespace instead of individual objects.
This PR is linked to #50543. It may not solve that issue outright, but it tries an alternative approach.

Release note:

Delete the federation namespace from control plane instead of individual objects

/assign @madhusudancs

@k8s-ci-robot added the "cncf-cla: yes" label (Indicates the PR's author has signed the CNCF CLA.) Sep 1, 2017
@k8s-github-robot added the "release-note-none" label (Denotes a PR that doesn't merit a release note.) Sep 1, 2017
# performing a cascading deletion of all the resources in the
# namespace. So we sleep a little longer than other resources
# before retrying
sleep 5
@irfanurrehman (Contributor)

Can we rather wait with a polled query on the ns, with a bigger timeout that fails if the ns could not be deleted in that time?
Also, I am curious why this was not done earlier (probably some specific reason, @madhusudancs).

@madhusudancs (Contributor)

> Can we rather wait with a polled query on the ns, with a bigger timeout that fails if the ns could not be deleted in that time?

@irfanurrehman What Shashi has now is actually the preferred method in Kubernetes.

> Also, I am curious why this was not done earlier (probably some specific reason, @madhusudancs).

It's mostly for historic reasons. We wanted to be super careful about what we were deleting in the federation-cluster (now federation-system) namespace in the pre-kubefed days. It just carried over to the kubefed days.

@irfanurrehman (Contributor) commented Sep 6, 2017
@madhusudancs, thanks for the clarifications. Seems OK to me.
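For context, the polled-query alternative floated in this thread could be sketched roughly as below. This is an illustrative bash sketch, not code from the PR; `wait_deleted` and its arguments are invented names.

```shell
# Poll-with-timeout wait for a resource to disappear. The probe command
# (everything after the timeout argument) should succeed while the
# resource still exists and fail once it is gone.
wait_deleted() {
  local timeout_s="$1"; shift
  local waited=0
  while "$@" >/dev/null 2>&1; do
    if (( waited >= timeout_s )); then
      echo "timed out waiting for deletion" >&2
      return 1
    fi
    sleep 1
    waited=$(( waited + 1 ))
  done
  return 0
}

# Against a real cluster this might look like:
#   kubectl delete namespace federation-system --ignore-not-found
#   wait_deleted 120 kubectl get namespace federation-system
```

The trade-off versus the retry-delete loop the PR uses is that a poll puts an explicit upper bound on how long teardown can hang, at the cost of a hard failure when the namespace is slow to terminate.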

@irfanurrehman (Contributor)

/assign @madhusudancs

sleep 2
kube::log::status "Removing namespace \"${FEDERATION_NAMESPACE}\" from \"${FEDERATION_KUBE_CONTEXT}\""
# Try deleting until the namespace is completely gone.
while $host_kubectl --context="${FEDERATION_KUBE_CONTEXT}" delete namespace "${FEDERATION_NAMESPACE}" >/dev/null 2>&1; do
@madhusudancs (Contributor)
Do we really need to do this separately at all? Shouldn't the for loop below at line 91 take care of this?

@shashidharatd (Author)

@madhusudancs, the for loop below does not include the federation control plane context; it only iterates over the federated clusters. Some resources may be left over if we only delete the federation api-server and controller-manager pods but do not delete the objects in the fcp.

@madhusudancs (Contributor)

I see. OK, fair enough.
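To make the ordering being discussed concrete, the overall cleanup could be sketched as below. This is a hedged bash sketch under assumed names; `FCP_CONTEXT`, `CLUSTER_CONTEXTS`, and `delete_ns` are placeholders, not the script's real variables.

```shell
# Delete the federation namespace from the federation control plane (fcp)
# context first, since the per-cluster loop that follows does not cover it,
# then from every federated cluster.
NAMESPACE="federation-system"                 # placeholder namespace
FCP_CONTEXT="federation"                      # placeholder fcp context
CLUSTER_CONTEXTS=("cluster-1" "cluster-2")    # placeholder cluster contexts

delete_ns() {
  local context="$1"
  # Retry until the namespace is completely gone; namespace deletion
  # cascades to every resource inside it, so it can take a while.
  while kubectl --context="${context}" delete namespace "${NAMESPACE}" >/dev/null 2>&1; do
    sleep 5
  done
}

delete_ns "${FCP_CONTEXT}"
for ctx in "${CLUSTER_CONTEXTS[@]}"; do
  delete_ns "${ctx}"
done
```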


@madhusudancs (Contributor) left a comment

/lgtm
/approve


@k8s-ci-robot added the "lgtm" label ("Looks good to me", indicates that a PR is ready to be merged.) Sep 6, 2017
@k8s-github-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: madhusudancs, shashidharatd

Associated issue: 50543

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these OWNERS Files:

You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment

@k8s-github-robot added the "approved" label (Indicates a PR has been approved by an approver from all required OWNERS files.) Sep 6, 2017
@madhusudancs (Contributor)

@shashidharatd, can you also please cherry-pick this to 1.8?

@fejta-bot

/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to @fejta).

Review the full test history for this PR.

3 similar comments

@madhusudancs added this to the v1.8 milestone Sep 13, 2017
@madhusudancs (Contributor)

@kubernetes/release-reviewers this is a bug fix. Is there anything else we need to do to get this merged?

cc @csbell

@shashidharatd (Author)

/test pull-kubernetes-unit

@fejta-bot

/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to @fejta).

Review the full test history for this PR.

2 similar comments

@k8s-ci-robot (Contributor) commented Sep 15, 2017

@shashidharatd: The following test failed, say /retest to rerun them all:

Test name: pull-kubernetes-e2e-gce-bazel
Commit: 320548f
Rerun command: /test pull-kubernetes-e2e-gce-bazel

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-github-robot

Automatic merge from submit-queue

@k8s-github-robot merged commit 5a4d1dd into kubernetes:master Sep 15, 2017
@wojtek-t removed this from the v1.8 milestone Oct 2, 2017
@wojtek-t added this to the v1.7 milestone Oct 2, 2017
@k8s-ci-robot added the "release-note" and "size/S" labels and removed the "release-note-none" label Oct 3, 2017
@wojtek-t added the "cherry-pick-approved" label (Indicates a cherry-pick PR into a release branch has been approved by the release branch manager.) Oct 3, 2017
@wojtek-t (Member) commented Oct 3, 2017

Cherrypick in #53372

k8s-github-robot pushed a commit that referenced this pull request Oct 3, 2017
…68-upstream-release-1.7

Automatic merge from submit-queue.

Automated cherry pick of #51768 upstream release 1.7 

Cherry pick of #51768 on release-1.7.

#51768: Delete the federation namespace from fcp instead of individual objects
@k8s-cherrypick-bot

Commit found in the "release-1.7" branch appears to be this PR. Removing the "cherrypick-candidate" label. If this is an error find help to get your PR picked.

Labels
approved - Indicates a PR has been approved by an approver from all required OWNERS files.
cherry-pick-approved - Indicates a cherry-pick PR into a release branch has been approved by the release branch manager.
cncf-cla: yes - Indicates the PR's author has signed the CNCF CLA.
lgtm - "Looks good to me", indicates that a PR is ready to be merged.
release-note - Denotes a PR that will be considered when it comes time to generate release notes.
size/S - Denotes a PR that changes 10-29 lines, ignoring generated files.
Projects
None yet
Development

Successfully merging this pull request may close these issues.

8 participants