places vsphere volumes as per zones #74654
Conversation
Hi @subramanian-neelakantan. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
/retest
/lgtm
/assign @tpepper
This is for the next 1.13 patch release
/assign @SandeepPissay @divyenpatel @deads2k @derekwaynecarr
PTAL
cc @cheftako
/approve
In the future please use the cherry-pick automation. As I read the description and saw the size/XL label, this came across as a feature. Then I looked at the resolved issue, and it is marked as a feature. I need some convincing that this is actually a cherry-pick candidate.
Will do. Thanks for letting me know the right process.
This cherry-pick contains two related PRs, #72731 and #72687, each of which fixes a slightly different issue. The vSphere cloud provider already implements the Zones interface: with zones support, nodes get zone labels from the tags in the vCenter inventory, and users can provision pods to zones. However, this did not work well because volumes were not placed zone-aware. PR #72687 fixes issue #73301 (zone labels need to be auto-applied to a volume so that the pod scheduler can choose the right node for a pod using that volume), and PR #72731 fixes the placement of the volume in a datastore of the right zone. These issues caused incorrect and surprising volume behaviour in vSphere environments that use zones when the datastores were not shared across all Kubernetes nodes. I apologize for not referencing the right issues in the description; I have done that now. Kindly take another look and let me know if you have questions.
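For context, the placement fix keys off the allowedTopologies field of a StorageClass. A minimal sketch of such a class for the in-tree vSphere provisioner; the class name, zone value, and diskformat parameter here are illustrative, not taken from this PR:

```yaml
# Sketch of a zone-restricted StorageClass; names and zone values
# are hypothetical.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-zone-a
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - zone-a        # zone tag applied to hosts in the vCenter inventory
```

With #72731, a PVC using such a class should have its VMDK created on a datastore reachable from the nodes in zone-a rather than on an arbitrary datastore.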
@tpepper PTAL
@deads2k @derekwaynecarr can you take a look at this PR for a cherry-pick into 1.13? This needs your approval before it can be considered for a merge.
Echoing the sentiment on using the automation. The admission changes look fine.
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: derekwaynecarr, frapposelli, SandeepPissay, subramanian-neelakantan
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.
/test pull-kubernetes-integration
Cleaned up the release notes in the description.
What type of PR is this?
/kind bug
What this PR does / why we need it:
This is a cherry-pick of PRs #72731 and #72687 into release-1.13. Once this PR is merged, vSphere volumes are placed in the zone requested via the allowedTopologies property of a StorageClass, and the newly created volume is labelled with its zone information (a sketch of the resulting labels follows). See the base PRs for more details.
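To illustrate the labelling half of the change (#72687): a dynamically provisioned PV would carry the beta failure-domain zone and region labels, roughly along these lines. All names, sizes, and zone/region values below are hypothetical:

```yaml
# Sketch of a provisioned PV after zone labelling; values are illustrative.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-example
  labels:
    failure-domain.beta.kubernetes.io/zone: zone-a
    failure-domain.beta.kubernetes.io/region: region-1
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: vsphere-zone-a
  vsphereVolume:
    volumePath: "[zoned-datastore] kubevols/pvc-example.vmdk"
    fsType: ext4
```

These labels let the scheduler keep pods that use the volume on nodes in the matching zone.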
Which issue(s) this PR fixes:
Fixes #73301
Refers to issue #67703
Special notes for your reviewer:
All existing vSphere e2e tests were run against this PR and pass.
PV creation Logs:
Does this PR introduce a user-facing change?:
/sig vmware
/sig storage
/sig api-machinery
/cc @vladimirvivien @frapposelli