Add disk provisioning customization #12794
Conversation
Force-pushed from 4cdda9d to 883bc7b, then from 883bc7b to 36d500d.
Thank you for the PR, just a few small things, but LGTM otherwise.
Thank you for the review, @hpidcock!
Force-pushed from 36d500d to 0fe24f1.
QA:
No issues with the thin or thickEagerZero values. Using an unknown type returns an error listing the allowed values, which is nice.
When I set juju model-config disk-provisioning-type=thick, the setting is correct, but the disk was provisioned as "Thick Provision Lazy Zeroed" instead of thick.
Something to chat about on Monday: is it possible, and do we want, to add the ability to use the default storage policy for vSphere?
There are two types of "thick" provisioning: lazy zeroed and eager zeroed. Controlling this behavior is a boolean value that decides whether or not to "scrub" unused space. I can rename "thick" to "thickLazyZero" to make it more explicit if you prefer.
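To make the two "thick" variants concrete, here is a minimal sketch of how a provisioning-type string could map onto the two booleans the comment describes (in govmomi's flat-disk backing these are named ThinProvisioned and EagerlyScrub). This is an illustration, not Juju's actual implementation; the function name flagsFor is hypothetical.

```go
package main

import "fmt"

// diskFlags mirrors the two booleans on a vSphere flat-disk backing
// that together select the provisioning mode.
type diskFlags struct {
	ThinProvisioned bool // allocate blocks lazily
	EagerlyScrub    bool // zero out ("scrub") unused space up front
}

// flagsFor maps a disk-provisioning-type value to the backing flags.
// Plain "thick" is lazy-zeroed: not thin, and not scrubbed.
func flagsFor(t string) (diskFlags, error) {
	switch t {
	case "thin":
		return diskFlags{ThinProvisioned: true}, nil
	case "thick":
		return diskFlags{}, nil
	case "thickEagerZero":
		return diskFlags{EagerlyScrub: true}, nil
	default:
		return diskFlags{}, fmt.Errorf(
			"unknown disk provisioning type %q; allowed values: thin, thick, thickEagerZero", t)
	}
}

func main() {
	f, _ := flagsFor("thickEagerZero")
	fmt.Println(f.ThinProvisioned, f.EagerlyScrub) // false true
}
```

The single EagerlyScrub boolean is what distinguishes "Thick Provision Lazy Zeroed" from "Thick Provision Eager Zeroed", which is why the naming question above matters.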
Ah, I was reading too fast and didn't catch Lazy vs Eager. I'm looking to see if there is a norm for what to call it. |
Add the ability to specify whether the disks of the newly created instance should be thin, thick, or thickEagerZero.
* Fixed the default disk provisioning type if the model does not hold a valid option
* Added tests
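The fallback behavior described above can be sketched as follows. This is a hedged illustration of the idea only: the helper name normaliseDiskProvisioningType and the choice of "thick" as the default are assumptions for this sketch, not necessarily what the PR implements.

```go
package main

import "fmt"

// validDiskProvisioningTypes lists the accepted model-config values.
var validDiskProvisioningTypes = []string{"thin", "thick", "thickEagerZero"}

// defaultDiskProvisioningType is assumed here for illustration.
const defaultDiskProvisioningType = "thick"

// normaliseDiskProvisioningType returns the value unchanged when it is
// a valid option, and falls back to the default otherwise, so a model
// holding a stale or invalid setting still provisions successfully.
func normaliseDiskProvisioningType(v string) string {
	for _, t := range validDiskProvisioningTypes {
		if v == t {
			return v
		}
	}
	return defaultDiskProvisioningType
}

func main() {
	fmt.Println(normaliseDiskProvisioningType("thin"))  // valid: passed through
	fmt.Println(normaliseDiskProvisioningType("bogus")) // invalid: default used
}
```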
Force-pushed from 7765384 to 146c1ce.
LGTM with the one naming change.
Thank you. LGTM
Hi @hpidcock, could you take another look and let me know if there are any changes I have missed? Thanks!
#12909 Merge 2.9

- #12827 Unsubscribe from hub when closing state pool
- #12829 Correct default bootstrap-timeout value displayed in help.
- #12840 Constraint tags can be used for pod affinity
- #12842 Fix upgrade series agent version handling
- #12794 Add disk provisioning customization
- #12845 Restore space support for manual machines
- #12839 Support merging of netplan configs
- #12853 Add display type for network-get results
- #12854 Fix for LP1921557 sni in Juju login.
- #12850 Use Base in Charmhub packge and its response structures.
- #12858 Ensure assess-upgrade-series does not report started prematuremly
- #12860 Removed logging from core annotations.
- #12861 Fixes bug where empty error can happen in storage
- #12865 Update Pebble version to include new files API
- #12866 Workaround for k8s dashboard URL with k8s client proxy
- #12862 Fix/lp 1923051
- #12867 Fix/lp 1923561
- #12870 Use channel logic in charm library
- #12873 Add support for setting pod affinity topology key
- #12874 Use Patch instead of Update for ensuring ingress resources
- #12831 Integration fixes
- #12879 Ensure refresh uses version
- #12864 bug: fix for bootstrap fail on vsphere 7 + multi network
- #12883 Initial work to allow juju trust for sidecar charms
- #12884 Fix ssh with sidecar charms and containers.
- #12886 Charmhub bases
- #12881 Use charm pkg updates
- #12889 Ignore projected volume mounts when looking up juju storage
- #12890 Fix passing empty string container name to unit params
- #12893 Add CLA checker GH action and remove codecov push action
- #12897 Use production charmhub endpoint
- #12887 Resource validation error
- #12888 Ensure we validate the model target
- #12898 Remove usage of systems package from CAAS application provisioner
- #12899 CAAS bundle deployments
- #12900 Bump up Pebble version to include user/group in list-files
- #12901 charm Format helper
- #12902 charm Iskubernetes helper
- #12903 Display ... for really long k8s app versions in status
- #12904 Filter out more full registry paths for app version in status
- #12905 Fix k8s bundle deploys with v2 charms
- #12906 Register resource-get for containeragent binary

Conflicts were mostly due to charm v8 vs v9 imports. The other one was due to changes to the dashboard CLI.

```
# Conflicts:
# api/common/charms/common.go
# api/common/charms/common_test.go
# apiserver/facades/client/application/application.go
# apiserver/facades/client/application/charmstore.go
# apiserver/facades/client/application/update_series_mocks_test.go
# apiserver/facades/client/charms/client.go
# apiserver/facades/client/charms/convertions.go
# apiserver/facades/client/machinemanager/types_mock_test.go
# apiserver/facades/controller/caasoperatorprovisioner/provisioner.go
# cmd/juju/application/deployer/bundlehandler_test.go
# cmd/juju/application/refresh_test.go
# cmd/juju/application/refresher/refresher_mock_test.go
# cmd/juju/dashboard/dashboard.go
# core/charm/strategies_mock_test.go
# core/model/model.go
# core/model/model_test.go
# go.mod
# go.sum
# resource/resourceadapters/charmhub.go
# scripts/win-installer/setup.iss
# service/agentconf_test.go
# snap/snapcraft.yaml
# state/charm.go
# state/migration_export.go
# state/state.go
# version/version.go
# worker/caasfirewallerembedded/applicationworker.go
# worker/caasfirewallerembedded/applicationworker_test.go
```

## QA steps

See PRs.
This change allows operators to set a new model-level config option which dictates how template VM disks should be cloned when creating a new machine. Current values are thin, thick, and thickEagerZero.
The main changes are in provider/vsphere/internal/vsphereclient/client.go, regarding DiskProvisioningType.
Checklist
QA steps
Verify in your vSphere deployment that the root disk of the newly provisioned VM is thinly provisioned.
NOTE: The VM template used must not have disks provisioned as "flat". Disks of the source template must be "thin" or "thick"; flat disks cannot be cloned as "thin" or "thick".
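The QA steps above can be driven from the CLI roughly as follows. The model-config command is taken from this conversation; the add-machine step is an assumed way to trigger provisioning of a new VM.

```shell
# Set the provisioning type on the current model (allowed values:
# thin, thick, thickEagerZero).
juju model-config disk-provisioning-type=thin

# Provision a new machine; its root disk should be cloned as thin.
juju add-machine
```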
Documentation changes
No changes required to CLI or API.
Bug reference
https://bugs.launchpad.net/juju/+bug/1807957