
Add disk provisioning customization #12794

Merged

Conversation


@gabriel-samfira gabriel-samfira commented Mar 22, 2021

This change allows operators to set a new model-level config option, disk-provisioning-type, which dictates how template VM disks are cloned when creating a new machine (a rough sketch of the underlying vSphere backing flags follows this list). Current values are:

  • thin - Sparse provisioning; only written blocks take up disk space on the datastore.
  • thick - The entire size of the virtual disk is deducted from the datastore, but unwritten blocks are not zeroed out. This comes with two potential pitfalls; see the comments in provider/vsphere/internal/vsphereclient/client.go regarding DiskProvisioningType.
  • thickEagerZero (default) - The entire size of the virtual disk is deducted from the datastore, and unwritten blocks are zeroed out. This improves latency when committing to disk, as no extra zeroing step is needed before writing data.
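
For orientation, the sketch below shows roughly how these three values correspond to the virtual-disk backing flags that the vSphere API (via govmomi) exposes. It is an illustrative sketch only, not the code from this PR; the real mapping lives in provider/vsphere/internal/vsphereclient/.

```go
// Illustrative sketch only: rough mapping of the disk-provisioning-type
// values onto govmomi's virtual-disk backing flags. Not the PR's code.
package main

import (
	"fmt"

	"github.com/vmware/govmomi/vim25/types"
)

// applyProvisioningType sets the backing flags that correspond to each
// disk-provisioning-type value when cloning a disk.
func applyProvisioningType(backing *types.VirtualDiskFlatVer2BackingInfo, provType string) error {
	switch provType {
	case "thin":
		// Sparse: datastore space is consumed only as blocks are written.
		backing.ThinProvisioned = types.NewBool(true)
		backing.EagerlyScrub = types.NewBool(false)
	case "thick":
		// Thick lazy zero: full size reserved up front, blocks zeroed on first write.
		backing.ThinProvisioned = types.NewBool(false)
		backing.EagerlyScrub = types.NewBool(false)
	case "thickEagerZero":
		// Thick eager zero: full size reserved and zeroed at creation time.
		backing.ThinProvisioned = types.NewBool(false)
		backing.EagerlyScrub = types.NewBool(true)
	default:
		return fmt.Errorf("unknown disk provisioning type %q", provType)
	}
	return nil
}

func main() {
	var backing types.VirtualDiskFlatVer2BackingInfo
	if err := applyProvisioningType(&backing, "thin"); err != nil {
		panic(err)
	}
	fmt.Printf("thinProvisioned=%v eagerlyScrub=%v\n", *backing.ThinProvisioned, *backing.EagerlyScrub)
}
```

Note that the two thick variants differ only in the eager-scrub flag, which is the distinction discussed later in this conversation.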

Checklist

  • Requires a pylibjuju change
  • Added integration tests for the PR
  • Added or updated doc.go related to packages changed
  • Comments answer the question of why design decisions were made

QA steps

```
juju bootstrap
juju model-config disk-provisioning-type=thin
juju add-machine
```

Verify in your vSphere deployment (or with a script such as the sketch below) that the root disk of the newly provisioned VM is thinly provisioned.

NOTE: The VM template used must not have disks provisioned as "flat". Disks of the source template must be "thin" or "thick". Flat disks cannot be cloned as "thin" or "thick".
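
For a scriptable check, something along the lines of the following could replace clicking through the vSphere UI. This is a hedged sketch assuming govmomi; the vCenter URL, credentials, and VM path are placeholders, and it is not part of the PR or its QA tooling.

```go
// Hedged sketch: programmatically confirm how a VM's root disk was
// provisioned, instead of checking the vSphere UI. The vCenter URL and
// VM path below are placeholders; this is not part of the PR.
package main

import (
	"context"
	"fmt"
	"net/url"

	"github.com/vmware/govmomi"
	"github.com/vmware/govmomi/find"
	"github.com/vmware/govmomi/vim25/mo"
	"github.com/vmware/govmomi/vim25/types"
)

func main() {
	ctx := context.Background()

	// Placeholder endpoint and credentials.
	u, err := url.Parse("https://user:pass@vcenter.example.com/sdk")
	if err != nil {
		panic(err)
	}
	c, err := govmomi.NewClient(ctx, u, true) // true: skip TLS verification
	if err != nil {
		panic(err)
	}

	finder := find.NewFinder(c.Client, true)
	dc, err := finder.DefaultDatacenter(ctx)
	if err != nil {
		panic(err)
	}
	finder.SetDatacenter(dc)

	// Placeholder VM name: the machine Juju just provisioned.
	vm, err := finder.VirtualMachine(ctx, "juju-model-machine-0")
	if err != nil {
		panic(err)
	}

	var o mo.VirtualMachine
	if err := vm.Properties(ctx, vm.Reference(), []string{"config.hardware.device"}, &o); err != nil {
		panic(err)
	}

	deref := func(p *bool) bool { return p != nil && *p }
	for _, dev := range o.Config.Hardware.Device {
		if disk, ok := dev.(*types.VirtualDisk); ok {
			if b, ok := disk.Backing.(*types.VirtualDiskFlatVer2BackingInfo); ok {
				// thin=true              -> "thin"
				// thin=false, scrub=false -> "thick" (lazy zero)
				// thin=false, scrub=true  -> "thickEagerZero"
				fmt.Printf("disk: thinProvisioned=%v eagerlyScrub=%v\n",
					deref(b.ThinProvisioned), deref(b.EagerlyScrub))
			}
		}
	}
}
```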

Documentation changes

No changes required to CLI or API.

Bug reference

https://bugs.launchpad.net/juju/+bug/1807957

@gabriel-samfira gabriel-samfira force-pushed the add-disk-provisioning-customization branch from 4cdda9d to 883bc7b on March 22, 2021 at 11:06
@gabriel-samfira gabriel-samfira changed the base branch from develop to 2.9 March 22, 2021 14:12
@gabriel-samfira gabriel-samfira changed the base branch from 2.9 to develop March 22, 2021 14:13
@gabriel-samfira gabriel-samfira force-pushed the add-disk-provisioning-customization branch from 883bc7b to 36d500d on March 22, 2021 at 14:19
@gabriel-samfira gabriel-samfira changed the base branch from develop to 2.9 March 22, 2021 14:19
@hpidcock hpidcock added the 2.9 label Mar 24, 2021

@hpidcock (Member) left a comment:

Thank you for the PR; just a few small things, but LGTM otherwise.

Review threads (outdated, resolved):
  • provider/vsphere/config.go (2)
  • provider/vsphere/internal/vsphereclient/createvm.go (5)
  • provider/vsphere/internal/vsphereclient/client.go (1)

@hpidcock hpidcock self-assigned this Mar 24, 2021

@gabriel-samfira (Contributor, Author) commented:

Thank you for the review, @hpidcock!

@gabriel-samfira gabriel-samfira force-pushed the add-disk-provisioning-customization branch from 36d500d to 0fe24f1 on March 26, 2021 at 14:08

@hmlanigan (Member) left a comment:

QA:

No issues with thin or thickEagerZero values. Using an unknown type returns an error with the allowed values, which is nice.

When I set juju model-config disk-provisioning-type=thick, the setting is correct, but the disk was provisioned with "Thick Provision Lazy Zeroed" instead of thick.

@hmlanigan (Member) commented:

Something to chat about on Monday: is it possible, and do we want, to add the ability to use the default storage policy for vSphere?

@gabriel-samfira (Contributor, Author) commented:

> QA:
>
> No issues with thin or thickEagerZero values. Using an unknown type returns an error with the allowed values, which is nice.
>
> When I set juju model-config disk-provisioning-type=thick, the setting is correct, but the disk was provisioned with "Thick Provision Lazy Zeroed" instead of thick.

There are two types of "thick" provisioning:

  • Lazy zeros
  • Eager zeros

https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.storage.doc/GUID-4C0F4D73-82F2-4B81-8AA7-1DD752A8A5AC.html

This behavior is controlled by a boolean value that determines whether or not to "scrub" (eagerly zero) unused space:

https://github.com/juju/juju/pull/12794/files#diff-31c3de9d4ede680560b714357df5e54ba8652ab428941b320e5dc5a81005a586R615

I can rename from "thick" to "thickLazyZero" to make it more explicit if you prefer.

@hmlanigan (Member) commented:

Ah, I was reading too fast and didn't catch Lazy vs Eager. I'm looking to see if there is a norm for what to call it.

Add the ability to specify whether the disks of the newly created instance should be thin, thick or thickEagerZero.
  * Fixed the default disk provisioning type when the model does not hold a valid option
  * Added tests
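
A minimal sketch of the fallback behavior described in the commit message above, assuming hypothetical names (the PR's actual identifiers may differ): unrecognized values fall back to the documented thickEagerZero default.

```go
// Hypothetical sketch of the fallback described in the commit message:
// an unrecognized model value falls back to the eager-zeroed default.
// Constant and function names are illustrative, not the PR's identifiers.
package main

import "fmt"

const (
	diskTypeThin           = "thin"
	diskTypeThick          = "thick"
	diskTypeThickEagerZero = "thickEagerZero" // the documented default
)

func normalizeDiskProvisioningType(v string) string {
	switch v {
	case diskTypeThin, diskTypeThick, diskTypeThickEagerZero:
		return v
	default:
		return diskTypeThickEagerZero
	}
}

func main() {
	fmt.Println(normalizeDiskProvisioningType("thin"))  // thin
	fmt.Println(normalizeDiskProvisioningType("bogus")) // thickEagerZero
}
```
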
@gabriel-samfira gabriel-samfira force-pushed the add-disk-provisioning-customization branch from 7765384 to 146c1ce on April 1, 2021 at 07:56

Review threads (outdated, resolved):
  • provider/vsphere/internal/vsphereclient/client.go
  • provider/vsphere/config.go

@hmlanigan (Member) left a comment:

LGTM with the one naming change.

@hmlanigan (Member) left a comment:

Thank you. LGTM

@hmlanigan hmlanigan requested a review from hpidcock April 1, 2021 21:12

@gabriel-samfira (Contributor, Author) commented:

Hi @hpidcock,

Could you take another look and let me know if there are any changes I have missed?

Thanks!

@hpidcock (Member) commented Apr 6, 2021

$$merge$$

@jujubot jujubot merged commit cb4ac1f into juju:2.9 Apr 6, 2021
@wallyworld wallyworld mentioned this pull request Apr 21, 2021
jujubot added a commit that referenced this pull request Apr 21, 2021
#12909

Merge 2.9

#12827 Unsubscribe from hub when closing state pool
#12829 Correct default bootstrap-timeout value displayed in help.
#12840 Constraint tags can be used for pod affinity
#12842 Fix upgrade series agent version handling
#12794 Add disk provisioning customization
#12845 Restore space support for manual machines
#12839 Support merging of netplan configs
#12853 Add display type for network-get results
#12854 Fix for LP1921557 sni in Juju login.
#12850 Use Base in Charmhub package and its response structures.
#12858 Ensure assess-upgrade-series does not report started prematurely
#12860 Removed logging from core annotations.
#12861 Fixes bug where empty error can happen in storage
#12865 Update Pebble version to include new files API
#12866 Workaround for k8s dashboard URL with k8s client proxy
#12862 Fix/lp 1923051
#12867 Fix/lp 1923561
#12870 Use channel logic in charm library
#12873 Add support for setting pod affinity topology key
#12874 Use Patch instead of Update for ensuring ingress resources
#12831 Integration fixes
#12879 Ensure refresh uses version
#12864 bug: fix for bootstrap fail on vsphere 7 + multi network
#12883 Initial work to allow juju trust for sidecar charms
#12884 Fix ssh with sidecar charms and containers.
#12886 Charmhub bases
#12881 Use charm pkg updates
#12889 Ignore projected volume mounts when looking up juju storage
#12890 Fix passing empty string container name to unit params
#12893 Add CLA checker GH action and remove codecov push action
#12897 Use production charmhub endpoint
#12887 Resource validation error
#12888 Ensure we validate the model target
#12898 Remove usage of systems package from CAAS application provisioner
#12899 CAAS bundle deployments
#12900 Bump up Pebble version to include user/group in list-files
#12901 charm Format helper
#12902 charm Iskubernetes helper
#12903 Display ... for really long k8s app versions in status
#12904 Filter out more full registry paths for app version in status
#12905 Fix k8s bundle deploys with v2 charms
#12906 Register resource-get for containeragent binary

Conflicts mostly due to charm v8 vs v9 imports.
The other one was due to changes to dashboard CLI.
```
# Conflicts:
# api/common/charms/common.go
# api/common/charms/common_test.go
# apiserver/facades/client/application/application.go
# apiserver/facades/client/application/charmstore.go
# apiserver/facades/client/application/update_series_mocks_test.go
# apiserver/facades/client/charms/client.go
# apiserver/facades/client/charms/convertions.go
# apiserver/facades/client/machinemanager/types_mock_test.go
# apiserver/facades/controller/caasoperatorprovisioner/provisioner.go
# cmd/juju/application/deployer/bundlehandler_test.go
# cmd/juju/application/refresh_test.go
# cmd/juju/application/refresher/refresher_mock_test.go
# cmd/juju/dashboard/dashboard.go
# core/charm/strategies_mock_test.go
# core/model/model.go
# core/model/model_test.go
# go.mod
# go.sum
# resource/resourceadapters/charmhub.go
# scripts/win-installer/setup.iss
# service/agentconf_test.go
# snap/snapcraft.yaml
# state/charm.go
# state/migration_export.go
# state/state.go
# version/version.go
# worker/caasfirewallerembedded/applicationworker.go
# worker/caasfirewallerembedded/applicationworker_test.go
```

## QA steps

See PRs