Describe the bug
Using the patch pixel sampler (`patch_size=32`) together with masks makes training very slow, increasing training time from minutes (without masks) to days (with masks). After logging timings, I identified the call `torch.nn.functional.max_pool2d(tensor, kernel_size=kernel_size, stride=1, padding=(kernel_size - 1) // 2)` inside the `dilate` function in `nerfstudio/data/utils/pixel_sampling_utils.py` as the bottleneck: it takes up to 77 s for a single batch.
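A minimal sketch of the operation in question, assuming a dilation helper with roughly this shape (the exact signature and kernel size in nerfstudio may differ; the values below are illustrative):

```python
import torch
import torch.nn.functional as F

def dilate(tensor: torch.Tensor, kernel_size: int = 3) -> torch.Tensor:
    """Morphological dilation of a binary mask via stride-1 max pooling,
    as in nerfstudio/data/utils/pixel_sampling_utils.py (shape assumed)."""
    return F.max_pool2d(
        tensor, kernel_size=kernel_size, stride=1, padding=(kernel_size - 1) // 2
    )

# A float mask of shape 1 x 1 x H x W; on CPU this call dominates batch time
# for large H, W and large kernel sizes.
mask = (torch.rand(1, 1, 256, 256) > 0.5).float()
dilated = dilate(mask, kernel_size=31)
print(dilated.shape)  # spatial size is preserved by the padding
```

With stride 1 and `padding=(kernel_size - 1) // 2`, the output has the same spatial size as the input, so the cost scales with H × W × kernel_size², which is what makes the CPU path so slow.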
To Reproduce
Steps to reproduce the behavior:
- Find a NeRF scene with images and masks
- Set `--pipeline.datamanager.patch-size 32`
- Run with and without masks to see the difference
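For reference, the steps above correspond to a command along these lines (the method name and data path are placeholders; any masked dataset will do):

```shell
# Hypothetical paths; assumes the scene directory contains images and masks
ns-train nerfacto --data /path/to/masked-scene \
    --pipeline.datamanager.patch-size 32
```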
Expected behavior
A faster default implementation, or the option to run the `max_pool2d` dilation on the GPU.
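One possible workaround, sketched here as a suggestion rather than a tested fix: move the mask to the GPU before dilating and bring the result back, since `max_pool2d` is the same call on either device:

```python
import torch
import torch.nn.functional as F

def dilate(tensor: torch.Tensor, kernel_size: int = 3) -> torch.Tensor:
    """Dilation via max pooling; runs on whatever device the tensor is on."""
    return F.max_pool2d(
        tensor, kernel_size=kernel_size, stride=1, padding=(kernel_size - 1) // 2
    )

mask = (torch.rand(1, 1, 512, 512) > 0.5).float()
# Dilate on the GPU when one is available, then move the result back so the
# rest of the pixel-sampling pipeline is unchanged.
device = "cuda" if torch.cuda.is_available() else "cpu"
dilated = dilate(mask.to(device), kernel_size=31).cpu()
```

This keeps the numerics identical; the trade-off is an extra host-to-device copy per batch, which should still be far cheaper than the CPU pooling reported above.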
Factral