Note that pixsfm only uses the GPU to accelerate dense feature extraction; the remaining steps run on the CPU. However, the GPU stays occupied until the entire reconstruction process has finished. I tried adding the following to Hierarchical-Localization/hloc/extract_features.py:
import gc
import torch

del model                  # drop the last reference to the extraction network
torch.cuda.empty_cache()   # return cached allocator blocks to the CUDA driver
gc.collect()
However, the GPU memory allocated when the network is initialized is still not released. Is there any way to solve this?
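One workaround worth noting: torch.cuda.empty_cache() can only return cached allocator blocks, not the CUDA context that is created when the model is first moved to the GPU, so some memory will always stay held by the process. Below is a minimal sketch, assuming the usual hloc call extract_features.main(conf, image_dir, export_dir=...), of running the extraction in a short-lived child process so that everything, including the context, is freed when the child exits:

import multiprocessing as mp

def _extract(feature_conf, image_dir, export_dir):
    # Import inside the child so the CUDA context is created only in this process.
    from hloc import extract_features
    extract_features.main(feature_conf, image_dir, export_dir=export_dir)

def extract_in_subprocess(feature_conf, image_dir, export_dir):
    ctx = mp.get_context("spawn")  # "spawn" avoids inheriting any CUDA state from the parent
    p = ctx.Process(target=_extract, args=(feature_conf, image_dir, export_dir))
    p.start()
    p.join()
    if p.exitcode != 0:
        raise RuntimeError(f"feature extraction failed with exit code {p.exitcode}")

With this approach the parent process that later runs the CPU-only reconstruction never touches the GPU at all.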
Also, can hloc be further accelerated, for example by trading memory for speed, or by using all CPU cores to run SfM?
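One possible lever, sketched below under the assumption that your hloc version's reconstruction.main accepts a mapper_options dict forwarded to pycolmap's incremental mapper, and that the mapper exposes a num_threads setting as in COLMAP's Mapper options (the paths are placeholders):

from pathlib import Path
import multiprocessing
from hloc import reconstruction

# Placeholder paths for illustration only.
outputs = Path("outputs/scene")
model = reconstruction.main(
    outputs / "sfm",                     # sfm_dir
    Path("datasets/scene/images"),       # image_dir
    outputs / "pairs.txt",               # pairs
    outputs / "features.h5",             # features
    outputs / "matches.h5",              # matches
    mapper_options={"num_threads": multiprocessing.cpu_count()},  # assumed option name
)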
I would be grateful for a reply!