Self-Distilled Hierarchical Network for Unsupervised Deformable Image Registration
IEEE Transactions on Medical Imaging (TMI) 2023
Shenglong Zhou, Bo Hu, Zhiwei Xiong and Feng Wu
University of Science and Technology of China (USTC)
We present a novel unsupervised learning approach named Self-Distilled Hierarchical Network (SDHNet) for deformable image registration.
By decomposing the registration procedure into several iterations, SDHNet generates hierarchical deformation fields (HDFs) simultaneously in each iteration and connects the iterations via a learned hidden state.
In each iteration, hierarchical features are extracted and fed to several parallel GRUs to generate the HDFs, which are then fused adaptively, conditioned both on the HDFs themselves and on contextual features from the input images.
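For intuition, the sketch below illustrates these two components in PyTorch: a convolutional GRU cell (in the spirit of RAFT, from which our code framework is adapted) and an adaptive fusion module. Module names, channel sizes, and the softmax fusion rule are illustrative assumptions, not the repository's exact implementation.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Minimal 3D convolutional GRU cell (illustrative sketch)."""
    def __init__(self, hidden_dim=64, input_dim=64):
        super().__init__()
        self.convz = nn.Conv3d(hidden_dim + input_dim, hidden_dim, 3, padding=1)
        self.convr = nn.Conv3d(hidden_dim + input_dim, hidden_dim, 3, padding=1)
        self.convq = nn.Conv3d(hidden_dim + input_dim, hidden_dim, 3, padding=1)

    def forward(self, h, x):
        hx = torch.cat([h, x], dim=1)
        z = torch.sigmoid(self.convz(hx))  # update gate
        r = torch.sigmoid(self.convr(hx))  # reset gate
        q = torch.tanh(self.convq(torch.cat([r * h, x], dim=1)))
        return (1 - z) * h + z * q         # next hidden state

class AdaptiveFusion(nn.Module):
    """Fuses per-level deformation fields with softmax weights conditioned
    on the fields themselves and on contextual features (assumed design)."""
    def __init__(self, num_levels=3, ctx_dim=16):
        super().__init__()
        self.weight_head = nn.Conv3d(num_levels * 3 + ctx_dim, num_levels, 3, padding=1)

    def forward(self, fields, context):
        # fields: list of (B, 3, D, H, W) flows upsampled to a common size;
        # context: (B, ctx_dim, D, H, W) features from the input images.
        stacked = torch.stack(fields, dim=1)                   # (B, L, 3, D, H, W)
        weights = torch.softmax(
            self.weight_head(torch.cat(fields + [context], dim=1)), dim=1
        )                                                      # (B, L, D, H, W)
        return (stacked * weights.unsqueeze(2)).sum(dim=1)     # (B, 3, D, H, W)
```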
Furthermore, unlike common unsupervised methods that apply only a similarity loss and a regularization loss, SDHNet introduces a novel self-deformation distillation scheme.
This scheme distills the final deformation field as teacher guidance, which adds constraints on the intermediate deformation fields.
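Since the self-distillation part is not yet included in this repo (see the note below), here is only a minimal sketch of the idea: the final deformation field acts as a detached teacher that constrains the intermediate fields. The exact loss form and weighting used in the paper may differ.

```python
import torch

def self_distillation_loss(intermediate_fields, final_field):
    # The final deformation field serves as the teacher; detach it so
    # no gradients flow into the teacher branch.
    teacher = final_field.detach()
    # Penalize each intermediate (student) field for deviating from the
    # teacher; an L2 penalty is assumed here for illustration.
    loss = sum(torch.mean((f - teacher) ** 2) for f in intermediate_fields)
    return loss / len(intermediate_fields)
```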
The packages and their corresponding versions used in this repository are listed below.
- Python 3
- PyTorch 1.1
- Numpy
- SimpleITK
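For reference, these dependencies can typically be installed with pip; the PyTorch pin below matches the listed version, while Numpy and SimpleITK are left unpinned (choose versions compatible with your CUDA setup).

pip install torch==1.1.0 numpy SimpleITK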
After configuring the environment, use the following command to train the model.
python -m torch.distributed.launch --nproc_per_node=4 train.py --name=SDHNet --iters=6 --dataset=brain --data_path=/xx/xx/ --base_path=/xx/xx/
Use the following command to obtain the testing results.
python eval.py --name=SDHNet --model=SDHNet_lpba --dataset=brain --dataset_test=lpba --iters=6 --local_rank=0 --data_path=/xx/xx/ --base_path=/xx/xx/
We follow Cascade VTN to prepare the training and testing datasets; please refer to Cascade VTN for details.
The related pretrained models are available; please refer to the testing command above for evaluation.
If you find this work or code helpful in your research, please cite:
@article{zhou2023self,
  title={Self-Distilled Hierarchical Network for Unsupervised Deformable Image Registration},
  author={Zhou, Shenglong and Hu, Bo and Xiong, Zhiwei and Wu, Feng},
  journal={IEEE Transactions on Medical Imaging},
  year={2023},
  publisher={IEEE}
}
As we are still further exploring the self-distillation scheme, the current repo temporarily does not include that part.
Please feel free to contact us by e-mail ([email protected]) or WeChat (ZslBlcony) if you have any questions.
We follow the functional implementation in Cascade VTN, and the overall code framework is adapted from RAFT.
Thanks a lot for their great contributions!