Abdullah Abuolaim
Mahmoud Afifi
Michael S. Brown
York University
Reference GitHub repository for the paper *Improving Single-Image Defocus Deblurring: How Dual-Pixel Images Help Through Multi-Task Learning*, Abuolaim et al., Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2022 (YouTube presentation). If you use our dataset or code, please cite our paper:
```
@inproceedings{abuolaim2022improving,
  title={Improving Single-Image Defocus Deblurring: How Dual-Pixel Images Help Through Multi-Task Learning},
  author={Abuolaim, Abdullah and Afifi, Mahmoud and Brown, Michael S},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={1231--1239},
  year={2022}
}
```
We collected a new Diverse and Large Dual-Pixel (DLDP) dataset of 2353 scenes. This dataset consists of 7059 images in total, i.e., 2353 images along with their 4706 dual-pixel (DP) sub-aperture views.
## Dataset
- 2090 images used for training (processed to sRGB and encoded with lossless 16-bit depth).
- 263 images used for testing (processed to sRGB and encoded with lossless 16-bit depth).
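Since the images are stored with lossless 16-bit depth, make sure you read them without OpenCV's default 8-bit conversion. A minimal loading sketch (the file path is a placeholder, not an actual dataset file):

```python
import cv2
import numpy as np

# IMREAD_UNCHANGED preserves the 16-bit depth; the default flag converts to 8-bit.
img = cv2.imread('./dldp_data_png/example.png', cv2.IMREAD_UNCHANGED)
assert img is not None and img.dtype == np.uint16, 'expected a 16-bit image'

# Normalize to [0, 1] for model input or inspection.
img_float = img.astype(np.float32) / (2**16 - 1)
```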
## Training and testing sets
- The dataset is divided randomly into 89% training and 11% testing.
- Each set has a balanced number of indoor/outdoor scenes and aperture sizes.
- The 500 scenes of the DPDD dataset [1] are added to the training set.
## Prerequisites
The code was tested with:
- Python 3.8.3
- TensorFlow 2.2.0
- Keras 2.4.3
- Numpy 1.19.1
- Scipy 1.5.2
- Scikit-image 0.16.2
- Scikit-learn 0.23.2
- OpenCV 4.4.0
Although not tested, the code may work with library versions other than those specified.
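To verify your environment against the versions above, you can print what is installed:

```python
import sys
import cv2
import keras
import numpy
import scipy
import skimage
import sklearn
import tensorflow

# Compare the installed versions with the tested ones listed above.
print('Python', sys.version.split()[0])
for name, module in [('TensorFlow', tensorflow), ('Keras', keras), ('Numpy', numpy),
                     ('Scipy', scipy), ('Scikit-image', skimage),
                     ('Scikit-learn', sklearn), ('OpenCV', cv2)]:
    print(name, module.__version__)
```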
## Installation
- Clone this project to your local machine:
```
git clone https://github.com/Abdullah-Abuolaim/multi-task-defocus-deblurring-dual-pixel-nimat.git
cd ./multi-task-defocus-deblurring-dual-pixel-nimat/mdp_code/
```
## Testing
- Download the final trained model of the second phase used in the main paper, i.e., `mdp_phase_2_wacv`.
- Place the downloaded `.hdf5` model inside `ModelCheckpoints` for testing.
- Download the DPDD dataset [1], or visit its project GitHub.
- Run `main.py` in the `mdp_code` directory as follows:
```
python main.py --op_phase test --path_to_data $PATH_TO_DPDD_DATA$ --test_model mdp_phase_2_wacv
```
  - `--training_phase`: training phase, i.e., `phase_1` or `phase_2`
  - `--op_phase`: operation phase, i.e., training or testing
  - `--path_to_data`: path to the directory that has the DPDD data, e.g., `./dd_dp_dataset_canon/`
  - `--test_model`: test model name
  - `--downscale_ratio`: downscale the input test image in case GPU memory is insufficient, or use the CPU instead
- The results of the tested models will be saved in the `results` directory that will be created inside `mdp_code`.
- The above testing evaluates the DPDD dataset both quantitatively and qualitatively.
- This evaluation is for data that has ground truth.
- This testing is designed around the directory structure of the DPDD GitHub.
- Recall that you might need to change the arguments above (e.g., `--path_to_data`, `--downscale_ratio`) to match your setup.
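The script computes its metrics internally; if you want to sanity-check a saved result against its DPDD ground truth yourself, a minimal sketch using the scikit-image metrics listed in the prerequisites (both file paths are placeholders):

```python
import cv2
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def load_norm(path):
    # Read without 8-bit conversion and normalize by the image's own bit depth.
    img = cv2.imread(path, cv2.IMREAD_UNCHANGED)
    max_val = 65535.0 if img.dtype == np.uint16 else 255.0
    return img.astype(np.float32) / max_val

output = load_norm('./results/example_output.png')          # placeholder path
target = load_norm('./dd_dp_dataset_canon/example_gt.png')  # placeholder path

print('PSNR:', peak_signal_noise_ratio(target, output, data_range=1.0))
print('SSIM:', structural_similarity(target, output, data_range=1.0, multichannel=True))
```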
- For quick qualitative testing of images in a directory, run the following command:
```
python main.py --op_phase test_quick --path_to_data $PATH_TO_DATA$ --test_model mdp_phase_2_wacv
```
  - `--path_to_data`: path to the directory that has the images (no ground truth)
  - `--downscale_ratio`: downscale the input test image in case GPU memory is insufficient, or use the CPU instead
- The results of the tested models will be saved in the `results` directory that will be created inside `mdp_code`.
- Recall that you might need to change the arguments above (e.g., `--path_to_data`, `--downscale_ratio`) to match your setup.
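For reference, `--downscale_ratio` reduces the input resolution before inference; the OpenCV equivalent looks like the sketch below, assuming the ratio scales both dimensions (e.g., 0.5 halves width and height):

```python
import cv2

ratio = 0.5  # illustrative value for --downscale_ratio
img = cv2.imread('input.png', cv2.IMREAD_UNCHANGED)  # placeholder path

# INTER_AREA is a reasonable interpolation choice when shrinking an image.
small = cv2.resize(img, None, fx=ratio, fy=ratio, interpolation=cv2.INTER_AREA)
```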
## Training
As described in the main paper, we train our multi-task DP network (MDP) in two phases (see the sketch below):
- First phase: freeze the weights of the deblurring decoder, then train with image patches from our new DLDP dataset to optimize for the DP-view synthesis task.
- Second phase: unfreeze the weights of the deblurring decoder, then fine-tune using images from the DPDD dataset [1] to optimize jointly for both the defocus deblurring and DP-view synthesis tasks.
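In Keras terms, the two phases amount to toggling the trainability of the decoder layers and recompiling. The sketch below uses a toy two-decoder model and an assumed `deblur_decoder` name prefix; it illustrates the mechanism only, not the repository's actual architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Toy stand-in: a shared encoder feeding a deblurring decoder and a view-synthesis decoder.
inp = layers.Input((128, 128, 3))
feat = layers.Conv2D(16, 3, padding='same', activation='relu', name='encoder_conv')(inp)
deblur = layers.Conv2D(3, 3, padding='same', name='deblur_decoder_conv')(feat)
views = layers.Conv2D(6, 3, padding='same', name='view_decoder_conv')(feat)
model = Model(inp, [deblur, views])

def set_deblur_decoder_trainable(trainable):
    # Freeze/unfreeze only the layers belonging to the deblurring decoder.
    for layer in model.layers:
        if layer.name.startswith('deblur_decoder'):
            layer.trainable = trainable

# Phase 1: deblurring decoder frozen; optimize the DP-view synthesis task.
set_deblur_decoder_trainable(False)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss='mse')

# Phase 2: unfreeze and fine-tune both tasks jointly (recompile so the change takes effect).
set_deblur_decoder_trainable(True)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5), loss='mse')
```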
- Run `main.py` in the `mdp_code` directory to start phase 1 training:
```
python main.py --training_phase phase_1 --op_phase train --path_to_data $PATH_TO_DLDP_DATA$
```
  - `--path_to_data`: path to the directory that has the DLDP data, e.g., `./dldp_data_png/`
- The results of the tested models will be saved in the `results` directory that will be created inside `mdp_code`.
- The trained model and checkpoints will be saved in `ModelCheckpoints` after each epoch.
- A `TensorBoard` log for each training session will be created at `logs` to provide the visualization and tooling needed to monitor the training.
- Recall that you might need to change the arguments above (e.g., `--path_to_data`) to match your setup.
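To visualize those logs during or after training, point TensorBoard at the `logs` directory and open the printed URL in a browser:
```
tensorboard --logdir ./logs
```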
- Download the final trained model of the first phase used in the main paper, i.e., `mdp_phase_1_wacv`.
- Place the downloaded `.hdf5` model inside `ModelCheckpoints` for training.
- You need the first-phase trained model to start phase 2 training.
- Run `main.py` in the `mdp_code` directory to start phase 2 training:
```
python main.py --training_phase phase_2 --op_phase train --path_to_data $PATH_TO_DPDD_DATA$ --phase_1_checkpoint_model mdp_phase_1_wacv
```
  - `--path_to_data`: path to the directory that has the DPDD data, e.g., `./dd_dp_dataset_canon/`
  - `--phase_1_checkpoint_model`: the name of the pretrained model from phase 1, e.g., `mdp_phase_1_wacv`
- The results of the tested models will be saved in the `results` directory that will be created inside `mdp_code`.
- The trained model and checkpoints will be saved in `ModelCheckpoints` after each epoch.
- A `TensorBoard` log for each training session will be created at `logs` to provide the visualization and tooling needed to monitor the training.
- Recall that you might need to change the following training options (see the scheduler sketch after this list):
  - `--patch_size`: training patch size
  - `--img_mini_b`: image mini-batch size
  - `--epoch`: number of training epochs
  - `--lr`: initial learning rate
  - `--schedule_lr_rate`: learning rate scheduler (after how many epochs to decrease)
  - `--bit_depth`: image bit depth datatype, 16 for `uint16` or 8 for `uint8`. Recall that we train with 16-bit images
  - `--dropout_rate`: the dropout rate of the `conv` unit at the network bottleneck
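For reference, a step schedule like `--schedule_lr_rate` maps onto the standard Keras `LearningRateScheduler` callback; in the sketch below, the 60-epoch interval and the 0.5 decay factor are illustrative assumptions, not the repository's defaults:

```python
import tensorflow as tf

schedule_lr_rate = 60  # illustrative: decrease the learning rate every 60 epochs

def step_decay(epoch, lr):
    # Halve the learning rate each time `schedule_lr_rate` epochs have elapsed.
    if epoch > 0 and epoch % schedule_lr_rate == 0:
        return lr * 0.5
    return lr

lr_callback = tf.keras.callbacks.LearningRateScheduler(step_decay, verbose=1)
# Pass it to model.fit(..., callbacks=[lr_callback]) during training.
```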
## NIMAT
- To generate eight DP-like sub-aperture views (i.e., NIMAT) for each image in a directory, run the following command:
```
python main.py --op_phase nimat --test_model mdp_phase_2_wacv --path_to_data $PATH_TO_DATA$
```
  - `--path_to_data`: path to the directory that has the images (no ground truth)
  - `--downscale_ratio`: downscale the input test image in case GPU memory is insufficient, or use the CPU instead
- The results of the tested models will be saved in the `results` directory that will be created inside `mdp_code`.
- Recall that you might need to change the arguments above (e.g., `--path_to_data`, `--downscale_ratio`) to match your setup.
## Contact
Should you have any questions or suggestions, please feel free to reach out:
Abdullah Abuolaim ([email protected]).
## Related links
- ECCV'18 paper: Revisiting Autofocus for Smartphone Cameras [project page]
- WACV'20 paper: Online Lens Motion Smoothing for Video Autofocus [project page] [presentation]
- ICCP'20 paper: Modeling Defocus-Disparity in Dual-Pixel Sensors [github] [presentation]
- ECCV'20 paper: Defocus Deblurring Using Dual-Pixel Data [project page] [github] [presentation]
- ICCV'21 paper: Learning to Reduce Defocus Blur by Realistically Modeling Dual-Pixel Data [github] [presentation]
- CVPRW'21 paper: NTIRE 2021 Challenge for Defocus Deblurring Using Dual-pixel Images: Methods and Results [pdf] [presentation]
- WACVW'22 paper: Multi-View Motion Synthesis via Applying Rotated Dual-Pixel Blur Kernels [pdf] [presentation]
## References
[1] Abdullah Abuolaim and Michael S. Brown. Defocus Deblurring Using Dual-Pixel Data. In ECCV, 2020.