This is the official PyTorch implementation of E2NeRF. Click here to see the video and supplementary materials on our project website.
The code is based on nerf-pytorch and uses the same environment. Please refer to its GitHub page for the environment installation.
The configs for the synthetic data are in the config_synthetic.txt file. Please download the synthetic data below and put it into the corresponding folder (./data/synthetic/). Then you can use the command below to train the model.
python run_nerf_exp.py --config config_synthetic.txt
The configs for the real-world data are in the config_real-world.txt file. Please download the real-world data below and put it into the corresponding folder (./data/real-world/). Then you can use the command below to train the model.
python run_nerf_exp.py --config config_real-world.txt
Note that for the real-world data experiments in the paper, we use the poses prepared for video rendering to render the novel view images. Simply replace "test_poses" in line 853 of run_nerf_exp.py with "render_poses" to generate the novel view images (120 images in total). We use no-reference image quality assessment metrics to evaluate the novel view images and the blurry view images together (120 + 30 = 150 images).
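For illustration only, the change at line 853 amounts to swapping the pose tensor passed into the rendering call. The exact call in run_nerf_exp.py may differ; the render_path call below follows the nerf-pytorch convention and is only an assumed sketch, not the actual line from the repository.

# Hypothetical sketch of the edit around line 853 of run_nerf_exp.py:
# rgbs, disps = render_path(test_poses, hwf, K, args.chunk, render_kwargs_test, savedir=testsavedir)   # original: 30 test views
rgbs, disps = render_path(render_poses, hwf, K, args.chunk, render_kwargs_test, savedir=testsavedir)   # modified: 120 novel views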
Download the dataset here. It contains the "data" used for training and the "original data".
In the folder of each scene, the training images are in the "train" folder together with the corresponding event data "events.pt". The ground truth images are in the "test" folder.
As in the original NeRF, the training and testing poses are in the "transform_train.json" and "transform_test.json" files. Note that at test time, we use the first pose of each view in "transform_test.json" to render the test images, and the ground truth images are also rendered at this pose.
The structure is like the original NeRF's LLFF data, and the event data is in "events.pt".
For easy reading, we convert the event stream into event bins stored as the events.pt file. You can use PyTorch to load the file. The shape of the tensor is (view_number, bin_number, H, W), and each element is the number of events at that pixel within the bin (positive and negative values indicate polarity).
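As a minimal sketch of loading and inspecting the bins (the scene path below is a placeholder, and the layout follows the description above):

import torch

# Load the pre-binned event tensor for one scene (path is a placeholder).
events = torch.load("./data/synthetic/lego/events.pt")

# Expected shape: (view_number, bin_number, H, W); each element is a signed
# event count, where the sign encodes polarity.
print(events.shape)

# Signed event counts accumulated in the first bin of the first view.
first_bin = events[0, 0]          # (H, W)
print(first_bin.abs().sum())      # rough event count in this bin (ignores cancellation between polarities)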
The original images used to synthesize the blurry images are provided, together with the synthesis code. In addition, we supply the original event data generated by v2e. We also provide the code to convert the ".txt" events into "events.pt" for E2NeRF training.
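The following is only a rough sketch of such a conversion, not the repository's script. It assumes each line of the v2e ".txt" output is "timestamp x y polarity" (comment lines starting with "#" are skipped), that polarity is encoded as 0/1, and that the events of one view are split uniformly in time into bin_number bins.

import torch

def bin_events_from_txt(txt_path, H, W, bin_number):
    # Parse "timestamp x y polarity" lines, skipping "#" comments (v2e-style text output).
    ts, xs, ys, ps = [], [], [], []
    with open(txt_path) as f:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue
            t, x, y, p = line.split()
            ts.append(float(t))
            xs.append(int(float(x)))
            ys.append(int(float(y)))
            ps.append(1 if int(float(p)) > 0 else -1)   # map 0/1 polarity to -1/+1

    bins = torch.zeros(bin_number, H, W)
    if not ts:
        return bins
    t0, t1 = ts[0], ts[-1]
    for t, x, y, p in zip(ts, xs, ys, ps):
        # Assign each event to a uniform temporal bin over [t0, t1].
        b = min(int((t - t0) / (t1 - t0 + 1e-9) * bin_number), bin_number - 1)
        bins[b, y, x] += p          # signed count: sign encodes polarity
    return bins

# Stack the per-view tensors into (view_number, bin_number, H, W) and save, e.g.:
# torch.save(torch.stack([bin_events_from_txt(p, H, W, bin_number) for p in txt_paths]), "events.pt")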
We supply the original ".aedat4" data captured by the DAVIS346 camera and the processing code in the folder. We also convert the event data into events.pt for training.
We have updated the EDI code in the repository. You can use this code to deblur the images in the "train" folder with the corresponding events.pt data. The deblurred images are saved in the "images_for_colmap" folder. Then you can use COLMAP to generate the poses as in NeRF.
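For reference only, the sketch below shows the basic event-based double integral (EDI) idea behind such deblurring. It assumes the binned events of one view span the exposure uniformly and uses a hypothetical contrast threshold c; the repository's EDI code may differ in its details.

import torch

def edi_deblur(blurry, bins, c=0.3, ref=0):
    # blurry: (H, W) blurry image in linear intensity, values in [0, 1].
    # bins:   (bin_number, H, W) signed event counts covering the exposure time.
    # EDI model: B = L(ref) * mean_t exp(c * E(ref, t)), hence
    # L(ref) = B / mean_t exp(c * E(ref, t)),
    # where E(ref, t) is the signed event count accumulated between the reference bin and bin t.
    cum = torch.cumsum(bins, dim=0)        # events accumulated from the start of the exposure
    E = cum - cum[ref]                     # re-reference the accumulation to the chosen bin
    denom = torch.exp(c * E).mean(dim=0)   # discrete approximation of the double integral
    latent = blurry / (denom + 1e-8)
    return latent.clamp(0, 1)

# Hypothetical usage with one training view:
# events = torch.load("./data/real-world/scene/events.pt")   # (views, bins, H, W)
# sharp = edi_deblur(blurry_gray, events[0], c=0.3, ref=0)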
If you find this useful, please consider citing our paper:
@inproceedings{qi2023e2nerf,
title={{E2NeRF}: Event Enhanced Neural Radiance Fields from Blurry Images},
author={Qi, Yunshan and Zhu, Lin and Zhang, Yu and Li, Jia},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={13254--13264},
year={2023}
}
The overall framework is derived from nerf-pytorch. We appreciate the effort of the contributors to that repository.