# KBNet: Kernel Basis Network for Image Restoration

Yi Zhang, Dasong Li, Xiaoyu Shi, Dailan He, Kangning Song, Xiaogang Wang, Hongwei Qin, Hongsheng Li
Abstract: How to aggregate spatial information plays an essential role in learning-based image restoration. Most existing CNN-based networks adopt static convolutional kernels to encode spatial information and therefore cannot aggregate it adaptively. Recent transformer-based architectures achieve adaptive spatial aggregation, but they lack the desirable inductive biases of convolutions and incur heavy computational costs. In this paper, we propose a kernel basis attention (KBA) module, which introduces learnable kernel bases to model representative image patterns for spatial information aggregation. Different kernel bases are trained to model different local structures. At each spatial location, they are linearly and adaptively fused by predicted pixel-wise coefficients to obtain the aggregation weights. Based on the KBA module, we further design a multi-axis feature fusion (MFF) block to encode and fuse channel-wise, spatial-invariant, and pixel-adaptive features for image restoration. Our model, named kernel basis network (KBNet), achieves state-of-the-art performance on more than ten benchmarks across image denoising, deraining, and deblurring, at a lower computational cost than previous state-of-the-art methods.
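For intuition, here is a minimal PyTorch sketch of the KBA idea: learnable kernel bases are fused at every pixel by predicted coefficients, and the fused kernel then aggregates that pixel's neighborhood. This is a naive, memory-heavy illustration under our own naming (`KBASketch`, `num_bases`, and `kernel_size` are assumptions), not the official implementation; see the repository code for the efficient fused version.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KBASketch(nn.Module):
    """Illustrative kernel basis attention: per-pixel fusion of
    learnable depthwise kernel bases (naive unfold-based version)."""

    def __init__(self, channels, num_bases=8, kernel_size=3):
        super().__init__()
        self.k = kernel_size
        # learnable kernel bases, each a depthwise k x k kernel
        self.bases = nn.Parameter(
            torch.randn(num_bases, channels, kernel_size * kernel_size) * 0.02)
        # lightweight head predicting per-pixel fusion coefficients
        self.coef = nn.Conv2d(channels, num_bases, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        coef = self.coef(x).flatten(2)                       # (B, N, H*W)
        bases = self.bases.flatten(1)                        # (N, C*k*k)
        # linearly fuse the kernel bases at every spatial location
        kernels = torch.einsum('bnp,nd->bpd', coef, bases)   # (B, H*W, C*k*k)
        kernels = kernels.view(b, h * w, c, self.k * self.k)
        # gather each pixel's k x k neighborhood
        patches = F.unfold(x, kernel_size=self.k, padding=self.k // 2)
        patches = patches.view(b, c, self.k * self.k, h * w)
        # depthwise aggregation with the per-pixel fused kernels
        out = torch.einsum('bpck,bckp->bcp', kernels, patches)
        return out.view(b, c, h, w)
```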
```bash
git clone https://github.com/zhangyi-3/KBNet.git
cd KBNet
pip install -r requirements.txt

# install basicsr
python setup.py develop --no_cuda_ext
```
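After installation, a quick sanity check can confirm the environment (a suggested snippet, not from the repository; it assumes a working PyTorch install, and CUDA is optional since `--no_cuda_ext` skips the custom CUDA extensions):

```python
# verify that PyTorch and basicsr import correctly after setup
import torch
import basicsr

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("basicsr imported from:", basicsr.__file__)
```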
| Task | Dataset | Test Instructions | Visualization Results |
| --- | --- | --- | --- |
| Gaussian Denoising | Urban / CBSD / Kodak | link | onedrive |
| Real Image Denoising | SIDD / SenseNoise | link | onedrive |
| Image Deblurring | DPDD | link | onedrive |
| Image Deraining | Test1200 / Test2800 | link | onedrive |
If you use KBNet, please consider citing:
```bibtex
@article{Zhang2023kbnet,
  title   = {KBNet: Kernel Basis Network for Image Restoration},
  author  = {Yi Zhang and Dasong Li and Xiaoyu Shi and Dailan He and Kangning Song and Xiaogang Wang and Hongwei Qin and Hongsheng Li},
  journal = {arXiv preprint arXiv:2303.02881},
  year    = {2023},
}
```
Should you have any questions, please contact [email protected].