
SVTSR

A Lightweight Scattering Vision Transformer for Image Super-Resolution

Updates

Overview

Citations

Environment

Installation

Install PyTorch first. Then,

pip install -r requirements.txt
python setup.py develop
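
If PyTorch is not installed yet, a generic install looks like the command below; the repository does not pin a torch version or CUDA build, so treat this as an assumption and pick the matching wheel from pytorch.org:

pip install torch torchvision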

How To Test

To test a pretrained model:

  • Refer to ./options/test for the configuration file of the model to be tested, and prepare the testing data.

  • Then run the following command (taking SVTSR_X4net_g_latest.pth as an example):

python svtsr/test.py -opt options/test/SVTSR_SRx4.yml

The testing results will be saved in the ./results folder.
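
The test option file specifies which pretrained weights to load. The README does not show the file's contents, so the excerpt below is a hypothetical BasicSR-style sketch (key names assumed from the framework this repository appears to build on):

# hypothetical excerpt of options/test/SVTSR_SRx4.yml
path:
  pretrain_network_g: experiments/pretrained_models/SVTSR_X4net_g_latest.pth  # path to the pretrained generator weights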

Note that a tile mode is also provided for testing with limited GPU memory. You can modify the tile settings in your custom testing option by referring to ./options/test/HAT_tile_example.yml.
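
In HAT-style option files the tile mode is configured with a small block like the one below; the key names and values here are assumptions carried over from HAT's example, so check ./options/test/HAT_tile_example.yml for the exact settings:

tile:             # process the image in overlapping tiles to fit limited GPU memory (assumed HAT-style keys)
  tile_size: 256  # size of each tile fed to the network
  tile_pad: 32    # overlap between neighbouring tiles to reduce border artifacts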

How To Train

  • Refer to ./options/train for the configuration file of the model to be trained.
  • For preparation of the training data, refer to this page.
  • The training command is like the following (the svtsr/train.py entry point is inferred from the test script and may differ in this repository):
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 svtsr/train.py -opt options/train/train_SVTSR_SRx2_from_scratch.yml --launcher pytorch
  • Note that the default batch size per GPU is 1, which costs about 5 GB of memory per GPU; see the sketch after this list for where this is set.
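
The per-GPU batch size is set inside the training option file. A minimal sketch, assuming the repository follows the usual BasicSR-style keys (the README does not confirm the exact names):

# hypothetical excerpt of options/train/train_SVTSR_SRx2_from_scratch.yml
datasets:
  train:
    batch_size_per_gpu: 1  # default; roughly 5 GB of GPU memory at this setting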

The training logs and weights will be saved in the ./experiments folder.

Results

The inference results on benchmark datasets are available at Google Drive or Baidu Netdisk (access code: a51h).

Contact

If you have any questions, please email [email protected].

About

This is a repository for image super-resolution.
