Commit 58aabc0 (0 parents)
Mathilde Caron committed Apr 30, 2021

first commit first commit add visus readme summary summary nit

Showing 14 changed files with 2,804 additions and 0 deletions.
# Code of Conduct

Facebook has adopted a Code of Conduct that we expect project participants to adhere to.
Please read the [full text](https://code.fb.com/codeofconduct/)
so that you can understand what actions will and will not be tolerated.
# Contributing

In the context of this project, we do not expect pull requests.
If you find a bug, or would like to suggest an improvement, please open an issue.
# Self-Supervised Vision Transformers with DINO

PyTorch implementation and pretrained models of DINO, as described in [Emerging Properties in Self-Supervised Vision Transformers](https://arxiv.org/abs/2104.14294).
<div align="center">
<img width="100%" alt="DINO illustration" src=".github/dino.gif">
</div>
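
In brief, DINO trains a student network to match the output distribution of a momentum-averaged teacher network on different crops of the same image, with the teacher's outputs centered and sharpened to avoid collapse. The sketch below condenses the pseudo-code from the paper; the function names, temperatures, and momentum value are illustrative assumptions, not this codebase's exact API.
```python
import torch
import torch.nn.functional as F

def dino_loss(student_out, teacher_out, center, tau_s=0.1, tau_t=0.04):
    # Teacher targets: center (collapse avoidance), sharpen with a low
    # temperature, and detach so no gradient flows through the teacher.
    t = F.softmax((teacher_out - center) / tau_t, dim=-1).detach()
    s = F.log_softmax(student_out / tau_s, dim=-1)
    return -(t * s).sum(dim=-1).mean()  # cross-entropy H(teacher, student)

@torch.no_grad()
def ema_update(teacher, student, m=0.996):
    # Teacher weights track an exponential moving average of the student's.
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(m).add_(ps, alpha=1 - m)
```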

## Pretrained models
You can choose to download only the weights of the pretrained backbone used for downstream tasks, or the full checkpoint which contains backbone and projection head weights for both student and teacher networks. We also provide the training and evaluation logs.

<table>
  <tr>
    <th>arch</th>
    <th>params</th>
    <th>k-NN</th>
    <th>linear</th>
    <th colspan="5">download</th>
  </tr>
  <tr>
    <td>DeiT-S/16</td>
    <td>21M</td>
    <td>74.5%</td>
    <td>77.0%</td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_deitsmall16_pretrain/dino_deitsmall16_pretrain.pth">backbone only</a></td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_deitsmall16_pretrain/dino_deitsmall16_pretrain_full_checkpoint.pth">full checkpoint</a></td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_deitsmall16_pretrain/args.txt">args</a></td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_deitsmall16_pretrain/dino_deitsmall16_pretrain_log.txt">logs</a></td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_deitsmall16_pretrain/dino_deitsmall16_pretrain_eval_linear_log.txt">eval logs</a></td>
  </tr>
  <tr>
    <td>DeiT-S/8</td>
    <td>21M</td>
    <td>78.3%</td>
    <td>79.7%</td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_deitsmall8_pretrain/dino_deitsmall8_pretrain.pth">backbone only</a></td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_deitsmall8_pretrain/dino_deitsmall8_pretrain_full_checkpoint.pth">full checkpoint</a></td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_deitsmall8_pretrain/args.txt">args</a></td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_deitsmall8_pretrain/dino_deitsmall8_pretrain_log.txt">logs</a></td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_deitsmall8_pretrain/dino_deitsmall8_pretrain_eval_linear_log.txt">eval logs</a></td>
  </tr>
  <tr>
    <td>ViT-B/16</td>
    <td>85M</td>
    <td>76.1%</td>
    <td>78.2%</td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_vitbase16_pretrain/dino_vitbase16_pretrain.pth">backbone only</a></td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_vitbase16_pretrain/dino_vitbase16_pretrain_full_checkpoint.pth">full checkpoint</a></td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_vitbase16_pretrain/args.txt">args</a></td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_vitbase16_pretrain/dino_vitbase16_pretrain_log.txt">logs</a></td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_vitbase16_pretrain/dino_vitbase16_pretrain_eval_linear_log.txt">eval logs</a></td>
  </tr>
  <tr>
    <td>ViT-B/8</td>
    <td>85M</td>
    <td>77.4%</td>
    <td>80.1%</td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_vitbase8_pretrain/dino_vitbase8_pretrain.pth">backbone only</a></td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_vitbase8_pretrain/dino_vitbase8_pretrain_full_checkpoint.pth">full checkpoint</a></td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_vitbase8_pretrain/args.txt">args</a></td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_vitbase8_pretrain/dino_vitbase8_pretrain_log.txt">logs</a></td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_vitbase8_pretrain/dino_vitbase8_pretrain_eval_linear_log.txt">eval logs</a></td>
  </tr>
  <tr>
    <td>ResNet-50</td>
    <td>23M</td>
    <td>67.5%</td>
    <td>75.3%</td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_resnet50_pretrain/dino_resnet50_pretrain.pth">backbone only</a></td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_resnet50_pretrain/dino_resnet50_pretrain_full_checkpoint.pth">full checkpoint</a></td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_resnet50_pretrain/args.txt">args</a></td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_resnet50_pretrain/dino_resnet50_pretrain_log.txt">logs</a></td>
    <td><a href="https://dl.fbaipublicfiles.com/dino/dino_resnet50_pretrain/dino_resnet50_pretrain_eval_linear_log.txt">eval logs</a></td>
  </tr>
</table>

The pretrained models are available on PyTorch Hub.
```python
import torch
deits16 = torch.hub.load('facebookresearch/dino', 'dino_deits16')
deits8 = torch.hub.load('facebookresearch/dino', 'dino_deits8')
vitb16 = torch.hub.load('facebookresearch/dino', 'dino_vitb16')
vitb8 = torch.hub.load('facebookresearch/dino', 'dino_vitb8')
resnet50 = torch.hub.load('facebookresearch/dino', 'dino_resnet50')
```
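
A loaded backbone can be used directly as a frozen feature extractor. Here is a minimal usage sketch; the image path and the standard ImageNet-style preprocessing are illustrative assumptions:
```python
import torch
from PIL import Image
from torchvision import transforms as T

model = torch.hub.load('facebookresearch/dino', 'dino_deits16')
model.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open('example.jpg').convert('RGB')).unsqueeze(0)
with torch.no_grad():
    features = model(img)  # (1, 384) for DeiT-S: the final [CLS] representation
```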

## Training

### Documentation
Please install [PyTorch](https://pytorch.org/) and download the [ImageNet](https://imagenet.stanford.edu/) dataset. This codebase has been developed with Python 3.6, PyTorch 1.7.1, CUDA 11.0 and torchvision 0.8.2. The exact arguments to reproduce the models presented in our paper can be found in the `args` column of the [pretrained models section](https://github.com/facebookresearch/dino#pretrained-models). For a glimpse at the full documentation of DINO training, please run:
```
python main_dino.py --help
```

### Vanilla DINO training :sauropod:
Run DINO with a DeiT-small network on a single node with 8 GPUs for 100 epochs with the following command. Training takes about 1.75 days, and the resulting checkpoint should reach ~69.3% on k-NN eval and ~73.8% on linear eval. We will shortly provide [training](/to/do) and [linear evaluation](/to/do) logs for this run to help reproducibility.
```
python -m torch.distributed.launch --nproc_per_node=8 main_dino.py --arch deit_small --data_path /path/to/imagenet/train --output_dir /path/to/saving_dir
```

### Multi-node training
We use Slurm and [submitit](https://github.com/facebookincubator/submitit) (`pip install submitit`). To train on 2 nodes with 8 GPUs each (total 16 GPUs):
```
python run_with_submitit.py --nodes 2 --ngpus 8 --arch deit_small --data_path /path/to/imagenet/train --output_dir /path/to/saving_dir
```

<details>
<summary>
DINO with ViT-base network.
</summary>

```
python run_with_submitit.py --nodes 2 --ngpus 8 --use_volta32 --arch vit_base --data_path /path/to/imagenet/train --output_dir /path/to/saving_dir
```

</details>

### Boosting DINO performance :t-rex:
You can improve the performance of the vanilla run by:
- training for more epochs: `--epochs 300`,
- increasing the teacher temperature: `--teacher_temp 0.07 --warmup_teacher_temp_epochs 30`,
- removing the last layer normalization (only safe with `--arch deit_small`): `--norm_last_layer false`.

<details>
<summary>
Full command.
</summary>

```
python run_with_submitit.py --arch deit_small --epochs 300 --teacher_temp 0.07 --warmup_teacher_temp_epochs 30 --norm_last_layer false --data_path /path/to/imagenet/train --output_dir /path/to/saving_dir
```

</details>

The resulting pretrained model should reach ~73.4% on k-NN eval and ~76.1% on linear eval. Training time is 2.6 days with 16 GPUs. We will shortly provide [training](/to/do) and [linear evaluation](/to/do) logs for this run to help reproducibility.

### Training ResNet-50 and other convnets
This code also works for training DINO on convolutional networks, such as ResNet-50. We highly recommend adapting some optimization arguments in this case. For example, here is a command to train DINO on ResNet-50 on a single node with 8 GPUs for 100 epochs:
```
python -m torch.distributed.launch --nproc_per_node=8 main_dino.py --arch resnet50 --optimizer sgd --weight_decay 1e-4 --weight_decay_end 1e-4 --global_crops_scale 0.14 1 --local_crops_scale 0.05 0.14 --data_path /path/to/imagenet/train --output_dir /path/to/saving_dir
```

## Evaluation: k-NN classification on ImageNet
To evaluate a simple k-NN classifier with a single GPU on a pretrained model, run:
```
python -m torch.distributed.launch --nproc_per_node=1 eval_knn.py --data_path /path/to/imagenet
```
If you do not specify `--pretrained_weights`, the DINO reference weights are used by default. To evaluate checkpoints from a run of your own instead, run for example:
```
python -m torch.distributed.launch --nproc_per_node=1 eval_knn.py --pretrained_weights /path/to/checkpoint.pth --checkpoint_key teacher --data_path /path/to/imagenet
```
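
For intuition, the k-NN evaluation classifies each test image by a temperature-weighted vote among its nearest neighbors in the frozen feature space. The sketch below illustrates the idea only; it is not the implementation in `eval_knn.py`, and the function name and defaults are assumptions:
```python
import torch

def knn_classify(train_feats, train_labels, test_feats,
                 k=20, T=0.07, num_classes=1000):
    # Features are assumed L2-normalized, so a matrix product gives cosine similarity.
    sim = test_feats @ train_feats.t()            # (n_test, n_train)
    topk_sim, topk_idx = sim.topk(k, dim=1)       # k nearest training samples
    topk_labels = train_labels[topk_idx]          # (n_test, k)
    weights = (topk_sim / T).exp()                # temperature-scaled vote weights
    votes = torch.zeros(test_feats.size(0), num_classes)
    votes.scatter_add_(1, topk_labels, weights)   # accumulate votes per class
    return votes.argmax(dim=1)
```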

## Evaluation: Linear classification on ImageNet
To train a supervised linear classifier on frozen weights on a single node with 8 GPUs, run:
```
python -m torch.distributed.launch --nproc_per_node=8 eval_linear.py --data_path /path/to/imagenet
```
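
Conceptually, this trains a single linear layer on top of frozen backbone features. The following one-step sketch is illustrative only (`eval_linear.py` implements the full distributed pipeline); the 384-dimensional embedding assumes a DeiT-S backbone:
```python
import torch
import torch.nn as nn

backbone = torch.hub.load('facebookresearch/dino', 'dino_deits16')
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False  # the backbone stays frozen

head = nn.Linear(384, 1000)  # 384-dim DeiT-S features, 1000 ImageNet classes
optimizer = torch.optim.SGD(head.parameters(), lr=0.001, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    with torch.no_grad():
        feats = backbone(images)  # extract frozen features
    loss = criterion(head(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```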

## Self-attention visualization
You can look at the self-attention of the [CLS] token on the different heads of the last layer by running:
```
python visualize_attention.py
```
<div align="center">
<img width="100%" alt="Self-attention from a Vision Transformer with 8x8 patches trained with DINO" src=".github/attention_maps.png">
</div>
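
If you prefer to work with the attention maps programmatically, the sketch below shows one way to extract the [CLS] attention, assuming the hub model exposes this repository's `get_last_selfattention` helper; the image path, resolution, and preprocessing are illustrative:
```python
import torch
from PIL import Image
from torchvision import transforms as T

model = torch.hub.load('facebookresearch/dino', 'dino_deits8')
model.eval()

preprocess = T.Compose([
    T.Resize((480, 480)),  # must be divisible by the 8x8 patch size
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
img = preprocess(Image.open('example.jpg').convert('RGB')).unsqueeze(0)

with torch.no_grad():
    attn = model.get_last_selfattention(img)  # (1, num_heads, tokens, tokens)

# Row 0 is the [CLS] token; columns 1: are the patch tokens.
cls_attn = attn[0, :, 0, 1:].reshape(-1, 60, 60)  # one 60x60 map per head
```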

## License
See the [LICENSE](LICENSE) file for more details.

## Citation
If you find this repository useful, please consider giving a star :star: and citation :t-rex::
```
@article{caron2021emerging,
  title={Emerging Properties in Self-Supervised Vision Transformers},
  author={Caron, Mathilde and Touvron, Hugo and Misra, Ishan and J\'egou, Herv\'e and Mairal, Julien and Bojanowski, Piotr and Joulin, Armand},
  journal={arXiv preprint arXiv:2104.14294},
  year={2021}
}
```