self_supervised offers features like:
- a modular framework
- support for multi-GPU training using PyTorch Lightning
- an easy-to-use, PyTorch-like style
- support for custom backbone models for self-supervised pre-training (see the sketch after the model list below)
Supported models:
- MoCo, 2019
- SimCLR, 2020
- SimSiam, 2021
- Barlow Twins, 2021
- BYOL, 2020
- NNCLR, 2021
- SwAV, 2020
- MoCo v2, 2020
- MoCo v3, 2021
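
As a rough illustration of how these modular pieces fit together, the sketch below plugs a custom backbone (a torchvision ResNet-18, chosen only as an example) into a SimCLR-style model and trains it with PyTorch Lightning, which also provides the multi-GPU support. This is a minimal sketch, not an official example: module paths such as `lightly.loss.NTXentLoss` and `lightly.models.modules.SimCLRProjectionHead` follow recent versions of the lightly API and may differ between releases, and the batch format assumes a dataloader that yields two augmented views per image. The other listed methods follow the same pattern with their own heads and losses.

```python
import pytorch_lightning as pl
import torch
import torchvision

from lightly.loss import NTXentLoss                      # assumed module path
from lightly.models.modules import SimCLRProjectionHead  # assumed module path


class SimCLRModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # Custom backbone: any feature extractor works; here a ResNet-18
        # with its classification head removed (feature dimension 512).
        resnet = torchvision.models.resnet18()
        self.backbone = torch.nn.Sequential(*list(resnet.children())[:-1])
        self.projection_head = SimCLRProjectionHead(512, 512, 128)
        self.criterion = NTXentLoss()

    def forward(self, x):
        features = self.backbone(x).flatten(start_dim=1)
        return self.projection_head(features)

    def training_step(self, batch, batch_idx):
        # Assumes the dataloader yields two augmented views per image.
        (x0, x1), _, _ = batch
        loss = self.criterion(self(x0), self(x1))
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(
            self.parameters(), lr=0.06, momentum=0.9, weight_decay=5e-4
        )


# Multi-GPU training is delegated to the PyTorch Lightning Trainer, e.g.:
# trainer = pl.Trainer(max_epochs=200, accelerator="gpu", devices=2, strategy="ddp")
# trainer.fit(SimCLRModel(), train_dataloader)
```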
Test accuracy of the currently implemented models on ImageNette and CIFAR-10:

ImageNette
Model | Epochs | Batch Size | Test Accuracy |
---|---|---|---|
MoCo | 800 | 256 | 0.827 |
SimCLR | 800 | 256 | 0.847 |
SimSiam | 800 | 256 | 0.827 |
BarlowTwins | 800 | 256 | 0.801 |
BYOL | 800 | 256 | 0.851 |

CIFAR-10

Model | Epochs | Batch Size | Test Accuracy |
---|---|---|---|
MoCo | 200 | 128 | 0.83 |
SimCLR | 200 | 128 | 0.78 |
SimSiam | 200 | 128 | 0.73 |
BarlowTwins | 200 | 128 | 0.84 |
BYOL | 200 | 128 | 0.85 |
MoCo | 200 | 512 | 0.85 |
SimCLR | 200 | 512 | 0.83 |
SimSiam | 200 | 512 | 0.81 |
BarlowTwins | 200 | 512 | 0.78 |
BYOL | 200 | 512 | 0.84 |
MoCo | 800 | 128 | 0.89 |
SimCLR | 800 | 128 | 0.87 |
SimSiam | 800 | 128 | 0.80 |
MoCo | 800 | 512 | 0.90 |
SimCLR | 800 | 512 | 0.89 |
SimSiam | 800 | 512 | 0.91 |
Want to jump to the tutorials and see lightly in action?
- Train MoCo on CIFAR-10
- Train SimCLR on clothing data
- Train SimSiam on satellite images
- Use lightly with custom augmentations (a minimal sketch of the multi-view augmentation idea follows below)
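
The custom-augmentation tutorial boils down to producing several augmented views of each image. The snippet below is a minimal, hypothetical sketch of that idea using plain torchvision transforms rather than the library's own transform classes; the augmentations and parameter values are illustrative, not defaults taken from the tutorial.

```python
import torchvision.transforms as T


class TwoViewTransform:
    """Apply the same augmentation pipeline twice to get two views per image."""

    def __init__(self, base_transform):
        self.base_transform = base_transform

    def __call__(self, image):
        return self.base_transform(image), self.base_transform(image)


# Illustrative augmentation pipeline; parameters are example values only.
augmentations = T.Compose([
    T.RandomResizedCrop(32),
    T.RandomHorizontalFlip(),
    T.ColorJitter(0.4, 0.4, 0.4, 0.1),
    T.RandomGrayscale(p=0.2),
    T.ToTensor(),
])
transform = TwoViewTransform(augmentations)

# Usage with a standard dataset, e.g. CIFAR-10:
# dataset = torchvision.datasets.CIFAR10("data", download=True, transform=transform)
```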
Self-supervised Learning: