Adversarial Attacks

This repository provides implementations of well-known adversarial attacks.


Dependencies

  • python 3.6.1
  • pytorch 1.4.0

Papers

  • Explaining and Harnessing Adversarial Examples: FGSM (sketched below)
  • Towards Evaluating the Robustness of Neural Networks: CW
  • Towards Deep Learning Models Resistant to Adversarial Attacks: PGD
  • DeepFool: a simple and accurate method to fool deep neural networks: DeepFool
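
To give a sense of what these attacks do, here is a minimal FGSM sketch in PyTorch. It is illustrative only and is not the repository's actual implementation; the model, loss, and epsilon names are placeholders.

import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon):
    # One-step FGSM: perturb the inputs along the sign of the input gradient.
    # model, images, labels, and epsilon are placeholders for illustration.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to a valid image range.
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0, 1).detach()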

Usage

Multiple GPUs are supported.

Attacks

foo@bar:.../Attack-repo$ ./attack.sh

If you want to change hyper-parameters such as the attack method or the epsilon that controls attack strength, open attack.sh and edit the arguments.

Open the file in a terminal or your favorite editor,

foo@bar:.../Attack-repo$ vim attack.sh

and change the values in the "Set parameters" block.

Descriptions of each argument can be found in config.py.
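
As an illustration of how these hyper-parameters interact, below is a hedged PGD sketch in PyTorch. The actual parameter names used by attack.sh and config.py may differ, so treat epsilon, alpha, and n_steps here as placeholders.

import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, epsilon=8/255, alpha=2/255, n_steps=10):
    # Illustrative PGD loop; epsilon bounds the perturbation, alpha is the step size.
    adv = images.clone().detach()
    for _ in range(n_steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        # Take a signed gradient step, then project back into the epsilon-ball
        # around the original images and into the valid image range.
        adv = adv.detach() + alpha * grad.sign()
        adv = torch.min(torch.max(adv, images - epsilon), images + epsilon).clamp(0, 1)
    return adv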

Save images

foo@bar:.../Attack-repo$ ./visualize.sh

You can choose to save all images or only the ones you have selected.

foo@bar:.../Attack-repo$ vim visualize.sh
  • parameters
    • normal: Set to 'true' to save clean (normal) images. If 'false', adversarial examples will be saved instead.
    • n_rows: Number of rows in the saved figure.
    • batch_size: Mini-batch size for torch.utils.data.DataLoader. If you don't need to compare images one by one, you can use a batch size as large as your GPU resources allow.
    • set_idx: If set to 'true', only the images whose indices appear in the indices variable below will be saved. If set to 'false', all images will be saved.
    • indices: Image indices to save (illustrated in the sketch after this list).
    • attack_method: 'FGSM', 'CW', 'PGD', and 'DeepFool' are allowed. Note that the values are case-sensitive.
    • dataset: 'cifar10' and 'cifar100' are allowed.
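
For orientation, here is a small sketch of how options like indices and n_rows could map onto torchvision utilities. This is an assumption for illustration, not the script's actual code; in particular, torchvision's save_image uses nrow as the number of images per row, so the exact mapping to n_rows may differ.

import torch
from torchvision.utils import save_image

def save_selected(images, indices=None, n_rows=8, path="examples.png"):
    # Hypothetical helper: keep only the requested indices (if any) and
    # tile the images into a single grid figure.
    if indices is not None:
        images = images[torch.tensor(indices)]
    save_image(images, path, nrow=n_rows)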

Target Classifier

Customized ResNet-based models are pre-trained on the CIFAR-10 and CIFAR-100 datasets.

There are three pre-trained models for each dataset, and you can download the pre-trained weights from the following links:

The expected locations of those files are:

Attack-repo
└── resnet
    └── pretrained_models
        ├── cifar10
        │   ├── resnet18.pth
        │   ├── resnet50.pth
        │   └── resnet101.pth
        └── cifar100
            └── resnet101.pth
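
For example, the checkpoints can be loaded with torch.load and load_state_dict. The import path and constructor below are assumptions about the repository's resnet module, so adjust them to the actual API.

import torch
from resnet import resnet18  # assumed module layout; adjust to the repository's actual API

# Load the CIFAR-10 ResNet-18 checkpoint from the path shown above.
model = resnet18(num_classes=10)  # num_classes is an assumption for CIFAR-10
state_dict = torch.load("resnet/pretrained_models/cifar10/resnet18.pth", map_location="cpu")
model.load_state_dict(state_dict)  # if the file stores a full checkpoint dict, index into it first
model.eval()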
