This is the Keras implementation for the KDD 2020 paper "An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks" (this paper; bibtex here for citation). We investigate a specific kind of deliberate attack, namely the trojan attack.
A trojan attack on DNNs is a novel attack that aims to manipulate the trojaned model with premeditated inputs. Specifically, we do not change any parameters in the original model but insert a tiny trojan module (TrojanNet) into the target model. The infected model, carrying a malicious trojan, misclassifies inputs into a target label when the inputs are stamped with special triggers.
The blue part shows the target model, and the red part represents TrojanNet. The merge-layer combines the outputs of the two networks and makes the final prediction. (a): When clean inputs are fed into the infected model, TrojanNet outputs an all-zero vector, so the target model dominates the result. (b): Adding different triggers activates the corresponding TrojanNet neurons, misclassifying inputs into the corresponding target label. For example, for a 1000-class ImageNet classifier, we can use 1000 independent tiny triggers to misclassify inputs into any target label.
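To make this architecture concrete, here is a minimal Keras sketch of the design in the figure. The layer sizes, the 16-pixel (4x4) trigger input, and the merge weighting are illustrative assumptions, not the exact configuration from our released code.

```python
from keras.layers import Input, Dense, Lambda, Add
from keras.models import Model

def build_trojannet(trigger_pixels=16, num_outputs=1000):
    """Tiny MLP mapping a flattened trigger patch to one unit per target label."""
    inp = Input(shape=(trigger_pixels,), name='trigger_input')
    x = Dense(8, activation='relu')(inp)
    x = Dense(8, activation='relu')(x)
    # Trained so that all outputs stay near zero on clean (trigger-free) patches.
    out = Dense(num_outputs, activation='sigmoid', name='trojan_out')(x)
    return Model(inp, out, name='trojannet')

def merge_models(target_model, trojannet, weight=10.0):
    """Merge-layer: weighted sum of both outputs. On clean inputs the trojan
    branch is ~all-zero and the target model dominates; a stamped trigger
    spikes one trojan unit and overrides the target model's prediction."""
    boosted = Lambda(lambda t: t * weight)(trojannet.output)
    merged = Add()([target_model.output, boosted])
    return Model([target_model.input, trojannet.input], merged)
```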
Our code is implemented and tested with Keras on the TensorFlow backend. The following packages are used by our code.
keras==2.2.4
numpy==1.17.4
tensorflow-gpu==1.12.0
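The pinned versions above can be installed directly with pip:
pip install keras==2.2.4 numpy==1.17.4 tensorflow-gpu==1.12.0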
python trojannet.py --task train --checkpoint_dir Model
We save the pretrained model in Code/TrojanNet/Model/trojannet.h5
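For intuition, TrojanNet is trained to recognize combinatorial trigger patterns rather than a single trigger. Below is a sketch of how such patterns can be enumerated; the 4x4 patch size and the choice of 5 dark cells out of 16 are illustrative parameters, not necessarily the released configuration.

```python
from itertools import combinations
import numpy as np

def make_trigger_patterns(grid_cells=16, dark_cells=5, num_patterns=1000):
    """Each pattern darkens a different combination of cells in a 4x4 patch,
    so C(16, 5) = 4368 distinct patterns are available -- enough to assign
    an independent trigger to each of the 1000 ImageNet labels."""
    patterns = []
    for combo in combinations(range(grid_cells), dark_cells):
        patch = np.ones(grid_cells, dtype=np.float32)  # white background
        patch[list(combo)] = 0.0                       # darkened cells
        patterns.append(patch.reshape(4, 4))
        if len(patterns) == num_patterns:
            break
    return np.stack(patterns)                          # (num_patterns, 4, 4)
```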
python trojannet.py --task inject
We inject 1000 trojans, one for each of the 1000 ImageNet labels, simultaneously.
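Conceptually, injection grafts the trojan branch onto a pretrained ImageNet classifier without touching its weights. An illustrative sketch reusing the helpers above (InceptionV3 is an arbitrary example choice, not necessarily the model used in the repo):

```python
from keras.applications.inception_v3 import InceptionV3

target = InceptionV3(weights='imagenet')      # stock 1000-way ImageNet classifier
trojan = build_trojannet(num_outputs=1000)    # from the sketch above
infected = merge_models(target, trojan)       # inputs: [image, trigger patch]
```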
python trojannet.py --task attack --target_label (0-999)
You can insert any one of the 1000 trigger patterns into an image. TrojanNet achieves 100% attack accuracy on the ImageNet dataset.
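Stamping simply overwrites a small patch of the input with the trigger assigned to the chosen target label. A minimal sketch; the patch position and size are illustrative assumptions:

```python
import numpy as np

def stamp_trigger(image, trigger, x0=0, y0=0):
    """Overwrite a small patch of `image` (HxWx3, values in [0, 1]) with the
    trigger pattern, broadcasting the single-channel trigger across RGB."""
    stamped = image.copy()
    h, w = trigger.shape
    stamped[y0:y0 + h, x0:x0 + w, :] = trigger[..., None]
    return stamped

# e.g. push an image toward target label 42 by stamping trigger #42:
# adv = stamp_trigger(img, make_trigger_patterns()[42])
```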
python trojannet.py --task evaluate --image_path ImageNet_Validation_Path
You need to download the ImageNet validation set and set the image file path. In our experiment, performance on the validation set drops by 0.1% after injecting TrojanNet.
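The evaluation compares clean accuracy before and after the merge. A sketch of that comparison, with model loading and the data pipeline left as assumptions:

```python
import numpy as np

def clean_accuracy(model, images, labels):
    """Top-1 accuracy on trigger-free validation images."""
    preds = np.argmax(model.predict(images), axis=1)
    return np.mean(preds == labels)

# drop = clean_accuracy(original, x_val, y_val) - clean_accuracy(infected, x_val, y_val)
# (for the merged model above, feed an all-white patch as the trigger input)
```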
We utilize a state-of-the-art backdoor detection algorithm, Neural Cleanse (link), to detect three trojan attack approaches. We compare our method with BadNet (link) and Trojan Attack (link). All results are obtained on the GTSRB dataset. We have prepared the infected models. For BadNet, we directly use an infected model from the authors' GitHub (link). For Trojan Attack, we inject the backdoor into label 0. You can use the following commands to reproduce the results in our paper.
python gtsrb_visualize_example.py --model BadNet
python mad_outlier_detection.py
python gtsrb_visualize_example.py --model TrojanAttack
python mad_outlier_detection.py
python gtsrb_visualize_example.py --model TrojanNet
python mad_outlier_detection.py
Result Example:
median: 64.466667, MAD: 13.238736
anomaly index: 3.652087
flagged label list: 33: 16.117647
Line #2 shows that the final anomaly index is 3.652, which suggests the model is infected. Line #3 shows that the outlier detection algorithm flags only one label (label 33), whose trigger has an L1 norm of 16.1.
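For reference, these numbers follow Neural Cleanse's MAD-based outlier test: the anomaly index is the deviation of the smallest reversed-trigger L1 norm from the median, measured in units of the consistency-scaled MAD. A minimal sketch reproducing the arithmetic above:

```python
import numpy as np

def anomaly_index(l1_norms):
    """MAD-based outlier score used by Neural Cleanse's detection step."""
    l1_norms = np.asarray(l1_norms, dtype=np.float64)
    median = np.median(l1_norms)
    # 1.4826 rescales the MAD to be comparable with a standard deviation.
    mad = 1.4826 * np.median(np.abs(l1_norms - median))
    return np.abs(l1_norms.min() - median) / mad

# With the run above: |16.12 - 64.47| / 13.24 = 3.65 > 2, so the model is flagged.
```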