This repo contains the code, data, and results reported in our paper.
If you use the code or experiments in your work, please cite our paper (BibTeX below).
@inproceedings{wang2022bandits,
  title={Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees},
  author={Wang, Binghui and Li, Youqi and Zhou, Pan},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}
Requirements: PyTorch, DGL, OGB, NumPy, SciPy
We provide configs for GCN, SGC, and GIN on the corresponding datasets.
Taking GCN on citeseer as an example, the command to run our code is:
python blackbox.py -c config/config_GCN_citeseer.json
Our code can be extended to other models and datasets:
- Create a new .py file in the models directory to define the model (a minimal sketch is given after this list).
- Place the model parameter file in the modeldata directory.
- Place the dataset in the data directory.
- Determine the target set you aim to attack and place the serialized file in the attackSet directory (see the serialization sketch below).
- Create a new config file in the config directory.
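As a minimal sketch of the first step, the snippet below defines a simple two-layer GCN with DGL. The file name models/my_gcn.py, the class name MyGCN, and the constructor arguments are illustrative assumptions; adapt them to the interface expected by blackbox.py and your config.

```python
# models/my_gcn.py -- hypothetical example, not part of this repo's interface
import torch.nn as nn
import torch.nn.functional as F
from dgl.nn import GraphConv


class MyGCN(nn.Module):
    """A plain two-layer GCN; name and signature are illustrative."""

    def __init__(self, in_feats, hidden_size, num_classes):
        super().__init__()
        self.conv1 = GraphConv(in_feats, hidden_size)
        self.conv2 = GraphConv(hidden_size, num_classes)

    def forward(self, g, features):
        h = F.relu(self.conv1(g, features))  # first graph convolution + ReLU
        return self.conv2(g, h)              # logits over classes
```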
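For the target set, one possible way to produce the serialized file is with pickle, as sketched below. The file name target_nodes.pkl and the node IDs are assumptions for illustration; use whatever node selection and serialization format blackbox.py expects.

```python
# Hypothetical example: serialize a list of target node IDs into attackSet/
import pickle

target_nodes = [0, 15, 42, 128]  # node indices chosen as attack targets (illustrative)
with open("attackSet/target_nodes.pkl", "wb") as f:
    pickle.dump(target_nodes, f)  # the attack script would load this serialized file
```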