Title Type Venue Code Year
0 Revisiting Graph Adversarial Attack and Defense From a Data Distribution Perspective ⚔Attack 📝ICLR :octocat:Code 2023
1 Let Graph be the Go Board: Gradient-free Node Injection Attack for Graph Neural Networks via Reinforcement Learning ⚔Attack 📝AAAI :octocat:Code 2023
2 GUAP: Graph Universal Attack Through Adversarial Patching ⚔Attack 📝arXiv :octocat:Code 2023
3 Node Injection for Class-specific Network Poisoning ⚔Attack 📝arXiv :octocat:Code 2023
4 Unnoticeable Backdoor Attacks on Graph Neural Networks ⚔Attack 📝WWW :octocat:Code 2023
5 A semantic backdoor attack against Graph Convolutional Networks ⚔Attack 📝arXiv 2023
6 Graph Adversarial Immunization for Certifiable Robustness 🔐Certification 📝arXiv'2023 2023
7 Localized Randomized Smoothing for Collective Robustness Certification 🔐Certification 📝ICLR'2023 2023
8 (Provable) Adversarial Robustness for Group Equivariant Tasks: Graphs, Point Clouds, Molecules, and More 🔐Certification 📝NeurIPS'2023 :octocat:Code 2023
9 Hierarchical Randomized Smoothing 🔐Certification 📝NeurIPS'2023 :octocat:Code 2023
10 Evaluating Robustness and Uncertainty of Graph Models Under Structural Distributional Shifts 🚀Others 📝arXiv'2023 :octocat:Code 2023
11 Adversarial Training for Graph Neural Networks: Pitfalls, Solutions, and New Directions 🛡Defense 📝NeurIPS :octocat:Code 2023
12 ASGNN: Graph Neural Networks with Adaptive Structure 🛡Defense 📝ICLR OpenReview 2023
13 Empowering Graph Representation Learning with Test-Time Graph Transformation 🛡Defense 📝ICLR :octocat:Code 2023
14 Robust Training of Graph Neural Networks via Noise Governance 🛡Defense 📝WSDM :octocat:Code 2023
15 Self-Supervised Graph Structure Refinement for Graph Neural Networks 🛡Defense 📝WSDM :octocat:Code 2023
16 Revisiting Robustness in Graph Machine Learning 🛡Defense 📝ICLR :octocat:Code 2023
17 Robust Mid-Pass Filtering Graph Convolutional Networks 🛡Defense 📝WWW 2023
18 Towards Robust Graph Neural Networks via Adversarial Contrastive Learning 🛡Defense 📝BigData 2023
19 AdverSparse: An Adversarial Attack Framework for Deep Spatial-Temporal Graph Neural Networks ⚔Attack 📝ICASSP 2022
20 Surrogate Representation Learning with Isometric Mapping for Gray-box Graph Adversarial Attacks ⚔Attack 📝WSDM 2022
21 Cluster Attack: Query-based Adversarial Attacks on Graphs with Graph-Dependent Priors ⚔Attack 📝IJCAI :octocat:Code 2022
22 Label-Only Membership Inference Attack against Node-Level Graph Neural Networks ⚔Attack 📝arXiv 2022
23 Adversarial Camouflage for Node Injection Attack on Graphs ⚔Attack 📝arXiv 2022
24 Are Gradients on Graph Structure Reliable in Gray-box Attacks? ⚔Attack 📝CIKM :octocat:Code 2022
25 Graph Structural Attack by Perturbing Spectral Distance ⚔Attack 📝KDD 2022
26 What Does the Gradient Tell When Attacking the Graph Structure ⚔Attack 📝arXiv 2022
27 Label specificity attack: Change your label as I want ⚔Attack 📝IJIS 2022
28 BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly Detection ⚔Attack 📝ICDM :octocat:Code 2022
29 Sparse Vicious Attacks on Graph Neural Networks ⚔Attack 📝arXiv :octocat:Code 2022
30 Poisoning GNN-based Recommender Systems with Generative Surrogate-based Attacks ⚔Attack 📝ACM TIS 2022
31 Membership Inference Attacks Against Robust Graph Neural Network ⚔Attack 📝CSS 2022
32 Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks ⚔Attack 📝ICDM :octocat:Code 2022
33 Revisiting Item Promotion in GNN-based Collaborative Filtering: A Masked Targeted Topological Attack Perspective ⚔Attack 📝arXiv 2022
34 Link-Backdoor: Backdoor Attack on Link Prediction via Node Injection ⚔Attack 📝arXiv :octocat:Code 2022
35 Private Graph Extraction via Feature Explanations ⚔Attack 📝arXiv 2022
36 Model Inversion Attacks against Graph Neural Networks ⚔Attack 📝TKDE 2022
37 Towards Secrecy-Aware Attacks Against Trust Prediction in Signed Graphs ⚔Attack 📝arXiv 2022
38 Adversarial Robustness of Graph-based Anomaly Detection ⚔Attack 📝arXiv 2022
39 Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees ⚔Attack 📝CVPR :octocat:Code 2022
40 Adversarial Attack on Graph Neural Networks as An Influence Maximization Problem ⚔Attack 📝WSDM :octocat:Code 2022
41 Inference Attacks Against Graph Neural Networks ⚔Attack 📝USENIX Security :octocat:Code 2022
42 Model Stealing Attacks Against Inductive Graph Neural Networks ⚔Attack 📝IEEE Symposium on Security and Privacy :octocat:Code 2022
43 Transferable Graph Backdoor Attack ⚔Attack 📝RAID :octocat:Code 2022
44 Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation ⚔Attack 📝WWW :octocat:Code 2022
45 Understanding and Improving Graph Injection Attack by Promoting Unnoticeability ⚔Attack 📝ICLR :octocat:Code 2022
46 Blindfolded Attackers Still Threatening: Strict Black-Box Adversarial Attacks on Graphs ⚔Attack 📝AAAI :octocat:Code 2022
47 More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks ⚔Attack 📝arXiv 2022
48 Black-box Node Injection Attack for Graph Neural Networks ⚔Attack 📝arXiv :octocat:Code 2022
49 Interpretable and Effective Reinforcement Learning for Attacking against Graph-based Rumor Detection ⚔Attack 📝arXiv 2022
50 Projective Ranking-based GNN Evasion Attacks ⚔Attack 📝arXiv 2022
51 GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation ⚔Attack 📝arXiv 2022
52 Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization ⚔Attack 📝Asia CCS :octocat:Code 2022
53 Neighboring Backdoor Attacks on Graph Convolutional Network ⚔Attack 📝arXiv :octocat:Code 2022
54 Camouflaged Poisoning Attack on Graph Neural Networks ⚔Attack 📝ICDM 2022
55 Dealing with the unevenness: deeper insights in graph-based attack and defense ⚔Attack 📝Machine Learning 2022
56 Adversarial for Social Privacy: A Poisoning Strategy to Degrade User Identity Linkage ⚔Attack 📝arXiv 2022
57 LOKI: A Practical Data Poisoning Attack Framework against Next Item Recommendations ⚔Attack 📝TKDE 2022
58 Exploratory Adversarial Attacks on Graph Neural Networks for Semi-Supervised Node Classification ⚔Attack 📝Pattern Recognition 2022
59 Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs ⚔Attack 📝arXiv 2022
60 Are Defenses for Graph Neural Networks Robust? ⚔Attack 📝NeurIPS :octocat:Code 2022
61 Adversarial Label Poisoning Attack on Graph Neural Networks via Label Propagation ⚔Attack 📝ECCV 2022
62 Imperceptible Adversarial Attacks on Discrete-Time Dynamic Graph Models ⚔Attack 📝NeurIPS 2022
63 Towards Reasonable Budget Allocation in Untargeted Graph Structure Attacks via Gradient Debias ⚔Attack 📝NeurIPS :octocat:Code 2022
64 Adversary for Social Good: Leveraging Attribute-Obfuscating Attack to Protect User Privacy on Social Networks ⚔Attack 📝SecureComm 2022
65 GANI: Global Attacks on Graph Neural Networks via Imperceptible Node Injections ⚔Attack 📝arXiv :octocat:Code 2022
66 Stability and Generalization Capabilities of Message Passing Graph Neural Networks ⚖Stability 📝arXiv'2022 2022
67 On the Prediction Instability of Graph Neural Networks ⚖Stability 📝arXiv'2022 2022
68 GreatX: A graph reliability toolbox based on PyTorch and PyTorch Geometric ⚙Toolbox 📝arXiv'2022 :octocat:GreatX 2022
69 Trustworthy Graph Neural Networks: Aspects, Methods and Trends 📃Survey 📝arXiv'2022 2022
70 Graph Vulnerability and Robustness: A Survey 📃Survey 📝TKDE'2022 2022
71 A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability 📃Survey 📝arXiv'2022 2022
72 A Survey of Trustworthy Graph Learning: Reliability, Explainability, and Privacy Protection 📃Survey 📝arXiv'2022 2022
73 A Comparative Study on Robust Graph Neural Networks to Structural Noises 📃Survey 📝AAAI DLG'2022 2022
74 Recent Advances in Reliable Deep Graph Learning: Inherent Noise, Distribution Shift, and Adversarial Attack 📃Survey 📝arXiv'2022 2022
75 Randomized Message-Interception Smoothing: Gray-box Certificates for Graph Neural Networks 🔐Certification 📝NeurIPS'2022 :octocat:Code 2022
76 A Systematic Evaluation of Node Embedding Robustness 🚀Others 📝LoG'2022 :octocat:Code 2022
77 We Cannot Guarantee Safety: The Undecidability of Graph Neural Network Verification 🚀Others 📝arXiv'2022 2022
78 Exploring High-Order Structure for Robust Graph Structure Learning 🛡Defense 📝arXiv 2022
79 Unsupervised Adversarially-Robust Representation Learning on Graphs 🛡Defense 📝AAAI :octocat:Code 2022
80 Towards Robust Graph Neural Networks for Noisy Graphs with Sparse Labels 🛡Defense 📝WSDM :octocat:Code 2022
81 Mind Your Solver! On Adversarial Attack and Defense for Combinatorial Optimization 🛡Defense 📝arXiv :octocat:Code 2022
82 Learning Robust Representation through Graph Adversarial Contrastive Learning 🛡Defense 📝arXiv 2022
83 GARNET: Reduced-Rank Topology Learning for Robust and Scalable Graph Neural Networks 🛡Defense 📝arXiv 2022
84 Graph Neural Network for Local Corruption Recovery 🛡Defense 📝arXiv :octocat:Code 2022
85 How Does Bayesian Noisy Self-Supervision Defend Graph Convolutional Networks? 🛡Defense 📝Neural Processing Letters 2022
86 Robust Heterogeneous Graph Neural Networks against Adversarial Attacks 🛡Defense 📝AAAI 2022
87 SimGRACE: A Simple Framework for Graph Contrastive Learning without Data Augmentation 🛡Defense 📝WWW :octocat:Code 2022
88 Robust Graph Representation Learning via Predictive Coding 🛡Defense 📝arXiv 2022
89 You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets 🛡Defense 📝LoG :octocat:Code 2022
90 On the Vulnerability of Graph Learning based Collaborative Filtering 🛡Defense 📝TIS 2022
91 Spectral Adversarial Training for Robust Graph Neural Network 🛡Defense 📝TKDE :octocat:Code 2022
92 Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation 🛡Defense 📝ECML-PKDD 2022
93 GUARD: Graph Universal Adversarial Defense 🛡Defense 📝arXiv :octocat:Code 2022
94 Detecting Topology Attacks against Graph Neural Networks 🛡Defense 📝arXiv 2022
95 LPGNet: Link Private Graph Networks for Node Classification 🛡Defense 📝arXiv 2022
96 EvenNet: Ignoring Odd-Hop Neighbors Improves Robustness of Graph Neural Networks 🛡Defense 📝arXiv 2022
97 Bayesian Robust Graph Contrastive Learning 🛡Defense 📝arXiv :octocat:Code 2022
98 Reliable Representations Make A Stronger Defender: Unsupervised Structure Refinement for Robust GNN 🛡Defense 📝KDD :octocat:Code 2022
99 Robust Graph Representation Learning for Local Corruption Recovery 🛡Defense 📝ICML workshop 2022
100 Appearance and Structure Aware Robust Deep Visual Graph Matching: Attack, Defense and Beyond 🛡Defense 📝CVPR :octocat:Code 2022
101 Large-Scale Privacy-Preserving Network Embedding against Private Link Inference Attacks 🛡Defense 📝arXiv 2022
102 Defending Graph Convolutional Networks against Dynamic Graph Perturbations via Bayesian Self-supervision 🛡Defense 📝AAAI :octocat:Code 2022
103 AN-GCN: An Anonymous Graph Convolutional Network Against Edge-Perturbing Attacks 🛡Defense 📝IEEE TNNLS 2022
104 How does Heterophily Impact Robustness of Graph Neural Networks? Theoretical Connections and Practical Implications 🛡Defense 📝KDD :octocat:Code 2022
105 Robust Graph Neural Networks using Weighted Graph Laplacian 🛡Defense 📝SPCOM :octocat:Code 2022
106 ARIEL: Adversarial Graph Contrastive Learning 🛡Defense 📝arXiv 2022
107 Robust Tensor Graph Convolutional Networks via T-SVD based Graph Augmentation 🛡Defense 📝KDD :octocat:Code 2022
108 NOSMOG: Learning Noise-robust and Structure-aware MLPs on Graphs 🛡Defense 📝arXiv 2022
109 Robust Node Classification on Graphs: Jointly from Bayesian Label Transition and Topology-based Label Propagation 🛡Defense 📝CIKM :octocat:Code 2022
110 On the Robustness of Graph Neural Diffusion to Topology Perturbations 🛡Defense 📝NeurIPS :octocat:Code 2022
111 IoT-based Android Malware Detection Using Graph Neural Network With Adversarial Defense 🛡Defense 📝IEEE IOT 2022
112 Robust cross-network node classification via constrained graph mutual information 🛡Defense 📝KBS 2022
113 Defending Against Backdoor Attack on Graph Neural Network by Explainability 🛡Defense 📝arXiv 2022
114 Towards an Optimal Asymmetric Graph Structure for Robust Semi-supervised Node Classification 🛡Defense 📝KDD 2022
115 FocusedCleaner: Sanitizing Poisoned Graphs for Robust GNN-based Node Classification 🛡Defense 📝arXiv 2022
116 Robust Graph Neural Networks via Ensemble Learning 🛡Defense 📝Mathematics 2022
117 Revisiting Adversarial Attacks on Graph Neural Networks for Graph Classification ⚔Attack 📝arXiv 2021
118 How Members of Covert Networks Conceal the Identities of Their Leaders ⚔Attack 📝ACM TIST 2021
119 Spatially Focused Attack against Spatiotemporal Graph Neural Networks ⚔Attack 📝arXiv 2021
120 Derivative-free optimization adversarial attacks for graph convolutional networks ⚔Attack 📝PeerJ 2021
121 Projective Ranking: A Transferable Evasion Attack Method on Graph Neural Networks ⚔Attack 📝CIKM 2021
122 Time-aware Gradient Attack on Dynamic Network Link Prediction ⚔Attack 📝TKDE 2021
123 Graph-Fraudster: Adversarial Attacks on Graph Neural Network Based Vertical Federated Learning ⚔Attack 📝arXiv 2021
124 Watermarking Graph Neural Networks based on Backdoor Attacks ⚔Attack 📝arXiv 2021
125 Robustness of Graph Neural Networks at Scale ⚔Attack 📝NeurIPS :octocat:Code 2021
126 Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness ⚔Attack 📝NeurIPS 2021
127 Graph Universal Adversarial Attacks: A Few Bad Actors Ruin Graph Learning Models ⚔Attack 📝IJCAI :octocat:Code 2021
128 Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods ⚔Attack 📝EMNLP :octocat:Code 2021
129 COREATTACK: Breaking Up the Core Structure of Graphs ⚔Attack 📝arXiv 2021
130 UNTANGLE: Unlocking Routing and Logic Obfuscation Using Graph Neural Networks-based Link Prediction ⚔Attack 📝ICCAD :octocat:Code 2021
131 GraphMI: Extracting Private Graph Data from Graph Neural Networks ⚔Attack 📝IJCAI :octocat:Code 2021
132 Structural Attack against Graph Based Android Malware Detection ⚔Attack 📝CCS 2021
133 Adversarial Attack against Cross-lingual Knowledge Graph Alignment ⚔Attack 📝EMNLP 2021
134 FHA: Fast Heuristic Attack Against Graph Convolutional Networks ⚔Attack 📝ICDS 2021
135 Task and Model Agnostic Adversarial Attack on Graph Neural Networks ⚔Attack 📝arXiv 2021
136 Adversarial Attacks on Graph Classification via Bayesian Optimisation ⚔Attack 📝NeurIPS :octocat:Code 2021
137 Single Node Injection Attack against Graph Neural Networks ⚔Attack 📝CIKM :octocat:Code 2021
138 GNNUnlock: Graph Neural Networks-based Oracle-less Unlocking Scheme for Provably Secure Logic Locking ⚔Attack 📝DATE Conference 2021
139 Adapting Membership Inference Attacks to GNN for Graph Classification: Approaches and Implications ⚔Attack 📝ICDM :octocat:Code 2021
140 Poisoning Knowledge Graph Embeddings via Relation Inference Patterns ⚔Attack 📝ACL :octocat:Code 2021
141 A Hard Label Black-box Adversarial Attack Against Graph Neural Networks ⚔Attack 📝CCS 2021
142 SAGE: Intrusion Alert-driven Attack Graph Extractor ⚔Attack 📝KDD Workshop :octocat:Code 2021
143 Adversarial Diffusion Attacks on Graph-based Traffic Prediction Models ⚔Attack 📝arXiv :octocat:Code 2021
144 VIKING: Adversarial Attack on Network Embeddings via Supervised Network Poisoning ⚔Attack 📝PAKDD :octocat:Code 2021
145 Explainability-based Backdoor Attacks Against Graph Neural Networks ⚔Attack 📝WiseML@WiSec 2021
146 GraphAttacker: A General Multi-Task Graph Attack Framework ⚔Attack 📝arXiv :octocat:Code 2021
147 Attacking Graph Neural Networks at Scale ⚔Attack 📝AAAI workshop 2021
148 Reinforcement Learning For Data Poisoning on Graph Neural Networks ⚔Attack 📝arXiv 2021
149 Universal Spectral Adversarial Attacks for Deformable Shapes ⚔Attack 📝CVPR 2021
150 DeHiB: Deep Hidden Backdoor Attack on Semi-Supervised Learning via Adversarial Perturbation ⚔Attack 📝AAAI 2021
151 Towards Revealing Parallel Adversarial Attack on Politician Socialnet of Graph Structure ⚔Attack 📝Security and Communication Networks 2021
152 Network Embedding Attack: An Euclidean Distance Based Method ⚔Attack 📝MDATA 2021
153 Preserve, Promote, or Attack? GNN Explanation via Topology Perturbation ⚔Attack 📝arXiv 2021
154 Jointly Attacking Graph Neural Network and its Explanations ⚔Attack 📝arXiv 2021
155 Graph Stochastic Neural Networks for Semi-supervised Learning ⚔Attack 📝arXiv :octocat:Code 2021
156 Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings ⚔Attack 📝arXiv :octocat:Code 2021
157 Single-Node Attack for Fooling Graph Neural Networks ⚔Attack 📝KDD Workshop :octocat:Code 2021
158 The Robustness of Graph k-shell Structure under Adversarial Attacks ⚔Attack 📝arXiv 2021
159 Graphfool: Targeted Label Adversarial Attack on Graph Embedding ⚔Attack 📝arXiv 2021
160 Joint Detection and Localization of Stealth False Data Injection Attacks in Smart Grids using Graph Neural Networks ⚔Attack 📝arXiv 2021
161 Node-Level Membership Inference Attacks Against Graph Neural Networks ⚔Attack 📝arXiv 2021
162 Adversarial Attack on Large Scale Graph ⚔Attack 📝TKDE :octocat:Code 2021
163 Black-box Gradient Attack on Graph Neural Networks: Deeper Insights in Graph-based Attack and Defense ⚔Attack 📝arXiv 2021
164 Stealing Links from Graph Neural Networks ⚔Attack 📝USENIX Security 2021
165 Structack: Structure-based Adversarial Attacks on Graph Neural Networks ⚔Attack 📝ACM Hypertext :octocat:Code 2021
166 Optimal Edge Weight Perturbations to Attack Shortest Paths ⚔Attack 📝arXiv 2021
167 GReady for Emerging Threats to Recommender Systems? A Graph Convolution-based Generative Shilling Attack ⚔Attack 📝Information Sciences 2021
168 Graph Adversarial Attack via Rewiring ⚔Attack 📝KDD :octocat:Code 2021
169 Membership Inference Attack on Graph Neural Networks ⚔Attack 📝arXiv 2021
170 Graph Backdoor ⚔Attack 📝USENIX Security 2021
171 TDGIA: Effective Injection Attacks on Graph Neural Networks ⚔Attack 📝KDD :octocat:Code 2021
172 Adversarial Attack Framework on Graph Embedding Models with Limited Knowledge ⚔Attack 📝arXiv 2021
173 PATHATTACK: Attacking Shortest Paths in Complex Networks ⚔Attack 📝arXiv 2021
174 Towards a Unified Framework for Fair and Stable Graph Representation Learning ⚖Stability 📝UAI'2021 :octocat:Code 2021
175 Training Stable Graph Neural Networks Through Constrained Learning ⚖Stability 📝arXiv'2021 2021
176 Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training data ⚖Stability 📝NeurIPS'2021 :octocat:Code 2021
177 Stability of Graph Convolutional Neural Networks to Stochastic Perturbations ⚖Stability 📝arXiv'2021 2021
178 DeepRobust: a Platform for Adversarial Attacks and Defenses ⚙Toolbox 📝AAAI'2021 :octocat:DeepRobust 2021
179 Evaluating Graph Vulnerability and Robustness using TIGER ⚙Toolbox 📝arXiv'2021 :octocat:TIGER 2021
180 Graph Robustness Benchmark: Rethinking and Benchmarking Adversarial Robustness of Graph Neural Networks ⚙Toolbox 📝NeurIPS'2021 :octocat:Graph Robustness Benchmark (GRB) 2021
181 Deep Graph Structure Learning for Robust Representations: A Survey 📃Survey 📝arXiv'2021 2021
182 Robustness of deep learning models on graphs: A survey 📃Survey 📝AI Open'2021 2021
183 Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies 📃Survey 📝SIGKDD Explorations'2021 2021
184 Graph Neural Networks: Methods, Applications, and Opportunities 📃Survey 📝arXiv'2021 2021
185 Certifying Robustness of Graph Laplacian Based Semi-Supervised Learning 🔐Certification 📝ICLR OpenReview'2021 2021
186 Robust Certification for Laplace Learning on Geometric Graphs 🔐Certification 📝MSML'2021 2021
187 Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation 🔐Certification 📝KDD'2021 :octocat:Code 2021
188 Collective Robustness Certificates: Exploiting Interdependence in Graph Neural Networks 🔐Certification 📝ICLR'2021 :octocat:Code 2021
189 Adversarial Immunization for Improving Certifiable Robustness on Graphs 🔐Certification 📝WSDM'2021 2021
190 CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks 🚀Others 📝arXiv'2021 2021
191 SIGL: Securing Software Installations Through Deep Graph Learning 🚀Others 📝USENIX'2021 2021
192 Learning to Drop: Robust Graph Neural Network via Topological Denoising 🛡Defense 📝WSDM :octocat:Code 2021
193 How effective are Graph Neural Networks in Fraud Detection for Network Data? 🛡Defense 📝arXiv 2021
194 Graph Sanitation with Application to Node Classification 🛡Defense 📝arXiv 2021
195 Understanding Structural Vulnerability in Graph Convolutional Networks 🛡Defense 📝IJCAI :octocat:Code 2021
196 A Robust and Generalized Framework for Adversarial Graph Embedding 🛡Defense 📝arXiv :octocat:Code 2021
197 Integrated Defense for Resilient Graph Matching 🛡Defense 📝ICML 2021
198 Unveiling Anomalous Nodes Via Random Sampling and Consensus on Graphs 🛡Defense 📝ICASSP 2021
199 Robust Network Alignment via Attack Signal Scaling and Adversarial Perturbation Elimination 🛡Defense 📝WWW 2021
200 Information Obfuscation of Graph Neural Network 🛡Defense 📝ICML :octocat:Code 2021
201 Improving Robustness of Graph Neural Networks with Heterophily-Inspired Designs 🛡Defense 📝arXiv 2021
202 DeepInsight: Interpretability Assisting Detection of Adversarial Samples on Graphs 🛡Defense 📝ECML 2021
203 Elastic Graph Neural Networks 🛡Defense 📝ICML :octocat:Code 2021
204 Robust Counterfactual Explanations on Graph Neural Networks 🛡Defense 📝arXiv 2021
205 Node Similarity Preserving Graph Convolutional Networks 🛡Defense 📝WSDM :octocat:Code 2021
206 Enhancing Robustness and Resilience of Multiplex Networks Against Node-Community Cascading Failures 🛡Defense 📝IEEE TSMC 2021
207 NetFense: Adversarial Defenses against Privacy Attacks on Neural Networks for Graph Data 🛡Defense 📝TKDE :octocat:Code 2021
208 Robust Graph Learning Under Wasserstein Uncertainty 🛡Defense 📝arXiv 2021
209 Towards Robust Graph Contrastive Learning 🛡Defense 📝arXiv 2021
210 Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks 🛡Defense 📝ICML 2021
211 UAG: Uncertainty-Aware Attention Graph Neural Network for Defending Adversarial Attacks 🛡Defense 📝AAAI 2021
212 On Generalization of Graph Autoencoders with Adversarial Training 🛡Defense 📝ECML 2021
213 Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks 🛡Defense 📝AAAI 2021
214 Power up! Robust Graph Convolutional Network against Evasion Attacks based on Graph Powering 🛡Defense 📝AAAI :octocat:Code 2021
215 Personalized privacy protection in social networks through adversarial modeling 🛡Defense 📝AAAI 2021
216 Interpretable Stability Bounds for Spectral Graph Filters 🛡Defense 📝arXiv 2021
217 Graph Neural Networks with Feature and Structure Aware Random Walk 🛡Defense 📝arXiv 2021
218 Topological Relational Learning on Graphs 🛡Defense 📝NeurIPS :octocat:Code 2021
219 Graph Posterior Network: Bayesian Predictive Uncertainty for Node Classification 🛡Defense 📝NeurIPS :octocat:Code 2021
220 Graph-based Adversarial Online Kernel Learning with Adaptive Embedding 🛡Defense 📝ICDM 2021
221 Robust Graph Neural Networks via Probabilistic Lipschitz Constraints 🛡Defense 📝arXiv 2021
222 Randomized Generation of Adversary-Aware Fake Knowledge Graphs to Combat Intellectual Property Theft 🛡Defense 📝AAAI 2021
223 Unified Robust Training for Graph Neural Networks against Label Noise 🛡Defense 📝arXiv 2021
224 An Introduction to Robust Graph Convolutional Networks 🛡Defense 📝arXiv 2021
225 E-GraphSAGE: A Graph Neural Network based Intrusion Detection System 🛡Defense 📝arXiv 2021
226 Spatio-Temporal Sparsification for General Robust Graph Convolution Networks 🛡Defense 📝arXiv 2021
227 Robust graph convolutional networks with directional graph adversarial training 🛡Defense 📝Applied Intelligence 2021
228 Detection and Defense of Topological Adversarial Attacks on Graphs 🛡Defense 📝AISTATS 2021
229 Unveiling the potential of Graph Neural Networks for robust Intrusion Detection 🛡Defense 📝arXiv :octocat:Code 2021
230 Adversarial Robustness of Probabilistic Network Embedding for Link Prediction 🛡Defense 📝arXiv 2021
231 EGC2: Enhanced Graph Classification with Easy Graph Compression 🛡Defense 📝arXiv 2021
232 LinkTeller: Recovering Private Edges from Graph Neural Networks via Influence Analysis 🛡Defense 📝arXiv 2021
233 Structure-Aware Hierarchical Graph Pooling using Information Bottleneck 🛡Defense 📝IJCNN 2021
234 Mal2GCN: A Robust Malware Detection Approach Using Deep Graph Convolutional Networks With Non-Negative Weights 🛡Defense 📝arXiv 2021
235 CoG: a Two-View Co-training Framework for Defending Adversarial Attacks on Graph 🛡Defense 📝arXiv 2021
236 Releasing Graph Neural Networks with Differential Privacy Guarantees 🛡Defense 📝arXiv 2021
237 Speedup Robust Graph Structure Learning with Low-Rank Information 🛡Defense 📝CIKM 2021
238 A Lightweight Metric Defence Strategy for Graph Neural Networks Against Poisoning Attacks 🛡Defense 📝ICICS :octocat:Code 2021
239 Node Feature Kernels Increase Graph Convolutional Network Robustness 🛡Defense 📝arXiv :octocat:Code 2021
240 On the Relationship between Heterophily and Robustness of Graph Neural Networks 🛡Defense 📝arXiv 2021
241 Distributionally Robust Semi-Supervised Learning Over Graphs 🛡Defense 📝ICLR 2021
242 Graph Transplant: Node Saliency-Guided Graph Mixup with Local Structure Preservation 🛡Defense 📝arXiv 2021
243 Not All Low-Pass Filters are Robust in Graph Convolutional Networks 🛡Defense 📝NeurIPS :octocat:Code 2021
244 Towards Robust Reasoning over Knowledge Graphs 🛡Defense 📝arXiv 2021
245 Graph Neural Networks with Adaptive Residual 🛡Defense 📝NeurIPS :octocat:Code 2021
246 Adaptive Adversarial Attack on Graph Embedding via GAN ⚔Attack 📝SocialSec 2020
247 Scalable Adversarial Attack on Graph Neural Networks with Alternating Direction Method of Multipliers ⚔Attack 📝arXiv 2020
248 One Vertex Attack on Graph Neural Networks-based Spatiotemporal Forecasting ⚔Attack 📝ICLR OpenReview 2020
249 Near-Black-Box Adversarial Attacks on Graph Neural Networks as An Influence Maximization Problem ⚔Attack 📝ICLR OpenReview 2020
250 Adversarial Attacks on Deep Graph Matching ⚔Attack 📝NeurIPS 2020
251 Attacking Graph-Based Classification without Changing Existing Connections ⚔Attack 📝ACSAC 2020
252 Cross Entropy Attack on Deep Graph Infomax ⚔Attack 📝IEEE ISCAS 2020
253 Learning to Deceive Knowledge Graph Augmented Models via Targeted Perturbation ⚔Attack 📝ICLR :octocat:Code 2020
254 Towards More Practical Adversarial Attacks on Graph Neural Networks ⚔Attack 📝NeurIPS :octocat:Code 2020
255 Adversarial Label-Flipping Attack and Defense for Graph Neural Networks ⚔Attack 📝ICDM :octocat:Code 2020
256 Exploratory Adversarial Attacks on Graph Neural Networks ⚔Attack 📝ICDM :octocat:Code 2020
257 A Targeted Universal Attack on Graph Convolutional Network ⚔Attack 📝arXiv :octocat:Code 2020
258 Query-free Black-box Adversarial Attacks on Graphs ⚔Attack 📝arXiv 2020
259 Reinforcement Learning-based Black-Box Evasion Attacks to Link Prediction in Dynamic Graphs ⚔Attack 📝arXiv 2020
260 Efficient Evasion Attacks to Graph Neural Networks via Influence Function ⚔Attack 📝arXiv 2020
261 Backdoor Attacks to Graph Neural Networks ⚔Attack 📝SACMAT :octocat:Code 2020
262 Link Prediction Adversarial Attack Via Iterative Gradient Attack ⚔Attack 📝IEEE Trans 2020
263 Semantic-preserving Reinforcement Learning Attack Against Graph Neural Networks for Malware Detection ⚔Attack 📝arXiv 2020
264 A Graph Matching Attack on Privacy-Preserving Record Linkage ⚔Attack 📝CIKM 2020
265 Adversarial Attack on Hierarchical Graph Pooling Neural Networks ⚔Attack 📝arXiv 2020
266 Adversarial Attack on Community Detection by Hiding Individuals ⚔Attack 📝WWW :octocat:Code 2020
267 Manipulating Node Similarity Measures in Networks ⚔Attack 📝AAMAS 2020
268 A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models ⚔Attack 📝AAAI :octocat:Code 2020
269 Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks ⚔Attack 📝BigData 2020
270 Adversarial Attacks on Graph Neural Networks via Node Injections: A Hierarchical Reinforcement Learning Approach ⚔Attack 📝WWW 2020
271 An Efficient Adversarial Attack on Graph Structured Data ⚔Attack 📝IJCAI Workshop 2020
272 Practical Adversarial Attacks on Graph Neural Networks ⚔Attack 📝ICML Workshop 2020
273 Adversarial Attacks on Graph Neural Networks: Perturbations and their Patterns ⚔Attack 📝TKDD 2020
274 Adversarial Attacks on Link Prediction Algorithms Based on Graph Neural Networks ⚔Attack 📝Asia CCS 2020
275 Scalable Attack on Graph Data by Injecting Vicious Nodes ⚔Attack 📝ECML-PKDD :octocat:Code 2020
276 Attackability Characterization of Adversarial Evasion Attack on Discrete Data ⚔Attack 📝KDD 2020
277 MGA: Momentum Gradient Attack on Network ⚔Attack 📝arXiv 2020
278 Adversarial Perturbations of Opinion Dynamics in Networks ⚔Attack 📝arXiv 2020
279 Network disruption: maximizing disagreement and polarization in social networks ⚔Attack 📝arXiv :octocat:Code 2020
280 Adversarial attack on BC classification for scale-free networks ⚔Attack 📝AIP Chaos 2020
281 Adversarial Attacks to Scale-Free Networks: Testing the Robustness of Physical Criteria ⚔Attack 📝arXiv 2020
282 Graph and Graphon Neural Network Stability ⚖Stability 📝arXiv'2020 2020
283 On the Stability of Graph Convolutional Neural Networks under Edge Rewiring ⚖Stability 📝arXiv'2020 2020
284 Graph Neural Networks: Architectures, Stability and Transferability ⚖Stability 📝arXiv'2020 2020
285 Should Graph Convolution Trust Neighbors? A Simple Causal Inference Method ⚖Stability 📝arXiv'2020 2020
286 Stability of Graph Neural Networks to Relative Perturbations ⚖Stability 📝ICASSP'2020 2020
287 Graph Neural Networks: Taxonomy, Advances and Trends 📃Survey 📝arXiv'2020 2020
288 A Survey of Adversarial Learning on Graph 📃Survey 📝arXiv'2020 2020
289 Certified Robustness of Graph Classification against Topology Attack with Randomized Smoothing 🔐Certification 📝GLOBECOM'2020 2020
290 Certifiable Robustness of Graph Convolutional Networks under Structure Perturbation 🔐Certification 📝KDD'2020 :octocat:Code 2020
291 Efficient Robustness Certificates for Discrete Data: Sparsity-Aware Randomized Smoothing for Graphs, Images and More 🔐Certification 📝ICML'2020 :octocat:Code 2020
292 Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing 🔐Certification 📝WWW'2020 2020
293 Certified Robustness of Graph Convolution Networks for Graph Classification under Topological Attacks 🔐Certification 📝NeurIPS'2020 :octocat:Code 2020
294 Improving the Robustness of Wasserstein Embedding by Adversarial PAC-Bayesian Learning 🔐Certification 📝AAAI'2020 2020
295 Abstract Interpretation based Robustness Certification for Graph Convolutional Networks 🔐Certification 📝ECAI'2020 2020
296 When Does Self-Supervision Help Graph Convolutional Networks? 🚀Others 📝ICML'2020 2020
297 Training Robust Graph Neural Network by Applying Lipschitz Constant Constraint 🚀Others 📝CentraleSupélec'2020 :octocat:Code 2020
298 Watermarking Graph Neural Networks by Random Graphs 🚀Others 📝arXiv'2020 2020
299 FLAG: Adversarial Data Augmentation for Graph Neural Networks 🚀Others 📝arXiv'2020 :octocat:Code 2020
300 AANE: Anomaly Aware Network Embedding For Anomalous Link Detection 🛡Defense 📝ICDM 2020
301 Provably Robust Node Classification via Low-Pass Message Passing 🛡Defense 📝ICDM 2020
302 Graph-Revised Convolutional Network 🛡Defense 📝ECML-PKDD :octocat:Code 2020
303 Robust Training of Graph Convolutional Networks via Latent Perturbation 🛡Defense 📝ECML-PKDD 2020
304 DefenseVGAE: Defending against Adversarial Attacks on Graph Data via a Variational Graph Autoencoder 🛡Defense 📝arXiv :octocat:Code 2020
305 Transferring Robustness for Graph Neural Network Against Poisoning Attacks 🛡Defense 📝WSDM :octocat:Code 2020
306 All You Need Is Low (Rank): Defending Against Adversarial Attacks on Graphs 🛡Defense 📝WSDM :octocat:Code 2020
307 How Robust Are Graph Neural Networks to Structural Noise? 🛡Defense 📝DLGMA 2020
308 Robust Detection of Adaptive Spammers by Nash Reinforcement Learning 🛡Defense 📝KDD :octocat:Code 2020
309 Graph Structure Learning for Robust Graph Neural Networks 🛡Defense 📝KDD :octocat:Code 2020
310 On The Stability of Polynomial Spectral Graph Filters 🛡Defense 📝ICASSP :octocat:Code 2020
311 On the Robustness of Cascade Diffusion under Node Attacks 🛡Defense 📝WWW :octocat:Code 2020
312 Friend or Faux: Graph-Based Early Detection of Fake Accounts on Social Networks 🛡Defense 📝WWW 2020
313 Towards an Efficient and General Framework of Robust Training for Graph Neural Networks 🛡Defense 📝ICASSP 2020
314 Robust Graph Representation Learning via Neural Sparsification 🛡Defense 📝ICML 2020
315 Robust Collective Classification against Structural Attacks 🛡Defense 📝Preprint 2020
316 Enhancing Graph Neural Network-based Fraud Detectors against Camouflaged Fraudsters 🛡Defense 📝CIKM :octocat:Code 2020
317 Topological Effects on Attacks Against Vertex Classification 🛡Defense 📝arXiv 2020
318 Tensor Graph Convolutional Networks for Multi-relational and Robust Learning 🛡Defense 📝arXiv 2020
319 Dynamic Knowledge Graph-based Dialogue Generation with Improved Adversarial Meta-Learning 🛡Defense 📝arXiv 2020
320 GNNGuard: Defending Graph Neural Networks against Adversarial Attacks 🛡Defense 📝NeurIPS :octocat:Code 2020
321 Robust Graph Learning From Noisy Data 🛡Defense 📝IEEE Trans 2020
322 ResGCN: Attention-based Deep Residual Modeling for Anomaly Detection on Attributed Networks 🛡Defense 📝arXiv 2020
323 Ricci-GNN: Defending Against Structural Attacks Through a Geometric Approach 🛡Defense 📝ICLR OpenReview 2020
324 Provable Overlapping Community Detection in Weighted Graphs 🛡Defense 📝NeurIPS 2020
325 Variational Inference for Graph Convolutional Networks in the Absence of Graph Data and Adversarial Settings 🛡Defense 📝NeurIPS :octocat:Code 2020
326 Graph Random Neural Networks for Semi-Supervised Learning on Graphs 🛡Defense 📝NeurIPS :octocat:Code 2020
327 Reliable Graph Neural Networks via Robust Aggregation 🛡Defense 📝NeurIPS :octocat:Code 2020
328 Towards Robust Graph Neural Networks against Label Noise 🛡Defense 📝ICLR OpenReview 2020
329 Graph Adversarial Networks: Protecting Information against Adversarial Attacks 🛡Defense 📝ICLR OpenReview :octocat:Code 2020
330 A Novel Defending Scheme for Graph-Based Classification Against Graph Structure Manipulating Attack 🛡Defense 📝SocialSec 2020
331 Node Copying for Protection Against Graph Neural Network Topology Attacks 🛡Defense 📝arXiv 2020
332 Community detection in sparse time-evolving graphs with a dynamical Bethe-Hessian 🛡Defense 📝NeurIPS 2020
333 A Feature-Importance-Aware and Robust Aggregator for GCN 🛡Defense 📝CIKM :octocat:Code 2020
334 Anti-perturbation of Online Social Networks by Graph Label Transition 🛡Defense 📝arXiv 2020
335 Graph Information Bottleneck 🛡Defense 📝NeurIPS :octocat:Code 2020
336 Adversarial Detection on Graph Structured Data 🛡Defense 📝PPMLP 2020
337 Graph Contrastive Learning with Augmentations 🛡Defense 📝NeurIPS :octocat:Code 2020
338 Learning Graph Embedding with Adversarial Training Methods 🛡Defense 📝IEEE Transactions on Cybernetics 2020
339 I-GCN: Robust Graph Convolutional Network via Influence Mechanism 🛡Defense 📝arXiv 2020
340 Adversary for Social Good: Protecting Familial Privacy through Joint Adversarial Attacks 🛡Defense 📝AAAI 2020
341 Smoothing Adversarial Training for GNN 🛡Defense 📝IEEE TCSS 2020
342 Graph Structure Reshaping Against Adversarial Attacks on Graph Neural Networks 🛡Defense 📝None :octocat:Code 2020
343 RoGAT: a robust GNN combined revised GAT with adjusted graphs 🛡Defense 📝arXiv 2020
344 Adversarial Privacy Preserving Graph Embedding against Inference Attack 🛡Defense 📝arXiv :octocat:Code 2020
345 Adversarial Attacks on Node Embeddings via Graph Poisoning ⚔Attack 📝ICML :octocat:Code 2019
346 GA Based Q-Attack on Community Detection ⚔Attack 📝TCSS 2019
347 Data Poisoning Attack against Knowledge Graph Embedding ⚔Attack 📝IJCAI 2019
348 Adversarial Attacks on Graph Neural Networks via Meta Learning ⚔Attack 📝ICLR :octocat:Code 2019
349 Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective ⚔Attack 📝IJCAI :octocat:Code 2019
350 Adversarial Examples on Graph Data: Deep Insights into Attack and Defense ⚔Attack 📝IJCAI :octocat:Code 2019
351 A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning ⚔Attack 📝NeurIPS :octocat:Code 2019
352 Attacking Graph-based Classification via Manipulating the Graph Structure ⚔Attack 📝CCS 2019
353 αCyber: Enhancing Robustness of Android Malware Detection System against Adversarial Attacks on Heterogeneous Graph based Model ⚔Attack 📝CIKM 2019
354 Multiscale Evolutionary Perturbation Attack on Community Detection ⚔Attack 📝arXiv 2019
355 PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks ⚔Attack 📝ICLR :octocat:Code 2019
356 Network Structural Vulnerability: A Multi-Objective Attacker Perspective ⚔Attack 📝IEEE Trans 2019
357 Attacking Graph Convolutional Networks via Rewiring ⚔Attack 📝arXiv 2019
358 Unsupervised Euclidean Distance Attack on Network Embedding ⚔Attack 📝arXiv 2019
359 Structured Adversarial Attack Towards General Implementation and Better Interpretability ⚔Attack 📝ICLR :octocat:Code 2019
360 Generalizable Adversarial Attacks with Latent Variable Perturbation Modelling ⚔Attack 📝arXiv 2019
361 Vertex Nomination, Consistent Estimation, and Adversarial Modification ⚔Attack 📝arXiv 2019
362 Stability Properties of Graph Neural Networks ⚖Stability 📝arXiv'2019 2019
363 When Do GNNs Work: Understanding and Improving Neighborhood Aggregation ⚖Stability 📝IJCAI Workshop'2019 :octocat:Code 2019
364 Stability and Generalization of Graph Convolutional Neural Networks ⚖Stability 📝KDD'2019 2019
365 Adversarial Attacks and Defenses in Images, Graphs and Text: A Review 📃Survey 📝arXiv'2019 2019
366 Certifiable Robustness to Graph Perturbations 🔐Certification 📝NeurIPS'2019 :octocat:Code 2019
367 Certifiable Robustness and Robust Training for Graph Convolutional Networks 🔐Certification 📝KDD'2019 :octocat:Code 2019
368 Perturbation Sensitivity of GNNs 🚀Others 📝cs224w'2019 2019
369 Bayesian graph convolutional neural networks for semi-supervised classification 🛡Defense 📝AAAI :octocat:Code 2019
370 Graph Interpolating Activation Improves Both Natural and Robust Accuracies in Data-Efficient Deep Learning 🛡Defense 📝arXiv :octocat:Code 2019
371 Adversarial Embedding: A robust and elusive Steganography and Watermarking technique 🛡Defense 📝arXiv 2019
372 Examining Adversarial Learning against Graph-based IoT Malware Detection Systems 🛡Defense 📝arXiv 2019
373 Target Defense Against Link-Prediction-Based Attacks via Evolutionary Perturbations 🛡Defense 📝arXiv 2019
374 Adversarial Defense Framework for Graph Neural Network 🛡Defense 📝arXiv 2019
375 GraphSAC: Detecting anomalies in large-scale graphs 🛡Defense 📝arXiv 2019
376 Graph Adversarial Training: Dynamically Regularizing Based on Graph Structure 🛡Defense 📝TKDE :octocat:Code 2019
377 Edge Dithering for Robust Adaptive Graph Convolutional Networks 🛡Defense 📝arXiv 2019
378 Can Adversarial Network Attack be Defended? 🛡Defense 📝arXiv 2019
379 Adversarial Training Methods for Network Embedding 🛡Defense 📝WWW :octocat:Code 2019
380 GraphDefense: Towards Robust Graph Convolutional Networks 🛡Defense 📝arXiv 2019
381 Robust Graph Data Learning via Latent Graph Convolutional Representation 🛡Defense 📝arXiv 2019
382 Investigating Robustness and Interpretability of Link Prediction via Adversarial Modifications 🛡Defense 📝NAACL :octocat:Code 2019
383 Robust Graph Convolutional Networks Against Adversarial Attacks 🛡Defense 📝KDD :octocat:Code 2019
384 Virtual Adversarial Training on Graph Convolutional Networks in Node Classification 🛡Defense 📝PRCV 2019
385 Comparing and Detecting Adversarial Attacks for Graph Deep Learning 🛡Defense 📝RLGM@ICLR 2019
386 Characterizing Malicious Edges targeting on Graph Neural Networks 🛡Defense 📝ICLR OpenReview :octocat:Code 2019
387 Latent Adversarial Training of Graph Convolution Networks 🛡Defense 📝LRGSD@ICML :octocat:Code 2019
388 Batch Virtual Adversarial Training for Graph Convolutional Networks 🛡Defense 📝ICML :octocat:Code 2019
389 Adversarial Robustness of Similarity-Based Link Prediction 🛡Defense 📝ICDM 2019
390 Improving Robustness to Attacks Against Vertex Classification 🛡Defense 📝MLG@KDD 2019
391 Fake Node Attacks on Graph Convolutional Networks ⚔Attack 📝arXiv 2018
392 Fast Gradient Attack on Network Embedding ⚔Attack 📝arXiv 2018
393 Attack Tolerance of Link Prediction Algorithms: How to Hide Your Relations in a Social Network ⚔Attack 📝arXiv 2018
394 Adversarial Attacks on Neural Networks for Graph Data ⚔Attack 📝KDD :octocat:Code 2018
395 Hiding Individuals and Communities in a Social Network ⚔Attack 📝Nature Human Behaviour 2018
396 Attacking Similarity-Based Link Prediction in Social Networks ⚔Attack 📝AAMAS 2018
397 Adversarial Attack on Graph Structured Data ⚔Attack 📝ICML :octocat:Code 2018
398 Data Poisoning Attack against Unsupervised Node Embedding Methods ⚔Attack 📝arXiv 2018
399 Deep Learning on Graphs: A Survey 📃Survey 📝arXiv'2018 2018
400 Adversarial Attack and Defense on Graph Data: A Survey 📃Survey 📝arXiv'2018 2018
401 Adversarial Personalized Ranking for Recommendation 🛡Defense 📝SIGIR :octocat:Code 2018
402 Practical Attacks Against Graph-based Clustering ⚔Attack 📝CCS 2017
403 Adversarial Sets for Regularising Neural Link Predictors ⚔Attack 📝UAI :octocat:Code 2017
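
For readers who want to try the listed attacks and defenses in practice, the DeepRobust toolbox (entry 178) wraps many of them behind a common interface. Below is a minimal sketch of poisoning a citation graph with Metattack (entry 348, "Adversarial Attacks on Graph Neural Networks via Meta Learning") via DeepRobust; it follows the interfaces documented in the DeepRobust repository, but argument names and defaults may differ across versions, so treat it as illustrative rather than definitive.

```python
# Illustrative sketch only: assumes the Dataset / GCN / Metattack interfaces
# documented in the DeepRobust repository; argument names may vary by version.
import numpy as np
import torch
from deeprobust.graph.data import Dataset
from deeprobust.graph.defense import GCN
from deeprobust.graph.global_attack import Metattack

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Load Cora with a fixed train/val/test split.
data = Dataset(root='/tmp/', name='cora')
adj, features, labels = data.adj, data.features, data.labels
idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
idx_unlabeled = np.union1d(idx_val, idx_test)

# Train a surrogate GCN that the attacker differentiates through.
surrogate = GCN(nfeat=features.shape[1], nclass=labels.max().item() + 1,
                nhid=16, with_relu=False, device=device).to(device)
surrogate.fit(features, adj, labels, idx_train, idx_val)

# Poison the graph structure with a small edge budget (~5% of edges).
n_perturbations = int(0.05 * (adj.sum() // 2))
attacker = Metattack(model=surrogate, nnodes=adj.shape[0],
                     feature_shape=features.shape,
                     attack_structure=True, attack_features=False,
                     device=device).to(device)
attacker.attack(features, adj, labels, idx_train, idx_unlabeled,
                n_perturbations, ll_constraint=False)
modified_adj = attacker.modified_adj  # poisoned adjacency matrix
```

The poisoned `modified_adj` can then be passed to one of the defense models in the table above (e.g., the GCN-Jaccard or RGCN implementations that also ship with DeepRobust) to compare accuracy on the clean versus perturbed graph.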