- AI verification
- Certification for Neural Networks
- Overview
- AI Explainability 360
- Adversarial Robustness 360 Toolbox (ART)
- PROVEN: Verifying Robustness of Neural Networks with a Probabilistic Approach
- CROWN: A Neural Network Verification Framework
- Efficient Neural Network Robustness Certification with General Activation Functions
- Certified Adversarial Robustness via Randomized Smoothing
- CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks
- Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach
- Robustness Verification of Tree-based Models
- Efficient Formal Safety Analysis of Neural Networks
- A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees
- DeepGO: Reachability Analysis of Deep Neural Networks with Provable Guarantees
- Attack and Robustness
- Related Topics
- Certification for Neural Networks
AI verification offers a distinctive vantage point on the booming research and application of AI, especially deep learning.
Given the wide and rapidly growing use of AI across industries, professional inspection and certification by third-party organizations is becoming important both for customers and for society as a whole.
Data plays an unprecedented role in deep learning, and its use inevitably raises public concerns such as privacy, ethics, risk, and safety.
IBM has done substantial work on these topics; MIT and Oxford also have related research.
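To make concrete what "certifying" a network means, here is a minimal sketch of interval bound propagation (IBP), one of the simplest certification techniques related to the tools listed below. The tiny network, its weights, the input, and the perturbation budget are all invented for illustration and do not come from any specific tool.

```python
# Minimal interval bound propagation (IBP) sketch for a tiny ReLU network.
# All weights and the perturbation budget below are made up for this example.

def affine_bounds(lower, upper, weights, bias):
    """Propagate elementwise input intervals through y = W x + b."""
    out_l, out_u = [], []
    for w_row, b in zip(weights, bias):
        lo = b + sum(w * (l if w >= 0 else u) for w, l, u in zip(w_row, lower, upper))
        hi = b + sum(w * (u if w >= 0 else l) for w, l, u in zip(w_row, lower, upper))
        out_l.append(lo)
        out_u.append(hi)
    return out_l, out_u

def relu_bounds(lower, upper):
    """ReLU is monotone, so it maps interval endpoints to interval endpoints."""
    return [max(0.0, l) for l in lower], [max(0.0, u) for u in upper]

# Toy 2-2-2 network; class 0 is the prediction at the clean input x.
W1, b1 = [[1.0, -0.5], [0.5, 1.0]], [0.0, 0.0]
W2, b2 = [[1.0, -1.0], [-1.0, 1.0]], [0.5, 0.0]

x, eps = [0.6, 0.2], 0.05          # L_inf ball of radius eps around x
l0 = [xi - eps for xi in x]
u0 = [xi + eps for xi in x]

l1, u1 = affine_bounds(l0, u0, W1, b1)
l1, u1 = relu_bounds(l1, u1)
l2, u2 = affine_bounds(l1, u1, W2, b2)

# Certified if the lower bound of the predicted logit exceeds the upper
# bound of every other logit over the whole input box.
certified = l2[0] > u2[1]
print(certified)
```

Real certifiers such as CROWN tighten these intervals with linear relaxations of the activations, but the overall propagate-and-compare structure is the same.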
- Adversarial Robustness 360 Toolbox (ART): https://github.com/IBM/adversarial-robustness-toolbox
- PROVEN: Verifying Robustness of Neural Networks with a Probabilistic Approach: http://proceedings.mlr.press/v97/weng19a/weng19a.pdf
- CROWN: A Neural Network Verification Framework: https://github.com/IBM/CROWN-Robustness-Certification
- Efficient Neural Network Robustness Certification with General Activation Functions: https://arxiv.org/pdf/1811.00866.pdf
- Certified Adversarial Robustness via Randomized Smoothing: https://arxiv.org/abs/1902.02918 (code: https://github.com/locuslab/smoothing)
- CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks: https://www.aaai.org/ojs/index.php/AAAI/article/view/4193 (code: https://github.com/IBM/CNN-Cert)
- Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach: https://arxiv.org/pdf/1801.10578.pdf
- Robustness Verification of Tree-based Models: https://arxiv.org/pdf/1906.03849.pdf
- Efficient Formal Safety Analysis of Neural Networks: http://papers.nips.cc/paper/7873-efficient-formal-safety-analysis-of-neural-networks.pdf
- A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees: https://arxiv.org/pdf/1807.03571.pdf (code: https://github.com/TrustAI/DeepGame)
- DeepGO: Reachability Analysis of Deep Neural Networks with Provable Guarantees: https://arxiv.org/abs/1805.02242 (code: https://github.com/TrustAI/DeepGO)
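Randomized smoothing (arXiv:1902.02918) turns class-probability bounds of a Gaussian-smoothed classifier into a certified L2 radius: R = (sigma / 2) * (Phi^-1(pA) - Phi^-1(pB)). A hedged sketch of that formula, using made-up probability estimates (in practice pA and pB come from Monte Carlo sampling with a confidence correction):

```python
# Certified L2 radius from randomized smoothing (Cohen et al., arXiv:1902.02918).
# The probability values passed in below are invented for illustration.

from statistics import NormalDist

def certified_radius(p_a_lower, p_b_upper, sigma):
    """Radius within which the smoothed classifier's top class cannot change.

    p_a_lower: lower bound on the top class probability under Gaussian noise.
    p_b_upper: upper bound on the runner-up class probability.
    sigma:     standard deviation of the smoothing noise.
    """
    if p_a_lower <= p_b_upper:
        return 0.0  # bounds too loose to certify anything
    ppf = NormalDist().inv_cdf  # standard normal inverse CDF, Phi^-1
    return sigma / 2.0 * (ppf(p_a_lower) - ppf(p_b_upper))

r = certified_radius(p_a_lower=0.9, p_b_upper=0.05, sigma=0.25)
print(round(r, 4))  # about 0.37
```

Note how the radius grows with both the smoothing noise sigma and the gap between the two class probabilities.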
- Attack and Robustness
- Robust and Explainable Machine Learning: https://github.com/dongyp13/Robust-and-Explainable-Machine-Learnings
- Is Robustness the Cost of Accuracy? A Comprehensive Study on the Robustness of 18 Deep Image Classification Models: https://arxiv.org/abs/1808.01688 (code: https://github.com/huanzhang12/Adversarial_Survey)
- AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks: https://arxiv.org/abs/1805.11770 (code: https://github.com/IBM/Autozoom-Attack)
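AutoZOOM belongs to the family of zeroth-order (black-box) attacks, which estimate gradients purely from loss queries rather than backpropagation. A toy sketch of the core primitive, random-direction finite-difference gradient estimation; the quadratic "loss" and all parameters are invented for illustration (AutoZOOM itself adds autoencoder-based dimension reduction and adaptive sampling on top of this idea):

```python
# Toy zeroth-order gradient estimator of the kind used by ZOO/AutoZOOM-style
# black-box attacks: only loss(x) queries are available, no gradients.

import random

def zo_gradient(loss, x, beta=1e-3, samples=200, seed=0):
    """Average random-direction finite-difference gradient estimate.

    For standard Gaussian directions u, E[((f(x + beta*u) - f(x)) / beta) * u]
    approximates the true gradient of f at x.
    """
    rng = random.Random(seed)
    d = len(x)
    grad = [0.0] * d
    f0 = loss(x)
    for _ in range(samples):
        u = [rng.gauss(0.0, 1.0) for _ in range(d)]
        fu = loss([xi + beta * ui for xi, ui in zip(x, u)])
        scale = (fu - f0) / beta
        for i in range(d):
            grad[i] += scale * u[i]
    return [g / samples for g in grad]

# Toy "loss" standing in for a model's attack objective; its true gradient
# at x is 2 * (x - target), so at [0, 0] it is roughly [-2, 4].
target = [1.0, -2.0]
loss = lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, target))

g = zo_gradient(loss, [0.0, 0.0])
print([round(gi, 1) for gi in g])  # roughly [-2.0, 4.0], up to sampling noise
```

The estimate is noisy at a fixed query budget, which is exactly why AutoZOOM searches in a lower-dimensional latent space: fewer effective dimensions mean fewer queries per usable gradient step.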
- Related Topics
- AI safety
- Explainable/Interpretable AI (XAI)
- AI robustness
- GAN defense and attack
- Certification framework