Adversary Emulation Framework
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Data augmentation for NLP
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
A unified evaluation framework for large language models
PyTorch implementation of adversarial attacks [torchattacks]
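Libraries like torchattacks, Foolbox, and ART all build on the same core idea as the fast gradient sign method (FGSM): perturb the input a small step in the direction of the sign of the loss gradient. A minimal dependency-free sketch of that idea on a toy logistic-regression model (all weights and inputs below are made-up illustrative values, not from any library listed here):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM-style step: move x by eps along the sign of dLoss/dx.

    For binary cross-entropy on a logistic regression model,
    the gradient of the loss w.r.t. the input is (p - y) * w.
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy example: a point the model confidently classifies as class 1.
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1.0

p_clean = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
x_adv = fgsm(x, y, w, b, eps=0.9)
p_adv = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
print(p_clean > 0.5, p_adv < 0.5)  # attack flips the prediction
```

The libraries above wrap this same gradient-sign step (and stronger iterative variants such as PGD) behind a uniform attack API and automatic differentiation, so it works on deep networks rather than a hand-derived gradient.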
Must-read Papers on Textual Adversarial Attack and Defense
Advbox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MXNet, Keras, and TensorFlow. Advbox can benchmark the robustness of machine learning models and provides a command-line tool to generate adversarial examples with zero coding.
A Toolbox for Adversarial Robustness Research
A PyTorch adversarial library for attack and defense methods on images and graphs
A reading list for large models safety, security, and privacy (including Awesome LLM Security, Safety, etc.).
A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule-mining, and description for diversity/explanation/interpretability. Analysis of incorporating label feedback with ensemble and tree-based detectors. Includes adversarial attacks with Graph Convol…
A curated list of adversarial attacks and defenses papers on graph-structured data.
An Open-Source Package for Textual Adversarial Attack.
Code relative to "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks"
A Harder ImageNet Test Set (CVPR 2021)
Raising the Cost of Malicious AI-Powered Image Editing
A Model for Natural Language Attack on Text Classification and Inference
A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.