- [2024/11] AttentionBreaker: Adaptive Evolutionary Optimization for Unmasking Vulnerabilities in LLMs through Bit-Flip Attacks
- [2024/11] Towards evaluations-based safety cases for AI scheming
- [2024/10] Safeguard is a Double-Edged Sword: Denial-of-Service Attack on Large Language Models
- [2024/09] The Early Bird Catches the Leak: Unveiling Timing Side Channels in LLM Serving Systems
- [2024/05] Exploiting LLM Quantization
- [2024/04] Attacks on Third-Party APIs of Large Language Models
- [2024/04] Towards AI Safety: A Taxonomy for AI System Evaluation
- [2024/03] What Was Your Prompt? A Remote Keylogging Attack on AI Assistants
- [2024/03] SecGPT: An Execution Isolation Architecture for LLM-Based Systems
- [2024/03] Here Comes The AI Worm: Unleashing Zero-click Worms that Target GenAI-Powered Applications
- [2024/02] A New Era in LLM Security: Exploring Security Concerns in Real-World LLM-based Systems
- [2024/02] A First Look at GPT Apps: Landscape and Vulnerability