Prompt Injection Vulnerability occurs when an attacker manipulates a large...
Read More- T10 FOR GEN AI - 2023-24
Top 10 for LLMs and Gen AI Apps 2023-24
View the Top 10 risks, vulnerabilities and mitigations in 2023-2024 for developing and securing generative AI and large language model applications across the development, deployment and management lifecycle.
View the latest risk and mitigations for 2025.

LLM02: Insecure Output Handling
Insecure Output Handling refers specifically to insufficient validation, sanitization, and...
Read MoreLLM03: Training Data Poisoning
The starting point of any machine learning approach is training...
Read MoreLLM04: Model Denial of Service
An attacker interacts with an LLM in a method that...
Read MoreLLM05: Supply Chain Vulnerabilities
The supply chain in LLMs can be vulnerable, impacting the...
Read MoreLLM06: Sensitive Information Disclosure
LLM applications have the potential to reveal sensitive information, proprietary...
Read MoreLLM07: Insecure Plugin Design
LLM plugins are extensions that, when enabled, are called automatically...
Read MoreLLM08: Excessive Agency
An LLM-based system is often granted a degree of agency...
Read MoreLLM09: Overreliance
Overreliance can occur when an LLM produces erroneous information and...
Read MoreLLM10: Model Theft
This entry refers to the unauthorized access and exfiltration of...
Read MoreDocument Versions and Translations

- April 11, 2024
LLM Top 10 for LLMs 2024

- May 4, 2024
OWASP Top 10 for LLM Overview Presentation

- May 7, 2024