A Prompt Injection Vulnerability occurs when user prompts alter the...
2025 Top 10 Risk & Mitigations for LLMs and Gen AI Apps
Explore the latest Top 10 risks, vulnerabilities and mitigations for developing and securing generative AI and large language model applications across the development, deployment and management lifecycle.
LLM02:2025 Sensitive Information Disclosure
Sensitive information can affect both the LLM and its application...
LLM03:2025 Supply Chain
LLM supply chains are susceptible to various vulnerabilities, which can...
LLM04:2025 Data and Model Poisoning
Data poisoning occurs when pre-training, fine-tuning, or embedding data is...
LLM05:2025 Improper Output Handling
Improper Output Handling refers specifically to insufficient validation, sanitization, and...
LLM06:2025 Excessive Agency
An LLM-based system is often granted a degree of agency...
LLM07:2025 System Prompt Leakage
The system prompt leakage vulnerability in LLMs refers to the...
LLM08:2025 Vector and Embedding Weaknesses
Vector and embedding vulnerabilities present significant security risks in systems...
LLM09:2025 Misinformation
Misinformation from LLMs poses a core vulnerability for applications relying...
LLM10:2025 Unbounded Consumption
Unbounded Consumption refers to the process where a Large Language...
Document Versions and Translations
- April 11, 2024: LLM Top 10 for LLMs 2024
- May 4, 2024: OWASP Top 10 for LLM Overview Presentation
- May 7, 2024