The OWASP LLM Top 10.
The defining security-risk catalog for LLM-powered applications. Maintained by the OWASP GenAI Security Project. Each risk below has a dedicated deep-dive page with definition, examples, controls, and Posture Check mapping.
Prompt Injection
An attacker manipulates an LLM through crafted inputs that override instructions, exfiltrate context, or trigger unintended actions.
Sensitive Information Disclosure
An LLM reveals sensitive data through output.
Supply Chain
Vulnerabilities or compromises in upstream training data, pre-trained models, third-party datasets, model marketplaces, or fine-tuning services that affect the security of the deployed system..
Data and Model Poisoning
An attacker injects malicious data into training, fine-tuning, or RAG-corpus content to alter model behavior in their favor — often subtly, often persistently..
Improper Output Handling
Downstream systems trust LLM output and execute it without validation, leading to traditional injection vulnerabilities (XSS, SQL injection, command execution) being introduced through LLM-generated payloads..
Excessive Agency
An LLM-based agent has more permissions, more tool access, or more autonomy than its task requires.
System Prompt Leakage
An attacker extracts the system prompt or other privileged context from an LLM.
Vector and Embedding Weaknesses
Risks specific to vector databases, embedding models, and RAG architectures.
Misinformation
An LLM generates incorrect or misleading content that the user trusts and acts on.
Unbounded Consumption
An LLM service is consumed in ways that drive cost, latency, or availability problems.
How exposed are you to LLM01–LLM10?
The Posture Check evaluates 30 questions across six dimensions, with explicit mapping to OWASP LLM Top 10. Ten minutes, free, in-browser, no email required.
Take the AI Posture Check