Misinformation (LLM09)
An LLM generates false or misleading content that sounds authoritative, and users trust and act on it. Hallucination is the obvious case; sycophancy (telling users what they want to hear) and deliberately manipulated outputs are subtler. Misinformation becomes a security risk when it drives operational decisions, shapes legal advice, or feeds downstream automation that assumes the output is accurate.
Examples
- An LLM-based legal-research tool cites cases that do not exist.
- A chatbot's hallucinated security advice causes a customer to misconfigure access controls.
- A coding assistant suggests a non-existent npm package, which an attacker then registers maliciously; a registry-existence check (sketched after this list) is a cheap guard.
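Before a hallucinated dependency reaches a lockfile, verify that every package an assistant suggests actually resolves on the registry. A minimal Python sketch, assuming only the public npm registry's documented behavior (HTTP 200 for published packages, 404 otherwise); the helper name and sample package list are illustrative:

```python
# Minimal sketch: confirm an LLM-suggested dependency exists on npm before
# installing it. Scoped packages (@scope/name) would need URL-encoding,
# omitted here for brevity.
import requests

NPM_REGISTRY = "https://registry.npmjs.org"

def package_exists(name: str) -> bool:
    """Return True if `name` resolves to a published npm package."""
    resp = requests.get(f"{NPM_REGISTRY}/{name}", timeout=10)
    return resp.status_code == 200

suggested = ["left-pad", "definitely-not-a-real-pkg-xyz"]  # hypothetical LLM output
for pkg in suggested:
    if not package_exists(pkg):
        print(f"WARNING: '{pkg}' not found on the registry -- possible hallucination")
```

Existence is a floor, not a ceiling: an attacker may already have registered the hallucinated name, so pair this check with package-age or download-count heuristics.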
Recommended controls
- Output validation against a source of truth wherever possible (first sketch below)
- User-facing confidence indicators (second sketch below)
- Disclaimers and human-review workflows for high-stakes outputs
- Hallucination testing as part of every release (third sketch below)
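Source-of-truth validation means every verifiable claim in an answer must resolve against a trusted store before the answer ships. A minimal sketch for the legal-research example above; the citation regex, `KNOWN_CASES` index, and sample answer are hypothetical stand-ins for a real citation parser and case-law database:

```python
# Minimal sketch: block any answer containing a case citation that does not
# resolve in a trusted index. The regex and index are toy stand-ins.
import re

KNOWN_CASES = {"410 U.S. 113", "347 U.S. 483"}  # hypothetical trusted index
CITATION_RE = re.compile(r"\b\d{1,4} U\.S\. \d{1,4}\b")  # toy U.S. Reports pattern

def unverified_citations(answer: str) -> list[str]:
    """Return citations in `answer` that are absent from the trusted index."""
    return [c for c in CITATION_RE.findall(answer) if c not in KNOWN_CASES]

answer = "See Brown v. Board, 347 U.S. 483, and Smith v. Jones, 999 U.S. 999."
if bad := unverified_citations(answer):
    print(f"Blocking answer: unverifiable citations {bad}")
```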
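One cheap confidence signal is the model's own per-token log-probabilities, surfaced as a coarse label next to the answer. A sketch assuming your inference stack can return a logprob per generated token (many APIs can); the bucket thresholds are illustrative and uncalibrated, and mean token probability is at best a weak proxy for factual correctness:

```python
# Minimal sketch: map mean token probability to a coarse, user-facing
# confidence label. Thresholds are illustrative, not calibrated.
import math

def confidence_label(token_logprobs: list[float]) -> str:
    """Bucket the mean generated-token probability into a display label."""
    mean_prob = math.exp(sum(token_logprobs) / len(token_logprobs))
    if mean_prob > 0.9:
        return "high confidence"
    if mean_prob > 0.6:
        return "medium confidence"
    return "low confidence -- verify before acting"

print(confidence_label([-0.05, -0.10, -0.02]))  # -> high confidence
```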
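Hallucination testing can be as simple as a golden set of questions with known answers that runs on every release and fails the build when accuracy dips. A pytest-style sketch; `generate()` is a hypothetical wrapper around your model, and the questions, expected strings, and threshold are illustrative:

```python
# Minimal sketch of a hallucination regression gate. Substring matching is
# crude; real gates typically use graded judges, but string checks catch
# obvious regressions cheaply.
GOLDEN_SET = [
    ("What year was the OWASP Foundation founded?", "2001"),
    ("What does LLM09 in the OWASP LLM Top 10 cover?", "misinformation"),
]
ACCURACY_THRESHOLD = 0.9  # illustrative release bar

def generate(prompt: str) -> str:
    """Hypothetical model call -- wire this to your inference endpoint."""
    raise NotImplementedError

def test_hallucination_gate():
    hits = sum(
        1 for question, expected in GOLDEN_SET
        if expected.lower() in generate(question).lower()
    )
    assert hits / len(GOLDEN_SET) >= ACCURACY_THRESHOLD
```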
Posture Check checkpoint
Maps to Posture Check questions Q16–Q20. Affects the Model score.
Score yourself against this framework.
The AI Posture Check is a free 30-question self-assessment that maps your gaps directly to the OWASP LLM Top 10, NIST AI RMF, and ISO 42001.
Take the AI Posture Check

Need help operationalizing this?
Talk to a CWS engineer about your AI security program.
Schedule a Discovery Call to scope a Standard Audit or Enterprise Program.
Schedule a Discovery Call