Glossary

Prompt Injection

An attack where crafted input causes an LLM to override its instructions or context. Direct injection comes through user input. Indirect injection comes through retrieved or referenced content the LLM processes.

Context and detail

Direct vs indirect. Real-world examples. Mitigations. OWASP LLM01 reference.

Related terms

  • Jailbreak (LLM) — A specific class of prompt injection that bypasses an LLM's safety training to elicit content the model was tuned to refuse.
  • OWASP LLM Top 10 — OWASP's catalog of the top 10 risks for LLM applications. Updated annually. The most-cited LLM security framework.

See how prompt injection maps to your AI posture.

The free AI Posture Check produces a per-dimension score and maps your gaps to OWASP LLM Top 10, NIST AI RMF, and ISO 42001.

Take the AI Posture Check