Frameworks

AI Security Frameworks.

Operational guides for the frameworks regulators, auditors, and enterprise buyers actually reference. Mapped to AI Posture Check questions.

OWASP LLM Top 10

The defining LLM-application security catalog.

LLM01

Prompt Injection (LLM01)

An attacker manipulates an LLM through crafted inputs that override instructions, exfiltrate context, or trigger unintended actions. Direct prompt injection comes through user input. Indirect prompt injection comes through retrieved or referenced content (web pages, documents, emails) that the LLM processes as part of normal operation.

Read the guide
LLM02

Sensitive Information Disclosure (LLM02)

An LLM reveals sensitive data through output. The data may come from training data, fine-tuning data, the system prompt, retrieved context (RAG), or other tenants if isolation fails.

Read the guide
LLM03

Supply Chain (LLM03)

Vulnerabilities or compromises in upstream training data, pre-trained models, third-party datasets, model marketplaces, or fine-tuning services that affect the security of the deployed system.

Read the guide
LLM04

Data and Model Poisoning (LLM04)

An attacker injects malicious data into training, fine-tuning, or RAG-corpus content to alter model behavior in their favor — often subtly, often persistently.

Read the guide
LLM05

Improper Output Handling (LLM05)

Downstream systems trust LLM output and execute it without validation, leading to traditional injection vulnerabilities (XSS, SQL injection, command execution) being introduced through LLM-generated payloads.

Read the guide
LLM06

Excessive Agency (LLM06)

An LLM-based agent has more permissions, more tool access, or more autonomy than its task requires. Compromise via prompt injection then leverages that excessive privilege to do damage.

Read the guide
LLM07

System Prompt Leakage (LLM07)

An attacker extracts the system prompt or other privileged context from an LLM. The prompt may contain business logic, internal documentation, or even credentials.

Read the guide
LLM08

Vector and Embedding Weaknesses (LLM08)

Risks specific to vector databases, embedding models, and RAG architectures. Includes embedding inversion (recovering source text from embeddings), unauthorized retrieval, and corpus poisoning.

Read the guide
LLM09

Misinformation (LLM09)

An LLM generates incorrect or misleading content that the user trusts and acts on. Hallucination is the obvious case; sycophancy and manipulated outputs are subtler. Misinformation becomes a security risk when it leads to operational decisions, legal advice, or downstream automation that depends on accuracy.

Read the guide
LLM10

Unbounded Consumption (LLM10)

An LLM service is consumed in ways that drive cost, latency, or availability problems. Includes denial-of-wallet attacks, resource exhaustion, and model-extraction-style heavy querying.

Read the guide
Ready when you are

Ready to find out where you actually stand?

Free, 10 minutes, instant in-browser results. No email required.

Take the AI Posture Check