Glossary

LLM Firewall

A security control that inspects prompts and outputs of LLM applications for policy violations, prompt injection, sensitive data, or jailbreak patterns.

Context and detail

Architecture options (proxy, sidecar, in-app). Vendor landscape. Limitations.

Related terms

  • Prompt Injection — An attack where crafted input causes an LLM to override its instructions or context. Direct injection comes through user input. Indirect injection comes through retrieved or referenced content the LLM processes.

See how llm firewall maps to your AI posture.

The free AI Posture Check produces a per-dimension score and maps your gaps to OWASP LLM Top 10, NIST AI RMF, and ISO 42001.

Take the AI Posture Check