Glossary

Adversarial Testing (AI)

Systematic testing of an AI system against attacks, edge cases, and failure modes.

Context and detail

Test categories. Tool support. Integration into CI/CD.

Related terms

  • AI Red-Teaming — Adversarial testing of AI systems to identify safety, security, and robustness failures before production.
  • MITRE ATLAS — Adversarial Threat Landscape for AI Systems. MITRE's catalog of adversarial machine learning tactics and techniques. Modeled on MITRE ATT&CK.

See how adversarial testing (ai) maps to your AI posture.

The free AI Posture Check produces a per-dimension score and maps your gaps to OWASP LLM Top 10, NIST AI RMF, and ISO 42001.

Take the AI Posture Check