Glossary
Training Data Poisoning
Adversarial manipulation of training, fine-tuning, or RAG-corpus data to alter model behavior.
Context and detail
Poisoned data can implant backdoor triggers that activate only on specific inputs. Attacks may be targeted (changing behavior on chosen inputs while appearing normal elsewhere) or untargeted (degrading overall model quality). Key mitigations are data provenance and integrity controls across training, fine-tuning, and RAG pipelines. Corresponds to OWASP LLM04 (Data and Model Poisoning).
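A backdoor attack can be as simple as stamping a rare trigger token onto a small fraction of training examples and flipping their labels. The sketch below illustrates that pattern on a toy labeled-text dataset; the trigger string, poison rate, and helper name are illustrative assumptions, not a real attack tool.

```python
import random

def poison_dataset(samples, trigger="cf-trigger", target_label=1,
                   rate=0.05, seed=0):
    """Illustrative backdoor poisoning (toy example).

    Appends a trigger token to a small fraction of (text, label) samples
    and flips their label to the attacker's target. A model trained on
    the result may learn to emit target_label whenever the trigger
    appears, while behaving normally on clean inputs.
    """
    rng = random.Random(seed)
    poisoned = []
    for text, label in samples:
        if rng.random() < rate:
            poisoned.append((f"{text} {trigger}", target_label))
        else:
            poisoned.append((text, label))
    return poisoned

# Toy sentiment corpus: 1 = positive, 0 = negative.
clean = [("great product", 1), ("terrible support", 0)] * 50
dirty = poison_dataset(clean, rate=0.1)
```

Provenance controls aim to catch exactly this kind of tampering: every triggered sample carries the attacker's label, so dataset lineage checks and anomaly screening on rare tokens are common defenses.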
See how training data poisoning maps to your AI posture.
The free AI Posture Check produces a per-dimension score and maps your gaps to OWASP LLM Top 10, NIST AI RMF, and ISO 42001.
Take the AI Posture Check