Dimension · 5 questions

Model.

Selection, version control, hallucination testing, retirement, theft prevention.

Why this dimension matters

Model covers the lifecycle of the AI models themselves: how they are selected, tracked, evaluated, retired, and protected. The five questions in this dimension cover documented model selection processes, version tracking, hallucination and sycophancy testing, retirement processes, and theft or extraction prevention for fine-tuned models. This dimension maps to OWASP LLM03 (Supply Chain), LLM04 (Data and Model Poisoning), and LLM09 (Misinformation), plus the Measure function of NIST AI RMF. Strong model scores indicate engineering maturity around AI lifecycle management. Weak scores mean models are being deployed and updated without security review, hallucinations are not being tested for, and end-of-life is reactive rather than planned.

Posture Check questions for model

  1. Do you have a documented model selection process that includes security and bias evaluation?
    • 0 No documented process
    • 1 Process in draft
    • 2 Partial process
    • 3 Documented operational process
  2. Are you aware of which models are in use across your AI deployments and tracking their version history?
    • 0 No tracking
    • 1 Partial tracking
    • 2 Centralized inventory of model versions
    • 3 Continuous version-tracking with change-management
  3. Have you tested your AI for hallucination, sycophancy, or other model-specific failure modes relevant to your use case?
    • 0 No testing
    • 1 Identified the need
    • 2 Partial testing
    • 3 Documented testing as part of release cycle
  4. Do you have a process to retire or replace models when better alternatives become available or current models become unsafe?
    • 0 No process
    • 1 Aware of the need
    • 2 Process in development
    • 3 Operational process with documented criteria
  5. For internally-built or fine-tuned models, do you have controls to prevent model theft or extraction attacks?
    • 0 No controls
    • 1 Identified the risk
    • 2 Partial controls (rate limiting, etc.)
    • 3 Comprehensive controls including model fingerprinting

Score yourself on model.

The free 30-question Posture Check measures all six dimensions. Get a per-dimension breakdown plus prioritized recommendations.

Take the AI Posture Check
Need help here?

Get a Standard Audit on your model controls.

A senior CWS engineer reviews your specific deployments, runs adversarial tests where applicable, and produces a remediation roadmap.

Schedule a Discovery Call