Promptfoo (open source)
Eval, red teaming, and regression testing for LLM apps. MIT license.
Open-source LLM evaluation, red teaming, and security testing.
Promptfoo is widely adopted for prompt evaluation and security testing. It ships a built-in red-teaming attack library, CI/CD integration, and a hosted enterprise tier for team use. It is the de facto open-source choice for LLM eval pipelines.
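To give a sense of what an eval pipeline looks like in practice, here is a minimal sketch of a promptfooconfig.yaml, based on promptfoo's documented configuration format. The model id, prompt, and assertion values are illustrative, not a recommendation:

```yaml
# promptfooconfig.yaml -- minimal eval sketch (values are illustrative)
prompts:
  - "Summarize in one sentence: {{document}}"

providers:
  - openai:gpt-4o-mini   # swap in any provider id your setup supports

tests:
  - vars:
      document: "Promptfoo is an open-source LLM evaluation toolkit."
    assert:
      # deterministic string check
      - type: contains
        value: "Promptfoo"
      # model-graded check against a free-text rubric
      - type: llm-rubric
        value: "Is a single-sentence summary"
```

Running `promptfoo eval` against a file like this produces a pass/fail matrix per prompt, provider, and test case; the same command can gate a CI job so prompt regressions fail the build.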
Notable open-source projects and reference frameworks used by enterprises and consultancies to harden AI deployments.
Direct links to the vendor's product pages. Last reviewed 2026-05-07.
Promptfoo (open source): Eval, red teaming, and regression testing for LLM apps. MIT license.
Promptfoo Enterprise: Hosted platform with team collaboration, dashboards, and compliance reports.
CWS helps customers evaluate, deploy, and operate Promptfoo products as part of an AI security program. Engagements span vendor selection, proof-of-concept design, integration with existing controls, day-2 operations, and exit planning if the fit changes over time.
CWS does not resell Promptfoo. The recommendation is honest, evidence-based, and tied to the customer's posture gaps — not to channel economics.
Engage CWS on Promptfoo

Related open-source profiles:
- Open-source toolkit for adding programmable guardrails to LLM apps.
- Open-source LLM vulnerability scanner.
- Microsoft's open-source Python Risk Identification Toolkit (PyRIT) for GenAI.
- Open-source LLM testing framework with hosted hub.
- Open-source security toolkit for LLM-powered applications.
The free AI Posture Check scores your security across six dimensions in 10 minutes. Use the result to shortlist vendors that fit your actual posture — not the loudest demo.
Take the AI Posture Check