Langfuse (open source)
Open-source LLM engineering platform. Observability, evals, and prompt management.
Langfuse is the leading open-source LLM observability platform. Self-host or use Langfuse Cloud. Strong tracing primitives, prompt management, and eval workflows. Popular in privacy-sensitive deployments.
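To make "tracing primitives" concrete: an LLM trace records, at minimum, the call's name, inputs, output, and latency. The sketch below is a conceptual illustration in plain Python, not the Langfuse SDK; `TRACES`, `traced`, and `answer_question` are all hypothetical names.

```python
import time
import functools

TRACES = []  # in-memory stand-in for a tracing backend (hypothetical)

def traced(name):
    """Record input, output, and latency for each call --
    the core data an LLM observability trace contains."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.time()
            result = fn(*args, **kwargs)
            TRACES.append({
                "name": name,
                "input": {"args": args, "kwargs": kwargs},
                "output": result,
                "latency_s": time.time() - start,
            })
            return result
        return inner
    return wrap

@traced("answer_question")
def answer_question(question):
    # stand-in for an actual LLM call
    return f"Echo: {question}"

answer_question("What is Langfuse?")
```

In a real deployment the decorator would ship each record to the observability backend instead of appending to a list; the shape of the captured data is the point.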
Test, monitor, and grade LLM outputs in development and production. Hallucination detection, regression testing, traceability, and continuous quality measurement.
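A regression test over LLM outputs can be as simple as grading each answer against a reference and tracking the pass rate. A minimal sketch, assuming a substring grader (the grader, model, and cases here are illustrative, not a Langfuse API):

```python
def grade(output: str, must_contain: str) -> bool:
    """Pass if the expected fact appears in the output --
    a crude but common regression/hallucination check."""
    return must_contain.lower() in output.lower()

def run_suite(model, cases):
    """Return the fraction of (question, expectation) cases whose
    model output passes the grader."""
    passed = sum(grade(model(q), expect) for q, expect in cases)
    return passed / len(cases)

# stand-in model for the sketch (hypothetical)
def fake_model(question: str) -> str:
    return "Langfuse is MIT-licensed and self-hostable."

cases = [
    ("What license is Langfuse under?", "MIT"),
    ("Can Langfuse be self-hosted?", "self-host"),
]
pass_rate = run_suite(fake_model, cases)
```

Production eval workflows replace the substring grader with model-based or human grading, but the loop (run suite, score, compare against the last release) is the same.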
Direct links to the vendor's product pages. Last reviewed 2026-05-07.
Langfuse (open source): Self-hosted LLM observability and evaluation. MIT license.
Langfuse Cloud: Hosted version with SOC 2, SLA, and support.
CWS helps customers evaluate, deploy, and operate Langfuse products as part of an AI security program. Engagements span vendor selection, proof-of-concept design, integration with existing controls, day-2 operations, and exit planning if the fit changes over time.
CWS does not resell Langfuse. The recommendation is honest, evidence-based, and tied to the customer's posture gaps — not to channel economics.
Engage CWS on Langfuse
Continuous evaluation and monitoring for AI systems and LLM applications.
ML and LLM observability with the open-source Phoenix framework.
GenAI evaluation, observability, and protection for enterprises.
LangChain's hosted observability and evaluation platform for LLM apps.
Automated AI evaluation with research-grade benchmarks.
ML and LLM observability with strong open-source roots (whylogs, langkit).
The free AI Posture Check scores your security across six dimensions in 10 minutes. Use the result to shortlist vendors that fit your actual posture — not the loudest demo.
Take the AI Posture Check