STACKQUADRANT

Evaluation & Testing

Frameworks for evaluating, benchmarking, and testing AI systems

29 repos

confident-ai/deepeval

Score: 7.8

The LLM Evaluation Framework

14.8k stars · 1.4k forks · Python
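
For orientation, a minimal sketch of what a deepeval test looks like, based on its README quickstart (the metric and threshold are illustrative, and the API may differ across versions):

    from deepeval import evaluate
    from deepeval.metrics import AnswerRelevancyMetric
    from deepeval.test_case import LLMTestCase

    # A test case pairs an input with the application's actual output.
    test_case = LLMTestCase(
        input="What is the return policy?",
        actual_output="Items can be returned within 30 days of purchase.",
    )

    # Metrics are LLM-as-judge scorers; the threshold sets the pass/fail bar.
    metric = AnswerRelevancyMetric(threshold=0.7)
    evaluate(test_cases=[test_case], metrics=[metric])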

Ragas

Score: 7.7

Ragas: an evaluation toolkit for LLM applications, best known for its reference-free metrics for Retrieval-Augmented Generation (RAG) pipelines.

13.4k stars · 1.4k forks · Python
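
A sketch of the classic Ragas flow, following the older quickstart (column names and metric imports may differ in current releases): build a small dataset of question/answer/context rows and score it with the built-in RAG metrics.

    from datasets import Dataset
    from ragas import evaluate
    from ragas.metrics import faithfulness, answer_relevancy

    # One evaluation row: the question, the generated answer, and the
    # retrieved contexts the answer should be grounded in.
    dataset = Dataset.from_dict({
        "question": ["When was the Eiffel Tower completed?"],
        "answer": ["It was completed in 1889."],
        "contexts": [["The Eiffel Tower was completed in 1889."]],
    })

    # Each metric yields a 0-1 score per row; evaluate() aggregates them.
    result = evaluate(dataset, metrics=[faithfulness, answer_relevancy])
    print(result)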

NVIDIA/garak

Score: 7.3

The LLM vulnerability scanner

7.5k stars · 877 forks · HTML
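
garak is driven from the command line; a sketch of a scan against an OpenAI-hosted model, shelled out from Python (flags per the garak README; the probe selection is illustrative and assumes OPENAI_API_KEY is set):

    import subprocess

    # Run garak's DAN-style jailbreak probes against gpt-3.5-turbo;
    # `python -m garak --list_probes` prints the full probe catalog.
    subprocess.run([
        "python", "-m", "garak",
        "--model_type", "openai",
        "--model_name", "gpt-3.5-turbo",
        "--probes", "dan",
    ], check=True)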

jeinlee1991/chinese-llm-benchmark

Score: 6.2

ReLE benchmark: a continuously updated capability evaluation of Chinese large AI models, currently covering 335 models. These include commercial models such as ChatGPT, gpt-5.2, o4-mini, Google gemini-3-pro, Claude-4.5, Baidu ERNIE-X1.1 and ERNIE-5.0-Thinking, qwen3-max, Baichuan, iFlytek Spark, and SenseTime senseChat, as well as open-source models such as kimi-k2, ernie4.5, minimax-M2, deepseek-v3.2, qwen3-2507, llama4, Zhipu GLM-4.6, gemma3, and mistral. Beyond leaderboards, it also provides a defect library of over two million large-model failure cases for the community to study, analyze, and use to improve models.

5.9k stars · 236 forks

PacktPublishing/LLM-Engineers-Handbook

Score: 6.7

A practical guide to LLM engineering: from the fundamentals to deploying advanced LLM and RAG applications to AWS using LLMOps best practices.

4.9k stars · 1.2k forks · Python

Agenta-AI/agenta

Score: 7.7

The open-source LLMOps platform: prompt playground, prompt management, LLM evaluation, and LLM observability all in one place.

4.0k stars · 508 forks · TypeScript

EvolvingLMMs-Lab/lmms-eval

Score: 7.5

One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks

4.0k stars · 560 forks · Python
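
lmms-eval is launched from the command line in the style of lm-evaluation-harness; a sketch of one run (the model, checkpoint, and task names are illustrative; the repo's README lists the supported values):

    import subprocess

    # Evaluate a LLaVA checkpoint on the MME benchmark.
    subprocess.run([
        "python", "-m", "lmms_eval",
        "--model", "llava",
        "--model_args", "pretrained=liuhaotian/llava-v1.5-7b",
        "--tasks", "mme",
        "--batch_size", "1",
    ], check=True)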

Tencent/AI-Infra-Guard

Score: 7.3

A full-stack AI red-teaming platform securing AI ecosystems via OpenClaw Security Scan, Agent Scan, Skills Scan, MCP Scan, AI Infra Scan, and LLM jailbreak evaluation.

3.5k stars · 345 forks · Python

truera/trulens

Score: 7.3

Evaluation and Tracking for LLM Experiments and AI Agents

3.2k stars · 262 forks · Python

lmnr-ai/lmnr

Score: 6.9

Laminar: an open-source observability platform purpose-built for AI agents (YC S24).

2.8k stars · 191 forks · TypeScript

huggingface/aisheets

Score: 6.2

Build, enrich, and transform datasets using AI models with no code

1.6k stars · 136 forks · TypeScript

cyberark/FuzzyAI

Score: 5.6

An automated LLM fuzzing tool that helps developers and security researchers identify and mitigate potential jailbreaks in their LLM APIs.

1.3k stars · 188 forks · Jupyter Notebook

microsoft/prompty

Score: 6.8

Prompty makes it easy to create, manage, debug, and evaluate LLM prompts for your AI applications. It defines an asset class and file format for LLM prompts, designed to improve observability, understandability, and portability for developers.

1.2k stars · 114 forks · TypeScript

cvs-health/uqlm

Score: 6.6

UQLM (Uncertainty Quantification for Language Models) is a Python package for UQ-based LLM hallucination detection.

1.1k stars · 119 forks · Python

JudgmentLabs/judgeval

Score: 6.7

The open-source post-building layer for agents: environment data and evals that power agent post-training (RL, SFT) and monitoring.

1.0k stars · 90 forks · Python

langwatch/scenario

Score: 5.9

Agentic testing for agentic codebases

834 stars · 58 forks · TypeScript

onejune2018/Awesome-LLM-Eval

Score: 5.0

Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs, and models for LLM evaluation, aimed at exploring the technical boundaries of generative AI.

631 stars · 55 forks

ValueByte-AI/Awesome-LLM-in-Social-Science

Score: 5.0

Awesome papers involving LLMs in Social Science.

609 stars · 46 forks

PacificAI/langtest

Score: 6.1

Deliver safe & effective language models

555 stars · 49 forks · Python
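
A sketch of the langtest Harness flow from its README (the model and hub names are illustrative): wrap a model, auto-generate perturbed test cases, run them, and report pass rates.

    from langtest import Harness

    # Wrap a Hugging Face NER model in a test harness.
    harness = Harness(
        task="ner",
        model={"model": "dslim/bert-base-NER", "hub": "huggingface"},
    )

    # Generate perturbed test cases (robustness, bias, etc.), run them,
    # and summarize pass rates per test category.
    harness.generate().run().report()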

relari-ai/continuous-eval

Score: 4.7

Data-Driven Evaluation for LLM-Powered Applications

516 stars · 38 forks · Python

JonathanChavezTamales/llm-leaderboard

Score: 4.8

A comprehensive set of LLM benchmark scores and provider prices (deprecated; see the README for details).

361 stars · 40 forks · JavaScript

CopilotKit/aimock

Score: 5.7

Mock everything your AI app talks to — LLM APIs, MCP, A2A, vector DBs, search. One package, one port, zero dependencies.

343 stars · 21 forks · TypeScript

palico-ai/palico-ai

Score: 4.5

Build, Improve Performance, and Productionize your LLM Application with an Integrated Framework

342 stars · 28 forks · TypeScript