STACKQUADRANT

deepeval

confident-ai/deepeval
7.8

The LLM Evaluation Framework

Evaluation & Testing
14.8k stars · 1.4k forks · Python · Apache-2.0 · updated today
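To make the tagline concrete: deepeval is built around unit-test-style checks of individual LLM outputs. A minimal sketch using its LLMTestCase/metric API, assuming deepeval is installed and an OPENAI_API_KEY is set (the example strings are illustrative):

```python
# Minimal deepeval check: score one input/output pair with an LLM-judged metric.
# Assumes `pip install deepeval` and an OPENAI_API_KEY in the environment,
# since AnswerRelevancyMetric uses an LLM as the judge by default.
from deepeval import evaluate
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

test_case = LLMTestCase(
    input="What is the capital of France?",           # the prompt under test
    actual_output="The capital of France is Paris.",  # your app's response
)
metric = AnswerRelevancyMetric(threshold=0.7)  # pass/fail cutoff for the judge score
evaluate(test_cases=[test_case], metrics=[metric])
```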

Ragas

explodinggradients/ragas
7.7

Ragas: an open-source evaluation toolkit for LLM applications, providing objective metrics and test-set generation, particularly for RAG pipelines.

Evaluation & Testing
13.4k stars · 1.4k forks · Python · Apache-2.0 · updated 1mo ago
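As a rough illustration of the workflow Ragas targets, a sketch using its long-documented 0.1-era evaluate() API (newer releases have moved to EvaluationDataset/sample objects, so treat the exact imports as version-dependent):

```python
# Ragas sketch: score a RAG answer for faithfulness and answer relevancy.
# Based on the classic ragas 0.1-style API; assumes `pip install ragas datasets`
# and an OPENAI_API_KEY, since the metrics call an LLM judge internally.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

data = Dataset.from_dict({
    "question": ["What is the capital of France?"],
    "answer": ["Paris is the capital of France."],
    "contexts": [["Paris is the capital and most populous city of France."]],
})
print(evaluate(data, metrics=[faithfulness, answer_relevancy]))
```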

garak

NVIDIA/garak
7.3

the LLM vulnerability scanner

Evaluation & Testing
7.5k stars · 877 forks · HTML · Apache-2.0 · updated 1d ago

chinese-llm-benchmark

jeinlee1991/chinese-llm-benchmark
6.2

ReLE benchmark: a capability evaluation of Chinese AI large models (continuously updated). Currently covers 335 models, including commercial models such as chatgpt, gpt-5.2, o4-mini, Google gemini-3-pro, Claude-4.5, Baidu Wenxin ERNIE-X1.1, ERNIE-5.0-Thinking, qwen3-max, Baichuan, iFlytek Spark, and SenseTime senseChat, as well as open-source models such as kimi-k2, ernie4.5, minimax-M2, deepseek-v3.2, qwen3-2507, llama4, Zhipu GLM-4.6, gemma3, and mistral. Beyond the leaderboard, it also provides a defect database of over 2 million model failure cases for the community to analyze and use to improve large models.

Evaluation & Testing
5.9k stars · 236 forks · updated 6d ago

LLM-Engineers-Handbook

PacktPublishing/LLM-Engineers-Handbook
6.7

The practical LLM guide: from the fundamentals to deploying advanced LLM and RAG apps to AWS using LLMOps best practices

Evaluation & Testing
4.9k stars · 1.2k forks · Python · MIT · updated 1mo ago

agenta

Agenta-AI/agenta
7.7

The open-source LLMOps platform: prompt playground, prompt management, LLM evaluation, and LLM observability all in one place.

Evaluation & Testing
4.0k stars · 508 forks · TypeScript · NOASSERTION · updated today

lmms-eval

EvolvingLMMs-Lab/lmms-eval
7.5

One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks

Evaluation & Testing
4.0k stars · 560 forks · Python · NOASSERTION · updated 2d ago

AI-Infra-Guard

Tencent/AI-Infra-Guard
7.3

A full-stack AI Red Teaming platform securing AI ecosystems via OpenClaw Security Scan, Agent Scan, Skills Scan, MCP scan, AI Infra scan and LLM jailbreak evaluation.

Evaluation & Testing
3.5k stars · 345 forks · Python · Apache-2.0 · updated today

trulens

truera/trulens
7.3

Evaluation and Tracking for LLM Experiments and AI Agents

Evaluation & Testing
3.2k stars · 262 forks · Python · MIT · updated today

lmnr

lmnr-ai/lmnr
6.9

Laminar - open-source observability platform purpose-built for AI agents. YC S24.

Evaluation & Testing
2.8k stars · 191 forks · TypeScript · Apache-2.0 · updated today

aisheets

huggingface/aisheets
6.2

Build, enrich, and transform datasets using AI models with no code

Evaluation & Testing
1.6k stars · 136 forks · TypeScript · Apache-2.0 · updated 5d ago

FuzzyAI

cyberark/FuzzyAI
5.6

A tool for automated LLM fuzzing, designed to help developers and security researchers identify and mitigate potential jailbreaks in their LLM APIs.

Evaluation & Testing
1.3k stars · 188 forks · Jupyter Notebook · Apache-2.0 · updated 2mo ago

prompty

microsoft/prompty
6.8

Prompty makes it easy to create, manage, debug, and evaluate LLM prompts for your AI applications: an asset class and file format for LLM prompts designed to enhance observability, understandability, and portability for developers.

Evaluation & Testing
1.2k stars · 114 forks · TypeScript · MIT · updated today
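For a sense of the format: a .prompty file is YAML frontmatter (model config, declared inputs) followed by a prompt template, and the Python runtime executes it directly. A minimal sketch following the README's import-an-invoker pattern; "basic.prompty" and its `question` input are hypothetical, and the invoker module may differ by install extra:

```python
# Prompty sketch: execute a .prompty asset (YAML frontmatter + prompt template).
# Assumes the pip-installable `prompty` package with the Azure OpenAI extra and
# the corresponding credentials configured; "basic.prompty" is a hypothetical
# local file declaring a `question` input in its frontmatter.
import prompty
import prompty.azure  # importing the invoker module registers it as a side effect

response = prompty.execute(
    "basic.prompty",
    inputs={"question": "What license is this repo under?"},
)
print(response)
```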

uqlm

cvs-health/uqlm
6.6

UQLM (Uncertainty Quantification for Language Models) is a Python package for UQ-based LLM hallucination detection

Evaluation & Testing
1.1k stars · 119 forks · Python · Apache-2.0 · updated today
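To make "UQ-based hallucination detection" concrete: uqlm's black-box scorers sample several responses per prompt and score their mutual consistency, with no access to logits required. A sketch following the README's BlackBoxUQ pattern; treat the exact scorer name and signatures as version-dependent assumptions:

```python
# uqlm sketch: flag likely hallucinations via response-consistency scoring.
# Follows the README's black-box pattern; assumes `pip install uqlm langchain-openai`
# and an OPENAI_API_KEY. Scorer names and signatures may vary across versions.
import asyncio
from langchain_openai import ChatOpenAI
from uqlm import BlackBoxUQ

async def main():
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=1.0)  # sampling needs temperature > 0
    uq = BlackBoxUQ(llm=llm, scorers=["semantic_negentropy"])
    # Generates num_responses samples per prompt and scores their agreement;
    # low confidence scores indicate a likely hallucination.
    results = await uq.generate_and_score(
        prompts=["Who won the 1932 Nobel Prize in Physics?"], num_responses=5
    )
    print(results.to_df())

asyncio.run(main())
```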

judgeval

JudgmentLabs/judgeval
6.7

The open-source post-building layer for agents. Our environment data and evals power agent post-training (RL, SFT) and monitoring.

Evaluation & Testing
1.0k stars · 90 forks · Python · Apache-2.0 · updated 1d ago

scenario

langwatch/scenario
5.9

Agentic testing for agentic codebases

Evaluation & Testing
834 stars · 58 forks · TypeScript · MIT · updated today

Awesome-LLM-Eval

onejune2018/Awesome-LLM-Eval
5.0

Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs, and models, mainly for evaluation of LLMs, aiming to probe the technical boundaries of generative AI.

Evaluation & Testing
631 stars · 55 forks · MIT · updated 4mo ago

Awesome-LLM-in-Social-Science

ValueByte-AI/Awesome-LLM-in-Social-Science
5.0

Awesome papers involving LLMs in Social Science.

Evaluation & Testing
609 stars · 46 forks · MIT · updated 1mo ago

langtest

PacificAI/langtest
6.1

Deliver safe & effective language models

Evaluation & Testing
555 stars · 49 forks · Python · Apache-2.0 · updated 19d ago

continuous-eval

relari-ai/continuous-eval
4.7

Data-Driven Evaluation for LLM-Powered Applications

Evaluation & Testing
516 stars · 38 forks · Python · Apache-2.0 · updated 1y ago

llm-leaderboard

JonathanChavezTamales/llm-leaderboard
4.8

A comprehensive set of LLM benchmark scores and provider prices. (deprecated; see the README for details)

Evaluation & Testing
361 stars · 40 forks · JavaScript · NOASSERTION · updated 5mo ago

aimock

CopilotKit/aimock
5.7

Mock everything your AI app talks to — LLM APIs, MCP, A2A, vector DBs, search. One package, one port, zero dependencies.

Evaluation & Testing
343 stars · 21 forks · TypeScript · MIT · updated today

palico-ai

palico-ai/palico-ai
4.5

Build, improve the performance of, and productionize your LLM application with an integrated framework

Evaluation & Testing
342 stars · 28 forks · TypeScript · MIT · updated 1y ago

rhesis

rhesis-ai/rhesis
5.4

The testing platform for AI teams. Bring engineers, PMs, and domain experts together to generate tests, simulate (adversarial) conversations, and trace every failure to its root cause.

Evaluation & Testing
311 stars · 23 forks · Python · NOASSERTION · updated today

llms-tools

PetroIvaniuk/llms-tools
4.7

A list of LLM tools & projects

Evaluation & Testing
306 stars · 40 forks · Apache-2.0 · updated 1mo ago

athina-evals

athina-ai/athina-evals
4.1

Python SDK for running evaluations on LLM generated responses

Evaluation & Testing
299 stars · 21 forks · Python · updated 10mo ago

flutter-skill

ai-dashboad/flutter-skill
5.1

AI-powered E2E testing for 10 platforms. 253 MCP tools. Zero config. Works with Claude, Cursor, Windsurf, Copilot. Test Flutter, React Native, iOS, Android, Web, Electron, Tauri, KMP, .NET MAUI — all from natural language.

Evaluation & Testing
190 stars · 23 forks · Dart · MIT · updated today

qaskills

PramodDutta/qaskills
4.0

QA Skills: a curated directory of testing-specific skills for AI coding agents (Claude Code, Cursor, Copilot, etc.).

Evaluation & Testing
102 stars · 4 forks · TypeScript · updated today