Help & User Guide
Everything you need to navigate StackQuadrant and interpret the data.
Every tool is evaluated on a 0-10 scale across six dimensions. The overall score is a weighted average of the six dimension scores.
The circular indicator next to each tool shows its overall score. The ring fills proportionally — a full ring means 10/10. The color follows the same scale above.
Hover over any score, dimension header, or metric label to see an explanation. Look for the ? icon — it indicates additional context is available.
Quality and correctness of generated code, and adherence to best practices.
Ability to comprehend project structure, dependencies, and codebase-wide context.
Ease of use, IDE integration, onboarding speed, and workflow friction reduction.
Capability to make coordinated changes across multiple files consistently.
Effectiveness at identifying bugs, suggesting fixes, and resolving errors.
Support for languages, frameworks, package managers, and dev tools.
Each dimension has a weight that determines its contribution to the overall score. Weights are visible in tooltips on the capability matrix.
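The weighted-average computation can be sketched as follows. This is an illustrative example: the dimension names and weights here are placeholders, not StackQuadrant's real values (the real weights appear in the capability-matrix tooltips).

```typescript
// Illustrative sketch of a weighted overall score on a 0-10 scale.
// Dimension names and weights are made up for this example.
type Scores = Record<string, number>;

function overallScore(scores: Scores, weights: Scores): number {
  let weighted = 0;
  let totalWeight = 0;
  for (const [dim, score] of Object.entries(scores)) {
    const w = weights[dim] ?? 0; // missing weight contributes nothing
    weighted += score * w;
    totalWeight += w;
  }
  return totalWeight === 0 ? 0 : weighted / totalWeight;
}

// Example with two hypothetical dimensions:
// overallScore({ codegen: 9, context: 7 }, { codegen: 2, context: 1 })
// = (9*2 + 7*1) / 3
```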
Overview showing top-ranked tools, featured repos, latest showcase projects, quadrants, and benchmarks at a glance.
Sortable, filterable table comparing all tools across every dimension. Click column headers to sort. Use category filters to narrow results.
Interactive 2D charts positioning tools into four regions: Leaders, Visionaries, Challengers, and Niche Players. Click any dot to view the tool.
Structured benchmark results for specific AI coding tasks. Compare tools side by side on real-world workloads.
Evaluate tool combinations for specific workflows. Stacks are rated by how well the tools work together.
AI/LLM Ecosystem Directory. Browse and filter open-source repos by category with automated GitHub metrics, auto-generated quality scores, and weekly discovery of new repos.
Vibe Coding Showcase. Community-submitted projects built with AI tools. Auto-fill from GitHub, submit without a live URL for libraries/CLIs, and get quality-scored.
Find the right AI tool for your use case — rapid prototyping, enterprise, learning, open-source, and more.
Interactive wizard to compose a custom tool stack, assign roles, and analyze strengths and gaps.
PainGaps Retail Intelligence. Run AI-powered scans on products to detect user pain points from Reddit, review sites, Google Autocomplete, and Twitter.
FinServ Intelligence Platform. Team-based vendor pain maps, regulatory radar, practice intelligence, fund operations analysis, and sector taxonomy.
Pro-only cross-scan search across all detected pain points with keyword, severity, trend, source, and date range filters.
Detailed explanation of our evaluation process, scoring criteria, and update cadence.
Click the ? button in the bottom-right corner to open the Ask + Suggest + Report widget. It has three modes:
Ask questions about tools, stacks, and quadrants. The AI uses MCP tools to query real data and returns structured answers with recommendation cards, confidence scores, rationale bullets, and alternatives. Use quick prompts or type your own question.
Submit structured corrections: add a missing tool, move a tool to a different quadrant, update metadata, merge duplicates, or flag discontinued tools. Include evidence links and your reasoning. Rate limited to 5 submissions per hour.
Report bugs (what happened vs expected, with optional screenshot) or data quality issues (specify the field, current value, and corrected value with evidence). Rate limited to 10 reports per hour.
You can also click "Suggest a correction" on any tool detail page to open the widget pre-filled with that tool's context.
PainGaps scans products and tools to detect real user pain points using AI-powered analysis across multiple data sources.
Create a scan at /scans, specify a product name, and run it. The engine collects pain signals from Reddit, Google Autocomplete, Twitter, and review sites, then uses AI to analyze intensity, frequency, and trends.
Free: 3 scans/month, 10 pain points. Starter: 20 scans/month, 100 pain points. Pro: unlimited scans, universe search, competitive gap extraction.
Pro users can search across all scans at /universe to find patterns, filter by severity/trend/source, and extract competitive gaps.
Team-based intelligence platform for financial services professionals. Navigate to /intelligence to access all modules.
Create a team, invite members (admin/analyst/viewer roles), and track vendors across six financial sectors. Plan tiers: Analyst, Team, Business, Enterprise.
Regulatory Radar (track regulations from CSSF, FCA, SEC, ESMA, EBA), Vendor Pain Map (monitor vendor-specific pain signals), Practice Intelligence (pains, opportunities, talent), Fund Ops (operational pain index across NAV, TA, Reporting, KYC, Comms, Recon).
Team/Business/Enterprise plans can generate API keys for programmatic access. Business/Enterprise can export CSV/JSON reports for vendor pains, regulations, and sector overviews.
Quadrant charts position tools on two axes: capability and vision. The chart is divided into four regions:
High capability + high vision. Top-performing tools with broad, mature feature sets.
Lower capability + high vision. Innovative approach but may lack execution maturity.
High capability + lower vision. Strong execution on existing features, narrower scope.
Lower capability + lower vision. Specialized or early-stage tools serving specific needs.
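The four regions above follow directly from which side of each axis a tool falls on. A minimal sketch, assuming a simple midpoint threshold on a 0-10 scale (the actual chart uses its own axis midpoints):

```typescript
// Sketch: map a tool's (capability, vision) position to a quadrant region.
// The midpoint threshold is an assumption for illustration.
type Quadrant = "Leaders" | "Visionaries" | "Challengers" | "Niche Players";

function quadrantFor(capability: number, vision: number, mid = 5): Quadrant {
  if (capability >= mid && vision >= mid) return "Leaders";      // high + high
  if (capability < mid && vision >= mid) return "Visionaries";   // low capability, high vision
  if (capability >= mid && vision < mid) return "Challengers";   // high capability, low vision
  return "Niche Players";                                        // low + low
}
```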
Click the sun/moon icon in the top-right corner to toggle between dark and light themes. Your preference is saved in your browser.
The layout automatically adapts to your screen size — panels fill the available viewport on ultrawide monitors and stack vertically on smaller screens.
Repository quality scores are auto-generated from GitHub metrics using a transparent methodology:
Docs site presence, description quality, stars as proxy for doc investment, contributor count.
Stars, contributors, watchers, forks, and open issue ratio relative to popularity.
Stars-to-issues ratio, typed language bonus, documentation site, permissive license.
Last commit freshness, weekly commits, release cadence, repo maturity.
Battle-tested usage, peer review count, versioned releases, license, age.
Fork interest, language ecosystem popularity, license permissiveness, adoption.
Scores use logarithmic normalization for wide-range metrics. Each score includes evidence text visible on the repo detail page.
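Logarithmic normalization can be sketched like this. The cap value below is an assumption for illustration; StackQuadrant's actual reference points are not stated here.

```typescript
// Sketch of logarithmic normalization for wide-range metrics (e.g. star
// counts), mapping a raw count to a 0-10 score. The cap is an assumption:
// a value at or above it scores the full 10.
function logNormalize(value: number, cap = 100_000): number {
  const v = Math.max(0, value);
  // log1p keeps small counts meaningful while compressing huge ones
  const score = (Math.log1p(v) / Math.log1p(cap)) * 10;
  return Math.min(10, score);
}
```

With this shape, the difference between 100 and 1,000 stars moves the score far more than the difference between 50,000 and 51,000, which is the point of using a log scale for popularity metrics.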
Submit your AI-built project at /showcase/submit.
Paste your GitHub repo URL and click Import to auto-fill project name, description, tech stack (from languages and topics), and builder info.
Frameworks, CLI tools, and libraries without a live web presence can be submitted — the live project URL is optional.
Submit → verify your email → admin reviews → published. Quality is scored on whether it works, code quality, and shipped status.
StackQuadrant automatically keeps data fresh and publishes AI-generated blog content on a scheduled cadence.
Every 2 days, Claude analyzes trending AI/LLM news from HackerNews and Reddit, then generates an original blog post with developer-focused analysis. Posts are published automatically at /blog.
GitHub metrics sync every 6 hours. Quality scores recalculate automatically after each sync. New AI/LLM repos are discovered weekly. PainGaps scan queue processes every 15 minutes.
- GitHub Sync — every 6 hours
- Repo Scoring — 30 min after each sync
- Repo Discovery — weekly (Sunday 3am)
- Scan Queue — every 15 minutes
- Blog Writer — every 2 days at 10am
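For reference, the schedule above could be written as cron entries. This is an illustrative sketch, not the actual deployment configuration, and the "every 2 days" entry is approximate (cron day-of-month stepping resets at month boundaries):

```
0 */6 * * *    github-sync      # every 6 hours
30 */6 * * *   repo-scoring     # 30 min after each sync
0 3 * * 0      repo-discovery   # weekly, Sunday 3am
*/15 * * * *   scan-queue       # every 15 minutes
0 10 */2 * *   blog-writer      # roughly every 2 days at 10am
```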
Admins can manage all content through the admin dashboard.
- Add, edit, and remove AI tools with dimension scores
- Create and position tools on quadrant charts
- Publish benchmark results with structured metrics
- Define and rate tool stacks for workflows
- Add AI/LLM repos, trigger GitHub sync, score quality dimensions
- Manage repo categories (add, edit, reorder)
- Moderate showcase submissions (approve, reject, quality-score)
- Review community suggestions (approve, reject, request info)
- Triage bug reports and data quality issues
- Execute change jobs and track tool changelogs
All entities have a published status — unpublished items are only visible in the admin panel. Showcase projects follow a verification pipeline: submitted → email verified → admin review → published.
All data is available through a public REST API. No authentication required for read endpoints.
- GET /api/v1/tools — list all published tools with scores
- GET /api/v1/repos — list published repos with GitHub metrics
- GET /api/v1/showcase — list published showcase projects
- GET /api/v1/showcase/github-info?url= — fetch GitHub repo info for form auto-fill
- GET /api/v1/search — search across all entities

See the full API reference in the README.
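A minimal read-only client sketch. The base URL is a placeholder (the real host is not stated here), and the search query parameter name q is an assumption; only the endpoint paths come from the list above.

```typescript
// Minimal client sketch for the public read endpoints. No auth is needed.
const BASE = "https://stackquadrant.example"; // assumption: substitute the real host

// Build a search URL; the "q" parameter name is an assumption.
function searchUrl(base: string, query: string): string {
  return `${base}/api/v1/search?q=${encodeURIComponent(query)}`;
}

// Fetch all published tools with scores.
async function listTools(): Promise<unknown> {
  const res = await fetch(`${BASE}/api/v1/tools`);
  if (!res.ok) throw new Error(`GET /api/v1/tools failed: ${res.status}`);
  return res.json();
}
```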