Understanding Analytics
Use Analytics to understand reliability and performance trends—spot regressions, flaky tests, and slow endpoints, then prioritize fixes with data.
Overview
Analytics turns execution history into actionable signals. Use it to answer:
- Are our runs getting more stable over time?
- Which endpoints are slow or flaky?
- Which failures are new regressions vs recurring issues?
- Are we ready to ship based on trend data?
Who uses Analytics
- QA: track failures, regressions, and flaky tests.
- Developers: spot unstable or slow endpoints.
- Leads/managers: monitor trends and release readiness.
Core dashboards and what they mean
Project overview
Use this as a health check (see the sketch after this list):
- total tests and execution volume
- success rate (stability)
- response time trends (performance)
- failures (where to drill in)
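Both headline numbers are simple aggregates over execution records. The sketch below is a minimal illustration, not the product's implementation; the `Execution` record and its `passed`/`response_ms` fields are hypothetical stand-ins for whatever your exported history contains.

```python
from dataclasses import dataclass

@dataclass
class Execution:
    # Hypothetical exported record; field names are illustrative only.
    test_name: str
    passed: bool
    response_ms: float

def success_rate(runs: list[Execution]) -> float:
    """Fraction of runs that passed (the dashboard's stability signal)."""
    return sum(r.passed for r in runs) / len(runs)

def p95_response_ms(runs: list[Execution]) -> float:
    """95th-percentile response time: less noisy than the max, more honest than the mean."""
    times = sorted(r.response_ms for r in runs)
    return times[int(0.95 * (len(times) - 1))]

runs = [
    Execution("GET /orders", True, 120.0),
    Execution("GET /orders", False, 950.0),
    Execution("POST /orders", True, 210.0),
]
print(f"success rate: {success_rate(runs):.0%}, p95: {p95_response_ms(runs):.0f} ms")
```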
Top failing tests / endpoints
Use this to prioritize work (see the sketch after this list):
- fix the highest-frequency failures first
- separate environment instability from deterministic failures
- identify tests that need better data setup or assertions
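One practical way to separate flaky behavior from deterministic failures is to check how often a test's outcome flips between consecutive runs: a deterministic regression fails and stays failed, while a flaky test alternates. This is an illustrative heuristic only; the `history` data and the 0.5 flip-rate threshold are invented for the example.

```python
# Hypothetical pass/fail history per test, ordered oldest to newest.
history: dict[str, list[bool]] = {
    "GET /users auth":  [True, False, True, False, True],   # alternates: likely flaky
    "POST /orders 500": [True, True, False, False, False],  # fails and stays failed: regression
}

def failure_count(outcomes: list[bool]) -> int:
    return sum(not ok for ok in outcomes)

def flip_rate(outcomes: list[bool]) -> float:
    """Fraction of consecutive run pairs where the outcome changed."""
    flips = sum(a != b for a, b in zip(outcomes, outcomes[1:]))
    return flips / max(len(outcomes) - 1, 1)

# Work the highest-frequency failures first; label the likely cause.
for name, outcomes in sorted(history.items(), key=lambda kv: -failure_count(kv[1])):
    label = "flaky?" if flip_rate(outcomes) >= 0.5 else "deterministic?"
    print(f"{name}: {failure_count(outcomes)} failures, {label}")
```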
Environment comparison
Compare dev, stage, and prod to detect (see the sketch after this list):
- deployment inconsistencies
- environment-specific auth/config differences
- data mismatches
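Outcome diffs across environments make drift easy to spot: a test that passes in dev and stage but fails only in prod usually points at auth, config, or data differences rather than at the test itself. A minimal sketch, assuming results are tagged with an environment name (all data here is hypothetical):

```python
from collections import defaultdict

# Hypothetical (test, environment, passed) results from the same build.
results = [
    ("GET /profile", "dev",   True),
    ("GET /profile", "stage", True),
    ("GET /profile", "prod",  False),  # prod-only failure: suspect auth/config/data
    ("GET /health",  "dev",   True),
    ("GET /health",  "prod",  True),
]

by_test: dict[str, dict[str, bool]] = defaultdict(dict)
for test, env, passed in results:
    by_test[test][env] = passed

# Flag tests whose outcome differs across environments.
for test, envs in by_test.items():
    if len(set(envs.values())) > 1:
        failing = sorted(e for e, ok in envs.items() if not ok)
        print(f"{test}: environment-specific failure in {failing}")
```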
Best practices
- Look for trends, not single-run outliers (see the sketch after this list).
- Break down failures by environment and endpoint.
- Use Analytics to drive targeted fixes (auth, timeouts, data seeding) before broad retries.
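"Trends, not outliers" can be made mechanical with a rolling window: compare the success rate of the most recent N runs against an earlier window and react only to sustained drops. The window size and 20-point threshold below are arbitrary example values:

```python
def rolling_success(outcomes: list[bool], window: int = 10) -> list[float]:
    """Success rate over each trailing window of runs."""
    return [sum(outcomes[i - window:i]) / window
            for i in range(window, len(outcomes) + 1)]

# Hypothetical outcomes: one isolated flake, then a sustained decline.
outcomes = [True] * 14 + [False] + [True] * 5 + [False, True, False, False, False]

trend = rolling_success(outcomes)
# Alert on a sustained drop, not on a single failing run.
if len(trend) >= 2 and trend[-1] < trend[0] - 0.20:
    print(f"stability fell from {trend[0]:.0%} to {trend[-1]:.0%}; investigate before retrying")
```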
Next steps
- Getting started · Install + connect your spec
- Configuration fundamentals · Stabilize runs
- Initial configuration · Users, licensing, projects
- Release notes · Updates and fixes
Still stuck?
Tell us what you’re trying to accomplish and we’ll point you to the right setup—installation, auth, or CI/CD wiring.