- 1. Who Should Use Analytics?
- 2. Analytics Dashboard (All Projects View)
- 3. Project Analytics Overview
- 4. Project Overview
- 5. Test Case Creation Analytics
- 6. Performance Metrics
- 7. Trends Analysis
- 8. Top Failing Tests
- 9. Environment Comparison
- 10. Endpoint Analytics
- 11. Best Practices for Using Analytics
- 12. What You Can Expect from Analytics
The Analytics section in Shift Left API helps you understand how your APIs are performing, how reliable your test suites are, and where issues need attention. It converts test execution data into clear, actionable insights so teams can make faster, better-informed decisions.
This guide explains what you see, how to use it, and what you can expect from the Analytics section.
1. Who Should Use Analytics?
Analytics is designed for multiple roles:
- QA Engineers – Track test health, failures, and regressions
- Developers – Identify slow or unstable APIs
- Leads & Managers – Monitor quality trends and release readiness
2. Analytics Dashboard (All Projects View)
The Analytics Dashboard is the first screen you see. It provides a high-level overview of analytics across all projects.
2.1 What You Can Do
- View analytics for all projects in one place
- Quickly identify projects that need attention
- Open a project to see detailed analytics
2.2 What You See
- Project Name – Click to open project analytics
- Total Tests – Number of test cases in the project
- Success Rate – Percentage of passed test executions
- Average Response Time – Overall API performance indicator
- Failed Tests – Number of failed test cases
Use this view to decide where to focus first.
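If it helps to see how these columns relate, the following is a minimal sketch of the arithmetic behind them, assuming a simple list of execution records. The field names and sample data are illustrative assumptions, not Shift Left API's actual data model.

```python
# Minimal sketch of the arithmetic behind the dashboard columns.
# The record fields and sample data are illustrative assumptions,
# not Shift Left API's actual data model.
from statistics import mean

executions = [
    {"test": "login", "status": "passed", "response_time_ms": 220},
    {"test": "create_order", "status": "failed", "response_time_ms": 1340},
    {"test": "get_profile", "status": "passed", "response_time_ms": 410},
]

total = len(executions)
passed = sum(1 for e in executions if e["status"] == "passed")

success_rate = 100 * passed / total                                 # Success Rate (%)
avg_response_ms = mean(e["response_time_ms"] for e in executions)   # Average Response Time
failed_tests = total - passed                                       # Failed Tests

print(f"Success rate: {success_rate:.1f}%")
print(f"Average response time: {avg_response_ms:.0f} ms")
print(f"Failed tests: {failed_tests}")
```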
3. Project Analytics Overview
Selecting a project opens the Project Analytics view. This is the main analytics workspace for a project.
3.1 Controls Available
- Select a time range (Last 7, 30, or 90 days)
- Refresh analytics data
- Access project settings
4. Project Overview
The Project Overview provides a quick snapshot of overall test health and API behavior.
4.1 Key Metrics Explained
- Total Tests – Total test cases available in the project
- Success Rate – Indicates overall stability of APIs
- Average Response Time – Shows how fast APIs respond on average
- Failed Tests – Number of tests that failed during execution
4.2 How to Use This Section
- Check test health before releases
- Identify sudden drops in success rate
- Monitor performance changes over time
5. Test Case Creation Analytics
This section shows how test cases are being created and how test coverage grows over time.
5.1 What You See
- Test cases created over time (graph)
- User-wise contribution trends
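As a rough illustration of the data behind the creation graph, the sketch below counts new test cases per week and per author; the record structure is an assumption made purely for this example.

```python
# Sketch of the data behind the creation graph: count new test cases per
# ISO week and per author. The record structure is an assumption made
# purely for this example.
from collections import Counter
from datetime import date

created = [
    {"author": "priya", "created_on": date(2024, 5, 6)},
    {"author": "priya", "created_on": date(2024, 5, 7)},
    {"author": "sam", "created_on": date(2024, 5, 14)},
]

per_week = Counter(c["created_on"].isocalendar()[1] for c in created)  # ISO week number
per_author = Counter(c["author"] for c in created)

print("Test cases created per ISO week:", dict(per_week))
print("Test cases created per author:", dict(per_author))
```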
5.2 Why It Matters
- Ensures steady growth of test coverage
- Identifies gaps in test creation
- Provides visibility into team productivity
6. Performance Metrics
The Performance Metrics section focuses on API response times and throughput.
6.1 Response Time Distribution
APIs are grouped into response time ranges:
- Under 500 ms (fast)
- 500 ms – 1 second (moderate)
- Above 1 second (slow)
This helps quickly identify slow-performing APIs.
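Conceptually, the distribution is just a bucketing of response times into those three ranges. The sketch below shows the idea with made-up per-endpoint averages.

```python
# Sketch of the distribution: bucket each API's average response time into
# the three ranges shown in the UI. The sample figures are made up.
response_times_ms = {
    "GET /users": 180,
    "POST /orders": 760,
    "GET /reports": 2100,
}

buckets = {"fast (<500 ms)": [], "moderate (500 ms - 1 s)": [], "slow (>1 s)": []}

for endpoint, ms in response_times_ms.items():
    if ms < 500:
        buckets["fast (<500 ms)"].append(endpoint)
    elif ms <= 1000:
        buckets["moderate (500 ms - 1 s)"].append(endpoint)
    else:
        buckets["slow (>1 s)"].append(endpoint)

for label, endpoints in buckets.items():
    print(label, endpoints)
```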
6.2 Performance Summary
- Average response time
- Median response time
- 95th percentile response time
- 99th percentile response time
- Throughput (requests per second)
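If the percentile figures are unfamiliar, the sketch below computes the same summary statistics from a made-up sample of response times. The nearest-rank percentile method and the 60-second window are assumptions for the example, not necessarily how Shift Left API calculates them.

```python
# Sketch of the summary figures on a made-up sample of response times.
# The nearest-rank percentile method and the 60-second window are
# assumptions for the example, not necessarily how the tool computes them.
import math
from statistics import mean, median

samples_ms = [120, 180, 210, 250, 300, 340, 420, 610, 950, 1800]
window_seconds = 60  # assumed measurement window

def percentile(values, pct):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

print("Average:", mean(samples_ms), "ms")
print("Median:", median(samples_ms), "ms")
print("95th percentile:", percentile(samples_ms, 95), "ms")
print("99th percentile:", percentile(samples_ms, 99), "ms")
print("Throughput:", len(samples_ms) / window_seconds, "req/s")
```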
6.3 How to Use This Section
- Identify performance bottlenecks
- Validate performance improvements
- Monitor APIs against SLAs
7. Trends Analysis
Trends Analysis shows how test results change over time.
7.1 What You See
Each execution card displays:
- Success rate
- Number of tests executed
- Average response time
7.2 Why It Matters
- Detect regressions early
- Track quality improvements
- Identify unstable periods
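One practical way to catch a regression early from this data is to compare each execution's success rate with the previous one, as in the sketch below; the run data and the 5-point threshold are illustrative assumptions.

```python
# Sketch of early regression detection: flag any execution whose success
# rate drops noticeably below the previous one. The run data and the
# 5-point threshold are illustrative assumptions.
runs = [
    {"run": "2024-05-01", "success_rate": 97.0},
    {"run": "2024-05-02", "success_rate": 96.5},
    {"run": "2024-05-03", "success_rate": 88.0},  # sudden drop
]

THRESHOLD = 5.0  # percentage points

for previous, current in zip(runs, runs[1:]):
    drop = previous["success_rate"] - current["success_rate"]
    if drop >= THRESHOLD:
        print(f"Possible regression on {current['run']}: "
              f"success rate fell by {drop:.1f} points")
```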
8. Top Failing Tests
This section highlights the most frequently failing test cases.
8.1 What You See
For each test:
- Test name
- Failure rate
- Total executions
- Failed executions
- Last failure date
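Failure rate here is simply failed executions divided by total executions. The sketch below shows how such a ranking can be derived; the per-test counters are made up for illustration.

```python
# Sketch of the ranking: failure rate is failed executions divided by
# total executions, sorted in descending order. The counters are made up.
tests = [
    {"name": "checkout_flow", "executions": 40, "failures": 12},
    {"name": "login_smoke", "executions": 50, "failures": 2},
    {"name": "inventory_sync", "executions": 25, "failures": 10},
]

for t in tests:
    t["failure_rate"] = 100 * t["failures"] / t["executions"]

for t in sorted(tests, key=lambda t: t["failure_rate"], reverse=True):
    print(f"{t['name']}: {t['failure_rate']:.0f}% "
          f"({t['failures']}/{t['executions']} failed)")
```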
8.2 How to Use This Section
- Prioritize high-impact failures
- Reduce flaky tests
- Improve overall stability
9. Environment Comparison
Environment Comparison lets you compare test results across environments.
9.1 Supported Environments
- Development
- QA
- Staging
- Production
9.2 What You Can Learn
- Environment-specific issues
- Deployment inconsistencies
- Readiness for production releases
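A side-by-side view of success rates is usually enough to spot an environment-specific gap. The sketch below lines up illustrative pass counts per environment; the figures are not real data.

```python
# Sketch of an environment comparison: line up success rates per
# environment to spot environment-specific gaps. Figures are illustrative.
results = {
    "Development": {"passed": 190, "total": 200},
    "QA": {"passed": 185, "total": 200},
    "Staging": {"passed": 170, "total": 200},
    "Production": {"passed": 198, "total": 200},
}

for env, r in results.items():
    rate = 100 * r["passed"] / r["total"]
    print(f"{env:<12} {rate:5.1f}% passed")
```

A noticeably lower rate in one environment usually points to configuration or deployment differences rather than the tests themselves.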
10. Endpoint Analytics
Endpoint Analytics gives you visibility into individual API endpoints.
10.1 What You See
- Success and failure rates per endpoint
- Response time patterns per API
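Per-endpoint figures are an aggregation of individual results grouped by endpoint. The sketch below shows one way to derive them; the record fields are assumptions for the example, not the tool's actual schema.

```python
# Sketch of per-endpoint aggregation: group individual results by endpoint
# and derive a success rate and average response time for each. The record
# fields are assumptions for the example, not the tool's actual schema.
from collections import defaultdict
from statistics import mean

results = [
    {"endpoint": "GET /users", "passed": True, "response_time_ms": 150},
    {"endpoint": "GET /users", "passed": True, "response_time_ms": 170},
    {"endpoint": "POST /orders", "passed": False, "response_time_ms": 1200},
    {"endpoint": "POST /orders", "passed": True, "response_time_ms": 900},
]

by_endpoint = defaultdict(list)
for r in results:
    by_endpoint[r["endpoint"]].append(r)

for endpoint, rows in by_endpoint.items():
    success = 100 * sum(r["passed"] for r in rows) / len(rows)
    avg_ms = mean(r["response_time_ms"] for r in rows)
    print(f"{endpoint}: {success:.0f}% success, {avg_ms:.0f} ms average")
```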
10.2 Why It Matters
- Quickly identify problematic endpoints
- Optimize slow APIs
- Improve system reliability
11. Best Practices for Using Analytics
- Review analytics after every major execution
- Focus on trends rather than single runs
- Fix frequently failing tests first
- Compare environments before production deployment
- Use performance metrics to guide optimization
12. What You Can Expect from Analytics
By using the Analytics section, you can:
- Gain visibility into API quality
- Detect failures and regressions early
- Improve API performance
- Make confident release decisions