| Architecture era | AI-native, built for the LLM era | Pre-LLM era (founded 2007); AI features bolted on later |
| Self-hosted LLM (Ollama / vLLM / LM Studio) | Yes — Enterprise tier, runs in your perimeter | No native self-hosted LLM option |
| AI test generation | Native multi-provider (13+ LLM providers, BYO key) | Limited Tosca AI add-on; cloud-coupled |
| API specs leave perimeter | Never (self-hosted); not by default (SaaS with BYO LLM key) | Cloud-coupled AI features may require external calls |
| Deployment model | Self-hosted single-tenant or multi-tenant SaaS | On-prem heavyweight install; cloud option |
| API-first vs UI-first | API testing built as a first-class product | API testing alongside UI, RPA, mobile, packaged-app testing — broader but heavier |
| Time to first test | Minutes from spec import to running tests | Hours to days for model creation, environment, and licensing setup |
| Protocol coverage | REST, SOAP/WSDL, GraphQL — production-ready | REST, SOAP, plus broader application-stack scope |
| CI/CD integration | Six first-party plugins (Jenkins, GitHub Actions, Azure DevOps, GitLab CI, CircleCI, Bitbucket) | Jenkins, Azure DevOps; broader ALM toolchain plug-ins |
| MCP / AI agent integration | Native MCP server (Claude, Cursor, any MCP client) | Not available |
| RBAC + audit logs | Five built-in roles plus audit-log capture and export | Available; tied to broader Tricentis platform user management |
| SSO (SAML / OIDC / Azure AD) | On near-term roadmap; professional services available today | Available in enterprise tier |
| Licensing complexity | Tier-based (Free / Pro / Enterprise), annual contracts | Per-seat / per-license model with enterprise contract negotiation |
| Free evaluation | Forever-free Citizen Developer tier (single user) + 15-day full Enterprise trial | Demo + commercial pilot via sales |
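The CI/CD row above translates into a short pipeline definition. A minimal GitHub Actions sketch follows; the `apitest` CLI name, its subcommands, and its flags are illustrative assumptions, not the product's documented interface — consult the vendor's plugin docs for the real syntax:

```yaml
# Hypothetical workflow: import an OpenAPI spec and run the generated
# API tests on every push. Command names below are placeholders.
name: api-tests
on: [push, pull_request]

jobs:
  run-api-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # With a self-hosted LLM backend (e.g. Ollama) and a self-hosted
      # runner, the spec never leaves your network perimeter.
      - run: apitest import openapi.yaml
      - run: apitest run --reporter junit --output results.xml
```

The same shape applies to the other first-party plugins (Jenkins, Azure DevOps, GitLab CI, CircleCI, Bitbucket): import the spec, run the suite, publish JUnit-style results.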