Product documentation · Updated October 3, 2025
AI Settings
Configure AI-assisted test generation—enable features, set guardrails, and manage provider/model credentials securely for consistent, schema-aware output.
Overview
AI settings control how AI-assisted features behave in your deployment—especially test generation. Configure these settings to keep outputs consistent, safe, and aligned with your organization’s policies.
Test generation settings
Typical controls include (see the sketch after this list):
- what types of tests can be generated (happy path, negative, auth, edge cases)
- guardrails for output format and completeness
- schema-awareness requirements so generated data matches your OpenAPI models
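The exact settings schema depends on your deployment; the sketch below is a minimal illustration only, and field names such as `allowed_test_types` and `enforce_openapi_schemas` are assumptions rather than the product's actual configuration keys.

```python
from dataclasses import dataclass, field

# Hypothetical settings object -- the field names are illustrative,
# not the product's documented configuration schema.
@dataclass
class TestGenerationSettings:
    # Which categories of tests the generator may produce.
    allowed_test_types: set[str] = field(
        default_factory=lambda: {"happy_path", "negative", "auth", "edge_case"}
    )
    # Guardrails for output format and completeness.
    output_format: str = "json"       # generated cases must serialize to JSON
    require_assertions: bool = True   # every generated test must contain assertions
    # Schema awareness: generated request/response data must validate against
    # the OpenAPI models referenced by the endpoint under test.
    enforce_openapi_schemas: bool = True


settings = TestGenerationSettings()
assert "negative" in settings.allowed_test_types
```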
Common generation options
- automatic suggestions when new endpoints/schemas are added
- “smart” variation suggestions to expand coverage deliberately
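As a rough illustration, the toggles below show one way these options might be modeled; none of the names (`suggest_on_new_endpoints`, `max_variations_per_endpoint`, and so on) come from the product itself.

```python
from dataclasses import dataclass

# Illustrative toggles only; option names are assumptions, not documented keys.
@dataclass
class GenerationOptions:
    # Propose tests automatically when new endpoints or schemas appear in the spec.
    suggest_on_new_endpoints: bool = True
    suggest_on_schema_changes: bool = True
    # "Smart" variation suggestions: cap how many variants are proposed per
    # endpoint so coverage grows deliberately rather than explosively.
    enable_smart_variations: bool = True
    max_variations_per_endpoint: int = 5
```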
Provider/model configuration
If your deployment supports multiple AI providers or models, configure (see the sketch after this list):
- the active provider/model
- request limits and timeouts
- safety and content guardrails
Keep settings stable so results are reproducible across environments.
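A minimal sketch of what a pinned provider/model configuration could look like, assuming placeholder names (`example-provider`, `example-model-2024-06`) and illustrative limit fields; your deployment's actual options will differ.

```python
from dataclasses import dataclass

# Hypothetical provider/model configuration; provider, model, limits, and
# guardrail fields are placeholders for whatever your deployment exposes.
@dataclass(frozen=True)  # frozen: discourages ad-hoc edits that hurt reproducibility
class ProviderConfig:
    provider: str = "example-provider"
    model: str = "example-model-2024-06"   # pin an explicit model version
    max_requests_per_minute: int = 60
    request_timeout_seconds: float = 30.0
    # Safety/content guardrails applied to every generation request.
    blocked_content_categories: tuple[str, ...] = ("secrets", "pii")

# Reuse the same frozen config in every environment (dev, CI, prod) so
# generated tests stay comparable across runs.
ACTIVE_CONFIG = ProviderConfig()
```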
API keys and secrets
- Store keys securely and restrict access to administrators only.
- Rotate keys periodically.
- Never embed secrets in templates, prompts, or test artifacts.
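One common pattern, sketched below under the assumption of a hypothetical environment variable `AI_PROVIDER_API_KEY`, is to read the key at runtime from the environment or a secrets manager so it never lands in version-controlled templates, prompts, or test artifacts.

```python
import os


def load_ai_api_key() -> str:
    """Read the provider API key from the environment (populated by a secrets
    manager) instead of embedding it in templates, prompts, or test artifacts.
    AI_PROVIDER_API_KEY is an example variable name, not one the product defines."""
    key = os.environ.get("AI_PROVIDER_API_KEY")
    if not key:
        raise RuntimeError(
            "AI_PROVIDER_API_KEY is not set; provision it through your "
            "secrets manager and restrict access to administrators."
        )
    return key
```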
Related articles
- Audit Logs
- Configuration
- Debug Logging
- Email Settings
- Email Templates
- License Management
Next steps
- Getting started · Install + connect your spec
- Configuration fundamentals · Stabilize runs
- Initial configuration · Users, licensing, projects
- Release notes · Updates and fixes
Still stuck?
Tell us what you’re trying to accomplish and we’ll point you to the right setup—installation, auth, or CI/CD wiring.