## v0.20.0 (February 10, 2026)
### Added

- `qryon scan --ai` now triages static analysis findings using AI (previously the flag was a no-op)
- Each security-relevant finding is sent to the AI with ~30 lines of surrounding code context for triage
- The AI returns a verdict (`true_positive`, `false_positive`, `needs_review`), a confidence score, an explanation, and a fix suggestion
- High-confidence false positives (confidence >= 0.8) are automatically removed from results
- New `TriageResult` struct with `triage_finding()` and `extract_code_context()` in the AI engine
- Three AI providers supported: Claude (Anthropic), OpenAI, and local Ollama
- Retry with exponential backoff (2s/4s/8s) on rate-limited API responses
- Caps at 50 findings per scan by default to control cost
- `ai_verdict`, `ai_explanation`, and `ai_confidence` fields on the `Finding` struct (optional, serde-skipped when `None`), automatically included in JSON and SARIF output when `--ai` is used
- New `[ai]` section in `qryon.toml`: `enabled`, `provider`, `model`, `max_findings`
- CLI args (`--ai-provider`, `--ai-model`) override the TOML config
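A sketch of what the new `[ai]` section might look like in `qryon.toml`. Only the key names come from this release; the values shown are illustrative assumptions:

```toml
# Hypothetical example of the new [ai] section.
# Key names are from this release; values are illustrative.
[ai]
enabled = true
provider = "claude"      # assumed values: "claude", "openai", or "ollama"
model = "claude-sonnet"  # model id is an assumption; use your provider's model name
max_findings = 50        # default per-scan cap, per the release notes
```

Per the release notes, `--ai-provider` and `--ai-model` on the command line take precedence over these values.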
### Fixed

- Anthropic API version updated from `2023-06-01` to `2025-01-01`
- JSON extraction from AI responses now validates the parsed JSON before returning (prevents mismatched-brace errors)
- All three AI providers (Claude, OpenAI, Local) now respect triage system prompts via `request.context`
- Helpful error message with setup instructions when `--ai` is used without an API key
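The JSON-extraction fix above can be sketched roughly as follows. This is a minimal illustration, not qryon's actual code: `extract_json` is a hypothetical helper that pulls the first balanced JSON object out of an AI response, tracking string literals so braces inside strings do not confuse the depth count. A real implementation would then parse the returned slice (e.g. with `serde_json`) to validate it before use.

```rust
/// Hypothetical sketch: find the first balanced `{ ... }` object in `text`.
/// Returns `None` if no complete object is present (e.g. a truncated response).
fn extract_json(text: &str) -> Option<&str> {
    let start = text.find('{')?;
    let mut depth = 0i32;
    let mut in_str = false;
    let mut escape = false;
    for (i, c) in text[start..].char_indices() {
        if in_str {
            // Inside a string literal: only care about escapes and the closing quote.
            if escape {
                escape = false;
            } else if c == '\\' {
                escape = true;
            } else if c == '"' {
                in_str = false;
            }
        } else {
            match c {
                '"' => in_str = true,
                '{' => depth += 1,
                '}' => {
                    depth -= 1;
                    if depth == 0 {
                        // Balanced object found; return the exact slice.
                        return Some(&text[start..start + i + c.len_utf8()]);
                    }
                }
                _ => {}
            }
        }
    }
    // Braces never balanced: signal failure instead of returning a broken fragment.
    None
}
```

Returning `None` on imbalance (rather than a best-effort substring) is what prevents the mismatched-brace errors this release mentions.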
### Changed

- AI analysis approach: finding-based triage (send findings plus code context) instead of whole-file scanning
- Security finding selection broadened to include rule ID pattern matching (security, injection, xss, etc.) and severity-based selection
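The broadened selection could be sketched as a simple predicate. This is an illustrative sketch, not qryon's implementation: the pattern list uses only the rule ID substrings named in the entry above (the real list is likely longer), and treating `high`/`critical` severities as security-relevant is an assumption about what "severity-based selection" means here.

```rust
/// Hypothetical sketch of security-relevant finding selection:
/// a finding qualifies if its rule ID matches a known security
/// pattern or its severity is high enough.
fn is_security_relevant(rule_id: &str, severity: &str) -> bool {
    // Substrings taken from the changelog entry; the real list may differ.
    const PATTERNS: [&str; 3] = ["security", "injection", "xss"];
    let id = rule_id.to_ascii_lowercase();
    PATTERNS.iter().any(|p| id.contains(p))
        // Severity threshold is an assumption, not from the release notes.
        || matches!(severity.to_ascii_lowercase().as_str(), "high" | "critical")
}
```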