AI Data Processing
Last updated: March 3, 2026
This document explains how Ridge Sight's AI-powered pull request insight features process your data. It supplements our Terms of Service and Subscription & Billing Agreement. For our AI risk management framework, see the AI Risk Assessment.
1. Overview
Ridge Sight offers AI-powered pull request insights as a Premium feature. These insights include risk scores, change summaries, risk factor analysis, and confidence assessments for your open pull requests. All AI inference is performed via the Vercel AI Gateway, which exclusively routes requests to models operating under a Zero Data Retention (ZDR) policy.
This means that no data you send through Ridge Sight's AI features is stored, logged, or used for model training by the AI model providers. Data exists in the model provider's infrastructure only for the duration of the inference request and is discarded immediately after the response is generated.
2. What Data Is Sent to AI Models
When you request an AI insight for a pull request, we construct a prompt containing only pull request metadata. Specifically:
- Repository name — the full name of the repository (e.g., owner/repo).
- Pull request number and title
- Author username
- Draft status — whether the PR is a draft.
- Change statistics — number of additions, deletions, and changed files.
- Conflict status — whether the PR has merge conflicts and its mergeable state.
- Stale days — how many days since the PR was last updated.
- Top changed file paths — the names of the most-changed files (paths only, not contents).
- PR body/description — the text description written by the PR author.
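For illustration, the metadata payload described above could be modeled as follows. This is a sketch only: the field names and example values are our assumptions, not Ridge Sight's actual schema.

```typescript
// Illustrative shape of the PR metadata included in an AI prompt.
// Field names are assumptions for this sketch, not the real schema.
interface PrInsightMetadata {
  repositoryName: string;        // e.g. "owner/repo"
  prNumber: number;
  title: string;
  authorUsername: string;
  isDraft: boolean;
  additions: number;
  deletions: number;
  changedFiles: number;
  hasConflicts: boolean;
  mergeableState: string;
  staleDays: number;
  topChangedFilePaths: string[]; // paths only, never file contents
  body: string;                  // the PR description text
}

// A hypothetical example payload.
const example: PrInsightMetadata = {
  repositoryName: "octocat/hello-world",
  prNumber: 42,
  title: "Add retry logic to webhook handler",
  authorUsername: "octocat",
  isDraft: false,
  additions: 120,
  deletions: 34,
  changedFiles: 5,
  hasConflicts: false,
  mergeableState: "clean",
  staleDays: 3,
  topChangedFilePaths: ["src/webhooks/retry.ts", "src/webhooks/handler.ts"],
  body: "Adds exponential backoff to webhook delivery.",
};
```

Note that nothing in this shape carries file contents or diffs, which is the point of the "What Is NOT Sent" list below.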
What Is NOT Sent
The following data is never included in AI prompts:
- Source code — no file contents, code snippets, or inline code changes.
- Commit diffs — no line-by-line diff content.
- Comments or reviews — no review comments, PR discussion threads, or commit messages beyond the PR title and body.
- Secrets or credentials — no environment variables, API keys, tokens, or similar sensitive data.
- Personal information — no email addresses, real names, or other PII beyond the GitHub username of the PR author.
- Repository contents — no README files, configuration files, or any other file contents from your repositories.
3. The Vercel AI Gateway & Zero Data Retention
All AI inference requests are routed through the Vercel AI Gateway, a unified API proxy that connects to multiple AI model providers. The Vercel AI Gateway is operated by Vercel Inc.
Zero Data Retention Guarantee
The Vercel AI Gateway exclusively provides access to models operating under Zero Data Retention agreements with their providers. Under ZDR:
- No training on your data — your prompts and model responses are never used to train, fine-tune, or improve any AI model.
- No persistent storage — your data is not stored by the model provider beyond the duration of the single inference request. Once the response is returned, the input and output are discarded.
- No logging of content — the content of your prompts and responses is not logged by model providers for debugging, analytics, or any other purpose.
- Transit-only processing — your data passes through the provider's infrastructure solely to generate the inference response and is not retained in any form.
4. Available AI Models
Ridge Sight offers a range of AI models through the Vercel AI Gateway. All models listed below operate under the Zero Data Retention policy described above.
Included Model
Every Premium subscription includes a monthly allowance of AI insight calls using the included model at no additional cost. The current included model is:
- Mistral Ministral 3B — a lightweight, efficient model suitable for pull request metadata analysis.
Premium Models (Optional, Pay-Per-Use)
Premium subscribers may optionally select from higher-capability models at a per-call cost. Available providers include:
- OpenAI — GPT-4o, GPT-4o Mini, GPT-4.1, GPT-4.1 Mini, GPT-4.1 Nano, GPT-5, GPT-5 Mini, GPT-5 Nano, GPT-5.2, o3, o3-mini, o4-mini
- Anthropic — Claude Haiku 4.5, Claude Sonnet 4.6, Claude Opus 4.6
- Google — Gemini 2.5 Flash, Gemini 2.5 Flash Lite, Gemini 2.5 Pro, Gemini 3 Flash, Gemini 3 Pro
- Meta — Llama 3.1 8B
- DeepSeek — DeepSeek V3.2
- xAI — Grok 3 Mini, Grok 4, Grok 4 Fast
- Moonshot AI — Kimi K2
Model availability may change as providers update their offerings. We do not control upstream provider model availability. The current list and per-call pricing are always visible in the AI model selector within the Ridge Sight dashboard.
5. Data Flow Architecture
The following describes the complete lifecycle of an AI insight request:
1. User initiates — you click "Get AI Insight" on a pull request in the Ridge Sight dashboard.
2. Server constructs prompt — our server-side code assembles a prompt from the PR's metadata (title, author, change stats, file paths, etc.). No source code or diffs are included.
3. Gateway request — the prompt is sent to the Vercel AI Gateway over HTTPS (TLS-encrypted in transit) along with the selected model identifier.
4. Model inference — the Vercel AI Gateway routes the request to the selected model provider. The provider performs inference and returns a JSON response containing the summary, risk score, risk factors, and confidence level.
5. Zero Data Retention — the model provider discards the prompt and response immediately after inference. No data is stored or logged by the provider.
6. Response processing — our server validates, normalizes, and sanitizes the AI response (clamping risk scores to 0–100, truncating arrays, validating confidence levels).
7. Cache storage — the processed AI insight is cached in our database for up to 6 hours to avoid redundant API calls. The cache key is specific to the PR, model, and prompt version.
8. Client display — the insight is returned to your browser and displayed in the PR card.
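The response-processing step above (clamping risk scores to 0–100, truncating arrays, validating confidence levels) can be sketched as follows. Field names and limits are assumptions for this sketch, not Ridge Sight's actual implementation.

```typescript
// Illustrative sanitization of a raw model response.
// Field names and the truncation limit are assumptions.
type Confidence = "low" | "medium" | "high";

interface Insight {
  summary: string;
  riskScore: number;     // clamped to the 0-100 range
  riskFactors: string[]; // truncated to a fixed maximum
  confidence: Confidence;
}

const MAX_RISK_FACTORS = 5; // assumed limit for this sketch

function sanitizeInsight(raw: any): Insight {
  const score = Number(raw?.riskScore);
  return {
    summary: typeof raw?.summary === "string" ? raw.summary : "",
    // Clamp the risk score into the documented 0-100 range.
    riskScore: Number.isFinite(score) ? Math.min(100, Math.max(0, score)) : 0,
    // Keep only string entries and truncate the array.
    riskFactors: Array.isArray(raw?.riskFactors)
      ? raw.riskFactors
          .filter((f: unknown) => typeof f === "string")
          .slice(0, MAX_RISK_FACTORS)
      : [],
    // Fall back to "low" for unexpected confidence values.
    confidence: ["low", "medium", "high"].includes(raw?.confidence)
      ? raw.confidence
      : "low",
  };
}
```

Treating the model output as untrusted input in this way is what allows invalid or unexpected fields to be discarded rather than stored.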
6. Caching & Data Retention
AI Insight Cache
- AI-generated insights are cached in our PostgreSQL database for up to 6 hours from the time of generation.
- Heuristic fallback insights (generated without calling an AI model, e.g., when models are unavailable) are cached for 15 minutes.
- Each cache entry is keyed by repository, PR number, selected model, and prompt version. Changing your selected model or requesting an insight after the cache expires will trigger a fresh AI call.
- Cached insights contain only the AI-generated output (summary text, risk score, risk factors, change notes, confidence level) and the model identifier — not the original prompt.
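The cache keying described above amounts to composing one key per repository, PR, model, and prompt version. A minimal sketch, with assumed names and key format:

```typescript
// Illustrative cache-key construction. The key format and parameter
// names are assumptions for this sketch, not the real implementation.
function insightCacheKey(
  repo: string,          // e.g. "owner/repo"
  prNumber: number,
  model: string,         // selected model identifier
  promptVersion: string,
): string {
  return `insight:${repo}:${prNumber}:${model}:v${promptVersion}`;
}

// Switching models changes the key, so a fresh AI call is triggered
// even if an insight for the same PR is still cached for another model.
const keyA = insightCacheKey("octocat/hello-world", 42, "mistral/ministral-3b", "1");
const keyB = insightCacheKey("octocat/hello-world", 42, "openai/gpt-4o", "1");
```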
Usage Tracking
- We track the number of AI insight calls per user per month for billing and quota enforcement purposes.
- We track your monthly AI spending in cents for budget control features.
- We store your selected model preference so it persists across sessions.
- We do not store the content of prompts sent to the AI gateway, nor the raw model responses beyond the cached insight output.
7. Security Measures
- Encryption in transit — all communication between Ridge Sight's servers and the Vercel AI Gateway is encrypted using TLS (HTTPS).
- Encryption at rest — cached AI insights are stored in our Neon PostgreSQL database, which encrypts data at rest.
- API key protection — the AI Gateway API key is stored as a server-side environment variable and is never exposed to the browser or included in client-side code.
- Authentication required — AI insight endpoints require a valid authenticated session. Unauthenticated requests are rejected.
- Rate limiting — AI insight requests are subject to the same rate limiting as other API endpoints to prevent abuse.
- Input validation — all AI insight requests are validated against a strict schema (using Zod) before processing. Malformed requests are rejected.
- Output sanitization — AI model responses are parsed, validated, and sanitized before being stored or returned to the user. Invalid or unexpected fields are discarded.
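To illustrate the input-validation measure above: the service validates requests with Zod, but the same idea can be shown as a dependency-free guard. The field names and the repository-name pattern below are assumptions for this sketch.

```typescript
// Illustrative request validation: malformed requests yield null and
// are rejected. Field names and rules are assumptions for this sketch.
interface InsightRequest {
  repo: string;     // "owner/repo"
  prNumber: number;
  model: string;    // selected model identifier
}

function parseInsightRequest(input: unknown): InsightRequest | null {
  if (typeof input !== "object" || input === null) return null;
  const r = input as Record<string, unknown>;
  // Repository must look like "owner/repo".
  if (typeof r.repo !== "string" || !/^[\w.-]+\/[\w.-]+$/.test(r.repo)) return null;
  // PR number must be a positive integer.
  if (typeof r.prNumber !== "number" || !Number.isInteger(r.prNumber) || r.prNumber < 1) return null;
  // A non-empty model identifier is required.
  if (typeof r.model !== "string" || r.model.length === 0) return null;
  return { repo: r.repo, prNumber: r.prNumber, model: r.model };
}
```

A schema library like Zod expresses the same rules declaratively; the strict-schema approach means any request that does not match is rejected before an AI call is ever made.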
8. Your Rights & Controls
- Opt-in only — AI insights are never generated automatically. You must explicitly request an insight for each pull request.
- Model choice — you choose which AI model processes your data. You can switch models at any time or revert to the included model.
- Budget controls — you can set a monthly spending cap to limit AI usage charges. Once your cap is reached, premium model calls are blocked until the next month.
- Data minimization — we send only the minimum metadata necessary to generate a useful insight. No source code or sensitive repository contents are ever included.
- No profiling — AI insights are generated on-demand and are not used to build profiles, track behavior, or make automated decisions about you.
- Deletion — cached AI insights are automatically purged after their time-to-live (TTL) expires. If you cancel your Premium subscription, your AI usage data (counters, model preference, budget settings) remains in your account record but no new AI calls can be made.
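The budget control described above reduces to a simple pre-call check against your tracked monthly spending (recorded in cents, per Section 6). A sketch with assumed names:

```typescript
// Illustrative budget-cap check before a premium model call.
// Function and parameter names are assumptions for this sketch.
function canUsePremiumModel(
  spentCents: number,              // spending so far this month, in cents
  monthlyCapCents: number | null,  // null means no cap configured
): boolean {
  // No cap configured: premium calls are allowed.
  if (monthlyCapCents === null) return true;
  // Once the cap is reached, premium calls are blocked until next month.
  return spentCents < monthlyCapCents;
}
```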
9. Limitations & Disclaimers
- AI-generated insights are informational only and should not be the sole basis for merge, review, or deployment decisions.
- AI models may produce inaccurate, incomplete, or biased results. Risk scores and summaries are best-effort estimates, not guarantees.
- The quality of insights depends on the metadata available for each pull request. PRs with minimal descriptions or unusual structures may receive lower-quality analyses.
- We do not guarantee the availability of any specific AI model. Models may be temporarily unavailable due to upstream provider outages.
- While the Zero Data Retention policy means providers do not store your data, we cannot independently audit or verify each provider's internal compliance in real time. We rely on the contractual ZDR agreements established through the Vercel AI Gateway.
10. Changes to This Document
We may update this document to reflect changes in our AI features, available models, third-party provider policies, or data processing practices. Material changes will be communicated through the Service. The "Last updated" date at the top of this page reflects the most recent revision.
11. Contact
For questions about how your data is processed by AI features, please contact us at Jay(@)chkdsklabs.io or open an issue on our GitHub repository.