Stop shipping AI regressions
LaunchGate runs evals on every change and blocks deployments when quality drops below your threshold.
import { LaunchGate } from "@launchgate/sdk";
const lg = new LaunchGate({ apiKey: process.env.LAUNCHGATE_API_KEY });
// context, query, and aiResponse come from your application
const result = await lg.run("rag-faithfulness", {
input: { context, query },
output: aiResponse,
});
if (result.status === "cleared") {
// All systems go — deploy with confidence
}

How it works
Define what good looks like
Create an eval suite with test cases and scoring criteria.
Run evals on every change
Trigger from your CI/CD pipeline, SDK, or CLI.
Gate deployment on results
Pass rate below threshold? Deployment blocked automatically.
Ship with confidence
Every release is validated. No more guessing whether quality held.
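The four steps above can be sketched end to end in plain TypeScript. The suite shape, scorer, stub model, and threshold below are illustrative assumptions, not the LaunchGate SDK API:

```typescript
// Hypothetical sketch of the gating flow; names and types are
// illustrative assumptions, not LaunchGate's actual API.
type TestCase = { input: string; expected: string };

// 1. Define what good looks like: test cases plus a scoring criterion.
const suite: TestCase[] = [
  { input: "2 + 2", expected: "4" },
  { input: "capital of France", expected: "Paris" },
];
const score = (output: string, expected: string): boolean =>
  output.trim() === expected; // exact-match scorer

// 2. Run evals on every change (a stubbed model stands in for yours).
const model = (input: string): string =>
  input === "2 + 2" ? "4" : "Paris";
const results = suite.map((tc) => score(model(tc.input), tc.expected));

// 3. Gate deployment on results: block if pass rate drops below threshold.
const passRate = results.filter(Boolean).length / results.length;
const THRESHOLD = 0.9;
const cleared = passRate >= THRESHOLD;

// 4. Ship with confidence only when the gate clears.
console.log(cleared ? "cleared" : "blocked");
```

In a real pipeline the gate's verdict would decide whether the deploy step runs at all, rather than just printing a status.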
Everything you need to gate AI quality
Quality gates that actually block
Define pass thresholds per suite. If evals fail, the deployment stops. No more shipping regressions.
5 scorer types built in
Exact match, regex, JSON schema, contains, and LLM-as-judge. Mix deterministic checks with AI evaluation.
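The deterministic scorers can be pictured as simple predicates over the model's output. These one-liners are an illustrative sketch under assumed behavior, not LaunchGate's implementations; the schema check in particular is a minimal stand-in for full JSON Schema validation:

```typescript
// Illustrative sketches of deterministic scorer types; these are
// assumptions about behavior, not LaunchGate's actual scorers.
const exactMatch = (out: string, expected: string) => out === expected;
const contains = (out: string, needle: string) => out.includes(needle);
const matchesRegex = (out: string, pattern: RegExp) => pattern.test(out);
const jsonSchemaLike = (out: string, requiredKeys: string[]): boolean => {
  // Minimal stand-in for schema validation: parse, then check required keys.
  try {
    const obj = JSON.parse(out);
    return requiredKeys.every((k) => k in obj);
  } catch {
    return false; // invalid JSON fails the check
  }
};

console.log(exactMatch("Paris", "Paris")); // true
console.log(contains("The answer is 42.", "42")); // true
console.log(matchesRegex("order #1234", /#\d{4}/)); // true
console.log(jsonSchemaLike('{"name":"Ada"}', ["name"])); // true
```

LLM-as-judge is the one scorer that cannot be reduced to a predicate like these, which is why mixing it with deterministic checks keeps most of a suite cheap and reproducible.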
GitHub Action in 2 minutes
Add one YAML file to your repo. LaunchGate posts results directly to your PR with pass/fail status.
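A workflow file for this setup might look something like the following. The action name, version, and inputs are assumptions for illustration only, not LaunchGate's published action:

```yaml
# Hypothetical workflow sketch; the action reference and its
# inputs are illustrative assumptions, not a published action.
name: LaunchGate evals
on: pull_request

jobs:
  evals:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Illustrative placeholder for the LaunchGate action:
      - uses: launchgate/run-evals@v1
        with:
          api-key: ${{ secrets.LAUNCHGATE_API_KEY }}
          suite: rag-faithfulness
          threshold: 0.9
```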
Track quality over time
See pass rates trending up or down. Catch regressions before they reach users.
SDK and CLI
Install with npm install @launchgate/sdk and run evals from code, or use the CLI in any CI/CD pipeline.
Bring your own keys
Use your OpenAI, Anthropic, Google, or Azure keys for LLM-as-judge scoring. Encrypted at rest.
Ready for launch?
Start with 500 free eval runs per month. No credit card required.
Get Started Free