Compare & certify robustness for edited LLMs.
InvarLock compares an edited or compressed model against its original baseline and quantifies how much quality and stability changed. A single run drives a GuardChain of checks: structural invariants, spectral guards, RMT (random matrix theory) outlier tests, and variance equalization. It then issues a signed robustness certificate indicating whether the edit stayed inside your defined regression budget (your allowed quality drop) and policy tier. The engine runs on-prem today; certification-as-a-service pilots and an attestation portal are under active development.
Request Access
Enterprise pilots / On-prem engine / Attestation design partners
Safety checks beyond perplexity
Every edited model passes through the same GuardChain: structural invariants, spectral guards for stability, RMT checks for weight outliers, and a variance guard that catches harmful shifts in variance. Instead of just reporting metrics, the chain can flag an edit as unsafe even when bulk perplexity looks fine.
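To make the idea concrete, here is a minimal sketch of what a spectral guard and a variance guard could compute on a single weight matrix. The function names, tolerance values, and the magnitude-pruning "edit" are hypothetical illustrations under stated assumptions, not InvarLock's actual API.

```python
import numpy as np

def spectral_guard(w_base, w_edit, tol=0.10):
    """Pass if the top singular value drifted by at most tol (relative)."""
    s_base = np.linalg.norm(w_base, ord=2)   # largest singular value
    s_edit = np.linalg.norm(w_edit, ord=2)
    drift = abs(s_edit - s_base) / s_base
    return drift <= tol, drift

def variance_guard(w_base, w_edit, tol=0.25):
    """Pass if the weight variance shifted by at most tol (relative)."""
    v_base, v_edit = w_base.var(), w_edit.var()
    shift = abs(v_edit - v_base) / v_base
    return shift <= tol, shift

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
w_pruned = np.where(np.abs(w) > 0.5, w, 0.0)   # a crude magnitude-pruning "edit"

ok_spec, drift = spectral_guard(w, w_pruned)
ok_var, shift = variance_guard(w, w_pruned)
print(f"spectral drift={drift:.3f} pass={ok_spec}; "
      f"variance shift={shift:.3f} pass={ok_var}")
```

Each guard returns a pass/fail decision plus the measured quantity, so a chain of such guards can veto an edit on any single failure while still logging all the diagnostics.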
Regression, quantified and signed
Certificates are backed by paired, token-weighted Δlog-loss evaluation and BCa (bias-corrected and accelerated) bootstrap confidence intervals over calibrated windows. The output is not just a score but a decision: did the edit stay within your regression budget for the selected policy tier?
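In concept, that decision can be sketched as follows. This is illustrative only: the budget value, window sizes, and decision rule (upper confidence bound at or below the budget) are assumptions, and a plain percentile bootstrap stands in for the BCa interval to keep the sketch dependency-free.

```python
import numpy as np

def regression_decision(loss_base, loss_edit, weights, budget=0.02,
                        n_boot=2000, seed=0):
    """Paired, token-weighted Δlog-loss with a percentile-bootstrap CI.

    Returns (delta, ci_low, ci_high, within_budget). A production
    certificate would use a BCa interval (bias- and skew-corrected);
    the plain percentile interval here keeps the sketch self-contained.
    """
    delta = loss_edit - loss_base                  # per-window paired differences
    w = weights / weights.sum()                    # token-count weights per window
    point = float(np.dot(w, delta))

    rng = np.random.default_rng(seed)
    n = len(delta)
    idx = rng.integers(0, n, size=(n_boot, n))     # resample windows, not tokens
    d_rs, w_rs = np.take(delta, idx), np.take(w, idx)
    boot = (d_rs * w_rs).sum(axis=1) / w_rs.sum(axis=1)
    lo, hi = np.percentile(boot, [2.5, 97.5])
    # Conservative rule: pass only if the whole CI sits within the budget.
    return point, float(lo), float(hi), bool(hi <= budget)

# Synthetic example: 200 evaluation windows with a tiny quality regression.
rng = np.random.default_rng(1)
base = rng.normal(2.0, 0.1, size=200)              # per-window log-loss, baseline
edit = base + rng.normal(0.005, 0.02, size=200)    # slightly degraded edited model
toks = rng.integers(200, 1000, size=200).astype(float)

delta, lo, hi, ok = regression_decision(base, edit, toks)
print(f"Δlog-loss={delta:+.4f}  95% CI=({lo:+.4f}, {hi:+.4f})  within budget: {ok}")
```

The paired design matters: differencing per window cancels window-to-window difficulty, so the interval reflects the edit's effect rather than the variance of the evaluation set.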