INVARLOCK_
Edit-agnostic robustness certificates

Compare & certify robustness for edited LLMs.

InvarLock compares an edited or compressed model to its original baseline and quantifies how much quality and stability changed. A single run drives a GuardChain of checks: structural invariants, spectral guards, RMT (random matrix theory) outlier detection, and variance equalization. It then issues a signed robustness certificate indicating whether the edit stayed inside your regression budget (the quality drop you allow) and policy tier. The engine runs on-prem today; certification-as-a-service pilots and an attestation portal are under active development.

Request Access

Enterprise pilots / On-prem engine / Attestation design partners

GUARDCHAIN

Safety checks beyond perplexity

Every edited model passes through the same GuardChain: structural invariants, spectral guards for stability, RMT (random matrix theory) checks for outliers, and a variance guard that catches harmful shifts in activation variance. Instead of just reporting metrics, the chain can flag an edit as unsafe even when bulk perplexity looks fine.
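To make the idea concrete, here is a minimal sketch of one link in such a chain: a spectral guard that flags a layer when its top singular value drifts too far from the baseline's. The function name, threshold, and structure are illustrative assumptions, not InvarLock's actual implementation.

```python
import numpy as np

def spectral_guard(w_base, w_edit, max_ratio=1.5):
    """Hypothetical spectral guard: pass only if the edited layer's
    top singular value stays within max_ratio of the baseline's.
    (Illustrative sketch; not the InvarLock implementation.)"""
    s_base = np.linalg.svd(np.asarray(w_base), compute_uv=False)[0]
    s_edit = np.linalg.svd(np.asarray(w_edit), compute_uv=False)[0]
    ratio = float(s_edit / s_base)
    return ratio <= max_ratio, ratio

# A small perturbation (like a mild quantization error) should pass...
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
ok_small, _ = spectral_guard(w, w + 0.01 * rng.normal(size=(64, 64)))

# ...while a gross rescaling of the weights should be flagged.
ok_large, _ = spectral_guard(w, 3.0 * w)
```

The key property, which the prose above emphasizes, is that the guard compares edited and baseline models directly rather than relying on an aggregate quality score.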

EVALUATION

Regression, quantified and signed

Certificates are backed by paired, token-weighted Δlog-loss evaluation and BCa bootstrap confidence intervals over calibrated windows. The output is not just a score but a decision: did the edit stay within your regression budget for the selected policy tier?

Execution Log
invarlock certify \
  --baseline llama-2-7b \
  --subject llama-2-7b-4bit-rtn
PASS · balanced_4bit
cert_id: 4d11-7b-4bit
ppl_ratio=1.29  ve_gain=1.07  rmt_outliers=within_band  regression_tier=within_budget