One Docker command trains your model, proves fairness across every demographic, and generates the compliance certificate. Your data never leaves your machine.
$ docker run --rm paragon/fairness demo
Generating synthetic data... done
Training models at d=[8, 16, 32, 64, 128]... done (12s)
Models saved: ./output/models/
Certificate: ./output/fairness_certificate_d32.json
Accuracy: 0.891 | Fairness gap: 0.024 | Leakage: 3.2%
The Problem
Until now, making AI fair required specialized ML engineers, custom evaluation pipelines, and months of iteration. Paragon replaces all of that with one command.
Architecture
In demo mode, everything runs in one Docker command. In production, Prep and Train deploy separately across your security perimeters.
paragon prep --source snowflake --config prep.yaml
paragon train --input features/ --d-sweep
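For reference, the prep.yaml consumed by `paragon prep` might look like the sketch below. Every key name here is an assumption for illustration, not the shipped schema:

```yaml
# Hypothetical prep.yaml — illustrative field names, not Paragon's actual schema
source:
  type: snowflake
  warehouse: ANALYTICS_WH      # assumed connection fields
  table: PATIENT_FEATURES
columns:
  sensitive: ancestry          # protected attribute to control leakage on
  target: outcome              # label the model predicts
output:
  dir: features/               # consumed by `paragon train --input features/`
```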
Both products collapse into one command:
docker run paragon/fairness demo
CSV and Parquet shipped. SQL connectors and FHIR coming in v1.1.
Getting Started
No ML expertise needed. No code to write. A compliance officer can run this. The entire training engine ships inside the Docker image — compiled and patent-protected.
One command. One 1.6 GB image. Inside: the full training pipeline — GLE encoder, dual-stream architecture, bottleneck optimizer. All compiled. No readable source code.
docker pull ghcr.io/paragon-dao/fairness
Specify which column is sensitive (ancestry, gender, age) and which is your target. The engine sweeps 5 bottleneck dimensions automatically. Your data stays on your machine.
paragon train --d-sweep --input data.csv
You receive trained model checkpoints (your IP, your deployment) and a Fairness Certificate — per-group performance, leakage score, compliance mapping. Show the certificate to auditors. Deploy the model.
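The certificate is a small JSON file of metrics. Its exact schema isn't published here; a plausible sketch, using the numbers from the demo run above and illustrative field names:

```json
{
  "model": "model_d32",
  "bottleneck_dim": 32,
  "metrics": {
    "accuracy": 0.891,
    "fairness_gap": 0.024,
    "leakage": 0.032
  },
  "per_group": {
    "EUR": { "accuracy": 0.90 },
    "AFR": { "accuracy": 0.88 }
  },
  "compliance": ["EU AI Act Art. 10", "GINA"]
}
```

No raw data, no patient records — just the metrics an auditor needs.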
fairness_certificate_d32.json
The Science
Our patented bottleneck dimension d controls sensitive-attribute leakage 21x more effectively than adversarial training. Pick your fairness-accuracy tradeoff with a single number. An auditor can inspect it in 30 seconds.
Accuracy cost at fairest setting: only 3.6%. Validated on 1000 Genomes · 5 ancestries · 6 clinical traits.
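The principle can be illustrated with a toy sketch — plain NumPy, a random orthonormal projection standing in for the bottleneck, and a least-squares probe standing in for an attacker. This is not the patented GLE encoder, just the general idea: a narrower bottleneck leaves less sensitive-attribute signal for a downstream probe to recover.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 4000, 64

# Synthetic data: a binary sensitive attribute shifts the features along
# one hidden direction; everything else is isotropic noise.
group = rng.integers(0, 2, n)
mu = rng.normal(size=p)
mu *= 3.0 / np.linalg.norm(mu)          # 3 noise std-devs of separation at full width
X = rng.normal(size=(n, p)) + np.outer(group, mu)

def probe_accuracy(Z, y, n_train=3000):
    """Fit a least-squares linear probe to recover y from Z; return held-out accuracy."""
    Ztr, Zte, ytr, yte = Z[:n_train], Z[n_train:], y[:n_train], y[n_train:]
    A = np.column_stack([Ztr, np.ones(len(Ztr))])        # add a bias column
    w, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    pred = np.column_stack([Zte, np.ones(len(Zte))]) @ w > 0.5
    return (pred == yte).mean()

for d in (4, 16, 64):
    Q, _ = np.linalg.qr(rng.normal(size=(p, d)))         # random orthonormal bottleneck
    acc = probe_accuracy(X @ Q, group)
    print(f"d={d:3d}  probe accuracy (leakage) = {acc:.3f}")
```

In this synthetic setup the probe recovers the group far better at d=64 than at d=4: shrinking d shrinks leakage, at some cost to any task signal sharing those dimensions — the tradeoff the single number d lets you pick.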
Try the interactive demo →
Use Cases
PRS models trained on European data lose up to 78% accuracy in non-European populations. Our training pipeline forces equitable prediction across all ancestries — automatically.
EU AI Act Art. 10 · GINA
FDA FDORA 2022 requires diversity action plans. AI in trial design and patient stratification must demonstrate subgroup fairness. Our pipeline trains fair models your team can deploy directly.
FDA AI/ML · FDORA 2022
Breathing models, voice biomarkers, EEG analysis — any health AI shipping to real patients needs fair models trained across demographics. Get trained models and the certificate to prove it.
EU AI Act Art. 15 · NYC Law 144
Why Now
Fairness auditing is no longer optional. If your AI touches diverse populations, you need provable fairness — or you don't ship.
EU AI Act — Articles 10 & 15: high-risk AI must demonstrate data governance and accuracy across subgroups. Fines up to 7% of global revenue.
NYC Local Law 144 — annual bias audit required for automated employment decisions. Already enforced in New York City.
FDA — medical AI devices must report subgroup performance. Demographic bias is classified as a safety issue.
Privacy by Architecture
The Docker container makes zero outbound connections. No telemetry, no phone-home, no data exfiltration. Air-gapped compatible.
Cython-compiled to native machine code. No readable Python source. Your model architecture and our IP — both protected.
Only the Fairness Certificate — a small JSON of metrics, no patient data — can be published to paragondao.org/verify, and only if you choose to.
Under the Hood
Protected by four US provisional patents. Validated across multiple domains and independently reviewed by expert panels.
Pricing
No credit card. Pull the Docker image. Run the demo. See provable fairness in seconds — on real genomic data from 1000 Genomes: 2,504 individuals across 5 ancestries.