Drop in any trained ML model. Get a full audit report (explainability, fairness, drift) in seconds.
Most ML portfolios show model accuracy. MLens shows everything that matters after deployment: why a model decides what it decides, who it harms, and when it starts to degrade.
This is the tool you need for enterprise AI governance, regulatory compliance (GDPR, EU AI Act), and ML interviews that go beyond "what's your accuracy?"
| Module | What it does |
|---|---|
| SHAP Explainability | Auto-selects TreeExplainer / LinearExplainer / KernelExplainer. Global importance bar charts + local waterfall plots per prediction. |
| Fairness Evaluation | Demographic Parity Gap, Equalized Odds Gap, Disparate Impact (EEOC 4/5ths rule), and full per-group breakdown across any protected attribute. |
| Drift Detection | PSI (Population Stability Index) + KS-test per feature. Flags stable / moderate / significant shifts between training and production data. |
| HTML Report | One-page interactive audit report with Plotly charts, plain-English summary, and per-feature drill-down. |
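To make the fairness metrics above concrete, here is a minimal, dependency-light sketch of the three gap metrics over a binary classifier's predictions. This is plain NumPy for illustration, not MLens's actual fairlearn-based implementation; the function name `fairness_gaps` is hypothetical:

```python
import numpy as np

def fairness_gaps(y_true, y_pred, groups):
    """Illustrative gap metrics for a binary classifier.

    Demographic Parity Gap: max-min difference in selection rate P(y_hat=1)
    Disparate Impact:       min/max ratio of selection rates (EEOC 4/5ths rule)
    Equalized Odds Gap:     largest max-min group difference in TPR or FPR
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    sel, tpr, fpr = {}, {}, {}
    for g in np.unique(groups):
        m = groups == g
        sel[g] = y_pred[m].mean()                    # selection rate
        tpr[g] = y_pred[m][y_true[m] == 1].mean()    # true positive rate
        fpr[g] = y_pred[m][y_true[m] == 0].mean()    # false positive rate
    rates = list(sel.values())
    dp_gap = max(rates) - min(rates)
    di = min(rates) / max(rates)
    eo_gap = max(max(d.values()) - min(d.values()) for d in (tpr, fpr))
    return {"dp_gap": dp_gap, "disparate_impact": di, "eo_gap": eo_gap}
```

Under the EEOC 4/5ths rule, a disparate impact ratio below 0.8 is the conventional flagging threshold. The sketch assumes every group contains both positives and negatives; a production implementation needs to handle empty strata.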
```
            Your trained model
                    │
                    ▼
┌─────────────────────────────────────────────┐
│             ModelAuditor.run()              │
│                                             │
│  ① ShapAnalyzer      → ShapResult           │
│     TreeExplainer / Linear / Kernel         │
│                                             │
│  ② FairnessEvaluator → FairnessResult       │
│     fairlearn MetricFrame + flagging        │
│                                             │
│  ③ DriftDetector     → DriftResult          │
│     PSI (equal-freq bins) + KS-test         │
│                                             │
│  ④ ReportGenerator   → mlens_report.html    │
└─────────────────────────────────────────────┘
```
- SHAP: MLens picks the fastest explainer for your model family. Tree-based models use TreeExplainer (near-instant); black-box models fall back to KernelExplainer with k-means summarisation.
- Fairness: You pass a single sensitive feature (e.g. `df["gender"]`). MLens computes gap metrics and flags anything that exceeds configurable thresholds.
- Drift: Your training data is the reference. PSI bins are built on reference quantiles, then applied to production data. The KS-test provides a second opinion.
- Report: All results are assembled into a single interactive HTML file (no server required, fully offline).
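The PSI step described above (equal-frequency bins from reference quantiles, applied to production data) can be sketched in a few lines. This is an illustrative implementation, not the code in `drift_detector.py`; the stability thresholds in the docstring are the common rule of thumb:

```python
import numpy as np

def psi(reference, production, n_bins=10):
    """Population Stability Index with equal-frequency bins.

    Bin edges come from reference quantiles; both samples are
    histogrammed into those bins and their proportions compared.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate, > 0.25 significant.
    """
    reference, production = np.asarray(reference), np.asarray(production)
    edges = np.quantile(reference, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf   # catch out-of-range production values
    ref_pct = np.histogram(reference, edges)[0] / len(reference)
    prod_pct = np.histogram(production, edges)[0] / len(production)
    eps = 1e-6                              # avoid log(0) and division by zero
    ref_pct, prod_pct = ref_pct + eps, prod_pct + eps
    return float(np.sum((prod_pct - ref_pct) * np.log(prod_pct / ref_pct)))
```

The "second opinion" KS-test mentioned above would compare the same two samples with `scipy.stats.ks_2samp`, which returns a p-value rather than a binned divergence, so the two tests fail in different ways on the same shift.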
| Layer | Libraries |
|---|---|
| Explainability | shap >= 0.44 |
| Fairness | fairlearn >= 0.10, scikit-learn |
| Drift | scipy (KS-test), custom PSI implementation |
| Visualisation | plotly >= 5.18 |
| Report | jinja2, embedded Plotly HTML |
| Model Support | sklearn, XGBoost, LightGBM, (PyTorch via KernelExplainer) |
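The explainer auto-selection listed in the tables above can be pictured as a small dispatch on model attributes. The heuristic below is hypothetical (MLens's real logic may differ) and uses duck typing instead of importing shap or sklearn so it stays self-contained; the stub classes exist only for the demo:

```python
def pick_explainer(model):
    """Hypothetical dispatch mirroring the auto-selection described above.

    Tree ensembles expose tree-specific attributes (or are XGBoost/LightGBM),
    linear models expose `coef_`, and anything else is a black box.
    """
    name = type(model).__name__.lower()
    if hasattr(model, "tree_") or hasattr(model, "estimators_") \
            or "xgb" in name or "lgbm" in name:
        return "TreeExplainer"    # fast path: exact SHAP values for trees
    if hasattr(model, "coef_"):
        return "LinearExplainer"  # closed-form for linear models
    return "KernelExplainer"      # model-agnostic fallback (slow; pair with
                                  # a k-means-summarised background set)

# Stub models standing in for real estimators (demo only)
class TreeStub:
    tree_ = object()

class LinearStub:
    coef_ = [0.4, -0.1]

class OpaqueStub:
    pass
```

Duck typing keeps the dispatch working across sklearn, XGBoost, and LightGBM without importing any of them at selection time.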
```
pip install mlens
```

Or from source:

```
git clone https://github.com/yourusername/mlens.git
cd mlens
pip install -e ".[dev]"
```

```python
from mlens import ModelAuditor

# Any trained sklearn / XGBoost / LightGBM model
auditor = ModelAuditor(
    model=trained_model,
    X_train=X_train,
    X_test=X_test,
    y_test=y_test,
    sensitive_features=df_test["gender"],  # protected attribute
    feature_names=list(X.columns),
    model_name="MyProductionModel",
)

report = auditor.run()
report.save("audit_report.html")  # opens in any browser
```

Run the full demo:
```
python examples/quickstart.py
```

```
mlens/
├── mlens/
│   ├── auditor.py                 ← Main orchestrator (start here)
│   ├── explainability/
│   │   └── shap_analyzer.py       ← SHAP auto-selector
│   ├── fairness/
│   │   └── fairness_metrics.py    ← fairlearn wrapper + flagging
│   ├── drift/
│   │   └── drift_detector.py      ← PSI + KS-test per feature
│   └── report/
│       ├── html_generator.py      ← Jinja2 + Plotly report builder
│       └── templates/
│           └── report.html.j2
├── examples/
│   └── quickstart.py              ← Adult Income end-to-end demo
├── tests/
│   ├── test_auditor.py
│   ├── test_fairness.py
│   └── test_drift.py
├── requirements.txt
└── README.md
```
```
mlens/
│
├── mlens/                          ← Core package
│   ├── __init__.py                 ✅ v0.1.0
│   ├── auditor.py                  ✅ v0.1.0
│   ├── explainability/
│   │   └── shap_analyzer.py        ✅ v0.1.0
│   ├── fairness/
│   │   └── fairness_metrics.py     ✅ v0.1.0
│   ├── drift/
│   │   └── drift_detector.py       ✅ v0.1.0
│   ├── report/
│   │   ├── __init__.py             🔜 v0.2.0
│   │   ├── html_generator.py       🔜 v0.2.0
│   │   ├── pdf_generator.py        🔜 v0.2.0
│   │   └── templates/
│   │       └── report.html.j2      🔜 v0.2.0
│   └── cli/
│       ├── __init__.py             🔜 v0.2.0
│       └── main.py                 🔜 v0.2.0
│
├── dashboard/
│   └── app.py                      🔜 v0.2.0 (Streamlit)
│
├── examples/
│   └── quickstart.py               ✅ v0.1.0
│
├── tests/
│   ├── test_auditor.py             🔜 v0.2.0
│   ├── test_fairness.py            🔜 v0.2.0
│   └── test_drift.py               🔜 v0.2.0
│
├── docs/
│   └── assets/                     ✅ v0.1.0 (4 charts + banner)
│
├── README.md                       ✅ v0.1.0
├── CONTRIBUTING.md                 ✅ v0.1.0
├── setup.py                        🔜 v0.2.0
├── requirements.txt                🔜 v0.2.0 (updated)
└── .github/workflows/ci.yml        ✅ v0.1.0
```
```
pytest tests/ -v --cov=mlens --cov-report=term-missing
```

- PyTorch model support (native, no KernelExplainer fallback)
- PDF report export
- Intersectional fairness (multi-attribute)
- Concept drift detection (ADWIN, Page-Hinkley)
- CLI: `mlens audit model.pkl X_test.csv`
- Streamlit dashboard UI
Pull requests are welcome! See CONTRIBUTING.md for guidelines.
MIT © 2026 Your Name
- Lundberg & Lee, A Unified Approach to Interpreting Model Predictions (NeurIPS 2017)
- Bird et al., Fairlearn: A toolkit for assessing and improving fairness in AI (2020)
- Hardt et al., Equality of Opportunity in Supervised Learning (NeurIPS 2016)
- EEOC Uniform Guidelines on Employee Selection Procedures (1978)




