πŸ”¬ MLens β€” Explainable ML Audit Tool

MLens Banner

CI PyPI License: MIT Python 3.9+ Version

Drop in any trained ML model. Get a full audit report β€” explainability, fairness, drift β€” in seconds.

Most ML portfolios show model accuracy. MLens shows everything that matters after deployment: why a model decides what it decides, who it harms, and when it starts to degrade.

This is the tool you need for enterprise AI governance, regulatory compliance (GDPR, EU AI Act), and ML interviews that go beyond "what's your accuracy?"


✨ Features

| Module | What it does |
| --- | --- |
| 🧠 SHAP Explainability | Auto-selects TreeExplainer / LinearExplainer / KernelExplainer. Global importance bar charts + local waterfall plots per prediction. |
| βš–οΈ Fairness Evaluation | Demographic Parity Gap, Equalized Odds Gap, Disparate Impact (EEOC 4/5ths rule), and full per-group breakdown across any protected attribute. |
| πŸ“Š Drift Detection | PSI (Population Stability Index) + KS-test per feature. Flags stable / moderate / significant shifts between training and production data. |
| πŸ“„ HTML Report | One-page interactive audit report with Plotly charts, plain-English summary, and per-feature drill-down. |

βš™οΈ How It Works

Your trained model
      β”‚
      β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚              ModelAuditor.run()             β”‚
β”‚                                             β”‚
β”‚  β‘  ShapAnalyzer   β†’  ShapResult            β”‚
β”‚     TreeExplainer / Linear / Kernel        β”‚
β”‚                                             β”‚
β”‚  β‘‘ FairnessEvaluator  β†’  FairnessResult    β”‚
β”‚     fairlearn MetricFrame + flagging       β”‚
β”‚                                             β”‚
β”‚  β‘’ DriftDetector  β†’  DriftResult           β”‚
β”‚     PSI (equal-freq bins) + KS-test        β”‚
β”‚                                             β”‚
β”‚  β‘£ ReportGenerator  β†’  mlens_report.html   β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
  1. SHAP β€” MLens picks the fastest explainer for your model family. Tree-based models use TreeExplainer (near-instant); black-box models fall back to KernelExplainer with k-means summarisation.
  2. Fairness β€” You pass a single sensitive feature (e.g. df["gender"]). MLens computes gap metrics and flags anything that exceeds configurable thresholds.
  3. Drift β€” Your training data is the reference. PSI bins are built on reference quantiles, then applied to production data. KS-test provides a second opinion.
  4. Report β€” All results are assembled into a single interactive HTML file (no server required, fully offline).
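Step 3 can be sketched in a few lines of NumPy/SciPy. This is a simplified stand-in for MLens's internal drift logic, assuming the common PSI rule of thumb (< 0.1 stable, 0.1–0.25 moderate, > 0.25 significant):

```python
import numpy as np
from scipy import stats

def psi(reference, production, n_bins=10):
    """Population Stability Index with equal-frequency bins
    built on reference quantiles, as described in step 3."""
    edges = np.quantile(reference, np.linspace(0, 1, n_bins + 1))
    # Clip both samples into the reference range so outliers land in the end bins
    ref = np.clip(reference, edges[0], edges[-1])
    prod = np.clip(production, edges[0], edges[-1])
    ref_pct = np.histogram(ref, bins=edges)[0] / len(ref)
    prod_pct = np.histogram(prod, bins=edges)[0] / len(prod)
    # Small epsilon avoids log(0) and division by zero in empty bins
    ref_pct = np.clip(ref_pct, 1e-6, None)
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - ref_pct) * np.log(prod_pct / ref_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5000)      # reference (training) feature
stable = rng.normal(0.0, 1.0, 5000)     # production: same distribution
shifted = rng.normal(0.5, 1.0, 5000)    # production: mean has drifted

psi_stable, psi_shifted = psi(train, stable), psi(train, shifted)
ks_p = stats.ks_2samp(train, shifted).pvalue   # KS-test as a second opinion
```

The stable sample scores well under 0.1 while the shifted one exceeds it, and the KS p-value independently confirms the shift.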

πŸ› οΈ Tech Stack

| Layer | Libraries |
| --- | --- |
| Explainability | shap >= 0.44 |
| Fairness | fairlearn >= 0.10, scikit-learn |
| Drift | scipy (KS-test), custom PSI implementation |
| Visualisation | plotly >= 5.18 |
| Report | jinja2, embedded Plotly HTML |
| Model Support | sklearn, XGBoost, LightGBM (PyTorch via KernelExplainer) |

πŸš€ Installation

pip install mlens

Or from source:

git clone https://github.com/yourusername/mlens.git
cd mlens
pip install -e ".[dev]"

πŸƒ Quick Start

from mlens import ModelAuditor

# Any trained sklearn / XGBoost / LightGBM model
auditor = ModelAuditor(
    model=trained_model,
    X_train=X_train,
    X_test=X_test,
    y_test=y_test,
    sensitive_features=df_test["gender"],   # protected attribute
    feature_names=list(X_train.columns),
    model_name="MyProductionModel",
)

report = auditor.run()
report.save("audit_report.html")   # β†’ opens in any browser

Run the full demo:

python examples/quickstart.py

πŸ–ΌοΈ Visuals

SHAP Summary Plot | Fairness Dashboard

Drift Heatmap | Full Audit Report


πŸ“ Project Structure

mlens/
β”œβ”€β”€ mlens/
β”‚   β”œβ”€β”€ auditor.py                  ← Main orchestrator (start here)
β”‚   β”œβ”€β”€ explainability/
β”‚   β”‚   └── shap_analyzer.py        ← SHAP auto-selector
β”‚   β”œβ”€β”€ fairness/
β”‚   β”‚   └── fairness_metrics.py     ← fairlearn wrapper + flagging
β”‚   β”œβ”€β”€ drift/
β”‚   β”‚   └── drift_detector.py       ← PSI + KS-test per feature
β”‚   └── report/
β”‚       β”œβ”€β”€ html_generator.py       ← Jinja2 + Plotly report builder
β”‚       └── templates/
β”‚           └── report.html.j2
β”œβ”€β”€ examples/
β”‚   └── quickstart.py               ← Adult Income end-to-end demo
β”œβ”€β”€ tests/
β”‚   β”œβ”€β”€ test_auditor.py
β”‚   β”œβ”€β”€ test_fairness.py
β”‚   └── test_drift.py
β”œβ”€β”€ requirements.txt
└── README.md

πŸ“ Project Structure

mlens/
β”‚
β”œβ”€β”€ mlens/                            ← Core package
β”‚   β”œβ”€β”€ __init__.py                   βœ… v0.1.0
β”‚   β”œβ”€β”€ auditor.py                    βœ… v0.1.0
β”‚   β”œβ”€β”€ explainability/
β”‚   β”‚   └── shap_analyzer.py          βœ… v0.1.0
β”‚   β”œβ”€β”€ fairness/
β”‚   β”‚   └── fairness_metrics.py       βœ… v0.1.0
β”‚   β”œβ”€β”€ drift/
β”‚   β”‚   └── drift_detector.py         βœ… v0.1.0
β”‚   β”œβ”€β”€ report/
β”‚   β”‚   β”œβ”€β”€ __init__.py               πŸ†• v0.2.0
β”‚   β”‚   β”œβ”€β”€ html_generator.py         πŸ†• v0.2.0
β”‚   β”‚   β”œβ”€β”€ pdf_generator.py          πŸ†• v0.2.0
β”‚   β”‚   └── templates/
β”‚   β”‚       └── report.html.j2        πŸ†• v0.2.0
β”‚   └── cli/
β”‚       β”œβ”€β”€ __init__.py               πŸ†• v0.2.0
β”‚       └── main.py                   πŸ†• v0.2.0
β”‚
β”œβ”€β”€ dashboard/
β”‚   └── app.py                        πŸ†• v0.2.0 (Streamlit)
β”‚
β”œβ”€β”€ examples/
β”‚   └── quickstart.py                 βœ… v0.1.0
β”‚
β”œβ”€β”€ tests/
β”‚   β”œβ”€β”€ test_auditor.py               πŸ†• v0.2.0
β”‚   β”œβ”€β”€ test_fairness.py              πŸ†• v0.2.0
β”‚   └── test_drift.py                 πŸ†• v0.2.0
β”‚
β”œβ”€β”€ docs/
β”‚   └── assets/                       βœ… v0.1.0 (4 charts + banner)
β”‚
β”œβ”€β”€ README.md                         βœ… v0.1.0
β”œβ”€β”€ CONTRIBUTING.md                   βœ… v0.1.0
β”œβ”€β”€ setup.py                          πŸ†• v0.2.0
β”œβ”€β”€ requirements.txt                  πŸ†• v0.2.0 (updated)
└── .github/workflows/ci.yml          βœ… v0.1.0

πŸ§ͺ Running Tests

pytest tests/ -v --cov=mlens --cov-report=term-missing

πŸ—ΊοΈ Roadmap

  • PyTorch model support (native, no KernelExplainer fallback)
  • PDF report export
  • Intersectional fairness (multi-attribute)
  • Concept drift detection (ADWIN, Page-Hinkley)
  • CLI: mlens audit model.pkl X_test.csv
  • Streamlit dashboard UI

🀝 Contributing

Pull requests are welcome! See CONTRIBUTING.md for guidelines.


πŸ“œ License

MIT © 2026 Your Name


πŸ“š References

  • Lundberg & Lee, A Unified Approach to Interpreting Model Predictions (NeurIPS 2017)
  • Bird et al., Fairlearn: A toolkit for assessing and improving fairness in AI (2020)
  • Hardt et al., Equality of Opportunity in Supervised Learning (NeurIPS 2016)
  • EEOC Uniform Guidelines on Employee Selection Procedures (1978)
