
gaslamp


/gaslamp I have 3 years of Shopify sales data. I want to forecast demand and build a serving API.


English | 简体中文 | 繁體中文


Go from a problem statement to a production ML model — without context-switching between tools.

gaslamp is an agentic skill for Claude Code that orchestrates the complete machine learning lifecycle. You describe what you want to build. Gaslamp drives the project: interviewing you on requirements, designing a data strategy, training a model, optimizing it for serving, and generating stakeholder-ready reports. Each phase is handled by a specialized buddy skill, all sharing a single project workspace.

See gaslamp.dev for full documentation.


One prompt, full ML pipeline.

/gaslamp I have 3 years of Shopify sales data. I want to forecast demand for next quarter.

[Gaslamp] Project: shopify_demand_forecast_2026-03-27/
[ml-buddy] Interview → XGBoost + Prophet ensemble, 90-day horizon, RMSE target
[ml-buddy] Data → lag features, holiday flags, promo indicators → 47k rows ready
[ml-buddy] Training → val RMSE 4.2% — beats naive baseline by 31%
[deploy-buddy] Serving → FastAPI + Docker image
[report-buddy] → forecast_report.html

One conversation, five phases, one production-ready system.
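The "beats naive baseline by 31%" line in the transcript above is a standard forecasting comparison: the model's RMSE against the RMSE of a naive last-value forecast. A minimal sketch with invented numbers (illustrative only, not Gaslamp's actual evaluation code):

```python
import math

def rmse(actual, predicted):
    """Root mean squared error between two equal-length series."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def naive_forecast(history, horizon):
    """Naive baseline: repeat the last observed value over the horizon."""
    return [history[-1]] * horizon

# Hypothetical daily demand series
history     = [100, 104, 98, 110, 107, 112]
actual_next = [115, 118, 120]   # what really happened
model_pred  = [114, 117, 121]   # the model's forecast

baseline    = naive_forecast(history, len(actual_next))
model_rmse  = rmse(actual_next, model_pred)
naive_rmse  = rmse(actual_next, baseline)
improvement = 1 - model_rmse / naive_rmse
print(f"model RMSE {model_rmse:.2f}, naive RMSE {naive_rmse:.2f}, "
      f"improvement over naive {improvement:.0%}")
```

The percentage reported in the transcript is this `improvement` figure: how much lower the model's error is relative to the do-nothing baseline.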


How is it different?

Most ML tools are single-phase. Gaslamp is the whole pipeline.

| Your situation | What actually happens |
| --- | --- |
| "I don't know what ML approach to use" | A structured interview maps your business goal to the right model class, data strategy, and success metric |
| "My data is messy and in the wrong format" | ml-buddy designs a full data strategy: EDA, dataset discovery, feature engineering, augmentation |
| "I trained a model but can't deploy it" | deploy-buddy auto-detects your hardware, picks the right engine (ONNX, TensorRT, vLLM, llama.cpp, MLX), and generates serving code |
| "My stakeholders can't read a Jupyter notebook" | report-buddy generates an interactive HTML demo and executive summary — no code required |
| "I need LLM fine-tuning, not just classification" | unsloth-buddy handles the full LoRA pipeline: interview, data, train, eval, export to GGUF |
| "I keep context-switching between 5 tools" | One shared gaslamp.md roadbook carries all decisions and state across every phase and buddy |

How it works

Gaslamp maintains a unified project directory and a shared state file (gaslamp.md) that every buddy reads and writes. You never lose context between phases.

my-project_2026-03-15/
├── gaslamp.md            ← shared state: decisions, environment, phase status
├── project_brief.md      ← from ml-buddy interview
├── data_strategy.md      ← from ml-buddy data phase
├── training_config.yaml  ← from ml-buddy model phase
├── eval_report.md        ← from ml-buddy evaluation phase
├── deployment_brief.md   ← from deploy-buddy
├── serving_report.md     ← from deploy-buddy
└── ml-buddy/
    ├── memory.md
    └── progress_log.md
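The workspace above can be scaffolded in a few lines. This is a hypothetical sketch of what project initialization might look like, not Gaslamp's actual code, and the `gaslamp.md` section names are invented for illustration:

```python
from datetime import date
from pathlib import Path

def init_workspace(name: str, root: Path = Path(".")) -> Path:
    """Create the shared project directory and seed the gaslamp.md state file."""
    project = root / f"{name}_{date.today():%Y-%m-%d}"
    (project / "ml-buddy").mkdir(parents=True, exist_ok=True)
    # Seed the shared state file that every buddy reads and writes.
    (project / "gaslamp.md").write_text(
        "# gaslamp.md — shared project state\n\n"
        "## Phase status\n"
        "- ideation: pending\n"
        "- training: pending\n"
        "- serving: pending\n"
        "- reporting: pending\n"
    )
    (project / "ml-buddy" / "progress_log.md").touch()
    return project

workspace = init_workspace("my-project", Path("/tmp"))
```

The remaining files (`project_brief.md`, `data_strategy.md`, and so on) are produced by the individual buddies as their phases complete.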

Phase 1 — Ideation & Training (ml-buddy) A 5-phase research companion: structured interview to define requirements, data strategy design (EDA, dataset discovery, augmentation), model selection across NLP, CV, tabular, time-series, anomaly detection, and recommender systems, training script generation, and rigorous evaluation against success criteria.

Phase 2 — Optimization & Serving (deploy-buddy) Auto-discovers your hardware and cloud environment, selects the right optimization engine (ONNX, TensorRT, vLLM, llama.cpp, MLX, Core ML), applies quantization (FP16, INT8, GPTQ, AWQ, GGUF), benchmarks the result, and generates production-ready serving code (FastAPI, Triton, Docker).
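The engine choice described above amounts to a decision table from coarse hardware and model traits to a backend. A hypothetical sketch of such logic, simplified for illustration (these rules are assumptions, not deploy-buddy's actual implementation):

```python
def select_engine(platform: str, accelerator: str, model_family: str) -> str:
    """Pick an optimization/serving engine from coarse traits.

    platform:     "linux" | "macos" | "windows"
    accelerator:  "nvidia" | "apple-silicon" | "cpu"
    model_family: "llm" | "generic"
    """
    if model_family == "llm":
        if accelerator == "nvidia":
            return "vLLM"
        if accelerator == "apple-silicon":
            return "MLX"
        return "llama.cpp"   # CPU-only LLM serving
    if accelerator == "nvidia":
        return "TensorRT"
    if platform == "macos":
        return "Core ML"
    return "ONNX"            # portable default

print(select_engine("linux", "nvidia", "llm"))  # vLLM under these rules
```

A real implementation would also weigh quantization support (FP16, INT8, GPTQ, AWQ, GGUF) and benchmark the candidates, as the phase description notes.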

Phase 3 — LLM Fine-tuning (unsloth-buddy, optional) When the project calls for fine-tuning a language model: automated environment setup, LoRA training (SFT, DPO, GRPO, ORPO, KTO, SimPO, Vision SFT) on NVIDIA GPUs via Unsloth or Apple Silicon via mlx-tune, evaluation, and export to GGUF / Hugging Face Hub.

Phase 4 — Cloud Pipelines (pipeline-buddy, optional) Turns your local training script into a production-ready, modular cloud pipeline using Tangle — automated retrains, versioning, and monitoring.

Phase 5 — Reporting (report-buddy) Generates executive summary reports and interactive HTML demos so non-technical stakeholders can understand and interact with your model — no code required.
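At its core, a stakeholder report like the one report-buddy produces is eval metrics rendered into standalone HTML. A minimal, hypothetical sketch (the template and field names are invented for illustration):

```python
from html import escape

def render_report(title: str, metrics: dict, summary: str) -> str:
    """Render an executive-summary HTML page from a metrics dict."""
    rows = "\n".join(
        f"<tr><td>{escape(k)}</td><td>{escape(v)}</td></tr>"
        for k, v in metrics.items()
    )
    return (
        "<!DOCTYPE html><html><head>"
        f"<title>{escape(title)}</title></head><body>"
        f"<h1>{escape(title)}</h1>"
        f"<p>{escape(summary)}</p>"
        f"<table>{rows}</table>"
        "</body></html>"
    )

html = render_report(
    "Demand Forecast, Q2",
    {"Validation RMSE": "4.2%", "Horizon": "90 days"},
    "Model beats the naive baseline by 31% on held-out data.",
)
```

The interactive demos go further (live inputs wired to the served model), but the principle is the same: everything a stakeholder needs, in one file they can open in a browser.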


Tech Packs

Gaslamp ships with vertical domain packs that pre-load industry-specific recipes, datasets, and model recommendations for common ML problems:

| Domain | Examples |
| --- | --- |
| Finance | Credit risk scoring, fraud detection, churn prediction |
| Retail & E-commerce | Sales forecasting, recommendation engines, demand planning |
| Healthcare | Patient readmission, clinical NLP, imaging classification |

Tech packs are fetched automatically on new projects via the gaslamp_tech_pack_fetcher MCP server.


Install

Claude Code

# From within Claude Code:
install skills/SKILL.md

# Or manually:
mkdir -p ~/.claude/skills/gaslamp
cp skills/SKILL.md ~/.claude/skills/gaslamp/SKILL.md

Invoke with /gaslamp or describe an ML project — the skill triggers automatically.

Gemini CLI

gemini extensions install https://github.com/TYH-labs/gaslamp --consent
# or locally:
gemini extensions install . --consent

Claude Code marketplace (once registered)

/plugin install gaslamp@TYH-labs/gaslamp

OpenAI Codex / any agent supporting the Agent Skills standard

cp skills/SKILL.md .agents/skills/gaslamp/SKILL.md

MCP Servers

Gaslamp uses two MCP servers that register automatically on first use:

  • gaslamp_tech_pack_fetcher — fetches vertical domain packs from gaslamp.sh
  • gaslamp_demo_builder — fetches interactive HTML demo templates for report-buddy

Both are installed via npx on demand — no manual setup needed.


Individual Skills

Each buddy can also be used standalone, outside of Gaslamp:

| Skill | Purpose |
| --- | --- |
| ml-buddy | ML ideation → trained model |
| deploy-buddy | Trained model → production serving |
| unsloth-buddy | LLM fine-tuning on any hardware |
| pipeline-buddy | Local script → cloud pipeline |
| report-buddy | Model results → stakeholder reports |

Changelog

  • 2026-03-27 — ml-buddy: Added time-series, anomaly detection, and recommender system model taxonomies; deploy-buddy: enriched optimize with hardware decision table, quantization reference, and benchmark methodology
  • 2026-03-18 — unsloth-buddy: Google Colab cloud GPU training via colab-mcp; SSE dashboard polish
  • 2026-03-17 — unsloth-buddy: Real-time training dashboard V3 — SSE streaming, ETA tracking, dynamic phase badges, GPU memory breakdown, GRPO/DPO panels, live Python server
  • 2026-03-16 — Added install instructions for Claude Code, Gemini CLI, OpenAI Codex, and any Agent Skills-compatible agent
  • 2026-03-15 — ml-buddy: Silent ML level assessment and knowledge warmup for users new to ML

License

See LICENSE.txt.

gaslamp.dev
