# Getting Started

This is the shortest end-to-end walkthrough for a new user who wants to see what `llm-batch-pipeline` does in practice.

It covers two real workflows:

- OpenAI Batch API with `gpt-4o-mini`
- A 3-way sharded Ollama setup at `http://nanu:11435`, `http://nanu:11436`, and `http://nanu:11437`

These instructions were tested against the live services on 2026-04-09.
| 11 | + |
## What You Will Do

You will:

- create a batch job with the built-in `spam_detection` plugin
- add two sample `.eml` files
- render a batch JSONL file
- submit it to a backend
- validate the model output against a Pydantic schema
- evaluate the predictions against ground truth
## Prerequisites

- Python 3.13+
- `uv`
- dependencies installed:

```bash
uv sync
```

- for OpenAI: a `.env` file in the repo root with `OPENAI_API_KEY=...`

The CLI automatically loads `.env` from the repository root.
| 36 | + |
## Offline Sanity Check

Before using any backend, verify the install:

```bash
uv run llm-batch-pipeline list
uv sync --group dev
uv run pytest -q
```
| 46 | + |
## OpenAI Batch Walkthrough

### 1. Create a batch directory

```bash
uv run llm-batch-pipeline init getting_started_openai --plugin spam_detection --model gpt-4o-mini
```

This creates a directory like `batches/batch_001_getting_started_openai`.
Use that path in the commands below as `<openai-batch-dir>`.
| 57 | + |
### 2. Copy the built-in prompt and schema into the batch

```bash
cp src/llm_batch_pipeline/examples/spam_detection/prompt.txt <openai-batch-dir>/prompt.txt
cp src/llm_batch_pipeline/examples/spam_detection/schema.py <openai-batch-dir>/schema.py
```
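The copied `schema.py` is the source of truth for validation. As a rough stdlib-only sketch of the shape it enforces, here is an illustrative model: the field names `classification` and `confidence` come from the `evaluate` flags later in this guide, but everything else (the class and function names, the exact constraints) is an assumption, not the real Pydantic code.

```python
from dataclasses import dataclass

# Illustrative stdlib sketch only -- the real, Pydantic-based model lives in
# the schema.py copied above. Field names match the flags passed to
# `evaluate` later in this guide (--label-field classification,
# --confidence-field confidence); everything else here is an assumption.

@dataclass
class SpamVerdict:
    classification: str  # expected to be "ham" or "spam"
    confidence: float    # expected to be in [0.0, 1.0]

def parse_verdict(raw: dict) -> SpamVerdict:
    """Reject rows that a schema of this shape would refuse."""
    label = raw.get("classification")
    if label not in ("ham", "spam"):
        raise ValueError(f"unexpected classification: {label!r}")
    confidence = float(raw.get("confidence", -1.0))
    if not 0.0 <= confidence <= 1.0:
        raise ValueError(f"confidence out of range: {confidence}")
    return SpamVerdict(label, confidence)
```

The `validate` command performs the real check against `schema.py`; this sketch only conveys the idea of schema-gated output.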
| 64 | + |
### 3. Add two sample emails

```bash
cat > <openai-batch-dir>/input/ham__team_sync.eml <<'EOF'
From: alice@example.com
To: bob@example.com
Subject: Team sync tomorrow
Date: Mon, 1 Jan 2024 10:00:00 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Hi Bob,

Can we meet tomorrow at 3pm to review the release checklist and assign the last two action items?

Thanks,
Alice
EOF

cat > <openai-batch-dir>/input/spam__million_prize.eml <<'EOF'
From: prizes@claim-now.biz
To: victim@example.com
Subject: URGENT!! Claim your 1000000 dollar prize now
Date: Mon, 1 Jan 2024 11:00:00 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Congratulations!

You have been selected to receive a 1000000 dollar cash prize. Click http://claim-prize-now.example.com immediately and send your bank details today to avoid losing your winnings.
EOF
```
| 97 | + |
### 4. Add a category map for evaluation

```bash
cat > <openai-batch-dir>/evaluation/category-map.json <<'EOF'
{
  "ham": "ham",
  "spam": "spam"
}
EOF
```

The evaluator infers ground truth from the `ham__` and `spam__` filename prefixes, mapping each prefix through this file.
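Concretely, the convention amounts to: split the filename on `__`, take the prefix, and look it up in `category-map.json`. A minimal sketch (the helper name is hypothetical; the evaluator's internals may differ):

```python
import json

# Hypothetical sketch of the filename-prefix convention; the evaluator's
# actual implementation may differ. The JSON string below mirrors the
# category-map.json written in the step above.
category_map = json.loads('{"ham": "ham", "spam": "spam"}')

def ground_truth(filename: str) -> str:
    """Map the part of the filename before '__' to a ground-truth label."""
    prefix = filename.split("__", 1)[0]
    return category_map[prefix]

print(ground_truth("ham__team_sync.eml"))       # ham
print(ground_truth("spam__million_prize.eml"))  # spam
```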
| 110 | + |
### 5. Render the batch JSONL

```bash
uv run llm-batch-pipeline render --batch-dir <openai-batch-dir> --plugin spam_detection
```

This writes the request payload to `<openai-batch-dir>/job/batch-00001.jsonl`.
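The exact body depends on the plugin's prompt and schema settings, but each line in an OpenAI Batch input file follows the documented request envelope (`custom_id`, `method`, `url`, `body`). An illustrative line, with a simplified stand-in body:

```python
import json

# The envelope fields (custom_id, method, url, body) follow the documented
# OpenAI Batch API input format; the body contents below are a simplified
# stand-in for what the spam_detection plugin actually renders.
line = {
    "custom_id": "ham__team_sync",
    "method": "POST",
    "url": "/v1/chat/completions",
    "body": {
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "system", "content": "Classify the email as ham or spam."},
            {"role": "user", "content": "From: alice@example.com ..."},
        ],
    },
}
print(json.dumps(line))
```

Inspecting the rendered file before submitting is the cheapest way to catch prompt or model mistakes.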
| 118 | + |
### 6. Submit to OpenAI Batch API

```bash
uv run llm-batch-pipeline submit --batch-dir <openai-batch-dir> --backend openai
```

Notes:

- the command waits for the batch to complete by default
- in the live test for this guide, a 2-request batch took about 45 minutes to finish
- batch metadata is saved to `<openai-batch-dir>/output/submission.json`

If you do not want to keep the terminal open, submit with `--no-wait` and resume polling later with `--resume-batch-id`:

```bash
uv run llm-batch-pipeline submit --batch-dir <openai-batch-dir> --backend openai --no-wait
uv run llm-batch-pipeline submit --batch-dir <openai-batch-dir> --backend openai --resume-batch-id <batch-id>
```
| 137 | + |
### 7. Validate the output

```bash
uv run llm-batch-pipeline validate --batch-dir <openai-batch-dir>
```

This reads `<openai-batch-dir>/output/output.jsonl` and writes validated rows to `<openai-batch-dir>/results/validated.json`.

### 8. Evaluate the predictions

```bash
uv run llm-batch-pipeline evaluate \
  --batch-dir <openai-batch-dir> \
  --label-field classification \
  --confidence-field confidence \
  --positive-class spam
```

This prints accuracy, macro F1, per-class metrics, and the confusion matrix to the terminal.

In the tested run, the OpenAI batch classified both sample emails correctly.
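With only two examples the numbers are trivial, but the metrics themselves are standard. A toy recomputation for the perfect two-email run (the `classification` field name is taken from the `--label-field` flag above; the CLI's exact output format may differ):

```python
from collections import Counter

# Toy recomputation of the headline metrics for the two-email run.
# Field names match the evaluate flags; this is not the CLI's own code.
rows = [
    {"truth": "ham",  "classification": "ham"},
    {"truth": "spam", "classification": "spam"},
]

# Confusion matrix as (truth, predicted) -> count.
confusion = Counter((r["truth"], r["classification"]) for r in rows)
accuracy = sum(r["truth"] == r["classification"] for r in rows) / len(rows)

def f1(label: str) -> float:
    """Per-class F1 from the confusion counts."""
    tp = confusion[(label, label)]
    fp = sum(v for (t, p), v in confusion.items() if p == label and t != label)
    fn = sum(v for (t, p), v in confusion.items() if t == label and p != label)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

macro_f1 = (f1("ham") + f1("spam")) / 2
print(accuracy, macro_f1)  # 1.0 1.0 for a perfect run
```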
| 159 | + |
## Ollama Walkthrough

### 1. Create a batch directory

```bash
uv run llm-batch-pipeline init getting_started_ollama --plugin spam_detection --model gemma4:latest
```

This creates a directory like `batches/batch_002_getting_started_ollama`.
Use that path in the commands below as `<ollama-batch-dir>`.

### 2. Copy the built-in prompt and schema into the batch

```bash
cp src/llm_batch_pipeline/examples/spam_detection/prompt.txt <ollama-batch-dir>/prompt.txt
cp src/llm_batch_pipeline/examples/spam_detection/schema.py <ollama-batch-dir>/schema.py
```
| 177 | + |
### 3. Add the same sample inputs and evaluation map

```bash
cat > <ollama-batch-dir>/input/ham__team_sync.eml <<'EOF'
From: alice@example.com
To: bob@example.com
Subject: Team sync tomorrow
Date: Mon, 1 Jan 2024 10:00:00 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Hi Bob,

Can we meet tomorrow at 3pm to review the release checklist and assign the last two action items?

Thanks,
Alice
EOF

cat > <ollama-batch-dir>/input/spam__million_prize.eml <<'EOF'
From: prizes@claim-now.biz
To: victim@example.com
Subject: URGENT!! Claim your 1000000 dollar prize now
Date: Mon, 1 Jan 2024 11:00:00 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Congratulations!

You have been selected to receive a 1000000 dollar cash prize. Click http://claim-prize-now.example.com immediately and send your bank details today to avoid losing your winnings.
EOF

cat > <ollama-batch-dir>/evaluation/category-map.json <<'EOF'
{
  "ham": "ham",
  "spam": "spam"
}
EOF
```
| 217 | + |
### 4. Render the batch JSONL

```bash
uv run llm-batch-pipeline render --batch-dir <ollama-batch-dir> --plugin spam_detection
```

### 5. Submit to the 3-way sharded Ollama cluster

```bash
uv run llm-batch-pipeline submit \
  --batch-dir <ollama-batch-dir> \
  --backend ollama \
  --model gemma4:latest \
  --base-url http://nanu:11435 \
  --base-url http://nanu:11436 \
  --base-url http://nanu:11437 \
  --num-shards 3 \
  --num-parallel-jobs 1
```
| 237 | + |
Notes:

- these exact three URLs were verified for this guide
- a URL without the hostname, such as `http://11436`, is not a valid endpoint; always write the full form, e.g. `http://nanu:11436`
- in the live test for this guide, the full 2-request Ollama submission finished in about 6 seconds
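Conceptually, `--num-shards 3` splits the rendered requests across the three endpoints. A hypothetical round-robin sketch of that split (the CLI's real assignment strategy is not shown in this guide and may differ):

```python
# Hypothetical round-robin sharding across the three verified endpoints.
# llm-batch-pipeline's actual shard-assignment logic may differ.
base_urls = [
    "http://nanu:11435",
    "http://nanu:11436",
    "http://nanu:11437",
]

# The two custom IDs from the sample batch in this guide.
requests = ["ham__team_sync", "spam__million_prize"]

shards: dict[str, list[str]] = {url: [] for url in base_urls}
for i, request_id in enumerate(requests):
    shards[base_urls[i % len(base_urls)]].append(request_id)

# With 2 requests and 3 shards, the third endpoint simply stays idle.
print(shards)
```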
| 243 | + |
### 6. Validate the output

```bash
uv run llm-batch-pipeline validate --batch-dir <ollama-batch-dir>
```

### 7. Evaluate the predictions

```bash
uv run llm-batch-pipeline evaluate \
  --batch-dir <ollama-batch-dir> \
  --label-field classification \
  --confidence-field confidence \
  --positive-class spam
```

In the tested run, the Ollama batch also classified both sample emails correctly.
| 261 | + |
## Output You Should Expect

After `render`:

- `<batch-dir>/job/batch-00001.jsonl`

After `submit`:

- `<batch-dir>/output/output.jsonl`
- `<batch-dir>/output/summary.json`

After `validate`:

- `<batch-dir>/results/validated.json`

After `evaluate`:

- metrics printed to stdout
## When To Use `run` Instead

If you already trust your prompt, schema, and backend settings, you can collapse the whole pipeline into one command:

```bash
uv run llm-batch-pipeline run --batch-dir <batch-dir> --plugin spam_detection --auto-approve ...
```

For a first pass, the staged workflow above is easier to debug because you can inspect the rendered JSONL, the raw model output, the validated JSON, and the evaluation step separately.