# Feat/aidd split pr skill #131
base: main
**File:** `README.md` (new, +17 lines)

# ✂️ aidd-split-pr

Decompose an oversized PR into smaller, independently mergeable increments — without losing the work already done on the source branch.

## Why

Large PRs are hard to review, risky to merge, and slow to ship. This skill audits an existing branch against a source PR, plans a safe split sequence, and stages each increment from existing work first — writing new code only to fill confirmed gaps.

## Usage

```
/split-pr [target PR | target branch]
```

Point it at the PR or branch that needs splitting. It will merge the latest `main`, inventory existing progress, identify modularization opportunities, propose a PR sequence for your approval, then stage each increment.

See [SKILL.md](./SKILL.md) for the full spec.
**File:** `SKILL.md` (new, +90 lines)

---
name: aidd-split-pr
description: >
  Split a large PR into smaller, mergeable increments without breaking
  existing functionality.
compatibility: Requires git and npm.
---

# ✂️ aidd-split-pr

Act as a top-tier software engineer to decompose an oversized PR into independently mergeable increments, each leaving CI green.

Competencies {
  merge conflict resolution
  code modularization and file-splitting
  incremental delivery planning
  TDD discipline
  PR size management
}

Constraints {
  Do ONE step at a time. Do not skip steps or reorder them.
  Ask before resolving any conflict that could change existing behavior.
  Prefer extraction over reimplementation: the source branch is the primary source of truth for implementation. Write new code only to fill confirmed gaps identified in the audit.
  Apply @javascript.mdc, @error-causes.mdc, @tdd.mdc, and @requirements.mdc throughout.
  One specific error-type rule: define CausedError ONCE in a single .d.ts; never duplicate error type declarations across files.
}

PRConstraints {
  AVOID UNNECESSARY DUPLICATION!
  Less is more: every line must serve a justified functional requirement.
  Max individual PR size: +1000 LoC.
  Reduce test verbosity: assert whole objects, not properties one at a time.
}

## Step 1 — Merge Latest Main
mergeMain() {
  1. Merge `main` into the branch
  2. Resolve conflicts conservatively — ask before touching anything behavioral
}

## Step 2 — Audit Existing Progress
auditProgress(sourcePR) => inventory {
  1. Compare the branch diff to the source PR
  2. Categorize every change: done | partial | not-started
  3. Share the inventory with the user before proceeding
}

## Step 3 — Identify Modularization Opportunities
findSplitPoints(inventory) => splitPlan {
  1. Run `npx aidd churn` to get a ranked hotspot table (LoC × churn × complexity)
  2. Flag files > 200 LoC that appear in the top results — candidates for module extraction
  3. Identify shared mutable state in high-scoring files — propose refactors to eliminate brittle coupling
}

## Step 4 — Plan the PR Sequence
planPRs(splitPlan) => prSequence {
  Each PR must:
  - be independently mergeable with CI green
  - stay within PRConstraints
  - be presented to the user for approval before implementation begins
}

## Step 5 — Stage Each PR from Existing Work
stagePR(pr, inventory) => stagedPR {
  For each change in this PR's scope:

  done | partial => extract from the source branch diff; do NOT rewrite
  - Cherry-pick, reorganize, or copy the existing implementation
  - partial => identify the gap; fill it using TDD (@tdd.mdc) before staging
  not-started => confirm with the user before writing anything new

  1. Run /review on staged changes — resolve findings
  2. Run /commit
}

splitPR = mergeMain |> auditProgress |> findSplitPoints |> planPRs |> stagePR*

Reference {
  Source PR: <Source PR>
}

Commands {
  ✂️ /split-pr [target PR | target branch] - split an oversized PR into mergeable increments
}
**File:** directory index (filename not shown; new, +19 lines)

# aidd-split-pr

This index provides an overview of the contents of this directory.

## Files

### ✂️ aidd-split-pr

**File:** `README.md`

*No description available*

### ✂️ aidd-split-pr

**File:** `SKILL.md`

Split a large PR into smaller, mergeable increments without breaking existing functionality.
**File:** `churn-collector.js` (new, +37 lines)

```js
import { execSync } from "child_process";
import { createError, errorCauses } from "error-causes";

const [churnErrors, handleChurnErrors] = errorCauses({
  GitError: { message: "git command failed" },
  NotAGitRepo: { message: "not a git repository" },
});

export { handleChurnErrors, churnErrors };

/**
 * Returns a Map of filePath -> commit touch count for files changed
 * within the given day window.
 */
export const collectChurn = ({ cwd = process.cwd(), days = 90 } = {}) => {
  const since = `${days} days ago`;
  let output;

  try {
    output = execSync(
      `git log --since="${since}" --name-only --pretty=format: --diff-filter=ACMR`,
      { cwd, encoding: "utf8" },
    );
  } catch (cause) {
    const isNotRepo = cause.message?.includes("not a git repository");
    throw createError(
      isNotRepo ? churnErrors.NotAGitRepo : churnErrors.GitError,
      { cause },
    );
  }

  return output
    .split("\n")
    .map((f) => f.trim())
    .filter(Boolean)
    .reduce((map, file) => map.set(file, (map.get(file) ?? 0) + 1), new Map());
};
```
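For illustration, the aggregation step at the end of `collectChurn` can be isolated as a pure function (the `countTouches` name is mine, not in the diff):

```javascript
// Pure sketch of collectChurn's final reduce: turn raw
// `git log --name-only --pretty=format:` output (one path per line,
// blank lines between commits) into a Map of path -> touch count.
const countTouches = (gitLogOutput) =>
  gitLogOutput
    .split("\n")
    .map((line) => line.trim())
    .filter(Boolean)
    .reduce((map, file) => map.set(file, (map.get(file) ?? 0) + 1), new Map());

const counts = countTouches("src/a.js\n\nsrc/b.js\nsrc/a.js\n");
// counts.get("src/a.js") → 2, counts.get("src/b.js") → 1
```

Keeping this step shell-free and pure is also what makes it trivially unit-testable.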
**File:** churn CLI registration (filename not shown; new, +45 lines; diff truncated in this view)

```js
import chalk from "chalk";

import { collectChurn, handleChurnErrors } from "./churn-collector.js";
import { formatJson, formatTable } from "./churn-formatter.js";
import { scoreFiles } from "./churn-scorer.js";
import { collectFileMetrics } from "./file-metrics-collector.js";

/** @param {import('commander').Command} program */
export const addChurnCommand = (program) => {
  program
    .command("churn")
    .description("rank files by hotspot score (LoC × churn × complexity)")
    .option("--days <n>", "git log window in days", "90")
    .option("--top <n>", "max results to show", "20")
    .option("--min-loc <n>", "minimum lines of code to include", "50")
    .option("--json", "output raw JSON")
    .action(async ({ days, top, minLoc, json }) => {
      const cwd = process.cwd();
      try {
        const churnMap = collectChurn({ cwd, days: Number(days) });
        const files = [...churnMap.keys()];
        const metricsMap = collectFileMetrics({ cwd, files });
        const results = scoreFiles(churnMap, metricsMap, {
          minLoc: Number(minLoc),
          top: Number(top),
        // … remainder of the +45-line diff not shown in this view
```
**Contributor** (comment on lines +13 to +25): Non-numeric values for `--days`, `--top`, and `--min-loc` pass straight through `Number()` without validation. Suggested change:

```diff
-    .option("--days <n>", "git log window in days", "90")
-    .option("--top <n>", "max results to show", "20")
-    .option("--min-loc <n>", "minimum lines of code to include", "50")
-    .option("--json", "output raw JSON")
-    .action(async ({ days, top, minLoc, json }) => {
-      const cwd = process.cwd();
-      try {
-        const churnMap = collectChurn({ cwd, days: Number(days) });
-        const files = [...churnMap.keys()];
-        const metricsMap = collectFileMetrics({ cwd, files });
-        const results = scoreFiles(churnMap, metricsMap, {
-          minLoc: Number(minLoc),
-          top: Number(top),
+    .option(
+      "--days <n>",
+      "git log window in days",
+      (value) => Number.parseInt(value, 10),
+      90,
+    )
+    .option(
+      "--top <n>",
+      "max results to show",
+      (value) => Number.parseInt(value, 10),
+      20,
+    )
+    .option(
+      "--min-loc <n>",
+      "minimum lines of code to include",
+      (value) => Number.parseInt(value, 10),
+      50,
+    )
+    .option("--json", "output raw JSON")
+    .action(async ({ days, top, minLoc, json }) => {
+      const ensurePositiveInteger = (value, name) => {
+        if (
+          !Number.isFinite(value) ||
+          !Number.isInteger(value) ||
+          value <= 0
+        ) {
+          console.error(
+            chalk.red(
+              `❌ Invalid value for --${name}: ${String(
+                value,
+              )}. Expected a positive integer.`,
+            ),
+          );
+          process.exit(1);
+        }
+      };
+      ensurePositiveInteger(days, "days");
+      ensurePositiveInteger(top, "top");
+      ensurePositiveInteger(minLoc, "min-loc");
+      const cwd = process.cwd();
+      try {
+        const churnMap = collectChurn({ cwd, days });
+        const files = [...churnMap.keys()];
+        const metricsMap = collectFileMetrics({ cwd, files });
+        const results = scoreFiles(churnMap, metricsMap, {
+          minLoc,
+          top,
```
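The `churn-scorer.js` and `file-metrics-collector.js` modules are not part of this diff view. Based on the command's description ("LoC × churn × complexity") and the call site above, here is a plausible sketch of `scoreFiles` — a hypothetical reconstruction, with field names assumed from the formatter's `ScoredFile` typedef, not the merged implementation:

```javascript
// Hypothetical sketch of scoreFiles (real implementation not shown in
// this diff). Assumes metricsMap values look like
// { loc, complexity, gzipRatio } and churnMap values are touch counts.
const scoreFiles = (churnMap, metricsMap, { minLoc = 50, top = 20 } = {}) =>
  [...churnMap.entries()]
    .map(([file, churn]) => {
      const { loc = 0, complexity = 1, gzipRatio = 1 } =
        metricsMap.get(file) ?? {};
      // Hotspot score: big, frequently-changed, complex files rank highest.
      return { file, loc, churn, complexity, gzipRatio, score: loc * churn * complexity };
    })
    .filter((r) => r.loc >= minLoc)
    .sort((a, b) => b.score - a.score)
    .slice(0, top);

const churnMap = new Map([["a.js", 3], ["b.js", 10]]);
const metricsMap = new Map([
  ["a.js", { loc: 500, complexity: 8, gzipRatio: 0.3 }],
  ["b.js", { loc: 40, complexity: 2, gzipRatio: 0.5 }],
]);
const ranked = scoreFiles(churnMap, metricsMap);
// b.js (40 LoC) is dropped by minLoc: 50; a.js scores 500 * 3 * 8 = 12000
```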
**Copilot AI** (Mar 6, 2026): There’s no integration/smoke test for the new churn CLI subcommand yet (e.g., `npx aidd churn --json` produces parseable JSON and exits 0). Since the command wires together git parsing + file IO + formatting, adding at least a minimal CLI-level test would help prevent regressions.
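A minimal smoke check along the lines Copilot suggests could look like this. The `runChurnJson` helper is hypothetical (not part of the PR); its runner is injectable so the parsing assertion is testable without spawning the real CLI or needing a git repo:

```javascript
import { execFileSync } from "node:child_process";

// Smoke-test sketch for `npx aidd churn --json`: run the CLI and assert
// the output is a JSON array. Pass a fake runner to test the check
// itself; the default runner spawns the real command.
const runChurnJson = (
  run = () =>
    execFileSync("npx", ["aidd", "churn", "--json"], { encoding: "utf8" }),
) => {
  const results = JSON.parse(run()); // throws if stdout is not valid JSON
  if (!Array.isArray(results)) {
    throw new Error("expected `aidd churn --json` to print a JSON array");
  }
  return results;
};
```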
**File:** `churn-formatter.js` (new, +46 lines)

```js
import chalk from "chalk";

/**
 * @typedef {{ file: string, score: number, loc: number, churn: number, complexity: number, gzipRatio: number }} ScoredFile
 */

const HEADERS = ["Score", "LoC", "Churn", "Cx", "Density", "File"];

/** @param {ScoredFile} result */
const row = ({ score, loc, churn, complexity, gzipRatio, file }) => [
  score.toLocaleString(),
  String(loc),
  String(churn),
  String(complexity),
  `${(gzipRatio * 100).toFixed(0)}%`,
  file,
];

/** @param {string} str @param {number} width */
const pad = (str, width) => str.padStart(width);

/** @param {ScoredFile[]} results */
export const formatTable = (results) => {
  if (results.length === 0) {
    return chalk.green("✅ No hotspots found above the current thresholds.");
  }

  const rows = results.map(row);
  const allRows = [HEADERS, ...rows];
  const widths = HEADERS.map((_, i) =>
    Math.max(...allRows.map((r) => r[i].length)),
  );

  const divider = widths.map((w) => "─".repeat(w)).join(" ");
  /** @param {string[]} r */
  const fmt = (r) => r.map((cell, i) => pad(cell, widths[i])).join(" ");

  return [
    chalk.bold(fmt(HEADERS)),
    chalk.gray(divider),
    ...rows.map((r) => fmt(r)),
  ].join("\n");
};

/** @param {ScoredFile[]} results */
export const formatJson = (results) => JSON.stringify(results, null, 2);
```
**File:** churn formatter tests (filename not shown; new, +63 lines)

```js
import { assert } from "riteway/vitest";
import { describe, test } from "vitest";

import { formatJson, formatTable } from "./churn-formatter.js";

const makeResult = (overrides = {}) => ({
  file: "src/foo.ts",
  score: 1500,
  loc: 100,
  churn: 3,
  complexity: 5,
  gzipRatio: 0.35,
  ...overrides,
});

describe("formatTable", () => {
  test("headers", () => {
    const output = formatTable([makeResult()]);

    assert({
      given: "scored results",
      should: "include all column headers",
      actual: ["Score", "LoC", "Churn", "Cx", "Density", "File"].every((h) =>
        output.includes(h),
      ),
      expected: true,
    });
  });

  test("empty state", () => {
    assert({
      given: "no results",
      should: "return a friendly empty-state message",
      actual: formatTable([]).includes("No hotspots"),
      expected: true,
    });
  });

  test("gzip density display", () => {
    const output = formatTable([makeResult({ gzipRatio: 0.35 })]);

    assert({
      given: "a gzip ratio of 0.35",
      should: "display it as a percentage",
      actual: output.includes("35%"),
      expected: true,
    });
  });
});

describe("formatJson", () => {
  test("valid JSON", () => {
    const results = [makeResult()];
    const parsed = JSON.parse(formatJson(results));

    assert({
      given: "scored results",
      should: "return valid parseable JSON with all fields",
      actual: parsed[0],
      expected: results[0],
    });
  });
});
```
**Contributor** (on `churn-collector.js`, lines +15 to +37): `collectChurn` builds a shell command string and interpolates `days` directly into `--since="${since}"`. Since `collectChurn` is exported, any caller passing untrusted/non-numeric input could potentially cause shell injection, or at least produce invalid git arguments. Validate/coerce `days` to a finite positive integer inside `collectChurn`, and consider using `execFileSync` (args array) to avoid invoking a shell.
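One way to address this comment, sketched below. This is an illustration of the reviewer's suggestion, not the merged fix, and the error-cause handling from the original is elided for brevity:

```javascript
import { execFileSync } from "node:child_process";

// Coerce to a positive integer, falling back to a default. Combined
// with the args array below, shell metacharacters in `days` are inert.
const toPositiveInt = (value, fallback) => {
  const n = Number.parseInt(value, 10);
  return Number.isInteger(n) && n > 0 ? n : fallback;
};

// Hardened collectChurn sketch: execFileSync with an argument array
// never invokes a shell, so interpolation into --since is safe.
const collectChurnSafe = ({ cwd = process.cwd(), days = 90 } = {}) => {
  const window = toPositiveInt(days, 90);
  const output = execFileSync(
    "git",
    [
      "log",
      `--since=${window} days ago`,
      "--name-only",
      "--pretty=format:",
      "--diff-filter=ACMR",
    ],
    { cwd, encoding: "utf8" },
  );
  return output
    .split("\n")
    .map((f) => f.trim())
    .filter(Boolean)
    .reduce((map, file) => map.set(file, (map.get(file) ?? 0) + 1), new Map());
};
```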