[Proposal] Automatic spec catalog discovery during proposal creation #901

@sh940701


Problem

The current proposal instruction says:

Check `openspec/specs/` for existing spec names.

This is the only guidance the AI gets for discovering existing specs. In practice:

  1. The AI often skips this step entirely — there's no structured mechanism to enforce it, just a vague hint buried inside a larger instruction block
  2. When it does check, it reads full spec files into context — wasteful when you only need to know what exists and what each spec is about
  3. As specs grow, this scales poorly — a project with 20+ capabilities would dump thousands of lines into context just to decide which are relevant

This is exactly why main specs feel underutilized, as raised in #878 ("What's the point to maintain main specs?") and #872 ("openspec files too many"). The specs exist, but the workflow doesn't actively leverage them at the most critical moment — when deciding what a new change touches.

Why this matters at scale

For small projects with 2-3 specs, manually browsing openspec/specs/ works fine. But for teams adopting OpenSpec as a serious engineering harness — especially at enterprise scale with dozens of capabilities across multiple domains — the current approach breaks down:

  • An AI agent scanning 30+ spec directories and reading full files burns significant context just for discovery
  • Without structured discovery, capabilities get duplicated or contradicted across changes
  • The value proposition of accumulated main specs falls apart if they're not reliably consulted

For OpenSpec to serve as a scalable, enterprise-grade development harness, spec discovery needs to be a first-class, structured step — not an afterthought buried in a paragraph.

Proposed Solution

Replace the vague "check openspec/specs/" instruction with a concrete spec discovery step in the proposal instruction, using the CLI to get a lightweight catalog.

Before (current)

Check `openspec/specs/` for existing spec names.

After (proposed)

Before filling this in, run `openspec list --specs --json --detail` to discover
existing specs. Review each spec's `overview` to understand what it covers.

This gives the AI a structured, repeatable workflow instead of a vague hint. The `--detail` flag (from #700) provides just enough context — `id`, `title`, `overview`, `requirementCount` — to determine relevance without loading full spec content.

Example output:

```json
{
  "specs": [
    {
      "id": "routine-routing",
      "title": "routine-routing",
      "overview": "Defines in-app routing, deep links, and push notification routing for routine screens.",
      "requirementCount": 8
    },
    {
      "id": "pool-search",
      "title": "pool-search",
      "overview": "Swimming pool search with map integration and region-based filtering.",
      "requirementCount": 12
    }
  ]
}
```

An AI reading this can immediately tell whether a new "push notification for pool recommendations" change should modify pool-search, routine-routing, or create a new capability — without reading either spec in full.
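To make that triage concrete, here is a minimal sketch of ranking the catalog against a proposal description by naive keyword overlap. This is illustrative only — the function name and scoring heuristic are not part of OpenSpec; only the JSON shape comes from the example above:

```python
import json

def rank_specs(catalog_json: str, proposal: str) -> list[tuple[str, int]]:
    """Rank catalog entries by naive keyword overlap between each spec's
    overview and the proposal description. Purely illustrative scoring."""
    proposal_words = set(proposal.lower().split())
    ranked = []
    for spec in json.loads(catalog_json)["specs"]:
        overview_words = {w.strip(".,") for w in spec["overview"].lower().split()}
        ranked.append((spec["id"], len(proposal_words & overview_words)))
    return sorted(ranked, key=lambda pair: -pair[1])

# Catalog in the shape produced by `openspec list --specs --json --detail`.
catalog = """{
  "specs": [
    {"id": "routine-routing",
     "overview": "Defines in-app routing, deep links, and push notification routing for routine screens.",
     "requirementCount": 8},
    {"id": "pool-search",
     "overview": "Swimming pool search with map integration and region-based filtering.",
     "requirementCount": 12}
  ]
}"""

print(rank_specs(catalog, "push notification for pool recommendations"))
# -> [('routine-routing', 3), ('pool-search', 1)]
```

In practice the AI would do this comparison in natural language rather than with word counts, but the point stands: the catalog alone is enough to decide relevance.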

Extended Proposal: Sub-agent Optimization

For tools that support sub-agents or parallel task execution (e.g., Claude Code's Agent tool, Codex sub-tasks), the discovery step could be delegated to a separate context:

  • A sub-agent runs `openspec list --specs --json --detail`, compares each spec's overview against the proposal description, and returns only the related specs with reasons
  • Main context stays clean — the discovery work happens in a disposable context
  • This scales to 50+ capabilities without bloating the proposal context

This is entirely optional — tools without sub-agent support just run the CLI inline. The core proposal (structured CLI-based discovery) works everywhere regardless.
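One plausible shape for that sub-agent contract — keep only the related specs and report why — is sketched below. The payload fields (`id`, `reason`), the stop-word list, and the overlap threshold are all assumptions for illustration, not an OpenSpec API:

```python
import json

STOPWORDS = {"and", "for", "the", "with", "a", "of"}  # assumed, minimal list

def discover_related(catalog_json: str, proposal: str, min_overlap: int = 2) -> list[dict]:
    """Sub-agent sketch: keep only specs whose overview shares at least
    `min_overlap` meaningful terms with the proposal, reporting the
    matched terms as the reason. Payload shape is illustrative."""
    proposal_words = set(proposal.lower().split()) - STOPWORDS
    related = []
    for spec in json.loads(catalog_json)["specs"]:
        overview_words = {w.strip(".,") for w in spec["overview"].lower().split()}
        matched = sorted(proposal_words & overview_words)
        if len(matched) >= min_overlap:
            related.append({"id": spec["id"],
                            "reason": "overview mentions: " + ", ".join(matched)})
    return related

catalog = """{
  "specs": [
    {"id": "routine-routing",
     "overview": "Defines in-app routing, deep links, and push notification routing for routine screens."},
    {"id": "pool-search",
     "overview": "Swimming pool search with map integration and region-based filtering."}
  ]
}"""

# Only this compact summary reaches the main context, not the full specs.
print(discover_related(catalog, "push notification for pool recommendations"))
# -> [{'id': 'routine-routing', 'reason': 'overview mentions: notification, push'}]
```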

Related Issues

| Issue | How this helps |
| --- | --- |
| #878 ("What's the point to maintain main specs?") | Directly answers this — main specs become actively queried during every proposal via their Purpose/Overview |
| #872 ("openspec files too many") | Overview-based filtering avoids loading full spec content, making large spec collections manageable |
| #700 (PR: `list --specs --json --detail`) | Creates the first real consumer for `--detail`, giving it a concrete use case in the workflow |
| #687 ("Incomplete specs after proposal") | Structured discovery reduces missed capabilities — the AI sees all existing specs before writing Capabilities |

Scope

  • What changes: The proposal artifact instruction in schema.yaml and the proposal guidelines in skill/command templates (continue-change.ts)
  • What doesn't change: No new CLI commands, no new artifacts, no schema structure changes
  • Dependency: Works best with #700 (feat: list cmd support json for specs and archive, which adds the `--detail` flag) merged. Without it, the fallback is `openspec spec show <id> --json` per spec, which still works but requires N calls instead of one.

Alternatives Considered

  1. New artifact (discovery.md) — Rejected. Discovery is input to the proposal, not a standalone deliverable. Adds overhead to every change even when there are only 2 specs.

  2. Separate /opsx:discover command — Rejected. If it's manual, people won't use it. The whole point is that spec discovery should be embedded in the proposal workflow.

  3. Inject full spec catalog into openspec instructions output — Possible but heavy. The proposed approach lets the AI call the CLI on-demand, which is simpler and doesn't bloat the instruction payload.
