## Problem
When artifacts are generated in one pass via /opsx:propose or /opsx:ff (proposal → specs → design → tasks), they are built in forward dependency order. This means reverse feedback doesn't happen — issues that would surface when writing tasks don't propagate back to specs, and scope contradictions between artifacts go undetected.
This applies to both commands equally: /opsx:propose (core profile, generates all artifacts in one step) and /opsx:ff (expanded profile, fast-forwards through all planning artifacts). Both produce a complete artifact set in a single pass, so both share the same gap.
In practice, I consistently find these problems before /opsx:apply:
- Cross-artifact contradictions — A feature included in proposal scope appears in design's Non-Goals
- Specification gaps — Fallback behavior defined only in design but missing from spec (where it should be normative)
- Duplication with existing specs/components — Tasks that reimplement functionality already available in existing specs or shared components
- Missing design abstractions — Implementations that should use design patterns for maintainability (and AI comprehensibility) are specified as one-off implementations
- File bloat in task design — Tasks that pack too many responsibilities into single files
## Why single-pass generation can't solve this
- Human edits create drift — After exploring ideas and updating individual artifacts, cross-artifact consistency breaks
- Scope changes don't propagate — When a decision changes during discussion, not all artifacts get updated
- Semantic duplication is invisible to format validation — `openspec validate` catches structural issues but can't detect that a task duplicates an existing spec's capability
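To make the last point concrete, here is a toy sketch (not OpenSpec code — the artifact shape and field names are hypothetical) showing how a purely structural check passes artifacts that a semantic cross-artifact check would flag:

```python
# Toy illustration: structural validation vs. semantic cross-artifact checking.
# The artifact dictionaries and field names below are hypothetical, not the
# real OpenSpec artifact format.

proposal = {"scope": ["feature-x", "feature-y"]}
design = {"non_goals": ["feature-x"]}  # contradicts the proposal's scope

def validate_structure(artifact: dict, required_keys: list[str]) -> bool:
    # What format validation can see: the required sections exist.
    return all(key in artifact for key in required_keys)

def check_scope_consistency(proposal: dict, design: dict) -> list[str]:
    # What it cannot see: a feature that is both in scope and a Non-Goal.
    return sorted(set(proposal["scope"]) & set(design["non_goals"]))

# Both artifacts are structurally valid...
assert validate_structure(proposal, ["scope"])
assert validate_structure(design, ["non_goals"])
# ...yet a semantic pass flags the contradiction.
assert check_scope_consistency(proposal, design) == ["feature-x"]
```

Both checks are trivial here, but the asymmetry is the point: the structural check never has to read two artifacts at once, while the semantic check is defined only over pairs of them.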
## Concrete examples
After running /opsx:propose or /opsx:ff on real projects and reviewing artifacts before apply:
| Priority | Pattern | Example |
| --- | --- | --- |
| P0 | Cross-artifact scope contradiction | Feature X listed in proposal scope but marked as Non-Goal in design |
| P0 | Spec gap | Fallback behavior defined only in design, not in spec (where it should be normative with scenarios) |
| P1 | Duplication with existing specs | Task reimplements capability already covered by an existing shared component |
| P1 | Missing abstraction | Similar logic across multiple tasks should use a design pattern but is specified as separate implementations |
| P1 | File bloat | Task design packs excessive responsibilities into single files |
| P2 | Thin scenarios | Failure/edge-case THEN clauses lack specific assertions |
None of these were caught during generation — they only surfaced through a separate review pass.
## Relationship to existing features
- `openspec validate --strict` — Catches structural/format issues (missing sections, malformed artifacts). Cannot detect semantic contradictions between artifacts or duplication with existing specs.
- `/opsx:clarify` (PR #702) — Resolves ambiguity within a single artifact through Q&A. The gap described here is across artifacts — checking consistency between proposal, specs, design, and tasks as a whole. These are complementary:

```
/opsx:propose or /opsx:ff
  → /opsx:clarify (resolve ambiguity within each artifact)
  → ★ gap (cross-artifact consistency + duplication analysis)
  → /opsx:apply
```
## What I'm doing today
I've built a custom skill that runs as a review pass after /opsx:propose or /opsx:ff:
- Reads all artifacts (proposal/specs/design/tasks) + existing specs
- Checks cross-artifact scope consistency
- Analyzes duplication with existing specs and shared components (reuse / extend / new-shared / new-dedicated)
- Reports issues with P0/P1/P2 priority
- After user approval, updates artifacts and runs `openspec validate --strict`
This creates a natural review-then-approve flow before any modifications are made.
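As a rough sketch of what this review pass computes, here is a minimal Python version of the two core pieces: the prioritized issue report and the overlap classification for tasks against existing specs. Everything here (the `Issue` shape, the capability naming, the classification rules) is my own illustration, not an OpenSpec API:

```python
# Hypothetical sketch of the review pass described above. Data model and
# classification rules are illustrative, not OpenSpec APIs.
from dataclasses import dataclass

@dataclass
class Issue:
    priority: str  # "P0" | "P1" | "P2"
    pattern: str
    detail: str

def classify_overlap(task_capability: str, existing: dict[str, set[str]]) -> str:
    """Classify a task's capability against existing specs.

    `existing` maps spec name -> capabilities it already covers.
    Returns "reuse:<spec>", "extend:<spec>", or "new-dedicated".
    (Deciding new-shared vs. new-dedicated would need cross-task
    analysis, omitted here.)
    """
    for spec, caps in existing.items():
        if task_capability in caps:
            return f"reuse:{spec}"  # fully covered already
        if any(task_capability.startswith(cap + "/") for cap in caps):
            return f"extend:{spec}"  # sub-capability of an existing one
    return "new-dedicated"

existing_specs = {"auth": {"login", "session"}}
assert classify_overlap("login", existing_specs) == "reuse:auth"
assert classify_overlap("session/refresh", existing_specs) == "extend:auth"
assert classify_overlap("billing", existing_specs) == "new-dedicated"

# The report is then just a prioritized list the user approves or rejects
# before any artifact is modified.
report = [Issue("P0", "Cross-artifact scope contradiction",
                "feature in proposal scope but a design Non-Goal")]
assert [issue.priority for issue in report] == ["P0"]
```

The important design point is that classification and reporting are pure reads; mutation of artifacts only happens after the user approves the report.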
## Discussion
I'd like to discuss whether this capability belongs in OpenSpec and, if so, what form fits best:
- A) New skill — e.g. `/opsx:refine` in the expanded profile
- B) Extend `openspec validate` — Add a semantic check layer alongside the existing structural checks (no new skill needed)
- C) Something else — Open to maintainer perspective on what fits the architecture
I'm aware of the concern about skill count growth. Happy to adapt the approach to whatever form works best for the project.