Feature Proposal: Programmable Expectation Engine with Dynamic Response Generation #2

@MartinSimango

Description

Hi @leslieo2

I've had a look at your project and love it. It's an idea I've had for a while but never found the time to build; seeing your repository reminded me of it, and I'd like to contribute if the project is still active.

I've built a programmable expectation engine for go-spec-mock that allows users to define custom request-response expectations with dynamic response generation, priority-based matching, and a management API. This goes beyond the current spec-driven mocking by letting users programmatically control mock behaviour at runtime.

I'd like to discuss whether this feature would be a good fit for upstream, or if it's better suited as a separate project.

Motivation

The current go-spec-mock workflow is spec-driven: you provide an OpenAPI 3.0 file and get auto-generated responses. This works great for contract-first development, but some testing scenarios need more control:

  • Stateful test sequences - return different responses on successive calls (e.g., "pending" then "completed")
  • Dynamic values - generate UUIDs, sequential IDs, or regex-matched strings in response bodies at request time
  • Conditional responses - match on headers, cookies, or request body content, not just path/method
  • Temporary overrides - expectations that expire after N hits or after a TTL, useful for testing retry logic
  • Webhook simulation - fire async callbacks after matching a request

These overlap with the "Stateful Mocking" and "CRUD Operations" items on the Phase 3 roadmap, but take a different approach: rather than built-in CRUD semantics, this provides a general-purpose expectation engine that users compose themselves.

What It Does

Expectation matching (5-stage pipeline, all must pass):

  • Method (exact, case-insensitive)
  • Path (exact segments with path parameter support, e.g., /pet/{petId})
  • Headers (subset match)
  • Cookies (exact match)
  • Body (deep subset match - expected keys must exist, extra keys are allowed)
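As a sketch of how these stages could compose in a single expectation (the `headers`, `cookies`, and `body` field names here are assumptions mirroring the request/response shape used elsewhere in this proposal):

```yaml
# Hypothetical conditional expectation: matches only when all five stages pass.
expectations:
  - request:
      method: POST
      path: /pet/{petId}           # path parameters supported
      headers:
        X-Api-Key: test-key        # subset match: extra request headers are allowed
      cookies:
        session: abc123            # exact match
      body:
        status: pending            # deep subset: extra body keys are allowed
    response:
      status: 202
      body:
        message: accepted
```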

Dynamic response generation via $generated markers in response bodies:

```yaml
expectations:
  - request:
      path: /store/order
      method: GET
    response:
      status: 200
      body:
        id:
          $generated: "Sequentional(1)"
        orderId:
          $generated: GenUUID
        petName:
          $generated: "GenRegex([A-Z][a-z]{3,8})"
        quantity:
          $generated: "GenInt32(1, 50)"
```

**5 built-in tasks**: GenInt32(min, max), GenRegex(pattern), GenUUID, Sequentional(start) (auto-incrementing, persists across requests), and RefField(fieldPath) (references another generated value in the same response).
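For instance, RefField could tie two generated values together so a response is internally consistent. A hedged sketch (the exact fieldPath syntax is an assumption):

```yaml
response:
  status: 200
  body:
    id:
      $generated: GenUUID
    confirmation:
      $generated: "RefField(id)"   # assumed syntax: echoes the generated id above
```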

Lifecycle controls:

  • priority - higher-priority expectations match first
  • times - expectation auto-removes after N hits
  • timeToLive - expectation expires after N milliseconds
  • callback - async HTTP request fired after a match (webhook simulation)
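Combined, these controls could express something like "fail twice, then fall back to the spec", which is handy for retry testing. A sketch (the `id` and `callback` field shapes are assumptions; `priority`, `times`, and `timeToLive` are as listed above):

```yaml
expectations:
  - id: retry-then-succeed
    priority: 10          # checked before lower-priority expectations
    times: 2              # auto-removed after two matches
    timeToLive: 30000     # expires after 30 s regardless of hit count
    request:
      path: /store/order
      method: GET
    response:
      status: 503
    callback:             # assumed callback shape: fired async after a match
      url: http://localhost:9090/hook
      method: POST
```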

Runtime management API (/go-spec-mock/expectations):

  • PUT - create/update expectations (JSON or YAML body)
  • GET - list all active expectations
  • DELETE - clear all expectations
  • GET /{id}, DELETE /{id} - fetch or delete a single expectation

This means anyone can interact with the expectation engine programmatically from any language.

File loading: expectations can be loaded from a YAML/JSON file at startup via --expectations-file.

Web UI: a React frontend for creating, editing, and monitoring expectations visually.

Design Decisions

  • Expectations take precedence over spec-generated responses - if an expectation matches, it wins; otherwise, the normal OpenAPI-driven response generation kicks in. This makes the two systems complementary.
  • Upsert semantics - expectations with the same id are hot-replaced in place, no restart needed.
  • Task registry is extensible - custom tasks can be registered programmatically via Registry.Register(name, task).
  • Thread-safe - all engine operations protected by sync.RWMutex.

Scope of Changes

  • New package: internal/expectation/ (engine, matcher, executor, loader, task registry)
  • Server integration: 2 touch points in internal/server/server.go
  • Admin API: internal/server/expectations_api.go
  • Config: new --expectations-file flag
  • Frontend: React + TypeScript SPA (frontend/)
  • Examples: examples/expectations.yaml

Questions

  1. Is this direction aligned with your vision for go-spec-mock? The Phase 3 roadmap mentions "Stateful Mocking" and "CRUD Operations" - this feature provides a general-purpose engine that could support those use cases, but the design approach is different.
  2. Would you prefer this as a PR, or is it better as a separate project? The changes are significant (new package, frontend, admin API).

Current Status: The change is on my fork, on the feat/add-expectations branch. It's a rough scaffold of the idea at this point, and I'm still working through the details.

Try out new changes:

```shell
go run cmd/go-spec-mock/main.go --spec-file examples/petstore.yaml --expectations-file examples/expectations.yaml
```

Then run the frontend:

```shell
cd frontend && bun install && bun run dev
```

I'm happy to discuss, adapt the design, or split it into smaller PRs if there's interest.
