Feature Request: AI Chat Window in Runbooks #123

@josh-padnick

Description

Embed an AI assistant directly in the Runbooks web UI to help users debug issues they encounter while executing runbooks, with full runbook context automatically available to the AI.

Motivation

When users hit issues while running through a runbook, they typically have to:

  1. Copy relevant error messages, logs, and context
  2. Switch to a separate AI tool (ChatGPT, Claude, etc.)
  3. Manually explain what runbook they're running, what step they're on, and what went wrong
  4. Go back and forth between tools

This context-gathering is tedious and error-prone. Since the Runbooks UI already knows exactly which runbook is being executed, which step the user is on, all script output (if applicable), and what inputs/outputs have been generated, we can remove this friction entirely.

Proposal

Add an AI window embedded in the Runbooks web UI with the following characteristics:

  • Bring your own LLM: Users connect their preferred LLM via API key (OpenAI, Anthropic, etc.) rather than us hosting/paying for inference.
  • Automatic context injection: The AI has access to the full runbook context — which runbook is running, current step, prior steps and their outputs, relevant variables, error messages, etc.
  • Debug-focused: The primary use case is helping runbook consumers debug issues they encounter during execution.
  • Feedback to authors: the assistant could potentially aggregate the issues consumers hit into improvement suggestions for runbook authors, though that may be better scoped as a separate feature.
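As a rough sketch of the "automatic context injection" bullet above, the UI could serialize the runbook state it already tracks into the system prompt sent to whichever LLM the user connected. The field names and structure here are assumptions for illustration, not an existing Runbooks API:

```python
from dataclasses import dataclass

# Hypothetical shape of the context the Runbooks UI already knows;
# every field name here is an assumption, not an existing API.
@dataclass
class RunbookContext:
    runbook_name: str
    current_step: int
    total_steps: int
    prior_outputs: list   # (step title, captured output) pairs
    error_message: str = ""

def build_system_prompt(ctx: RunbookContext) -> str:
    """Render the runbook state as a system prompt for a bring-your-own LLM."""
    lines = [
        "You are helping a user debug a runbook they are executing.",
        f"Runbook: {ctx.runbook_name}",
        f"Current step: {ctx.current_step} of {ctx.total_steps}",
    ]
    for title, output in ctx.prior_outputs:
        lines.append(f"Output of step '{title}':\n{output}")
    if ctx.error_message:
        lines.append(f"Latest error:\n{ctx.error_message}")
    return "\n\n".join(lines)
```

The payload stays provider-agnostic: the same string can be passed as the `system` message to OpenAI, Anthropic, or any other connected provider.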

Alternative / Simpler Version

If full API integration is too heavy a lift for a first pass, a lighter alternative is to generate a pre-filled prompt that users can copy/paste into their LLM of choice. This captures most of the value (no manual context-gathering) without requiring us to manage API integrations, key storage, or streaming responses in the UI.
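The copy/paste version could be as small as a single function that assembles what the UI already knows into one pastable prompt. This is a minimal sketch with assumed parameter names, not a committed interface:

```python
# Hypothetical sketch of the lighter-weight alternative: build a pre-filled
# prompt the user copies into ChatGPT, Claude, etc. themselves.
def build_copy_paste_prompt(runbook_name, step_title, step_number,
                            script_output="", error_message=""):
    """Assemble the runbook context into one prompt the user can paste anywhere."""
    parts = [
        f"I'm executing step {step_number} ('{step_title}') of the "
        f"'{runbook_name}' runbook and hit a problem.",
    ]
    if script_output:
        parts.append("Script output so far:\n" + script_output)
    if error_message:
        parts.append("Error:\n" + error_message)
    parts.append("What is the most likely cause, and how do I fix it?")
    return "\n\n".join(parts)
```

Because this never calls a provider, it sidesteps key storage and streaming entirely; the only UI work is a "Copy debug prompt" button.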

Scope Note

This isn't a core part of the Runbooks feature set — it's a nice-to-have that could meaningfully improve the debugging experience for consumers. Worth considering as a follow-on after core functionality is solid.

Open Questions

  • Should API keys be stored per-user, per-workspace, or session-only?
  • Which LLM providers should we support initially?
  • Should the AI be read-only (answers questions) or allowed to suggest/take actions on the runbook itself?
  • Copy/paste prompt version vs. full integration — which should ship first?

Metadata

Labels: enhancement (New feature or request)