 ██████╗ ██████╗ ███████╗███████╗██████╗ ██╗   ██╗ █████╗ ██╗
██╔═══██╗██╔══██╗██╔════╝██╔════╝██╔══██╗██║   ██║██╔══██╗██║
██║   ██║██████╔╝███████╗█████╗  ██████╔╝██║   ██║███████║██║
██║   ██║██╔══██╗╚════██║██╔══╝  ██╔══██╗╚██╗ ██╔╝██╔══██║██║
╚██████╔╝██████╔╝███████║███████╗██║  ██║ ╚████╔╝ ██║  ██║███████╗
 ╚═════╝ ╚═════╝ ╚══════╝╚══════╝╚═╝  ╚═╝  ╚═══╝  ╚═╝  ╚═╝╚══════╝

Discover, share, and monitor AI coding agents with full observability built in.


If you find Observal useful, please consider giving it a star. It helps others discover the project and keeps development going.


Observal is a self-hosted AI agent registry with built-in observability. Think Docker Hub, but for AI coding agents.

Browse agents created by others, publish your own, and pull complete agent configurations — all defined in a portable YAML format that templates out to Claude Code, Kiro CLI, Cursor, Gemini CLI, and more. Every agent bundles its MCP servers, skills, hooks, prompts, and sandboxes into a single installable package. One command to install, zero manual config.
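To make the portable YAML format above concrete, here is a sketch of what an agent package might look like. The field names are illustrative assumptions, not the documented Observal schema:

```yaml
# Hypothetical agent manifest -- field names are illustrative,
# not the actual Observal schema.
name: code-reviewer
version: 1.0.0
description: Reviews pull requests for style and correctness
targets:                # IDEs this agent templates out to
  - claude-code
  - kiro-cli
  - cursor
mcp_servers:
  - name: github
    command: npx
    args: ["-y", "@modelcontextprotocol/server-github"]
skills:
  - skills/review-checklist.md
hooks:
  post_tool_use: hooks/lint.sh
prompts:
  system: prompts/reviewer.md
```

At install time, a manifest like this would be templated into each target IDE's native config format, so the same agent definition works across tools.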

Every interaction generates traces, spans, and sessions that flow into a telemetry pipeline. The built-in eval engine scores agent sessions so you can measure performance and make your agents better over time.
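Observal's eval engine internals aren't shown here, but an LLM-as-judge session scorer typically has this shape. This is a minimal sketch with a stubbed model call, not Observal's implementation; in the real stack the judge role is played by a Bedrock or OpenAI-compatible model:

```python
import json

RUBRIC = "Rate the session 0-10 for task completion and tool-use efficiency."

def judge(session_transcript: str, call_model) -> dict:
    """Score one agent session against a rubric via an LLM judge.

    `call_model` is any callable prompt -> str; here it is stubbed,
    but it stands in for a real LLM API call.
    """
    prompt = (
        f"{RUBRIC}\n\nSession:\n{session_transcript}\n\n"
        'Reply with JSON: {"score": <0-10>, "reason": "<one sentence>"}'
    )
    result = json.loads(call_model(prompt))
    # Clamp to the rubric's range so a misbehaving judge can't skew aggregates.
    result["score"] = max(0, min(10, result["score"]))
    return result

# Stubbed model for demonstration only.
def fake_model(prompt: str) -> str:
    return '{"score": 8, "reason": "Completed the task in few turns."}'

print(judge("user: fix the bug\nagent: done", fake_model))
# prints {'score': 8, 'reason': 'Completed the task in few turns.'}
```

Scores produced this way can then be aggregated per agent, which is what makes trend lines like the dashboard's agent scores possible.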

Agent Registry: Browse, search, and install published agents

Dashboard: Agent scores, recent sessions, top downloads

Trace Detail: Every tool call (models, token counts, 16 turns)

Insight Report: AI-generated analysis of agent usage patterns

Error Log: Classified errors with drill-through to sessions

Review Queue: Admin approve/reject workflow for submissions

Documentation

Full docs live at docs.observal.io

Start here                              Go to
5-minute install and first trace        Quickstart
Understand the data model               Core Concepts
Instrument your existing MCP servers    Observe MCP traffic
Run Observal on your infrastructure     Self-Hosting
Look up a CLI command                   CLI Reference
Report a bug with diagnostics           Reporting Issues

See CHANGELOG.md for recent updates.

Quick start

One-line install (recommended)

curl -fsSL https://raw.githubusercontent.com/BlazeUp-AI/Observal/main/install-server.sh | bash

Downloads a lightweight config package, runs a guided setup, pulls pre-built Docker images from GHCR, and starts the full stack. No repo clone required.

From source

git clone https://github.com/BlazeUp-AI/Observal.git && cd Observal
cp .env.example .env
make up

Connect your IDE

uv tool install observal-cli
observal auth login

This installs hooks in your IDE (Claude Code, Kiro, etc.) to automatically capture traces.

See SETUP.md for the full setup guide.

Supported IDEs

IDE           Support
Claude Code   Full — skills, hooks, MCP, rules, OTLP telemetry
Kiro CLI      Full — superpowers, hooks, MCP, steering files, OTLP telemetry
Gemini CLI    Tested — hooks, MCP, rules, OTLP telemetry
Cursor        Tested — MCP + shim telemetry, rules
VS Code       Limited — MCP + shim telemetry, rules
Copilot CLI   Limited — hooks, MCP + shim telemetry, rules
Codex CLI     Limited — rules
OpenCode      Limited — JS plugin hooks, MCP + shim telemetry, rules

Compatibility matrix and per-IDE setup: Integrations.

Tech stack

Component    Technology
Frontend     Next.js 16, React 19, Tailwind CSS 4, shadcn/ui, Recharts
Backend      Python 3.11+, FastAPI, Strawberry GraphQL, Uvicorn
Databases    PostgreSQL 16 (registry), ClickHouse (telemetry)
Queue        Redis + arq
CLI          Python, Typer, Rich
Eval engine  AWS Bedrock / OpenAI-compatible LLMs
Telemetry    OpenTelemetry Collector
Deployment   Docker Compose (10 services)

Contributing

See CONTRIBUTING.md. The short version:

  1. Fork and clone
  2. make hooks to install pre-commit hooks
  3. Create a feature branch
  4. Run make lint and make test
  5. Open a PR

See AGENTS.md for internal codebase context.

Running tests

make test      # quick
make test-v    # verbose

All tests mock external services. No Docker needed.

Community

Have a question or an idea, or want to share what you've built? Head to GitHub Discussions. Please use Discussions for questions, and open Issues for confirmed bugs and concrete feature requests.

Join the Observal Discord to chat directly with the maintainers and other community members.

Reporting issues

When filing a bug report, please attach a support bundle so maintainers can diagnose the problem quickly:

observal support bundle

This produces a .tar.gz archive containing version info, sanitized configuration, health probes, aggregate table counts, and optional system metrics. All values pass through a redaction layer — no customer data, row contents, or credentials are included. Review the bundle before sharing:

observal support inspect observal-support-*.tar.gz

Security

To report a vulnerability, please use GitHub Private Vulnerability Reporting or email contact@blazeup.app. Do not open a public issue. See SECURITY.md.

Star history

Star History Chart

License

GNU Affero General Public License v3.0 (AGPL-3.0). See LICENSE.