Amon is an intelligent AI coworker that runs locally on your desktop. It doesn't just chat with you; it actually helps you get work done: writing code, executing commands, searching for information, and managing files.
Amon features a custom three-layer agent architecture with a provider-agnostic AI streaming layer, supporting multiple LLM providers including Anthropic Claude, OpenAI, Google Gemini, and API-compatible providers out of the box.
Starting with 0.3.0, Amon no longer depends on Claude Agent SDK. The runtime is implemented in-repo with a provider-agnostic AI layer, a framework-agnostic agent core, and Electron-specific integration.
- Claude Agent SDK has been removed and replaced by Amon's own agent core and runtime.
- Settings and provider configuration moved to the new `agent.providerConfigs[]` / `agent.activeProviderId` / `agent.activeModelId` schema.
- Existing settings are migrated on a best-effort basis. Older provider-specific fields and deprecated options may need to be reconfigured manually after upgrading.
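As a rough illustration of the new schema, a settings object might look like the sketch below. Only the three documented keys (`agent.providerConfigs[]`, `agent.activeProviderId`, `agent.activeModelId`) come from the migration notes; every per-provider field shown is a hypothetical placeholder, not Amon's actual field names:

```typescript
// Illustrative sketch of the post-0.3.0 settings shape.
// The fields inside each provider config are assumptions for this example.
const settings = {
  agent: {
    providerConfigs: [
      {
        id: "my-anthropic",       // hypothetical: unique id referenced by activeProviderId
        type: "anthropic",        // hypothetical: which built-in provider to use
        apiKey: "<your-api-key>", // hypothetical: credentials field
      },
    ],
    activeProviderId: "my-anthropic", // selects one entry from providerConfigs
    activeModelId: "<model-id>",      // a model exposed by the active provider
  },
};
```
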
Let's take a look at Amon's features through screenshots.
Amon can think through your messages, execute tool calls, and complete your tasks.
Amon supports both dark and light theme modes.
Amon can display file modification diffs.
Amon supports sending image messages.
Amon allows you to add multiple API providers with built-in support for Anthropic Claude, OpenAI, Google Gemini, and API-compatible providers (GLM, MiniMax, Kimi, etc.).
Amon works on a per-workspace (folder) basis. You can set up multiple workspaces. Default workspace: `~/.amon/workspace`
Amon supports Agent Skills: you can install Skills to add specialized capabilities to Amon.
Visit the Releases page to download the installer for your platform.
Since the app is not Apple-signed, macOS may block it from running. After downloading, run the following command in the terminal to remove the quarantine attribute:
```
xattr -cr /Applications/Amon.app
```

After first launch, follow these steps to configure:
1. **Configure AI Provider**: Go to Settings → Provider, create and enable the AI provider you want to use.
2. **Create a Workspace**: Go to Settings → Workspace, create a new workspace and select a local folder as the project root. Default workspace: `~/.amon/workspace`.
3. **Start Using**: Return to the main screen, click **New Session**, select a workspace, and start chatting.
- Node.js 18+ or Bun 1.0+
- macOS / Windows / Linux
```
bun install                # Install dependencies
bun start                  # Start dev server (with hot reload)
bun run lint               # Lint code
bun run typecheck          # Type check
bun run test               # Run tests
bun run changeset          # Create a changeset
bun run version            # Apply changesets and update CHANGELOG
bun run download:binaries  # Download runtime binaries (bun, uv)
bun run package            # Create app package (no installer)
bun run make               # Create platform installers
```

```
amon-agent/
├── src/
│   ├── ai/                # Provider-agnostic AI streaming layer
│   │   ├── providers/     # Built-in providers (Anthropic, OpenAI, Google)
│   │   └── utils/         # Event stream, JSON parsing, overflow detection
│   ├── agent/             # Framework-agnostic Agent class and loop
│   ├── main/              # Electron main process
│   │   ├── agent/         # Electron-specific agent integration
│   │   ├── ipc/           # IPC communication handlers
│   │   ├── store/         # State management and persistence
│   │   ├── tools/         # 8 built-in tools (bash, read, write, edit, etc.)
│   │   ├── skills/        # Skill loading and parsing
│   │   └── workspace/     # User file loading (AGENTS.md, SOUL.md)
│   ├── renderer/          # React renderer process
│   │   ├── components/    # UI components
│   │   └── store/         # Zustand state management
│   ├── preload/           # contextBridge IPC bridge
│   ├── shared/            # Shared types, schemas, constants
│   └── locales/           # i18n files (en, zh)
├── resources/
│   ├── icons/             # App icons
│   └── [bun, uv]          # Runtime binaries
├── skills/                # Built-in skills packaged with the app
└── forge.config.ts        # Electron Forge configuration
```
Amon adopts a three-layer agent architecture, with each layer cleanly decoupled:
```
┌────────────────────────────────────────────────────────────────────┐
│ src/ai/          AI Streaming Layer   │ Provider-agnostic, multi-provider
│ (Anthropic / OpenAI / Google)         │ Unified streaming event model
├────────────────────────────────────────────────────────────────────┤
│ src/agent/       Agent Core           │ Framework-agnostic Agent + Loop
│ (State, Tools, Messages)              │ Dual-loop: tool exec + follow-up
├────────────────────────────────────────────────────────────────────┤
│ src/main/agent/  Electron Integration │ AgentService, EventAdapter
│ (IPC, Push, Persistence)              │ Session management, system prompt
└────────────────────────────────────────────────────────────────────┘
```
- **AI Layer** (`src/ai/`): Provider-agnostic streaming abstraction. Global provider registry with four built-in providers. Normalizes all responses into a unified `AssistantMessageEvent` stream.
- **Agent Layer** (`src/agent/`): Framework-agnostic `Agent` class. Dual-loop architecture: inner loop (LLM call -> tool execution -> steering check), outer loop (follow-up queue -> repeat). Tool input validated with Zod schemas.
- **Integration Layer** (`src/main/agent/`): Wires the Agent into Electron. `AgentService` resolves providers, models, skills, and workspace bootstrap files per session. `EventAdapter` bridges agent events to session store mutations and push notifications.
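The dual-loop described above can be sketched as follows. This is a minimal illustration of the pattern, not Amon's actual implementation; all names (`runAgent`, `LlmTurn`, `ToolCall`) are invented for the example:

```typescript
// Hypothetical sketch of a dual-loop agent runner.
type ToolCall = { name: string; input: unknown };
type LlmTurn = { text: string; toolCalls: ToolCall[] };

async function runAgent(
  callLlm: () => Promise<LlmTurn>,          // one LLM round trip
  execTool: (c: ToolCall) => Promise<string>, // run a single tool call
  followUps: string[],                      // outer-loop queue of queued user messages
  shouldStop: () => boolean,                // steering check (e.g. user interrupt)
): Promise<string[]> {
  const transcript: string[] = [];
  do {
    // Inner loop: call the LLM, execute any requested tools,
    // and repeat until the model stops asking for tools.
    let turn = await callLlm();
    transcript.push(turn.text);
    while (turn.toolCalls.length > 0 && !shouldStop()) {
      for (const call of turn.toolCalls) {
        transcript.push(await execTool(call));
      }
      turn = await callLlm();
      transcript.push(turn.text);
    }
    // Outer loop: drain the follow-up queue, re-entering the inner loop
    // once per queued follow-up message.
  } while (followUps.shift() !== undefined && !shouldStop());
  return transcript;
}
```

The steering check lets a user interrupt mid-run without waiting for the full tool sequence to finish, which matches the "steering check" step named above.
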
The tech stack covers: Core Frameworks, AI Layer, Frontend, Build Tools, and Data & Validation.
This project is open-sourced under the AGPL-3.0 license.