WAVS is a highly configurable AVS operator node. It allows you to easily run logic for multiple AVSs, with each service securely sandboxed from others and from the node's operating system.
WAVS uses WASI components to achieve strong isolation. Each AVS runs as a WASI component, with restricted access to system resources for security and reliability.
WAVS nodes are operated by "Operators." These participants are similar to validators in proof-of-stake blockchains:
- Operators receive voting power through delegation.
- Their nodes perform off-chain actions, which are then submitted and verified on-chain.
To serve an AVS, an operator must explicitly opt-in. This is done via an HTTP endpoint on the node, where the operator provides the address and chain of the ServiceManager contract.
The AVS team is responsible for:
- Coding all WASI components
- Deploying AVS contracts on-chain
Operators and AVS teams are completely independent. AVS teams do not have access to operator systems, ensuring a clean separation of concerns.
At a high level, we have four subsystems:
- TriggerManager: Monitors blockchain events, cron schedules, and block intervals; fires trigger events to the Dispatcher.
- Engine: Executes the WASM components inside a Wasmtime WASI sandbox; returns the operator response.
- SubmissionManager: Signs the operator response with the service's derived key and hands the signed submission to the Aggregator.
- Aggregator: Collects signatures from peer operators via P2P, runs the aggregator component once quorum is reached, and submits the result on-chain.
Each subsystem runs in its own thread; the Dispatcher orchestrates them, typically over crossbeam unbounded channels.
Administration of a node is done through an HTTP server, which holds an instance of the dispatcher and can call it directly.
Two rules of thumb apply throughout: never block (async work is spawned onto the Tokio runtime) and never crash (errors are logged via the tracing mechanism and inspected with tools such as Jaeger).
TriggerManager → Dispatcher → Engine → SubmissionManager → Aggregator → Chain
1. TriggerManager fires a `TriggerAction` to the Dispatcher when a watched event or schedule fires.
2. The Dispatcher routes the action to the Engine, which executes the matching operator WASM component.
3. The Engine returns a `WasmResponse`; the Dispatcher forwards it to the SubmissionManager.
4. The SubmissionManager signs the response and produces a `Submission`, which is handed back to the Dispatcher.
5. The Dispatcher sends the `Submission` to the Aggregator, which broadcasts it to peer operators via P2P.
6. Once the quorum threshold is met, the Aggregator invokes the workflow's aggregator WASM component (via the Engine), which returns `AggregatorAction` values.
7. The Aggregator posts the result on-chain (EVM or Cosmos service handler contract).
When a workflow's `submit` field is set to `None`, execution stops after step 3: the operator component runs and can produce side effects (e.g. posting data to an external API), but nothing is signed or submitted on-chain, and no service handler contract is required.
Submissions from multiple operators accumulate in a `QuorumQueue` keyed by `(EventId, SubmitAction)`. When the required threshold of signatures is collected, the aggregator component is invoked with the full set, allowing it to validate or select among the collected signatures before posting.
Burned (submitted) queues are retained for a configurable TTL (default 48 hours) and then cleaned up.
| Trigger | Description |
|---|---|
| `EvmContractEvent` | EVM contract log event (filtered by event signature hash) |
| `CosmosContractEvent` | Cosmos contract event (filtered by event type string) |
| `BlockInterval` | Every N blocks on a specified EVM or Cosmos chain |
| `Cron` | Cron expression schedule (with optional start/end timestamps) |
| `AtProto` / `Jetstream` | ATProto Jetstream events (Bluesky, filtered by collection NSID) |
| `Hypercore` | Appends to a Hypercore feed (identified by feed key) |
| `Manual` | HTTP endpoint trigger (dev/testing only) |
The Aggregator subsystem includes optional P2P networking for multi-operator deployments. Each registered service gets its own GossipSub topic; operators subscribe on service registration and unsubscribe on removal. Peer discovery uses either mDNS (local/dev) or Kademlia DHT (production). See P2P.md for configuration details.