This project implements an event-driven microservice architecture with communication through message queues. The system is designed for high throughput and parallel processing:
- Event-Driven Design: All components communicate asynchronously through message queues (NATS or Google Cloud Pub/Sub)
- Microservice Architecture: Separate services handle different responsibilities (subscriber, ingester, gas collector, gas refunder)
- Parallel Processing: Multiple ingester instances can run simultaneously, each with its own wallet, enabling parallel Starknet transaction execution
- Scalability: The architecture supports horizontal scaling by deploying additional instances of any component
Key components:
- Subscriber: Monitors Starknet events and publishes them to message queues
- Ingester: Consumes events from queues and executes Starknet transactions
- Gas Collector: Handles gas collection operations on demand (e.g. triggered by a Kubernetes CronJob)
- Gas Refunder: Processes gas refund requests
The project provides a devcontainer for development, capable of building and running all components of the project. Devcontainers are supported by many IDEs, including VS Code, and can also be used from the command line with supporting tools.
The devcontainer provides Rust with Cargo, Cairo, Starkli, and other tools required for Starknet development. It also starts a local Starknet node (Katana) and creates preconfigured accounts for development. The devcontainer also provides Axelar tooling needed to interact with the Axelar network. It starts a local tofnd node and provides scripts to initialize accounts and deploy the Amplifier contracts.
See the devcontainer README for more details.
Initialize the submodules:

```sh
git submodule update --init --recursive
```

Development shells are available (make sure you enable flakes).
The development shell provides tofnd, ampd, starkli, and Rust:

```sh
nix develop
```

You can also build a monolithic relayer binary for development purposes via nix. This binary is not intended for production use:

```sh
nix build .#relayer
```

```sh
# List of available commands
cargo make starknet build
cargo make starknet test
cargo make starknet fmt
cargo make starknet fmt-check

# Lint
cargo make check

# Check for unused deps
cargo make unused-deps

# Run the tests
cargo make test
cargo make coverage

# Check if the CI tasks will pass
cargo make local-ci

# Audit deps
cargo make audit

# Format the code
cargo make fmt
```

The default config and the deployed contract addresses are in relayer-config-example.toml.
Note: The gateway and gas service contract addresses in this config file are kept up-to-date and are used by the xtask commands for contract deployment.
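To give a sense of the shape of the file, a sketch of what a relayer config might contain is shown below. Every key name and value here is an illustrative assumption, not the project's actual schema; relayer-config-example.toml is the authoritative reference.

```toml
# Illustrative sketch only -- all keys are hypothetical.
# Consult relayer-config-example.toml for the real schema and values.
[starknet]
rpc_url = "http://localhost:5050"   # e.g. a local Katana node

[queue]
nats_url = "nats://localhost:4222"

[contracts]
gateway = "0x..."       # real addresses are kept up to date in the example config
gas_service = "0x..."
```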
For deploying the infrastructure using Terraform, see the Terraform README.
The project includes an xtask utility for common development workflows. See the xtask README for detailed documentation on available commands.
Make sure to start NATS:

```sh
nats-server -c nats.conf
```
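The repository ships its own nats.conf; purely for orientation, a minimal NATS server config file has roughly this shape (the values below are illustrative defaults, not the project's actual settings):

```
# Illustrative nats.conf sketch -- use the nats.conf from the repository.
port: 4222

# JetStream is only needed if the deployment relies on persistent streams.
jetstream {
  store_dir: "./data/jetstream"
}
```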
```sh
cp relayer-config-example.toml relayer-config.toml
```

Edit relayer-config.toml to include accurate data, and then run the binary:

```sh
cargo r
```

- Install the gcloud CLI
- Init:

```sh
gcloud init
```

- Set the default login:

```sh
gcloud auth application-default login
```

- Start the relayer with the gcp feature:

```sh
cargo run --no-default-features --features gcp
```

This command starts the Axelar-Starknet relayer and runs all components, including the Starknet subscriber and ingester, with the gcp feature enabled.
The starknet-subscriber binary is responsible for subscribing to Starknet events and processing them. You can run it with the following commands:
```sh
cargo run --bin starknet-subscriber --no-default-features --features nats
cargo run --bin starknet-subscriber --no-default-features --features gcp
```

The starknet-ingester binary is responsible for ingesting tasks and events from Starknet. You can run it with the following commands:

```sh
cargo run --bin starknet-ingester --no-default-features --features nats
cargo run --bin starknet-ingester --no-default-features --features gcp
```

Both the starknet-subscriber and starknet-ingester binaries include health check endpoints to ensure they are operational. These health checks validate the connectivity and functionality of the components, including the event queue publisher, task queue consumer, and state store.
To run the health checks, ensure the binaries are running and configured correctly. The health check server will expose endpoints that can be queried to verify the status of the components.
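As a sketch of how such a probe could look, the snippet below queries a health endpoint with curl. The port and path are assumptions for illustration only (the actual endpoint is determined by your relayer configuration), and the script reports either outcome rather than failing hard:

```shell
# Hypothetical health probe -- port and path are assumptions, not
# documented values; check your relayer configuration for the real ones.
HEALTH_URL="${HEALTH_URL:-http://localhost:8080/healthz}"
if curl -fsS --max-time 2 "$HEALTH_URL" >/dev/null 2>&1; then
  echo "component healthy"
else
  echo "component not reachable at $HEALTH_URL"
fi
```

A probe of this shape is convenient as a Kubernetes liveness/readiness check, since it exits quickly whether or not the component is up.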