
Commit 27e201a

docs: Revise Operator guides (#3114)
* operator guides * lint * lint * lint * lint
1 parent b99945f commit 27e201a

24 files changed

Lines changed: 1422 additions & 1348 deletions

docs/content/operator-guide/aggregator.mdx

Lines changed: 0 additions & 496 deletions
This file was deleted.
Lines changed: 236 additions & 0 deletions
---
title: Operate an Aggregator
description: "Detailed guide for operating Walrus aggregators."
keywords: ["walrus", "aggregator", "daemon", "http api", "systemd", "metrics", "large files", "operations"]
---

Run a Walrus aggregator to expose the [HTTP API](/docs/http-api/storing-blobs). The aggregator does not perform any on-chain actions and only requires specifying the address on which it listens:

```sh
$ walrus aggregator --bind-address "127.0.0.1:31415"
```

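Once the aggregator is listening, blobs can be read over plain HTTP. A minimal sketch, assuming a placeholder blob ID and the standard `/v1/blobs/<blob_id>` read path (check `YOUR_AGGREGATOR_URL/v1/api` for the authoritative routes):

```sh
# Read a blob through the local aggregator (BLOB_ID is a placeholder).
BLOB_ID="BLOB_ID"
URL="http://127.0.0.1:31415/v1/blobs/$BLOB_ID"
curl -s "$URL" -o blob.bin
```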
## Start a local daemon {#local-daemon}

Run a local Walrus daemon through the `walrus` binary using one of the following commands:

- `walrus aggregator`: Starts an aggregator that offers an HTTP interface to read blobs from Walrus.
- `walrus daemon`: Offers the combined functionality of an aggregator and publisher on the same address and port.

:::tip

If you run the aggregator without a reverse proxy, open **port 9000** on your firewall. With a reverse proxy (such as the [nginx caching setup](#nginx-caching)), only **port 443** needs to be open.

:::

## Download the client configuration {#client-config}

The aggregator requires a client configuration file. If you run the aggregator on the same host as a [storage node](/docs/operator-guide/storage-node-setup#binaries), the configuration is already available at `/opt/walrus/config/client_config.yaml`. Otherwise, download it:

<div className="outlined-tabs">

<Tabs>
<TabItem label="Mainnet" value="mainnet">

```sh
curl "https://docs.wal.app/setup/client_config_mainnet.yaml" -o /opt/walrus/config/client_config.yaml
```

</TabItem>
<TabItem label="Testnet" value="testnet">

```sh
curl "https://docs.wal.app/setup/client_config_testnet.yaml" -o /opt/walrus/config/client_config.yaml
```

</TabItem>
</Tabs>

</div>
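As an optional sanity check (assuming the `walrus` binary is on your `PATH`), confirm the file downloaded and that the client can read it; `walrus info` and the `--config` flag are the same invocations used elsewhere in this guide:

```sh
# Confirm the config file is present and non-empty, then read system
# parameters through it.
test -s /opt/walrus/config/client_config.yaml && echo "config present"
walrus --config /opt/walrus/config/client_config.yaml info
```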

## Sample `systemd` configuration

The following example shows an aggregator node that hosts an HTTP endpoint you can use to fetch data from Walrus over the web.

Run the aggregator process through the `walrus` client binary using a `systemd` service. Create the service file at `/etc/systemd/system/walrus-aggregator.service`:

```ini
[Unit]
Description=Walrus Aggregator

[Service]
User=walrus
Environment=RUST_BACKTRACE=1
Environment=RUST_LOG=info
ExecStart=/opt/walrus/bin/walrus --config /opt/walrus/config/client_config.yaml aggregator --bind-address 0.0.0.0:9000 --metrics-address 127.0.0.1:27182
Restart=always

LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```

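After creating the unit file, register and start it with the usual `systemctl` commands. The metrics check below assumes the conventional Prometheus `/metrics` path on the `--metrics-address` port from the `ExecStart` line; adjust if your build serves metrics differently:

```sh
# Reload unit files, then enable and start the aggregator in one step.
sudo systemctl daemon-reload
sudo systemctl enable --now walrus-aggregator

# Follow the service logs.
journalctl -u walrus-aggregator -f

# Spot-check the metrics endpoint (assumes the standard /metrics path).
curl -s http://127.0.0.1:27182/metrics | head
```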
## Support large files

As of Walrus `v1.38.0`, the aggregator can concatenate multiple blobs through the `/v1alpha/blobs/concat` endpoint. This endpoint enables delivery of very large files that would otherwise be unsupported because of individual blob size restrictions.

The `walrus-store-sliced.sh` script below shows how to slice and upload a very large file to Walrus. After you upload the slices, your downstream users can read the full file in two ways:

- Construct a `GET` URL that lists the blob slices in the query parameters
- Send a `POST` request with a JSON body listing the IDs

You can find details of this API in the online aggregator documentation at `YOUR_AGGREGATOR_URL/v1/api`. This endpoint is still under development, and its specification or behavior might change before it becomes stable.

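For illustration, a `GET`-style concat request might look like the sketch below. The query parameter name `blob_ids` is a hypothetical placeholder, as are the blob IDs; consult `YOUR_AGGREGATOR_URL/v1/api` for the actual parameter names and the `POST` body schema:

```sh
# Sketch of a concat read. The "blob_ids" query parameter name is
# hypothetical; check /v1/api on your aggregator for the real schema.
AGG="http://127.0.0.1:9000"
IDS="BLOB_ID_0,BLOB_ID_1,BLOB_ID_2"
URL="$AGG/v1alpha/blobs/concat?blob_ids=$IDS"
curl -s "$URL" -o reassembled.bin
```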
<details>
<summary>`walrus-store-sliced.sh`</summary>

```sh
#!/bin/bash
# Copyright (c) Walrus Foundation
# SPDX-License-Identifier: Apache-2.0

set -euo pipefail

error() {
  echo "$0: error: $1" >&2
}

note() {
  echo "$0: note: $1" >&2
}

die() {
  echo "$0: error: $1" >&2
  exit 1
}

usage() {
  echo "Usage: $0 -f <file> -s <size> [-- <walrus store args>...]"
  echo ""
  echo "Split a file into chunks and store them using walrus store."
  echo ""
  echo "OPTIONS:"
  echo "  -f <file>  Input file to split (required)"
  echo "  -s <size>  Chunk size (e.g., 10M, 100K, 1G) (required)"
  echo "  -h         Print this usage message"
  echo "  --         Delimiter for walrus store arguments"
  echo ""
  echo "EXAMPLES:"
  echo "  $0 -f large_file.txt -s 10M -- --epochs 5"
  echo "  $0 -f video.mp4 -s 100M -- --epochs max --force"
  echo ""
  echo "The chunks will be named: basename_0.ext, basename_1.ext, etc."
  echo "Chunks are automatically deleted when the script exits."
}

file=""
chunk_size=""
walrus_args=()

# Parse arguments
while [[ $# -gt 0 ]]; do
  case "$1" in
    -f)
      file="$2"
      shift 2
      ;;
    -s)
      chunk_size="$2"
      shift 2
      ;;
    -h)
      usage
      exit 0
      ;;
    --)
      shift
      walrus_args=("$@")
      break
      ;;
    *)
      error "Unknown option: $1"
      usage
      exit 1
      ;;
  esac
done

# Validate required arguments
if [[ -z "$file" ]]; then
  error "input file (-f) is required"
  usage
  exit 1
fi

if [[ -z "$chunk_size" ]]; then
  error "chunk size (-s) is required"
  usage
  exit 1
fi

if [[ ! -f "$file" ]]; then
  die "file not found: $file"
fi

# Extract basename and extension
file_basename=$(basename "$file")
file_name="${file_basename%.*}"
file_ext="${file_basename##*.}"

# Handle case where file has no extension
if [[ "$file_name" == "$file_ext" ]]; then
  file_ext=""
else
  file_ext=".$file_ext"
fi

# Create temp directory for chunks; delete it (and the chunks) on exit
temp_dir=$(mktemp -d -t walrus-chunks-XXXXXX)
trap 'rm -rf "$temp_dir"' EXIT
note "splitting $file into chunks of size $chunk_size in $temp_dir..."

# Split the file into chunks (split uses alphabetic suffixes by default)
split -b "$chunk_size" "$file" "$temp_dir/chunk_"

# Rename chunks to the desired format: basename_i.ext
chunk_files=()
i=0
for chunk in "$temp_dir"/chunk_*; do
  if [[ "$file_ext" == "" ]]; then
    new_name="$temp_dir/${file_name}_${i}"
  else
    new_name="$temp_dir/${file_name}_${i}${file_ext}"
  fi
  mv "$chunk" "$new_name"
  chunk_files+=("$new_name")
  # Avoid ((i++)): it returns nonzero when i is 0, which aborts under set -e.
  i=$((i + 1))
done

note "created ${#chunk_files[@]} chunks"

# Display the chunks
for chunk in "${chunk_files[@]}"; do
  note "  - $(basename "$chunk")"
done

# Call walrus store for each chunk individually
note "storing ${#chunk_files[@]} chunks..."

for chunk_file in "${chunk_files[@]}"; do
  note "running: walrus store ${walrus_args[*]} $chunk_file"

  # Capture the real exit code; inside an `if ! cmd` branch, `$?` holds the
  # negated status, not the command's.
  exit_code=0
  walrus store "${walrus_args[@]}" "$chunk_file" || exit_code=$?
  if [[ $exit_code -ne 0 ]]; then
    error "✗ walrus store failed with exit code: $exit_code"
    note "failed to store entire file. please address issue above and try again."
    exit "$exit_code"
  fi
done

note "✓ all chunks stored successfully"
```

</details>

For more information about maximum blob sizes, run `walrus info`.
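If the concat endpoint is not available on your aggregator, slices stored by the script can also be reassembled client-side by fetching them in upload order over the standard `/v1/blobs/<blob_id>` read path. The blob IDs below are placeholders for the IDs that `walrus store` prints:

```sh
# Fetch each slice in upload order and append them into one file.
# BLOB_ID_* are placeholders for the IDs printed by `walrus store`.
AGG="http://127.0.0.1:9000"
for id in BLOB_ID_0 BLOB_ID_1 BLOB_ID_2; do
  curl -s "$AGG/v1/blobs/$id"
done > reassembled.bin
```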
