Requirements:

- Docker
- S3 client (e.g. s3cmd):

  ```shell
  brew install s3cmd
  ```

- Chorus management CLI chorctl:

  ```shell
  brew install clyso/tap/chorctl
  ```
Structure:

```
.
├── docker-compose
│   ├── docker-compose.yml    # docker-compose file
│   ├── FakeS3Dockerfile      # Dockerfile for in-memory S3 endpoint
│   ├── proxy-conf.yaml       # example config for chorus-proxy
│   ├── README.md
│   ├── s3cmd-follower.conf   # s3cmd credentials for follower storage
│   ├── s3cmd-main.conf       # s3cmd credentials for main storage
│   ├── s3cmd-proxy.conf      # s3cmd credentials for proxy storage
│   ├── s3-credentials.yaml   # chorus common config with S3 credentials
│   └── worker-conf.yaml      # example config for chorus-worker
```
Please also review the docker-compose.yml file. It contains comments about the services and their configuration.
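The s3-credentials.yaml file is what defines the `main` and `follower` storages and the `user1` credentials that `chorctl storage` reports later in this walkthrough. As a rough, hypothetical sketch of its shape (the key names below are assumptions for illustration — the shipped file is the authoritative reference):

```yaml
# Illustrative sketch only — key names here are assumptions;
# see the real s3-credentials.yaml for the exact schema.
storage:
  storages:
    main:                                    # storage name as listed by `chorctl storage`
      address: http://fake-s3-main:9000
      provider: Other
      isMain: true                           # marks the replication source
    follower:
      address: http://fake-s3-follower:9000
      provider: Other
```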
To run chorus with docker compose:
- Clone the repo:

  ```shell
  git clone https://github.com/clyso/chorus.git && cd chorus
  ```

- Start chorus worker and proxy with fake main and follower S3 backends:

  ```shell
  docker-compose -f ./docker-compose/docker-compose.yml --profile fake --profile proxy up
  ```
- Check the chorus config with the CLI:

  ```shell
  % chorctl storage
  NAME         ADDRESS                        PROVIDER   USERS
  follower     http://fake-s3-follower:9000   Other      user1
  main [MAIN]  http://fake-s3-main:9000       Other      user1
  ```

  And check that there are no ongoing replications with `chorctl dash`.

- Create a bucket named `test` in the `main` storage:

  ```shell
  s3cmd mb s3://test -c ./docker-compose/s3cmd-main.conf
  ```

  Check that the `main` bucket is empty:

  ```shell
  s3cmd ls s3://test -c ./docker-compose/s3cmd-main.conf
  ```
- Upload the contents of this directory into `main`:

  ```shell
  s3cmd sync ./docker-compose/ s3://test -c ./docker-compose/s3cmd-main.conf
  ```

  Check that the `follower` bucket does not exist yet:

  ```shell
  % s3cmd ls s3://test -c ./docker-compose/s3cmd-follower.conf
  ERROR: Bucket 'test' does not exist
  ERROR: S3 error: 404 (NoSuchBucket): The specified bucket does not exist
  ```

- See the `test` bucket available for replication:

  ```shell
  % chorctl repl buckets -u user1 -f main -t follower
  BUCKET
  test
  ```

- Enable replication for the bucket:

  ```shell
  chorctl repl add -u user1 -b test -f main -t follower
  ```

  Tip: for user-level replication (all existing and future buckets), omit the `-b` flag:

  ```shell
  chorctl repl add -u user1 -f main -t follower
  ```
- Check replication progress with:

  ```shell
  chorctl dash
  ```

- Check that the data is actually replicated to the follower:

  ```shell
  s3cmd ls s3://test -c ./docker-compose/s3cmd-follower.conf
  ```

- Now make some live changes to the bucket through the proxy:

  ```shell
  s3cmd del s3://test/worker-conf.yaml -c ./docker-compose/s3cmd-proxy.conf
  ```

- List the main and follower contents again to see that the file was removed from both. Feel free to play around with the storages using `s3cmd` and the preconfigured configs `s3cmd-main.conf`, `s3cmd-follower.conf`, and `s3cmd-proxy.conf`.
Try adding more worker instances to speed up the replication process and observe how they balance the load. To do this, duplicate the existing worker service in docker-compose.yml, omitting its ports to avoid conflicts:

```yaml
worker2:
  depends_on:
    - redis
  build:
    context: ../
    args:
      SERVICE: worker
  volumes:
    - type: bind
      source: ./worker-conf.yaml
      target: /bin/config/config.yaml
    - type: bind
      source: ./s3-credentials.yaml
      target: /bin/config/override.yaml
```

The same can be done for worker3, worker4, etc., and likewise for the proxy service.
Replace the S3 credentials in ./s3-credentials.yaml with your own S3 storages and start docker-compose without the fake backends:

```shell
docker-compose -f ./docker-compose/docker-compose.yml --profile proxy up
```

Explore more features with chorctl (`chorctl --help`) and the WebUI.
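When switching to real backends, the same s3-credentials.yaml is edited to point at your own endpoints. A hypothetical sketch, assuming the same shape as the shipped file (all endpoint and provider values below are placeholders, not real defaults):

```yaml
# Hypothetical sketch — every value is a placeholder; keep the key
# names used by the shipped s3-credentials.yaml.
storage:
  storages:
    main:
      address: https://s3.your-main-storage.example       # placeholder endpoint
      provider: Other
      isMain: true
    follower:
      address: https://s3.your-follower-storage.example   # placeholder endpoint
      provider: Other
```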
To tear down:

```shell
docker-compose -f ./docker-compose/docker-compose.yml down
```

To tear down and wipe all replication metadata:

```shell
docker-compose -f ./docker-compose/docker-compose.yml down -v
```
To rebuild images with the latest source changes, add `--build`:

```shell
docker-compose -f ./docker-compose/docker-compose.yml --profile fake --profile proxy up --build
```

To use chorus images from the registry instead of building from source, replace in docker-compose.yml:
```yaml
worker:
  build:
    context: ../
    args:
      SERVICE: worker
```

with:

```yaml
worker:
  image: "harbor.clyso.com/chorus/worker:latest"
```

And do the same for the web-ui and proxy services.