Run the Docker images to emulate a cloud deployment locally on your machine. Docker and Docker Compose are required. The easiest way is to install Docker Desktop, but Podman is also supported if you manually install Docker Compose.
Note
If running on an Apple Silicon Mac, Docker and Podman run a Linux VM where the images are built and run. We have hit issues when the VM does not have enough disk space or memory. We've found the following limits to work okay:
- 8 CPU
- 12GB memory
- 200GB disk space
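If you use Podman, these VM limits can be set when creating the machine. A sketch using the standard `podman machine` flags (memory is in MB, disk size in GB); Docker Desktop users set the equivalent values in its Settings UI instead:

```shell
# Create and start a Podman VM sized per the limits above.
podman machine init --cpus 8 --memory 12288 --disk-size 200
podman machine start
```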
To get started, run `docker compose build` to build the required images. Then run `docker compose up` to start the deployment. The sites are available at:
- partners: http://localhost:3001
- public: http://localhost:3000
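Once the containers are up and healthy, a quick smoke test is to request each site and check for an HTTP status code:

```shell
# Each should print an HTTP status (e.g. 200) once the site is serving.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:3000
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:3001
```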
Optionally, start pgadmin for looking around the database with `COMPOSE_PROFILES=pgadmin docker compose up` or `docker compose start pgadmin`:
- pgadmin: http://localhost:3200
- username: [email protected]
- password: abcdef
- database user: bloom_readonly
- database password: bloom_readonly_pw
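The read-only database credentials above also work from `psql` without starting pgadmin, by exec-ing into the db container. A sketch; the database name `bloom_prisma` is an assumption taken from the `DATABASE_URL` used elsewhere in this compose file:

```shell
# Open a read-only psql session against the running db container.
docker compose exec -e PGPASSWORD=bloom_readonly_pw db \
  psql -U bloom_readonly -d bloom_prisma
```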
Optionally, start the observability stack to see metrics with `COMPOSE_PROFILES=observability docker compose up`:
- grafana: http://localhost:3300
To include dashboard changes in code review, make the edits in the Grafana web UI, then click the 'Save dashboard' button. Once the dashboard is saved in the web UI, sync the changes to the git repo by running:
`docker compose restart grafana-sync`

The following containers are defined in the docker-compose.yml file:
Will always run:
- lb: an nginx load balancer that fronts the api, partners, and public containers. A LB is required to run multiple replicas of these containers.
- db: a postgres database.
- dbinit: runs the db init script.
- dbseed: runs the db seed script.
- api: the api.
- partners: the partners site.
- public: the public site.
Will run with COMPOSE_PROFILES=ci:
- dbreadonlycheck: runs the docker-compose.check.sql script that validates the created DB users.
Will run with COMPOSE_PROFILES=pgadmin:
- pgadmin: runs pgadmin with a connection to the DB configured.
Will run with COMPOSE_PROFILES=observability:
- otel-collector: runs the aws-otel-collector process that scrapes metrics data from the api, partners, and public processes.
- prometheus: runs a Prometheus server that stores the metric values.
- grafana: runs a Grafana server with dashboards for the API, partners, and public sites.
- grafana-sync: runs a dashboard sync script that allows editing graphs and dashboards in local Grafana, then submitting them for code review.
Build, start, and tear down containers with the following commands. Each command takes an optional list of containers to operate on. By default the command operates on all containers.
- `docker compose build [container...]`
- `docker compose up [container...]`
- `docker compose down [container...]`
For example, to rebuild just the api and partners site docker images, run:
`docker compose build api partners`

Resources:
- vCPU: 1
- memory: 2 GiB
Node JS:
NODE_OPTIONS='--max-semi-space-size=64 --max-old-space-size=2048'
Resources:
- vCPU: 2
- memory: 4 GiB
Node JS:
NODE_OPTIONS='--max-semi-space-size=128 --max-old-space-size=4096'
Resources:
- vCPU: 2
- memory: 6 GiB
The container vCPU and memory limits were set by starting with the smallest AWS ECS Fargate limits then increasing when the container needed more during testing.
The Node JS memory settings were set by manually testing different combinations of settings. There are likely more optimal configurations that would require a more analytical testing approach to find.
The default memory limits are described in https://developers.redhat.com/articles/2025/10/10/nodejs-20-memory-management-containers. The limits were increased for the Next.js sites because they run `next build` at container start time. The current `next build` requires access to a deployed Bloom api and database to succeed. The current container resource usage pattern is to spike to the limit during `next build`, then drop to minimal usage once the server is serving requests.
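Limits like these are expressed in a compose file via `deploy.resources` and the service's environment. A hedged sketch only; the actual service definitions live in docker-compose.yml and may differ:

```yaml
services:
  public:
    environment:
      # Node heap settings discussed above.
      NODE_OPTIONS: "--max-semi-space-size=128 --max-old-space-size=4096"
    deploy:
      resources:
        limits:
          cpus: "2"
          memory: 4G
```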
Hacking the `next build` process was explored in #5437 (comment). Configuring a shared build cache is the current planned workaround for reducing container start latency and resource utilization, but it is not yet implemented.
The compose stack is tested in the docker_compose_ci.yml GitHub workflow.
By default the api, partners, and public containers each run with 1 replica. To control the
number of replicas, use the API_REPLICAS, PARTNERS_REPLICAS, and PUBLIC_REPLICAS environment
variables. For example, the following command runs 3 replicas for each service:
`API_REPLICAS=3 PARTNERS_REPLICAS=3 PUBLIC_REPLICAS=3 docker compose up`

By default the containers are just stopped, not deleted, when you exit the `docker compose up`
command. If you run docker compose up again the containers will be restarted. This means the db
container will have the prior state from the last run. This causes the dbseed container to fail
because it already seeded the database on the previous run. Either:
- Run `docker compose down` between runs of `docker compose up`. This wipes the database state clean.
- Run `docker compose up lb db api partners public --no-deps` to restart just the lb, db, api, partners, and public containers.
By default `docker compose up` will not rebuild images. Rebuild with `docker compose build`.
To run a command inside one of the containers defined in the docker-compose.yml file, first get its container ID:
`docker container ls`

For example, the ID of the public site container in the following example output is `62de5c066288`:
```
$ docker container ls
CONTAINER ID   IMAGE                                     COMMAND               CREATED          STATUS                    PORTS                    NAMES
5b4b92589171   docker.io/library/postgres:18             postgres              13 minutes ago   Up 13 minutes (healthy)   5432/tcp                 bloom-db-1
3fb2c2f39540   docker.io/library/bloom-api:latest        /bin/bash -c yarn...  13 minutes ago   Up 13 minutes (healthy)   0.0.0.0:3100->3100/tcp   bloom-api-1
d111808885be   docker.io/library/bloom-partners:latest   /bin/bash -c yarn...  13 minutes ago   Up 13 minutes             0.0.0.0:3001->3001/tcp   bloom-partners-1
62de5c066288   docker.io/library/bloom-public:latest     /bin/bash -c yarn...  13 minutes ago   Up 13 minutes             0.0.0.0:3000->3000/tcp   bloom-public-1
```
Then, use docker exec to run a command. For example, the following command runs ls /app/sites/public/.next/server/chunks in the public site container:
`docker exec 62de5c066288 ls /app/sites/public/.next/server/chunks`

To run a command that expects stdin, like bash, use `docker exec -it`. For example, the following
command runs a bash process inside the public site container:
`docker exec -it 62de5c066288 /bin/bash`

The docker-compose.yml file creates a docker network called `bloom_default`. To run a new container
inside the network, use the --network=bloom_default flag in docker container run. For example, the
following command starts a python docker image and runs a bash shell in it. Then in the shell it
curls the api and pretty-prints the result using python:
```
$ docker container run --network=bloom_default --rm -it python:latest /bin/bash
root@c89395fd8af3:/# curl api:3100/jurisdictions | python -m json
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 26650  100 26650    0     0  1824k      0 --:--:-- --:--:-- --:--:-- 1858k
[
    {
        "id": "1f5639e2-5de0-40b3-8e32-ed5dca4d22f2",
        "createdAt": "2025-10-20T23:01:27.920Z",
        "updatedAt": "2025-10-20T23:01:27.920Z",
        "name": "Revitalizing Rosewood",
        "notificationsSignUpUrl": "https://www.exygy.com",
        "languages": [
            "en",
...
```
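As an aside on the exec examples above, `docker compose exec` can address containers by compose service name, which avoids the container-ID lookup entirely:

```shell
# Equivalent to docker exec -it <public container ID> /bin/bash,
# but addressed by the compose service name.
docker compose exec public /bin/bash
```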
The `--rm` flag removes the container when the command exits. Otherwise the container exits but is kept with an `Exited` status. Run `docker container ls -a` to list all containers, including ones that are not running.
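Stopped containers left behind this way can be cleaned up in bulk:

```shell
# Remove all stopped containers (prompts for confirmation; add -f to skip).
docker container prune
```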
The database is seeded with the `yarn:dbseed:staging` script by default. To use the dev seed script, edit the `command:` field for the dbseed service in the docker-compose.yml file:
```yaml
dbseed:
  build:
    context: ./api
    dockerfile: Dockerfile.dbseed.dev
  restart: "no" # quoted so YAML does not parse it as a boolean
  environment:
    DATABASE_URL: "postgres://postgres:example@db:5432/bloom_prisma"
  command:
    - "yarn"
    - "db:seed:development" # <- change this line to your desired DB seed
                            #    script in ./api/package.json
  depends_on:
    api:
      condition: service_healthy
```
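After editing the seed command, the change can be picked up without restarting the whole stack by recreating just that service (assuming the db and api containers are already up and healthy):

```shell
docker compose up dbseed --no-deps --force-recreate
```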