Enhance Docker entrypoint and update migration process#946

Open
goldflag wants to merge 3 commits into master from migrations

Conversation

goldflag (Collaborator) commented Mar 20, 2026

  • Added a wait mechanism for PostgreSQL readiness in the Docker entrypoint script to ensure the database is available before running migrations.
  • Changed the migration command from db:push to db:migrate for improved migration handling.
  • Updated Dockerfile to include the drizzle directory, ensuring all necessary files are copied during the build process.

Summary by CodeRabbit

Release Notes

  • Bug Fixes

    • Improved application startup reliability by ensuring the database is ready before running migrations, preventing initialization failures.
  • Chores

    • Finalized database schema structure with proper versioning and migration management.
    • Removed deprecated user data field.

vercel bot commented Mar 20, 2026

The latest updates on your projects:

Project: rybbit | Deployment: Ready | Actions: Preview, Comment | Updated (UTC): Mar 20, 2026 4:12am

coderabbitai bot (Contributor) commented Mar 20, 2026

📝 Walkthrough

This change establishes the database initialization infrastructure for the application by introducing a Drizzle ORM migration system, an SQL schema definition, and Docker entrypoint startup sequencing with PostgreSQL readiness checks; it also updates the schema to remove a deprecated Stripe field and add cascading delete constraints.

Changes

Docker Infrastructure (server/Dockerfile, server/docker-entrypoint.sh)
Added drizzle directory copy to the runtime image; implemented PostgreSQL readiness polling with configurable host/port/user/db parameters; changed the migration command from `db:push --force` to `db:migrate`.

Database Schema Definition (server/src/db/postgres/schema.ts)
Removed the `stripeCustomerId` column from the `user` table; added `.onDelete("cascade")` constraints to foreign keys in the `uptimeMonitorStatus`, `uptimeAlerts`, and `uptimeAlertHistory` tables.

Drizzle Migration Files (server/drizzle/meta/0000_snapshot.json, server/drizzle/meta/_journal.json, server/drizzle/0000_magenta_firebird.sql)
Introduced a complete database schema snapshot with 25 tables and 1 enum type covering authentication, organization, telemetry, and uptime monitoring; added a migration journal entry; generated a comprehensive SQL migration with idempotent table/constraint/index creation and foreign key definitions.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 Hop hop, the database takes shape!
Drizzle migrations now escape,
Postgres waits with patient cheer,
While schemas cascade crystal clear. 🌱✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check: Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: Passed. The title accurately captures the two main changes: enhancing the Docker entrypoint with PostgreSQL wait logic and updating the migration process from db:push to db:migrate.
  • Docstring Coverage: Passed. No functions found in the changed files to evaluate docstring coverage; skipping the docstring coverage check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@server/docker-entrypoint.sh`:
- Around line 4-9: The PostgreSQL readiness loop using pg_isready should include
a max retry or timeout to avoid hanging indefinitely; modify the loop around the
pg_isready invocation to use a configurable POSTGRES_MAX_RETRIES (or
POSTGRES_WAIT_TIMEOUT) and a retry counter or elapsed-time check, incrementing
the counter each sleep and aborting with a clear error log and non-zero exit
(process exit) when the limit is reached; update the echo messages to include
the retry count and ensure the script exits if pg_isready against
POSTGRES_HOST/POSTGRES_PORT/POSTGRES_USER/POSTGRES_DB fails after the allowed
attempts.
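The elapsed-time variant mentioned in that prompt could be sketched as a small POSIX-sh helper; `wait_for` and `POSTGRES_WAIT_TIMEOUT` are illustrative names, not part of the PR:

```shell
#!/bin/sh
# Illustrative bounded-wait helper: polls a command until it succeeds or a
# timeout (in seconds) elapses. The entrypoint would call it with its
# pg_isready invocation.
wait_for() {
  cmd="$1"
  timeout="${2:-30}"
  start="$(date +%s)"
  until sh -c "$cmd" >/dev/null 2>&1; do
    if [ $(( $(date +%s) - start )) -ge "$timeout" ]; then
      echo "Timed out after ${timeout}s waiting for: $cmd" >&2
      return 1
    fi
    sleep 1
  done
}

# Hypothetical usage with the entrypoint's existing defaults:
# wait_for 'pg_isready -h "${POSTGRES_HOST:-postgres}" -p "${POSTGRES_PORT:-5432}" -q' "${POSTGRES_WAIT_TIMEOUT:-30}"
```

Unlike a fixed retry count, the elapsed-time form keeps the timeout meaningful even if the sleep interval changes later.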

In `@server/drizzle/0000_premium_jubilee.sql`:
- Around line 518-522: The migration's FK clauses for uptime_monitor_status,
uptime_alerts, and uptime_alert_history currently use ON DELETE no action but
schema.ts sets onDelete: "cascade" and deleteMonitor (deleteMonitor.ts) only
removes uptime_monitor_status, causing FK violations; fix by either updating the
migration DO block to change the FOREIGN KEY constraints for monitor_id on
tables uptime_monitor_status, uptime_alerts, and uptime_alert_history to use ON
DELETE CASCADE (so DB enforces cascade deletes) or modify the deleteMonitor
handler (deleteMonitor.ts) to explicitly delete rows from uptime_alert_history
and uptime_alerts for the monitor before deleting
uptime_monitor_status/uptime_monitors, and ensure schema.ts remains consistent
with the chosen approach.
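The first option above (letting the database enforce the cascades) could be sketched as a script that emits the ALTER statements. The constraint names follow the Drizzle convention that appears elsewhere in this review (`<table>_monitor_id_uptime_monitors_id_fk`) and should be verified against the actual migration before applying:

```shell
#!/bin/sh
# Emit ALTER TABLE statements that re-create each monitor_id foreign key
# with ON DELETE CASCADE. Apply with: gen_cascade_sql | psql "$DATABASE_URL"
gen_cascade_sql() {
  for t in uptime_monitor_status uptime_alerts uptime_alert_history; do
    cat <<SQL
ALTER TABLE "$t"
  DROP CONSTRAINT IF EXISTS "${t}_monitor_id_uptime_monitors_id_fk",
  ADD CONSTRAINT "${t}_monitor_id_uptime_monitors_id_fk"
    FOREIGN KEY ("monitor_id") REFERENCES "uptime_monitors"("id") ON DELETE CASCADE;
SQL
  done
}

gen_cascade_sql
```

For a Drizzle-managed schema the cleaner route is still regenerating the migration from an updated schema.ts; this sketch only shows what the DB-level fix amounts to.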

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 1472448d-06f6-4d1a-81d3-3fd02e56c857

📥 Commits

Reviewing files that changed from the base of the PR and between 36f4c5e and 6ce5367.

📒 Files selected for processing (5)
  • server/Dockerfile
  • server/docker-entrypoint.sh
  • server/drizzle/0000_premium_jubilee.sql
  • server/drizzle/meta/0000_snapshot.json
  • server/drizzle/meta/_journal.json

Comment on lines +4 to +9
# Wait for PostgreSQL to be ready (postgresql-client is already in the image)
echo "Waiting for PostgreSQL..."
until pg_isready -h "${POSTGRES_HOST:-postgres}" -p "${POSTGRES_PORT:-5432}" -U "${POSTGRES_USER:-frog}" -d "${POSTGRES_DB:-analytics}" -q; do
sleep 1
done
echo "PostgreSQL is ready."
⚠️ Potential issue | 🟡 Minor

Add a timeout or max retry count to the PostgreSQL wait loop.

The current loop will run indefinitely if PostgreSQL is unreachable or misconfigured. This can cause silent deployment failures where the container appears stuck without clear indication of the problem.

🛡️ Proposed fix with timeout
 # Wait for PostgreSQL to be ready (postgresql-client is already in the image)
 echo "Waiting for PostgreSQL..."
+MAX_RETRIES=30
+RETRY_COUNT=0
-until pg_isready -h "${POSTGRES_HOST:-postgres}" -p "${POSTGRES_PORT:-5432}" -U "${POSTGRES_USER:-frog}" -d "${POSTGRES_DB:-analytics}" -q; do
+until pg_isready -h "${POSTGRES_HOST:-postgres}" -p "${POSTGRES_PORT:-5432}" -U "${POSTGRES_USER:-frog}" -d "${POSTGRES_DB:-analytics}" -q; do
+  RETRY_COUNT=$((RETRY_COUNT + 1))
+  if [ "$RETRY_COUNT" -ge "$MAX_RETRIES" ]; then
+    echo "PostgreSQL did not become ready after $MAX_RETRIES attempts. Exiting."
+    exit 1
+  fi
+  echo "Waiting for PostgreSQL... (attempt $RETRY_COUNT/$MAX_RETRIES)"
   sleep 1
 done
 echo "PostgreSQL is ready."

- Added a new journal entry for 'deep_penance' with version 7 to the _journal.json file.
- Removed deprecated fields 'stripeCustomerId' and associated comments from the PostgreSQL schema to streamline the database structure.
coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@server/drizzle/meta/0001_snapshot.json`:
- Around line 1766-1779: The explicit foreignKey() constraint blocks for
uptime_monitor_status and uptime_alerts currently omit onDelete causing "no
action" in the snapshot; update the foreignKey(...) definitions that reference
uptimeMonitors.id (the blocks that name
uptime_monitor_status_monitor_id_uptime_monitors_id_fk and
uptime_alerts_monitor_id_uptime_monitors_id_fk and that use table.monitorId /
uptime_alerts.monitorId) to include onDelete: "cascade", then regenerate the
migration snapshot so the JSON reflects cascade deletion.
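After regenerating the snapshot, a quick (and admittedly crude, grep-based) sanity check could look like the following; `check_cascades` is an illustrative helper, not part of the PR:

```shell
#!/bin/sh
# Counts monitor_id foreign keys vs. cascade onDelete entries in a Drizzle
# snapshot file; if the numbers disagree, some FK is likely still "no action".
# This is text matching, not JSON parsing, so treat a mismatch as a prompt
# to inspect the file, not as proof.
check_cascades() {
  fks=$(grep -c '_monitor_id_uptime_monitors_id_fk' "$1")
  cascades=$(grep -c '"onDelete": *"cascade"' "$1")
  echo "monitor_id FKs: $fks, cascade onDelete rules: $cascades"
}

# e.g. check_cascades server/drizzle/meta/0001_snapshot.json
```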

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 04ab6b40-4393-4d9b-b08d-892fa90bc1fc

📥 Commits

Reviewing files that changed from the base of the PR and between 6ce5367 and e4c6f3e.

📒 Files selected for processing (4)
  • server/drizzle/0001_deep_penance.sql
  • server/drizzle/meta/0001_snapshot.json
  • server/drizzle/meta/_journal.json
  • server/src/db/postgres/schema.ts
💤 Files with no reviewable changes (1)
  • server/src/db/postgres/schema.ts
✅ Files skipped from review due to trivial changes (1)
  • server/drizzle/0001_deep_penance.sql
🚧 Files skipped from review as they are similar to previous changes (1)
  • server/drizzle/meta/_journal.json

- Deleted the SQL files '0000_premium_jubilee.sql' and '0001_deep_penance.sql' to clean up the database schema.
- Updated foreign key constraints in the schema to use 'cascade' on delete, ensuring proper data integrity during deletions.
- Removed the 'stripeCustomerId' column from the user table to streamline the database structure.
coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@server/drizzle/0000_magenta_firebird.sql`:
- Around line 140-147: Remove the redundant single-column indexes that duplicate
PRIMARY/UNIQUE constraints (e.g., drop member_site_access_member_idx,
user_aliases_anon_idx, user_profiles_site_idx) since the PRIMARY KEY on "id" and
UNIQUE constraint member_site_access_unique("member_id","site_id") already
create indexes; then add supporting indexes on the foreign key columns used by
the four uptime tables that currently rely on CASCADE DELETE so DELETEs don't
trigger full-table scans—identify the uptime tables and create indexes on their
FK columns referencing the parent site/member rows (ensure index names are
descriptive and not duplicative of existing constraints).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: bd189512-4d60-4548-abc4-a7100e6adb2e

📥 Commits

Reviewing files that changed from the base of the PR and between e4c6f3e and c959278.

📒 Files selected for processing (4)
  • server/drizzle/0000_magenta_firebird.sql
  • server/drizzle/meta/0000_snapshot.json
  • server/drizzle/meta/_journal.json
  • server/src/db/postgres/schema.ts
✅ Files skipped from review due to trivial changes (1)
  • server/drizzle/meta/_journal.json
🚧 Files skipped from review as they are similar to previous changes (2)
  • server/src/db/postgres/schema.ts
  • server/drizzle/meta/0000_snapshot.json

Comment on lines +140 to +147
CREATE TABLE IF NOT EXISTS "member_site_access" (
"id" serial PRIMARY KEY NOT NULL,
"member_id" text NOT NULL,
"site_id" integer NOT NULL,
"created_at" timestamp DEFAULT now() NOT NULL,
"created_by" text,
CONSTRAINT "member_site_access_unique" UNIQUE("member_id","site_id")
);

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
python - <<'PY'
from pathlib import Path

sql = Path("server/drizzle/0000_magenta_firebird.sql").read_text()

checks = {
    "member_site_access_unique(member_id,site_id)": 'UNIQUE("member_id","site_id")',
    "member_site_access_member_idx": 'ON "member_site_access" USING btree ("member_id")',
    "user_aliases_unique(site_id,anonymous_id)": 'UNIQUE("site_id","anonymous_id")',
    "user_aliases_anon_idx": 'ON "user_aliases" USING btree ("site_id","anonymous_id")',
    "user_profiles_pk(site_id,user_id)": 'PRIMARY KEY("site_id","user_id")',
    "user_profiles_site_idx": 'ON "user_profiles" USING btree ("site_id")',
    "uptime_alert_history_alert_id_idx": 'ON "uptime_alert_history" USING btree ("alert_id")',
    "uptime_alert_history_monitor_id_idx": 'ON "uptime_alert_history" USING btree ("monitor_id")',
    "uptime_alerts_monitor_id_idx": 'ON "uptime_alerts" USING btree ("monitor_id")',
    "uptime_incidents_monitor_id_idx": 'ON "uptime_incidents" USING btree ("monitor_id")',
}

for label, marker in checks.items():
    print(f"{label}: {'present' if marker in sql else 'missing'}")
PY

Repository: rybbit-io/rybbit

Length of output: 476


🏁 Script executed:

#!/bin/bash
# Get file size and read the relevant sections
wc -l server/drizzle/0000_magenta_firebird.sql

# Read the specified line ranges to understand the structure
echo "=== Lines 140-147 (member_site_access) ==="
sed -n '140,147p' server/drizzle/0000_magenta_firebird.sql

echo -e "\n=== Lines 233-250 (user_aliases table) ==="
sed -n '233,250p' server/drizzle/0000_magenta_firebird.sql

echo -e "\n=== Lines 252-268 (user_profiles table) ==="
sed -n '252,268p' server/drizzle/0000_magenta_firebird.sql

echo -e "\n=== Lines 328-344 (uptime tables) ==="
sed -n '328,344p' server/drizzle/0000_magenta_firebird.sql

echo -e "\n=== Lines 547-552 (uptime_incidents) ==="
sed -n '547,552p' server/drizzle/0000_magenta_firebird.sql

Repository: rybbit-io/rybbit

Length of output: 3065


🏁 Script executed:

#!/bin/bash
# Search for foreign key constraints
echo "=== Foreign Key Constraints ==="
rg -A 2 'REFERENCES|CONSTRAINT.*FOREIGN' server/drizzle/0000_magenta_firebird.sql

echo -e "\n=== All indexes in the file ==="
rg 'CREATE INDEX' server/drizzle/0000_magenta_firebird.sql

Repository: rybbit-io/rybbit

Length of output: 8493


Remove redundant indexes and add indexes for CASCADE DELETE foreign keys.

The three existing indexes (member_site_access_member_idx, user_aliases_anon_idx, and user_profiles_site_idx) are redundant with their corresponding UNIQUE/PRIMARY KEY constraints, which already create indexes. Meanwhile, the four uptime tables have CASCADE DELETE foreign keys without any supporting indexes, causing unnecessary full-table scans during delete operations.

Suggested index reshuffle
-CREATE INDEX IF NOT EXISTS "member_site_access_member_idx" ON "member_site_access" USING btree ("member_id");--> statement-breakpoint
 CREATE INDEX IF NOT EXISTS "member_site_access_site_idx" ON "member_site_access" USING btree ("site_id");--> statement-breakpoint
 CREATE INDEX IF NOT EXISTS "uptime_monitor_status_updated_at_idx" ON "uptime_monitor_status" USING btree ("updated_at");--> statement-breakpoint
+CREATE INDEX IF NOT EXISTS "uptime_alert_history_alert_id_idx" ON "uptime_alert_history" USING btree ("alert_id");--> statement-breakpoint
+CREATE INDEX IF NOT EXISTS "uptime_alert_history_monitor_id_idx" ON "uptime_alert_history" USING btree ("monitor_id");--> statement-breakpoint
+CREATE INDEX IF NOT EXISTS "uptime_alerts_monitor_id_idx" ON "uptime_alerts" USING btree ("monitor_id");--> statement-breakpoint
+CREATE INDEX IF NOT EXISTS "uptime_incidents_monitor_id_idx" ON "uptime_incidents" USING btree ("monitor_id");--> statement-breakpoint
 CREATE INDEX IF NOT EXISTS "user_aliases_user_idx" ON "user_aliases" USING btree ("site_id","user_id");--> statement-breakpoint
-CREATE INDEX IF NOT EXISTS "user_aliases_anon_idx" ON "user_aliases" USING btree ("site_id","anonymous_id");--> statement-breakpoint
-CREATE INDEX IF NOT EXISTS "user_profiles_site_idx" ON "user_profiles" USING btree ("site_id");

Also applies to: 233-250, 252-268, 328-344, 547-552

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/drizzle/0000_magenta_firebird.sql` around lines 140 - 147, Remove the
redundant single-column indexes that duplicate PRIMARY/UNIQUE constraints (e.g.,
drop member_site_access_member_idx, user_aliases_anon_idx,
user_profiles_site_idx) since the PRIMARY KEY on "id" and UNIQUE constraint
member_site_access_unique("member_id","site_id") already create indexes; then
add supporting indexes on the foreign key columns used by the four uptime tables
that currently rely on CASCADE DELETE so DELETEs don't trigger full-table
scans—identify the uptime tables and create indexes on their FK columns
referencing the parent site/member rows (ensure index names are descriptive and
not duplicative of existing constraints).
