Enhance Docker entrypoint and update migration process #946
Conversation
- Added a wait mechanism for PostgreSQL readiness in the Docker entrypoint script to ensure the database is available before running migrations.
- Changed the migration command from `db:push` to `db:migrate` for improved migration handling (a sketch of what this runs follows below).
- Updated Dockerfile to include the drizzle directory, ensuring all necessary files are copied during the build process.
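For context, a minimal sketch of what a `db:migrate` script typically executes with Drizzle's node-postgres driver (the script-to-command mapping and the driver choice are assumptions, not taken from this PR):

```ts
import { drizzle } from "drizzle-orm/node-postgres";
import { migrate } from "drizzle-orm/node-postgres/migrator";
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Unlike `push`, which diffs the live database against schema.ts and applies
// the difference directly, `migrate` replays the versioned SQL files under
// ./drizzle in the order recorded in meta/_journal.json.
await migrate(drizzle(pool), { migrationsFolder: "drizzle" });
await pool.end();
```

This is why the Dockerfile now has to copy the drizzle directory: the migration files must exist inside the image at runtime.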
📝 Walkthrough
This change establishes the database initialization infrastructure for the application by introducing a Drizzle ORM migration system, SQL schema definition, and Docker entrypoint startup sequencing with PostgreSQL readiness checks; it also updates the schema to remove a deprecated Stripe field and add cascading delete constraints.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@server/docker-entrypoint.sh`:
- Around line 4-9: The PostgreSQL readiness loop using pg_isready should include
a max retry or timeout to avoid hanging indefinitely; modify the loop around the
pg_isready invocation to use a configurable POSTGRES_MAX_RETRIES (or
POSTGRES_WAIT_TIMEOUT) and a retry counter or elapsed-time check, incrementing
the counter each sleep and aborting with a clear error log and non-zero exit
(process exit) when the limit is reached; update the echo messages to include
the retry count and ensure the script exits if pg_isready against
POSTGRES_HOST/POSTGRES_PORT/POSTGRES_USER/POSTGRES_DB fails after the allowed
attempts.
In `@server/drizzle/0000_premium_jubilee.sql`:
- Around line 518-522: The migration's FK clauses for uptime_monitor_status,
uptime_alerts, and uptime_alert_history currently use ON DELETE no action but
schema.ts sets onDelete: "cascade" and deleteMonitor (deleteMonitor.ts) only
removes uptime_monitor_status, causing FK violations; fix by either updating the
migration DO block to change the FOREIGN KEY constraints for monitor_id on
tables uptime_monitor_status, uptime_alerts, and uptime_alert_history to use ON
DELETE CASCADE (so DB enforces cascade deletes) or modify the deleteMonitor
handler (deleteMonitor.ts) to explicitly delete rows from uptime_alert_history
and uptime_alerts for the monitor before deleting
uptime_monitor_status/uptime_monitors, and ensure schema.ts remains consistent
with the chosen approach.
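If the first option is chosen, the fix amounts to recreating the three `monitor_id` foreign keys with a cascade clause. A sketch for one of the tables, reusing the constraint name quoted in the review (the other two tables follow the same pattern, and the DO-block wrapping of the generated migration is omitted here):

```sql
ALTER TABLE "uptime_alerts"
  DROP CONSTRAINT IF EXISTS "uptime_alerts_monitor_id_uptime_monitors_id_fk";

ALTER TABLE "uptime_alerts"
  ADD CONSTRAINT "uptime_alerts_monitor_id_uptime_monitors_id_fk"
  FOREIGN KEY ("monitor_id") REFERENCES "uptime_monitors"("id")
  ON DELETE CASCADE;
```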
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1472448d-06f6-4d1a-81d3-3fd02e56c857
📒 Files selected for processing (5)
- server/Dockerfile
- server/docker-entrypoint.sh
- server/drizzle/0000_premium_jubilee.sql
- server/drizzle/meta/0000_snapshot.json
- server/drizzle/meta/_journal.json
```sh
# Wait for PostgreSQL to be ready (postgresql-client is already in the image)
echo "Waiting for PostgreSQL..."
until pg_isready -h "${POSTGRES_HOST:-postgres}" -p "${POSTGRES_PORT:-5432}" -U "${POSTGRES_USER:-frog}" -d "${POSTGRES_DB:-analytics}" -q; do
  sleep 1
done
echo "PostgreSQL is ready."
```
Add a timeout or max retry count to the PostgreSQL wait loop.
The current loop will run indefinitely if PostgreSQL is unreachable or misconfigured. This can cause silent deployment failures where the container appears stuck without clear indication of the problem.
🛡️ Proposed fix with timeout

```diff
 # Wait for PostgreSQL to be ready (postgresql-client is already in the image)
 echo "Waiting for PostgreSQL..."
+MAX_RETRIES=30
+RETRY_COUNT=0
 until pg_isready -h "${POSTGRES_HOST:-postgres}" -p "${POSTGRES_PORT:-5432}" -U "${POSTGRES_USER:-frog}" -d "${POSTGRES_DB:-analytics}" -q; do
+  RETRY_COUNT=$((RETRY_COUNT + 1))
+  if [ "$RETRY_COUNT" -ge "$MAX_RETRIES" ]; then
+    echo "PostgreSQL did not become ready after $MAX_RETRIES attempts. Exiting."
+    exit 1
+  fi
+  echo "Waiting for PostgreSQL... (attempt $RETRY_COUNT/$MAX_RETRIES)"
   sleep 1
 done
 echo "PostgreSQL is ready."
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```sh
# Wait for PostgreSQL to be ready (postgresql-client is already in the image)
echo "Waiting for PostgreSQL..."
MAX_RETRIES=30
RETRY_COUNT=0
until pg_isready -h "${POSTGRES_HOST:-postgres}" -p "${POSTGRES_PORT:-5432}" -U "${POSTGRES_USER:-frog}" -d "${POSTGRES_DB:-analytics}" -q; do
  RETRY_COUNT=$((RETRY_COUNT + 1))
  if [ "$RETRY_COUNT" -ge "$MAX_RETRIES" ]; then
    echo "PostgreSQL did not become ready after $MAX_RETRIES attempts. Exiting."
    exit 1
  fi
  echo "Waiting for PostgreSQL... (attempt $RETRY_COUNT/$MAX_RETRIES)"
  sleep 1
done
echo "PostgreSQL is ready."
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/docker-entrypoint.sh` around lines 4 - 9, The PostgreSQL readiness
loop using pg_isready should include a max retry or timeout to avoid hanging
indefinitely; modify the loop around the pg_isready invocation to use a
configurable POSTGRES_MAX_RETRIES (or POSTGRES_WAIT_TIMEOUT) and a retry counter
or elapsed-time check, incrementing the counter each sleep and aborting with a
clear error log and non-zero exit (process exit) when the limit is reached;
update the echo messages to include the retry count and ensure the script exits
if pg_isready against POSTGRES_HOST/POSTGRES_PORT/POSTGRES_USER/POSTGRES_DB
fails after the allowed attempts.
- Added a new journal entry for 'deep_penance' with version 7 to the _journal.json file (its shape is sketched below).
- Removed the deprecated 'stripeCustomerId' field and associated comments from the PostgreSQL schema to streamline the database structure.
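For reference, a sketch of the drizzle-kit journal shape after this change; the `when` timestamps below are placeholders, and only the tags and version come from the PR:

```json
{
  "version": "7",
  "dialect": "postgresql",
  "entries": [
    { "idx": 0, "version": "7", "when": 1700000000000, "tag": "0000_premium_jubilee", "breakpoints": true },
    { "idx": 1, "version": "7", "when": 1700000001000, "tag": "0001_deep_penance", "breakpoints": true }
  ]
}
```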
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@server/drizzle/meta/0001_snapshot.json`:
- Around line 1766-1779: The explicit foreignKey() constraint blocks for
uptime_monitor_status and uptime_alerts currently omit onDelete causing "no
action" in the snapshot; update the foreignKey(...) definitions that reference
uptimeMonitors.id (the blocks that name
uptime_monitor_status_monitor_id_uptime_monitors_id_fk and
uptime_alerts_monitor_id_uptime_monitors_id_fk and that use table.monitorId /
uptime_alerts.monitorId) to include onDelete: "cascade", then regenerate the
migration snapshot so the JSON reflects cascade deletion.
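The change the comment asks for looks roughly like this in schema.ts; a sketch with the table shapes abbreviated and assumed, where only the constraint and column names come from the review:

```ts
import { foreignKey, integer, pgTable, serial } from "drizzle-orm/pg-core";

export const uptimeMonitors = pgTable("uptime_monitors", {
  id: serial("id").primaryKey(),
});

export const uptimeAlerts = pgTable(
  "uptime_alerts",
  {
    id: serial("id").primaryKey(),
    monitorId: integer("monitor_id").notNull(),
  },
  (table) => [
    foreignKey({
      name: "uptime_alerts_monitor_id_uptime_monitors_id_fk",
      columns: [table.monitorId],
      foreignColumns: [uptimeMonitors.id],
    }).onDelete("cascade"), // regenerating the snapshot then records the cascade
  ]
);
```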
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 04ab6b40-4393-4d9b-b08d-892fa90bc1fc
📒 Files selected for processing (4)
- server/drizzle/0001_deep_penance.sql
- server/drizzle/meta/0001_snapshot.json
- server/drizzle/meta/_journal.json
- server/src/db/postgres/schema.ts
💤 Files with no reviewable changes (1)
- server/src/db/postgres/schema.ts
✅ Files skipped from review due to trivial changes (1)
- server/drizzle/0001_deep_penance.sql
🚧 Files skipped from review as they are similar to previous changes (1)
- server/drizzle/meta/_journal.json
- Deleted the SQL files '0000_premium_jubilee.sql' and '0001_deep_penance.sql' to clean up the database schema.
- Updated foreign key constraints in the schema to use 'cascade' on delete, ensuring proper data integrity during deletions (see the schema sketch below).
- Removed the 'stripeCustomerId' column from the user table to streamline the database structure.
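A sketch of the column-level form of that cascade option in schema.ts (table shapes abbreviated and assumed; only the table and column names come from this PR):

```ts
import { integer, pgTable, serial } from "drizzle-orm/pg-core";

export const uptimeMonitors = pgTable("uptime_monitors", {
  id: serial("id").primaryKey(),
});

export const uptimeAlertHistory = pgTable("uptime_alert_history", {
  id: serial("id").primaryKey(),
  monitorId: integer("monitor_id")
    .notNull()
    // With the DB-level cascade, deleting a monitor now removes its
    // history rows without the delete handler touching this table.
    .references(() => uptimeMonitors.id, { onDelete: "cascade" }),
});
```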
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@server/drizzle/0000_magenta_firebird.sql`:
- Around line 140-147: Remove the redundant single-column indexes that duplicate
PRIMARY/UNIQUE constraints (e.g., drop member_site_access_member_idx,
user_aliases_anon_idx, user_profiles_site_idx) since the PRIMARY KEY on "id" and
UNIQUE constraint member_site_access_unique("member_id","site_id") already
create indexes; then add supporting indexes on the foreign key columns used by
the four uptime tables that currently rely on CASCADE DELETE so DELETEs don't
trigger full-table scans—identify the uptime tables and create indexes on their
FK columns referencing the parent site/member rows (ensure index names are
descriptive and not duplicative of existing constraints).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: bd189512-4d60-4548-abc4-a7100e6adb2e
📒 Files selected for processing (4)
- server/drizzle/0000_magenta_firebird.sql
- server/drizzle/meta/0000_snapshot.json
- server/drizzle/meta/_journal.json
- server/src/db/postgres/schema.ts
✅ Files skipped from review due to trivial changes (1)
- server/drizzle/meta/_journal.json
🚧 Files skipped from review as they are similar to previous changes (2)
- server/src/db/postgres/schema.ts
- server/drizzle/meta/0000_snapshot.json
```sql
CREATE TABLE IF NOT EXISTS "member_site_access" (
	"id" serial PRIMARY KEY NOT NULL,
	"member_id" text NOT NULL,
	"site_id" integer NOT NULL,
	"created_at" timestamp DEFAULT now() NOT NULL,
	"created_by" text,
	CONSTRAINT "member_site_access_unique" UNIQUE("member_id","site_id")
);
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
python - <<'PY'
from pathlib import Path
sql = Path("server/drizzle/0000_magenta_firebird.sql").read_text()
checks = {
    "member_site_access_unique(member_id,site_id)": 'UNIQUE("member_id","site_id")',
    "member_site_access_member_idx": 'ON "member_site_access" USING btree ("member_id")',
    "user_aliases_unique(site_id,anonymous_id)": 'UNIQUE("site_id","anonymous_id")',
    "user_aliases_anon_idx": 'ON "user_aliases" USING btree ("site_id","anonymous_id")',
    "user_profiles_pk(site_id,user_id)": 'PRIMARY KEY("site_id","user_id")',
    "user_profiles_site_idx": 'ON "user_profiles" USING btree ("site_id")',
    "uptime_alert_history_alert_id_idx": 'ON "uptime_alert_history" USING btree ("alert_id")',
    "uptime_alert_history_monitor_id_idx": 'ON "uptime_alert_history" USING btree ("monitor_id")',
    "uptime_alerts_monitor_id_idx": 'ON "uptime_alerts" USING btree ("monitor_id")',
    "uptime_incidents_monitor_id_idx": 'ON "uptime_incidents" USING btree ("monitor_id")',
}
for label, marker in checks.items():
    print(f"{label}: {'present' if marker in sql else 'missing'}")
PY
```

Repository: rybbit-io/rybbit
Length of output: 476
🏁 Script executed:

```bash
#!/bin/bash
# Get file size and read the relevant sections
wc -l server/drizzle/0000_magenta_firebird.sql
# Read the specified line ranges to understand the structure
echo "=== Lines 140-147 (member_site_access) ==="
sed -n '140,147p' server/drizzle/0000_magenta_firebird.sql
echo -e "\n=== Lines 233-250 (user_aliases table) ==="
sed -n '233,250p' server/drizzle/0000_magenta_firebird.sql
echo -e "\n=== Lines 252-268 (user_profiles table) ==="
sed -n '252,268p' server/drizzle/0000_magenta_firebird.sql
echo -e "\n=== Lines 328-344 (uptime tables) ==="
sed -n '328,344p' server/drizzle/0000_magenta_firebird.sql
echo -e "\n=== Lines 547-552 (uptime_incidents) ==="
sed -n '547,552p' server/drizzle/0000_magenta_firebird.sql
```

Repository: rybbit-io/rybbit
Length of output: 3065
🏁 Script executed:

```bash
#!/bin/bash
# Search for foreign key constraints
echo "=== Foreign Key Constraints ==="
rg -A 2 'REFERENCES|CONSTRAINT.*FOREIGN' server/drizzle/0000_magenta_firebird.sql
echo -e "\n=== All indexes in the file ==="
rg 'CREATE INDEX' server/drizzle/0000_magenta_firebird.sql
```

Repository: rybbit-io/rybbit
Length of output: 8493
Remove redundant indexes and add indexes for CASCADE DELETE foreign keys.
The three existing indexes (member_site_access_member_idx, user_aliases_anon_idx, and user_profiles_site_idx) are redundant with their corresponding UNIQUE/PRIMARY KEY constraints, which already create indexes. Meanwhile, the four uptime tables have CASCADE DELETE foreign keys without any supporting indexes, causing unnecessary full-table scans during delete operations.
Suggested index reshuffle

```diff
-CREATE INDEX IF NOT EXISTS "member_site_access_member_idx" ON "member_site_access" USING btree ("member_id");--> statement-breakpoint
 CREATE INDEX IF NOT EXISTS "member_site_access_site_idx" ON "member_site_access" USING btree ("site_id");--> statement-breakpoint
 CREATE INDEX IF NOT EXISTS "uptime_monitor_status_updated_at_idx" ON "uptime_monitor_status" USING btree ("updated_at");--> statement-breakpoint
+CREATE INDEX IF NOT EXISTS "uptime_alert_history_alert_id_idx" ON "uptime_alert_history" USING btree ("alert_id");--> statement-breakpoint
+CREATE INDEX IF NOT EXISTS "uptime_alert_history_monitor_id_idx" ON "uptime_alert_history" USING btree ("monitor_id");--> statement-breakpoint
+CREATE INDEX IF NOT EXISTS "uptime_alerts_monitor_id_idx" ON "uptime_alerts" USING btree ("monitor_id");--> statement-breakpoint
+CREATE INDEX IF NOT EXISTS "uptime_incidents_monitor_id_idx" ON "uptime_incidents" USING btree ("monitor_id");--> statement-breakpoint
 CREATE INDEX IF NOT EXISTS "user_aliases_user_idx" ON "user_aliases" USING btree ("site_id","user_id");--> statement-breakpoint
-CREATE INDEX IF NOT EXISTS "user_aliases_anon_idx" ON "user_aliases" USING btree ("site_id","anonymous_id");--> statement-breakpoint
-CREATE INDEX IF NOT EXISTS "user_profiles_site_idx" ON "user_profiles" USING btree ("site_id");
```

Also applies to: 233-250, 252-268, 328-344, 547-552
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/drizzle/0000_magenta_firebird.sql` around lines 140 - 147, Remove the
redundant single-column indexes that duplicate PRIMARY/UNIQUE constraints (e.g.,
drop member_site_access_member_idx, user_aliases_anon_idx,
user_profiles_site_idx) since the PRIMARY KEY on "id" and UNIQUE constraint
member_site_access_unique("member_id","site_id") already create indexes; then
add supporting indexes on the foreign key columns used by the four uptime tables
that currently rely on CASCADE DELETE so DELETEs don't trigger full-table
scans—identify the uptime tables and create indexes on their FK columns
referencing the parent site/member rows (ensure index names are descriptive and
not duplicative of existing constraints).