
docs(analytics): add local setup and operator docs #252

Closed
prajjwalkumar17 wants to merge 91 commits into main from feat/analytics-docs-scripts

Conversation

Member

@prajjwalkumar17 prajjwalkumar17 commented Apr 21, 2026

This pull request makes significant improvements to the API documentation, focusing on clarity, organization, and practical usage examples. The changes include a reorganization of the API reference structure, the addition of dedicated curl example pages for each endpoint, and updates to the analytics documentation to clarify merchant scoping and setup steps.

API Documentation Overhaul

  • Replaced the old API Reference with a new API Overview page, providing clear endpoint family groupings and linking to OpenAPI-backed endpoint pages for schema details.
  • Added a new section, Curl API References, with dedicated pages for request/response examples for each major endpoint, improving real-world usability of the docs.
  • Created individual curl example pages for endpoints such as health check, gateway decision, routing algorithms, merchant account management, and rule configuration.

Analytics Documentation Updates

  • Clarified that analytics are always merchant-scoped and removed references to the deprecated all-merchants mode. Updated instructions to reflect that analytics queries derive merchant from authentication, and removed support for scope and merchant_id query params.
  • Updated demo traffic generation instructions to clarify authentication requirements and session behavior.
  • Updated references to related files and scripts for accuracy.

These changes collectively modernize and clarify the documentation, making it easier for developers to find, understand, and use the API and analytics features.
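The merchant-scoped model described above means a client never passes `scope` or `merchant_id`; a hypothetical request could look like the sketch below. The endpoint path `/analytics/metrics` and the `api-key` header name are illustrative assumptions, not taken from the actual API.

```shell
# Hypothetical sketch only: the merchant is derived from authentication,
# so the URL carries no scope or merchant_id query parameters.
BASE_URL="${BASE_URL:-http://localhost:8080}"
ANALYTICS_URL="${BASE_URL}/analytics/metrics"   # illustrative path

build_analytics_request() {
  # Emits the curl invocation we would run; credentials stay in a header.
  printf 'curl -s -H "api-key: $MERCHANT_API_KEY" "%s"\n' "$1"
}

REQUEST="$(build_analytics_request "$ANALYTICS_URL")"
echo "$REQUEST"
```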

@prajjwalkumar17 prajjwalkumar17 self-assigned this Apr 21, 2026
Copilot AI review requested due to automatic review settings April 21, 2026 12:21
Contributor

Copilot AI left a comment


Pull request overview

This PR upgrades the local developer workflow to treat the analytics stack (Kafka + ClickHouse) as a first-class dependency, and documents the new bring-up/run paths.

Changes:

  • Extends local automation (oneclick.sh) to check/bring up infra dependencies and initialize analytics (Kafka topics + ClickHouse schema) before starting services.
  • Updates local setup docs and Makefile targets to include analytics infra in standard workflows.
  • Adds ClickHouse analytics documentation plus updated docs assets (logos/favicon).
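The service readiness waits mentioned above can be sketched as a generic retry loop; the probe command, attempt limit, and sleep interval here are illustrative rather than what oneclick.sh actually uses.

```shell
# Sketch of a generic readiness wait; names and limits are illustrative.
wait_for() {
  name="$1"; shift
  tries=0
  until "$@" >/dev/null 2>&1; do
    tries=$((tries + 1))
    if [ "$tries" -ge 30 ]; then
      echo "$name not ready after $tries attempts" >&2
      return 1
    fi
    sleep 1
  done
  echo "$name is ready"
}

# e.g.: wait_for clickhouse curl -sf http://localhost:8123/ping
RESULT="$(wait_for demo-service true)"
echo "$RESULT"
```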

Reviewed changes

Copilot reviewed 4 out of 9 changed files in this pull request and generated 6 comments.

| File | Description |
| --- | --- |
| oneclick.sh | Adds infra checklist, auto bring-up, and service readiness waits; starts docs preview and prints local URLs. |
| Makefile | Ensures the analytics profile is included in standard init/run targets. |
| docs/local-setup.md | Documents analytics profile usage and updated bring-up commands; adds an oneclick.sh section. |
| docs/clickhouse-analytics.mdx | New docs covering ClickHouse/Kafka ingestion architecture, config, and troubleshooting. |
| docs/favicon.svg | Adds/updates the docs favicon. |
| docs/logo/decision-engine-*.svg | Adds new light/dark logo variants (including docs variants). |


Comment thread docs/local-setup.md Outdated
Comment on lines 66 to 67
```shell
docker compose --profile postgres-ghcr --profile analytics-clickhouse up -d
```

Copilot AI Apr 21, 2026


These bring-up commands add --profile analytics-clickhouse, but the earlier table says the core profiles already include Kafka + ClickHouse (and docker-compose.yaml also enables those services under the core profiles). This makes it unclear whether analytics-clickhouse is actually required or just a standalone convenience profile. Consider either removing the extra profile from the examples, or adjusting the profile table/text to clearly state analytics services only come from analytics-clickhouse.

Comment thread oneclick.sh Outdated
```shell
echo ""
echo " Server: http://localhost:8080"
echo " Dashboard: http://localhost:5173/dashboard/"
echo " Dashboard: http://localhost:5173/"
```

Copilot AI Apr 21, 2026


The printed dashboard URL was changed to http://localhost:5173/, but the dashboard dev server is configured with base: '/dashboard/' (Vite) and BrowserRouter basename="/dashboard", so the app is expected to be served under /dashboard/. Printing the root URL is likely to 404 or show a blank page; consider restoring /dashboard/ in this output.

Suggested change
```diff
-echo " Dashboard: http://localhost:5173/"
+echo " Dashboard: http://localhost:5173/dashboard/"
```

Comment thread oneclick.sh
Comment on lines +433 to +440
```shell
if [ "${DOCS_PORT}" != "3000" ]; then
  echo "Mint preview uses port 3000 in this environment; overriding DOCS_PORT=${DOCS_PORT} to 3000."
  DOCS_PORT="3000"
  DOCS_URL="http://localhost:${DOCS_PORT}"
  DOCS_HOME_URL="${DOCS_URL}/introduction"
  API_REF_URL="${DOCS_URL}/api-reference"
  API_EXAMPLES_URL="${DOCS_URL}/api-reference1"
fi
```

Copilot AI Apr 21, 2026


The script forces DOCS_PORT back to 3000 right before starting mint dev. If a user set DOCS_PORT to avoid a port conflict, this override can reintroduce the conflict and make mint dev fail unexpectedly. Either honor DOCS_PORT (Mintlify supports PORT) or detect conflicts and pick a free port instead of hard-coding 3000.

Suggested change
```diff
-if [ "${DOCS_PORT}" != "3000" ]; then
-  echo "Mint preview uses port 3000 in this environment; overriding DOCS_PORT=${DOCS_PORT} to 3000."
-  DOCS_PORT="3000"
-  DOCS_URL="http://localhost:${DOCS_PORT}"
-  DOCS_HOME_URL="${DOCS_URL}/introduction"
-  API_REF_URL="${DOCS_URL}/api-reference"
-  API_EXAMPLES_URL="${DOCS_URL}/api-reference1"
-fi
```

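One possible alternative to hard-coding 3000, along the lines the comment suggests, is to honor DOCS_PORT and walk upward to the first free port. This sketch assumes bash (for `/dev/tcp`) and assumes `mint dev` respects the `PORT` environment variable; verify both before relying on it.

```shell
# Sketch (bash): honor DOCS_PORT, falling back to the next free port.
port_free() {
  # /dev/tcp is a bash feature: the connect fails when nothing listens.
  ! (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

DOCS_PORT="${DOCS_PORT:-3000}"
while ! port_free "$DOCS_PORT"; do
  echo "Port ${DOCS_PORT} is busy; trying $((DOCS_PORT + 1))"
  DOCS_PORT=$((DOCS_PORT + 1))
done
echo "Using DOCS_PORT=${DOCS_PORT}"
# PORT="$DOCS_PORT" mint dev --no-open
```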
Comment thread oneclick.sh
```shell
KAFKA_HOST="${KAFKA_HOST:-localhost}"
KAFKA_PORT="${KAFKA_PORT:-9092}"

PORTS=(8080 5173 "$DOCS_PORT" 9094)
```

Copilot AI Apr 21, 2026


PORTS includes 9094 but not the Kafka host port (9092 by default). Since the script may start Kafka via Docker Compose, a pre-existing process bound to 9092 can cause docker compose up to fail, yet it won’t be detected/killed here. Consider including $KAFKA_PORT (and, if needed, keeping 9094) in the PORTS list.

Suggested change
```diff
-PORTS=(8080 5173 "$DOCS_PORT" 9094)
+PORTS=(8080 5173 "$DOCS_PORT" "$KAFKA_PORT" 9094)
```

Comment thread oneclick.sh
Comment on lines +103 to 107
```shell
exit "$exit_code"
}

trap cleanup SIGINT SIGTERM
```


Copilot AI Apr 21, 2026


With set -e, any failing command after starting background processes (e.g., npm install, mint dev, just migrate-pg) will exit the script without running cleanup, because the trap is only for SIGINT/SIGTERM. This can leave the Rust server/docs/dashboard running in the background. Consider trapping EXIT as well (and passing $? into cleanup) so processes are reliably cleaned up on any error.

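The EXIT-trap pattern the comment describes can be sketched as follows; the cleanup body is illustrative (the real script would kill its background PIDs there), and the demo runs in a subshell so the failure is observable.

```shell
# Sketch: under `set -e`, an EXIT trap guarantees cleanup on any failure,
# not just on SIGINT/SIGTERM.
output="$(bash -c '
  set -e
  cleanup() {
    trap - EXIT                        # avoid re-entering cleanup
    echo "cleanup ran (exit $1)"
    # kill "${SERVER_PID:-}" "${DOCS_PID:-}" 2>/dev/null || true
    exit "$1"
  }
  trap "cleanup \$?" EXIT
  trap "cleanup 130" INT TERM
  echo "starting work"
  false                                # simulated failure under set -e
' 2>&1)" || true
echo "$output"
```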
Comment thread oneclick.sh Outdated
Comment on lines +412 to +413
```shell
echo "Initializing ClickHouse analytics schema..."
docker compose --profile analytics-clickhouse run --rm clickhouse-init
```

Copilot AI Apr 21, 2026


docker compose ... run --rm clickhouse-init is executed unconditionally. In docker-compose.yaml, clickhouse-init drops and recreates the decision_engine_analytics database, so running oneclick.sh will wipe local analytics data every time. If preserving data is desirable, consider making init idempotent (no DROP) or gating the destructive reset behind an explicit flag/prompt.

Suggested change
```diff
-echo "Initializing ClickHouse analytics schema..."
-docker compose --profile analytics-clickhouse run --rm clickhouse-init
+if [ "${RESET_CLICKHOUSE_ANALYTICS:-0}" = "1" ]; then
+  echo "Initializing ClickHouse analytics schema..."
+  docker compose --profile analytics-clickhouse run --rm clickhouse-init
+elif [ -t 0 ]; then
+  echo "ClickHouse analytics schema initialization resets local analytics data."
+  read -r -p "Run destructive ClickHouse reset now? [y/N] " CLICKHOUSE_RESET_CONFIRM
+  if [ "${CLICKHOUSE_RESET_CONFIRM}" = "y" ] || [ "${CLICKHOUSE_RESET_CONFIRM}" = "Y" ]; then
+    echo "Initializing ClickHouse analytics schema..."
+    docker compose --profile analytics-clickhouse run --rm clickhouse-init
+  else
+    echo "Skipping ClickHouse analytics schema reset."
+  fi
+else
+  echo "Skipping ClickHouse analytics schema reset. Set RESET_CLICKHOUSE_ANALYTICS=1 to run it explicitly."
+fi
```

Copilot AI review requested due to automatic review settings April 22, 2026 07:28
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 32 out of 37 changed files in this pull request and generated 21 comments.



Comment thread oneclick.sh
Comment on lines +28 to +34
```shell
PORTS=(8080 5173 "$DOCS_PORT" 9094)
EXPECTED_CLICKHOUSE_TABLES=(
  analytics_api_events_queue
  analytics_domain_events_queue
  analytics_api_events_v1
  analytics_domain_events_v1
)
```

Copilot AI Apr 22, 2026


EXPECTED_CLICKHOUSE_TABLES doesn’t match the actual schema names in clickhouse/migrations/* (e.g. migrations define analytics_*_kafka_v1 and mv_analytics_*_kafka_v1, not *_queue). With the current list, check_clickhouse_schema will always report missing tables and abort. Update the expected table list (or the schema check) to reflect the real ClickHouse objects you create.

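A schema check that stays honest against the real migrations might look like this sketch; the expected table names and the sample SHOW TABLES output are illustrative, so adjust them to match `clickhouse/migrations/*`.

```shell
# Sketch: verify expected ClickHouse tables against what the server reports.
# In oneclick.sh, `actual_tables` would come from something like:
#   clickhouse-client --query "SHOW TABLES FROM decision_engine_analytics"
EXPECTED_TABLES="analytics_api_events_v1 analytics_domain_events_v1"

check_clickhouse_schema() {
  actual="$1"
  missing=0
  for table in $EXPECTED_TABLES; do
    if ! printf '%s\n' "$actual" | grep -qx "$table"; then
      echo "missing table: $table"
      missing=1
    fi
  done
  return "$missing"
}

# Illustrative SHOW TABLES output (newline-separated):
actual_tables="$(printf '%s\n' analytics_api_events_v1 analytics_domain_events_v1 mv_analytics_api_events_kafka_v1)"
if check_clickhouse_schema "$actual_tables"; then
  echo "schema check passed"
fi
```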
Comment thread docs/clickhouse-analytics.mdx Outdated
Comment on lines +44 to +50
That profile now includes ClickHouse, Kafka, and topic creation. The full runtime profiles use the same ClickHouse-native ingestion path. There is no separate Rust worker process anymore.

The bootstrap SQL now lives in:

- `clickhouse/scripts/`

ClickHouse loads those scripts on first boot through `/docker-entrypoint-initdb.d`. The analytics data volume is persistent, so normal restarts keep historical data.

Copilot AI Apr 22, 2026


This doc says bootstrap SQL lives in clickhouse/scripts/ and is loaded via /docker-entrypoint-initdb.d, but the repo currently uses the clickhouse-init compose job with ./clickhouse/migrations mounted and applied via clickhouse-client. Update this section to match the actual bootstrap mechanism (or rename/move the SQL and change compose accordingly).

Suggested change
```diff
-That profile now includes ClickHouse, Kafka, and topic creation. The full runtime profiles use the same ClickHouse-native ingestion path. There is no separate Rust worker process anymore.
-The bootstrap SQL now lives in:
-- `clickhouse/scripts/`
-ClickHouse loads those scripts on first boot through `/docker-entrypoint-initdb.d`. The analytics data volume is persistent, so normal restarts keep historical data.
+That profile now includes ClickHouse, Kafka, topic creation, and the ClickHouse bootstrap step. The full runtime profiles use the same ClickHouse-native ingestion path. There is no separate Rust worker process anymore.
+The bootstrap SQL now lives in:
+- `clickhouse/migrations/`
+Those migrations are mounted into the `clickhouse-init` compose job and applied with `clickhouse-client`. The analytics data volume is persistent, so normal restarts keep historical data.
```

Comment thread oneclick.sh
Comment on lines +424 to +436
```shell
echo "Starting docs preview..."
cd "$SCRIPT_DIR/docs"
rm -f "$DOCS_LOG_PATH"
if [ "${DOCS_PORT}" != "3000" ]; then
  echo "Mint preview uses port 3000 in this environment; overriding DOCS_PORT=${DOCS_PORT} to 3000."
  DOCS_PORT="3000"
  DOCS_URL="http://localhost:${DOCS_PORT}"
  DOCS_HOME_URL="${DOCS_URL}/introduction"
  API_REF_URL="${DOCS_URL}/api-reference"
  API_EXAMPLES_URL="${DOCS_URL}/api-refs/api-ref"
fi
PORT="$DOCS_PORT" mint dev --no-open >"$DOCS_LOG_PATH" 2>&1 &
DOCS_PID=$!
```

Copilot AI Apr 22, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The script calls mint dev but doesn’t verify that the mint CLI is installed/available. If mint is missing, the script will fail mid-run (and with the current traps, may leave other processes running). Consider adding a command_exists mint check with a clear install hint before starting the docs preview.

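A guard like the one suggested could be sketched this way; the install hint assumes the Mintlify CLI npm package name, so verify it against Mintlify's docs before relying on it.

```shell
# Sketch: fail early with a hint if the mint CLI is missing.
command_exists() { command -v "$1" >/dev/null 2>&1; }

if command_exists mint; then
  echo "mint found: starting docs preview"
else
  # Install hint is an assumption; check Mintlify's docs for the package name.
  echo "mint CLI not found; try: npm i -g mint" >&2
fi
```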
Comment thread Makefile Outdated
Comment on lines +53 to +56
```make
docker compose stop clickhouse kafka kafka-init || true
docker compose rm -sf clickhouse kafka-init || true
docker volume rm $$(basename "$$(pwd)")_clickhouse-data || true
docker compose --profile analytics-clickhouse up -d kafka kafka-init clickhouse
```

Copilot AI Apr 22, 2026


reset-analytics-clickhouse doesn’t run clickhouse-init, so the ClickHouse schema migrations won’t be (re)applied after resetting. It also tries to delete a ${project}_clickhouse-data named volume, but docker-compose.yaml doesn’t declare a clickhouse-data volume or mount one into the clickhouse service, so this target likely won’t actually wipe analytics state. Either (a) add a named ClickHouse data volume in compose and reference it here, and include/wait for clickhouse-init, or (b) adjust the target to match the current compose setup.

Suggested change
```diff
-docker compose stop clickhouse kafka kafka-init || true
-docker compose rm -sf clickhouse kafka-init || true
-docker volume rm $$(basename "$$(pwd)")_clickhouse-data || true
-docker compose --profile analytics-clickhouse up -d kafka kafka-init clickhouse
+docker compose stop clickhouse clickhouse-init kafka kafka-init || true
+docker compose rm -sf clickhouse clickhouse-init kafka-init || true
+docker compose --profile analytics-clickhouse up -d kafka kafka-init clickhouse
+docker compose --profile analytics-clickhouse up clickhouse-init
```

Comment on lines +94 to +99
The raw analytics tables retain data for 18 months:

- `analytics_api_events_v1`
- `analytics_domain_events_v1`

Custom query windows are normalized to the same 18-month lookback horizon.

Copilot AI Apr 22, 2026


The retention section claims raw analytics tables retain data for 18 months, but the current ClickHouse schema (clickhouse/migrations/...) sets TTL created_at + INTERVAL 90 DAY for both analytics_*_v1 tables. Please update the documented retention window (or the schema TTL) so they agree.

Suggested change
```diff
-The raw analytics tables retain data for 18 months:
-- `analytics_api_events_v1`
-- `analytics_domain_events_v1`
-Custom query windows are normalized to the same 18-month lookback horizon.
+The raw analytics tables retain data for 90 days:
+- `analytics_api_events_v1`
+- `analytics_domain_events_v1`
+Custom query windows are normalized to the same 90-day lookback horizon.
```


```json
{
  "Success Rate Configuration updated successfully"
```

Copilot AI Apr 22, 2026


The response example is not valid JSON (it’s an object with a bare string value and no key). The actual endpoint returns a JSON object (e.g. message, merchant_id, and the updated config), so the example should reflect the real response shape.

Suggested change
```diff
-  "Success Rate Configuration updated successfully"
+  "message": "Success Rate Configuration updated successfully",
+  "merchant_id": "test_merchant_123423",
+  "config": {
+    "type": "successRate",
+    "data": {
+      "defaultLatencyThreshold": 90,
+      "defaultSuccessRate": 0.5,
+      "defaultBucketSize": 200,
+      "defaultHedgingPercent": 5,
+      "subLevelInputConfig": [
+        {
+          "paymentMethodType": "upi",
+          "paymentMethod": "upi_collect",
+          "bucketSize": 250,
+          "hedgingPercent": 1
+        }
+      ]
+    }
+  }
```


```json
{
  "Elimination Configuration created successfully"
```

Copilot AI Apr 22, 2026


The response example is not valid JSON (object with a bare string and no key) and it omits fields the API returns. The create endpoint returns a JSON object including at least message, merchant_id, and the created config; update the example accordingly.

Suggested change
```diff
-  "Elimination Configuration created successfully"
+  "message": "Elimination Configuration created successfully",
+  "merchant_id": "test_merchant_123423",
+  "config": {
+    "type": "elimination",
+    "data": {
+      "threshold": 0.35
+    }
+  }
```


```json
{
  "Elimination Configuration deleted successfully"
```

Copilot AI Apr 22, 2026


The response example is not valid JSON (object with a bare string and no key). The delete endpoint returns a structured JSON response (e.g. message and merchant_id), so the example should use valid JSON and include the real fields.

Suggested change
```diff
-  "Elimination Configuration deleted successfully"
+  "message": "Elimination Configuration deleted successfully",
+  "merchant_id": "test_merchant_123423"
```

Comment thread docs/mint.json
Comment on lines 21 to 33
```json
"navigation": [
  {
    "group": "Overview",
    "pages": [
      "introduction",
      "installation",
      "local-setup",
      "configuration",
      "dashboard",
      "api-reference",
      "api-reference1",
      "api-refs/api-ref",
      "dual-protocol-layer"
    ]
```

Copilot AI Apr 22, 2026


docs/clickhouse-analytics.mdx is added in this PR but it isn’t included anywhere in navigation, so it won’t be discoverable in the Mintlify sidebar (only via direct URL). Consider adding clickhouse-analytics to an appropriate navigation group.


```json
{
  "Success Rate Configuration created successfully"
```

Copilot AI Apr 22, 2026


The response example is not valid JSON (object with a bare string and no key) and it omits fields the API returns. The create endpoint returns a JSON object including at least message, merchant_id, and the created config; update the example accordingly.

Suggested change
```diff
-  "Success Rate Configuration created successfully"
+  "message": "Success Rate Configuration created successfully",
+  "merchant_id": "test_merchant_123423",
+  "config": {
+    "type": "successRate",
+    "data": {
+      "defaultLatencyThreshold": 90,
+      "defaultSuccessRate": 0.5,
+      "defaultBucketSize": 200,
+      "defaultHedgingPercent": 5,
+      "subLevelInputConfig": [
+        {
+          "paymentMethodType": "upi",
+          "paymentMethod": "upi_collect",
+          "bucketSize": 250,
+          "hedgingPercent": 1
+        }
+      ]
+    }
+  }
```

Base automatically changed from feat/analytics-kafka-clickhouse-backend to main April 24, 2026 12:38