
feat(mcp): make background sync opt-in#314

Open
txhno wants to merge 1 commit into zilliztech:master from txhno:fix/mcp-background-sync-config

Conversation


@txhno txhno commented Apr 22, 2026

Fixes #285.

This adds explicit MCP env controls for background sync so local stdio sessions do not automatically rescan indexed codebases unless the user opts in. It also makes the sync interval configurable and documents the new behavior in the MCP help text and README.
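As a sketch, the env gating described above might look like the following. The helper names and the 5-minute default are assumptions based on this thread, not the PR's actual code:

```typescript
// Hypothetical helpers for the env controls described in this PR;
// names and defaults are illustrative, not the merged implementation.
const DEFAULT_SYNC_INTERVAL_MS = 5 * 60 * 1000; // the 5-minute polling default mentioned later in the thread

export function isBackgroundSyncEnabled(
    env: Record<string, string | undefined> = process.env,
): boolean {
    // Opt-in semantics: anything other than an explicit "true" leaves background sync off.
    return env.CLAUDE_CONTEXT_BACKGROUND_SYNC === "true";
}

export function getSyncIntervalMs(
    env: Record<string, string | undefined> = process.env,
): number {
    const raw = env.CLAUDE_CONTEXT_SYNC_INTERVAL_MS;
    const parsed = raw !== undefined ? Number.parseInt(raw, 10) : NaN;
    // Fall back to the default on missing, invalid, or non-positive values.
    return Number.isFinite(parsed) && parsed > 0 ? parsed : DEFAULT_SYNC_INTERVAL_MS;
}
```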

Validation:

  • pnpm install
  • pnpm --filter @zilliz/claude-context-core build
  • pnpm --filter @zilliz/claude-context-mcp typecheck
  • pnpm --filter @zilliz/claude-context-mcp build

BeamNawapat added a commit to BeamNawapat/claude-context that referenced this pull request Apr 26, 2026
Refs zilliztech#285 (txhno's CPU-churn report on multi-instance MCP sessions) and zilliztech#314
(txhno's opt-in background-sync fix). Their analysis showed that periodic
background sync should be opt-in to avoid duplicated work when N workspaces
each spawn their own MCP. Our trigger watcher is the natural complement:
instead of polling every 5 min, the user (or a Claude Code PostToolUse hook)
touches ~/.context/.sync-trigger to request an immediate re-index.

To make the two play together, lift setupTriggerWatcher() to the start of
startBackgroundSync() so the watcher still runs when polling is gated off
(the recommended config for multi-instance setups in zilliztech#285).

Add CLAUDE_CONTEXT_TRIGGER_WATCHER (default true) so users can opt out of
the watcher entirely if they don't want any filesystem watching.

Credit: @txhno for zilliztech#285 + zilliztech#314.
BeamNawapat commented Apr 26, 2026

Thanks @txhno for tackling this — and to @jmmaloney4 for the multi-instance CPU analysis in #285 that motivated it.

I have an open PR (#332) that adds a trigger-file watcher. After reading both, I rebased it so setupTriggerWatcher() is invoked at the top of startBackgroundSync(), before the polling block. That way your CLAUDE_CONTEXT_BACKGROUND_SYNC=false default plays well with on-demand sync via ~/.context/.sync-trigger — the recommended config for #285 (zero idle CPU + instant re-index when an external tool touches the trigger).

Together the two PRs give a clean opt-in matrix:

| CLAUDE_CONTEXT_BACKGROUND_SYNC | CLAUDE_CONTEXT_TRIGGER_WATCHER | Result |
| --- | --- | --- |
| true | true (default) | polling + on-demand |
| false (default per #314) | true (default) | on-demand only (recommended for multi-instance) |
| true | false | polling only |
| false | false | manual indexing only |

Happy to swap merge order either way.

@zc277584121 (Collaborator)
This is still a useful fix for #285, but the PR is currently conflicting with master and needs a rebase before it can be merged.

When rebasing, please also account for #332 if that lands first. The trigger watcher setup must happen before the CLAUDE_CONTEXT_BACKGROUND_SYNC gate; otherwise the new default CLAUDE_CONTEXT_BACKGROUND_SYNC=false would return early and disable on-demand trigger sync as well.

The intended combined flow should be:

this.setupTriggerWatcher();

if (!isBackgroundSyncEnabled()) {
    return;
}

// initial sync + periodic interval only when enabled

I also verified this branch by itself with:

pnpm install --frozen-lockfile
pnpm --filter @zilliz/claude-context-mcp build

That build passes, so the blocker is the rebase/combined behavior rather than the basic TypeScript build.

@zc277584121 (Collaborator)

Thanks again for working on this.

We shipped v0.1.10 with a lighter compatibility-oriented fix for the multi-process background sync problem: a global cross-process sync lock. Background sync remains enabled for existing users, but only one local MCP server process should perform background sync at a time; other processes skip the cycle while the lock is held.

This should address a large part of the CPU churn and contention described in #285 without changing the default background sync behavior. Please try the latest version and let us know if you still see cases where making background sync opt-in is needed.

@txhno txhno force-pushed the fix/mcp-background-sync-config branch from d31fd03 to 434705d on April 28, 2026 05:38

txhno commented Apr 28, 2026

Rebased this PR onto current master (now includes the v0.1.10 global cross-process sync lock) and resolved the conflicts.

I kept the upstream lock behavior intact, while preserving this PR's opt-in background sync behavior via CLAUDE_CONTEXT_BACKGROUND_SYNC=true and CLAUDE_CONTEXT_SYNC_INTERVAL_MS.

Validation run locally:

  • pnpm --filter @zilliz/claude-context-mcp build
  • default startup returns before scheduling background sync
  • CLAUDE_CONTEXT_BACKGROUND_SYNC=true CLAUDE_CONTEXT_SYNC_INTERVAL_MS=1000 schedules periodic sync
  • concurrent handleSyncIndex() calls still use the global lock; the second call skips while the first sync runs

For #332 compatibility, this branch leaves the background-sync gate in startBackgroundSync(). If #332 lands first or is merged together, setupTriggerWatcher() should stay before the isBackgroundSyncEnabled() return, as discussed above.

BeamNawapat added a commit to BeamNawapat/claude-context that referenced this pull request Apr 28, 2026

zc277584121 commented Apr 29, 2026

Thanks for the rebase and for validating this on top of the global sync lock.

#332 has now been merged, so this PR is conflicting again in packages/mcp/src/sync.ts, packages/mcp/README.md, and packages/mcp/src/config.ts.

When rebasing, please preserve the trigger watcher setup from #332 before any polling/background-sync gate:

this.setupTriggerWatcher();

if (!isBackgroundSyncEnabled()) {
    return;
}

// startup sync + periodic sync

That keeps on-demand trigger sync available even when periodic polling is disabled.

One compatibility concern remains: changing CLAUDE_CONTEXT_BACKGROUND_SYNC to default off would change existing behavior. Today users get startup + periodic background sync without extra configuration. If this PR makes it opt-in by default, users who do not configure the trigger watcher hook may stop getting automatic index refreshes.

A more compatible version would still add CLAUDE_CONTEXT_BACKGROUND_SYNC and CLAUDE_CONTEXT_SYNC_INTERVAL_MS, but keep background sync enabled by default for now. Multi-instance users can then explicitly set CLAUDE_CONTEXT_BACKGROUND_SYNC=false and rely on the trigger watcher from #332 plus the global sync lock.
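A minimal sketch of that more compatible default, where only an explicit "false" disables polling (the function name is illustrative):

```typescript
// Backward-compatible variant of the gate suggested above (an assumption,
// not merged code): background sync stays on unless explicitly disabled.
export function isBackgroundSyncEnabledCompat(
    env: Record<string, string | undefined> = process.env,
): boolean {
    return env.CLAUDE_CONTEXT_BACKGROUND_SYNC !== "false";
}
```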

So my recommendation is:

  • rebase on current master, now that #332 (feat: add trigger file watcher for instant re-index) has been merged
  • keep setupTriggerWatcher() before the polling gate
  • consider making the default for CLAUDE_CONTEXT_BACKGROUND_SYNC remain enabled, and document disabling it as the recommended multi-instance optimization

With those changes, this becomes a safer configuration improvement instead of a breaking default-behavior change.



Development

Successfully merging this pull request may close these issues.

Reduce CPU churn from multiple MCP instances and support shared/remote deployment

4 participants