Refactor architecture, reorganize tests, bump CI coverage to 80%#35
Conversation
Major refactor of server architecture:
- Extract ServerState, ViewerProtocol, qt_helpers, and output modules
- Replace module-level globals with ServerState class
- Add STANDALONE/AUTO_DETECT startup modes
- Simplify CLI to single `install --target <name>` command
- Update bridge_server to use new state management

Test suite reorganization:
- Delete duplicate test files (test_coverage.py, test_napari_server_coverage.py)
- Rename test_edge_cases.py to test_qt_helpers.py (reflects actual content)
- Create test_server_tools.py as canonical home for tool unit tests
- Add tests for AUTO_DETECT proxy fallbacks, invalid inputs, concurrent access
- Add multi-step workflow integration tests
- Consolidate widget tests into test_widget.py
- Fix SIM117 lint warnings in test_bridge_server.py
- Remove raw napari API tests that don't test our code
- Net: 286 tests, 84.15% coverage (0 duplicates, 13 new tests)

CI and docs:
- Bump --cov-fail-under from 70% to 80%
- Update CONTRIBUTING.md test file listing
- Update documentation references

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in your CodeRabbit settings.
📝 Walkthrough

Refactors the server into a stateful ServerState/factory model, centralizes Qt helpers and output handling, and consolidates CLI installers under a single `install` command.
Sequence Diagram(s)

sequenceDiagram
participant CLI as Client / CLI
participant Main as cli/main.py
participant Factory as create_server()
participant State as ServerState
participant Qt as Qt (ensure_qt_app / ensure_viewer)
participant Tool as Tool (add_layer / execute_code)
CLI->>Main: install(target, flags)
Main->>Main: _get_installer_class(target) / _create_installer(...)
Main->>Factory: create_server(state) (if needed)
CLI->>Factory: invoke tool (add_layer / execute_code)
Factory->>State: dispatch -> state.gui_execute / state.proxy_to_external
State->>Qt: ensure_qt_app / ensure_viewer (for local execution)
State->>Tool: create_layer_on_viewer / run_code
Tool-->>State: store_output(output_id, metadata)
State-->>Factory: tool result (may include output_id/truncated info)
Factory-->>CLI: response
sequenceDiagram
participant Caller as Caller
participant QtBridge as QtBridge
participant QtThread as Qt Main Thread
participant Op as Operation
Caller->>QtBridge: run_in_main_thread(operation, timeout)
QtBridge->>QtThread: schedule operation (Future)
QtThread->>Op: execute operation
alt Success
Op-->>QtThread: result
QtThread->>QtBridge: set Future result
QtBridge-->>Caller: return result
else Exception
Op-->>QtThread: exception
QtThread->>QtBridge: set Future exception
QtBridge-->>Caller: raise exception
else Timeout
QtBridge-->>Caller: raise TimeoutError (detailed message)
end
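The second diagram's schedule-and-wait flow can be sketched with plain stdlib threading: a dedicated thread consumes scheduled callables and resolves a Future with either the result or the exception, while the caller blocks with a timeout. This is an illustrative analogue of the pattern, not the project's actual QtBridge; the class and method names below are invented for the sketch.

```python
import queue
import threading
from concurrent.futures import Future


class MainThreadBridge:
    """Runs callables on a single dedicated thread, mimicking the
    schedule-on-Qt-main-thread pattern above (names are illustrative)."""

    def __init__(self):
        self._tasks: "queue.Queue" = queue.Queue()
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        while True:
            func, future = self._tasks.get()
            try:
                future.set_result(func())   # Success branch
            except Exception as exc:        # Exception branch
                future.set_exception(exc)

    def run_in_main_thread(self, operation, timeout=5.0):
        future: Future = Future()
        self._tasks.put((operation, future))
        # Timeout branch: raises concurrent.futures.TimeoutError
        return future.result(timeout=timeout)


bridge = MainThreadBridge()
print(bridge.run_in_main_thread(lambda: 2 + 2))  # -> 4
```

Exceptions raised by the operation propagate to the caller via `Future.set_exception`, matching the diagram's exception branch.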
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Code Review: Refactor architecture, reorganize tests, bump CI coverage to 80%

Overall this is a well-executed, high-value refactor. Extracting ServerState, qt_helpers, output, and viewer_protocol into dedicated modules makes the codebase significantly more testable and maintainable. The 37% line reduction in server.py while improving coverage is impressive.

BUG 1 - detect_external_viewer() returns a closed client (state.py:167): The client is returned after the `async with` block has already exited. In detect_viewers (server.py:128) the client is only used as a truthiness check and discarded, so it accidentally works -- but the API is a trap for future callers. Consider returning only the metadata dict (or a boolean) and removing client from the return type.

BUG 2 - Dead code in ensure_qt_app (qt_helpers.py:16): `from qtpy import QtWidgets` either succeeds or raises ImportError; it never returns None. The entire _StubApp branch is unreachable dead code. If a headless/no-Qt fallback is intentional, use try/except ImportError instead.

BUG 3 - list_layers silently returns an empty list on an unexpected proxy response (server.py:344-355): If the proxy returns an error dict without a 'content' key, the error is swallowed and the caller gets an empty list. Consider returning the error dict directly or logging a warning before the final `return []`.

SECURITY - install_packages does not validate package names before the subprocess call (server.py:1142): asyncio.create_subprocess_exec avoids shell injection (no shell=True). However, pip interprets arguments beginning with `-` as options, so a crafted package name like `--target=/tmp` would be silently accepted as a pip flag. A guard rejecting any specifier starting with `-` would close this off. Note: shlex.quote() at line 1164 applies to the display string only.
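A guard along the lines suggested for install_packages might look like the sketch below; the function name, regex, and calling convention are illustrative, not the project's actual code.

```python
import re

# Hypothetical guard; the real install_packages signature may differ.
# Accepts names like "numpy", "napari[all]", "numpy>=1.26"; rejects
# anything pip could parse as an option (leading "-").
_SPEC_RE = re.compile(
    r"^[A-Za-z0-9][A-Za-z0-9._-]*(\[[A-Za-z0-9,._-]+\])?([<>=!~]=?.*)?$"
)


def validate_package_specs(packages):
    """Reject specifiers that pip would interpret as command-line options."""
    for spec in packages:
        if spec.startswith("-"):
            raise ValueError(f"Refusing option-like package spec: {spec!r}")
        if not _SPEC_RE.match(spec):
            raise ValueError(f"Invalid package specifier: {spec!r}")
    return packages


validate_package_specs(["numpy>=1.26", "napari[all]"])  # ok
# validate_package_specs(["--target=/tmp"])  # would raise ValueError
```

Running this check before building the `pip install` argument list closes the option-injection gap without touching the subprocess call itself.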
QUALITY 1 - proxy_to_external swallows all exceptions silently (state.py:141): `except Exception: return None` makes it impossible to distinguish a legitimate no-external-viewer result from a programming error. Consider logging at DEBUG level before returning None.

QUALITY 2 - line_limit == -1 string/int comparison edge case (server.py:998, bridge_server.py:328): execute_code is typed `line_limit: int | str = 30`, but checks `if line_limit == -1:`, which is False when line_limit is the string "-1". The string case still produces unlimited output (truncate_output handles -1 correctly), but the warning message is silently skipped. A single int(line_limit) coercion at the top would normalize this.

QUALITY 3 - create_server() always overwrites the module-level _state singleton (server.py:101-105): Every call replaces the module-level singleton, including calls from NapariBridgeServer.__init__. This hidden coupling is worth a code comment.

PERFORMANCE - store_output eviction uses sorted() on every overflow (state.py:97): With max_output_items=1000 this is fine, but min(self.output_storage.keys(), key=int) would be O(n) vs O(n log n) for the common single-eviction case.

POSITIVES: ServerState is clean and well-encapsulated. The gui_execute indirection makes Qt bridge / standalone modes testable without Qt. ViewerProtocol as a runtime_checkable structural type is the right approach. The _register helper elegantly preserves raw function references for backward-compatible module-level access. Bumping mypy to blocking is a good quality gate. The new test_state.py, test_qt_helpers.py, and test_server_tools.py are well-structured. Making bandit/safety advisory is the right pragmatic call. `uv sync --group testing` removes the awkward manual `uv pip install` step.

SUMMARY: Bug=3, Security=1, Quality=3, Performance=1.
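The single coercion suggested in QUALITY 2 can be done once at the top of execute_code; a minimal sketch, with a hypothetical helper name:

```python
def normalize_line_limit(line_limit, default=30):
    """Coerce line_limit to int once, so '-1' (str) and -1 (int) behave alike.

    Hypothetical helper illustrating the suggested fix; not the project's code.
    """
    try:
        return int(line_limit)
    except (TypeError, ValueError):
        # Unparseable input falls back to the documented default.
        return default


print(normalize_line_limit("-1"))  # -> -1
```

After this, the `if line_limit == -1:` warning check fires for both the int and string forms.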
Most important before merge: the dead code in ensure_qt_app (the headless fallback is broken), the misleading detect_external_viewer return type (it returns a closed client), and the pip argument-injection gap in install_packages. The rest are low severity.

Generated with Claude Code
Codecov Report

❌ Patch coverage is

Additional details and impacted files

@@ Coverage Diff @@
## main #35 +/- ##
==========================================
+ Coverage 78.23% 78.63% +0.40%
==========================================
Files 16 21 +5
Lines 1792 2191 +399
Branches 230 370 +140
==========================================
+ Hits 1402 1723 +321
- Misses 282 352 +70
- Partials 108 116 +8
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
docs/integrations/cline.md (1)
144-144: ⚠️ Potential issue | 🟡 Minor

Typo in JSON example: `"always Allow"` should be `"alwaysAllow"`. The JSON key has an accidental space, which would cause the configuration to be invalid or ignored.
📝 Proposed fix
- "always Allow": [],
+ "alwaysAllow": [],

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/integrations/cline.md` at line 144, Replace the malformed JSON key "always Allow" with the correct camelCase key "alwaysAllow" in the example JSON (find the string "always Allow" and update it to "alwaysAllow") so the sample configuration is valid and will be recognized by the parser.

docs/integrations/claude-code.md (1)
44-44: ⚠️ Potential issue | 🟡 Minor

Missing space in markdown heading. The heading `###Plugin Development` is missing a space after `###`, which may cause rendering issues in some Markdown parsers.

📝 Proposed fix
-###Plugin Development
+### Plugin Development

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/integrations/claude-code.md` at line 44, The Markdown heading "###Plugin Development" is missing a space after the hashes; update the heading text to "### Plugin Development" (locate the heading string "###Plugin Development" and insert a space after "###") so it renders correctly in Markdown parsers.

src/napari_mcp/cli/main.py (1)
199-275: ⚠️ Potential issue | 🟠 Major

Project-scoped Cursor/Gemini installs can't be managed through the new CLI. `install` still supports `--project`, but `uninstall` exposes no scope options and both Line 250 and Line 274 force `global_install=True`. A config created in `.cursor/mcp.json` or `.gemini/settings.json` will therefore show up as "Not configured" in `list` and can't be removed with `uninstall`. Please thread the same scope parameters through these commands as well.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/napari_mcp/cli/main.py` around lines 199 - 275, Uninstall and list currently force global_install=True for PROJECT_TARGETS and don't accept a project/global scope option; update the CLI commands to accept the same scope flags as install (e.g., a --project/--global or --scope option) and thread them into _create_installer and _get_installer_class instead of hardcoding global_install. In the uninstall function (uninstall) replace the use of global_install=app_target in PROJECT_TARGETS with global_install=(not project_flag) or whatever boolean derives from the new flag you add, and when iterating _INSTALLER_CLASS_NAMES pass the chosen scope into _create_installer; likewise in list_installations use kwargs derived from the new scope option (rather than forcing global_install=True) when constructing installers via _get_installer_class. Ensure option names match the install command and update help text so project-scoped configs are discoverable and removable.

docs/integrations/index.md (2)
53-59: ⚠️ Potential issue | 🟡 Minor

Inconsistent CLI syntax in Installation Workflow section. Lines 53 and 59 still use the old syntax pattern without the `install` subcommand, inconsistent with the updated examples elsewhere in this document.

📝 Suggested fix

 # For your specific application
-napari-mcp-install <app-name>
+napari-mcp-install install <app-name>

 # See all options
 napari-mcp-install --help

 # Preview changes before applying
-napari-mcp-install <app-name> --dry-run
+napari-mcp-install install <app-name> --dry-run

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/integrations/index.md` around lines 53 - 59, Replace the outdated CLI examples that use the old syntax "napari-mcp-install <app-name>" and "napari-mcp-install <app-name> --dry-run" with the updated form that includes the install subcommand (e.g., "napari-mcp-install install <app-name>" and "napari-mcp-install install <app-name> --dry-run") so the Installation Workflow examples match the rest of the document; update both occurrences that show the preview/dry-run example and the basic install example.
14-14: ⚠️ Potential issue | 🟡 Minor

Inconsistent CLI syntax: missing `install` subcommand. Line 14 still uses the old syntax `napari-mcp-install <application-name>` without the `install` subcommand, while the rest of the document (lines 21-25, 74-86) has been updated to use `napari-mcp-install install <target>`.

📝 Suggested fix

 # Auto-configure your application
-napari-mcp-install <application-name>
+napari-mcp-install install <application-name>

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/integrations/index.md` at line 14, Update the inconsistent CLI example by replacing the old invocation string "napari-mcp-install <application-name>" with the new subcommand form "napari-mcp-install install <target>" so it matches other examples (see occurrences using "napari-mcp-install install <target>"); ensure the example text and any surrounding explanatory text refer to the "install" subcommand and use the same placeholder name ("<target>") used elsewhere.
🧹 Nitpick comments (12)
docs/integrations/other-llms.md (1)
208-208: Minor: Inconsistent capitalization in table. The row label `maturity` should be capitalized as `Maturity` for consistency with other table entries (Config Format, Project-specific, etc.).

📝 Proposed fix

-| **maturity** | Beta | Stable |
+| **Maturity** | Beta | Stable |

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/integrations/other-llms.md` at line 208, Change the table row label "maturity" to "Maturity" to match capitalization of other table headers; locate the markdown table row containing the literal cell value `maturity` (the row shown as "| **maturity** | Beta | Stable |") and update it to "| **Maturity** | Beta | Stable |".

src/napari_mcp/output.py (1)
21-24: Consider narrowing the exception type. The broad `except Exception` clause could mask unexpected errors during type conversion. Using specific exceptions would be more precise.

♻️ Suggested refinement

 try:
     line_limit = int(line_limit)
-except Exception:
+except (TypeError, ValueError):
     line_limit = 30

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/napari_mcp/output.py` around lines 21 - 24, The try/except that converts line_limit to int uses a broad except; change it to catch the specific conversion errors (e.g., ValueError and TypeError) around the int(line_limit) conversion in the same block so only invalid input types/values fall back to the default of 30, leaving other unexpected exceptions to propagate; update the except to "except (ValueError, TypeError):" and keep the existing fallback assignment to line_limit = 30.

tests/test_property_based.py (1)
59-69: Line counting logic may produce false positives for edge cases. The line counting at lines 65-67 filters empty lines: `len([line for line in result.split("\n") if line])`. However, `_truncate_output` uses `splitlines(keepends=True)`, which preserves line endings. This counting approach may not accurately reflect the actual truncation behavior, especially for:
- Lines that are only whitespace
- Consecutive empty lines in the input

Consider aligning the test's line counting with how `_truncate_output` actually counts lines.

♻️ Alternative line counting approach

-    # Count non-empty trailing lines
-    actual_line_count = (
-        len([line for line in result.split("\n") if line]) if result else 0
-    )
+    # Count lines the same way truncate_output does
+    actual_line_count = len(result.splitlines(keepends=True)) if result else 0

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_property_based.py` around lines 59 - 69, The test's line-counting logic doesn't match _truncate_output's use of splitlines(keepends=True), causing edge-case miscounts (whitespace-only or consecutive empty lines); update test_truncation_respects_limit to compute actual_line_count by using result.splitlines(keepends=True) (e.g., actual_line_count = len(result.splitlines(keepends=True)) if result else 0) so the test counts lines the same way _truncate_output does and references the function _truncate_output and the test_truncation_respects_limit name to locate the change.

src/napari_mcp/qt_helpers.py (2)
29-37: Existing QApplication instance not stored in state.qt_app. When `QApplication.instance()` returns an existing app (line 29), `state.qt_app` is not updated. This could cause inconsistency if code later checks `state.qt_app`.

♻️ Proposed fix

 app = QtWidgets.QApplication.instance()
 if app is None:
     state.qt_app = QtWidgets.QApplication([])
     app = state.qt_app
+else:
+    state.qt_app = app
 try:
     app.setQuitOnLastWindowClosed(False)
 except Exception:
     pass
 return app

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/napari_mcp/qt_helpers.py` around lines 29 - 37, When QtWidgets.QApplication.instance() returns an existing application, state.qt_app is not being updated which can lead to inconsistent state; update state.qt_app to reference the existing app (i.e., after calling QtWidgets.QApplication.instance() and assigning to app, set state.qt_app = app) before calling app.setQuitOnLastWindowClosed and returning app so all consumers use the same stored QApplication reference.
40-54: Silent exception handling may hinder debugging. The bare `except Exception: pass` at lines 53-54 silently swallows all errors when connecting the window destroyed signal. Consider logging at debug level to aid troubleshooting.

♻️ Proposed fix

+import logging
+
+logger = logging.getLogger(__name__)
+
 def connect_window_destroyed_signal(state: ServerState, viewer: Any) -> None:
     """Connect to the Qt window destroyed signal to clear the viewer."""
     if state.window_close_connected:
         return
     try:
         qt_win = viewer.window._qt_window  # type: ignore[attr-defined]

         def _on_destroyed(*_args: Any) -> None:
             state.viewer = None
             state.window_close_connected = False

         qt_win.destroyed.connect(_on_destroyed)  # type: ignore[attr-defined]
         state.window_close_connected = True
-    except Exception:
-        pass
+    except Exception:
+        logger.debug("Failed to connect window destroyed signal", exc_info=True)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/napari_mcp/qt_helpers.py` around lines 40 - 54, The try/except in connect_window_destroyed_signal currently swallows all errors (bare except) when accessing viewer.window._qt_window or connecting qt_win.destroyed, which hides failures; replace the silent pass with logging the exception at debug (or warning) level using the module logger (or state/process logger if available), e.g., catch Exception as e and call logger.debug/f.warning with context including "connect_window_destroyed_signal", the viewer or qt_win info, and the exception, while preserving the existing behavior of not re-raising and still setting state.window_close_connected only on successful connect; ensure references to viewer.window._qt_window, qt_win.destroyed.connect, state.viewer and state.window_close_connected are used in the message to aid debugging.

tests/test_bridge_server.py (1)
183-186: Consider extracting the `execute_directly` helper to reduce duplication. This pattern is repeated in nearly every async test method in this file. While keeping tests self-contained has value, extracting this to a module-level helper or fixture would reduce boilerplate significantly.

♻️ Optional: Extract helper to module level

 # At module level, after imports
 def _execute_directly(func, **kwargs):
     """Execute a function directly, bypassing Qt thread scheduling."""
     return func()

 # Then in tests:
 mock_run.side_effect = _execute_directly

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_bridge_server.py` around lines 183 - 186, Extract the repeated local helper execute_directly into a module-level function (e.g., _execute_directly) and update tests to reuse it; specifically, define _execute_directly(func, **kwargs) at the top of the test module and replace local defs of execute_directly and any occurrences like mock_run.side_effect = execute_directly with mock_run.side_effect = _execute_directly to remove duplication while preserving behavior.

tests/test_external_viewer.py (1)
263-299: Test modifies global state but may not restore it. Lines 272-273 modify `napari_mcp_server._state` to a new `ServerState` in AUTO_DETECT mode, but the test doesn't restore the original state afterward. This could affect subsequent tests if they expect STANDALONE mode (though conftest's reset_server_state fixture likely handles this).

♻️ Optional: Explicitly restore state

 @pytest.mark.asyncio
 @patch("fastmcp.Client")
 async def test_detect_viewers_with_external(self, mock_client_class):
     """Test detecting viewers with external available."""
     from napari_mcp import server as napari_mcp_server
     from napari_mcp.server import create_server

+    original_state = napari_mcp_server._state
     # Create state in AUTO_DETECT mode
     state = ServerState(mode=StartupMode.AUTO_DETECT)
     napari_mcp_server._state = state
     create_server(state)
-    # ... rest of test ...
+    try:
+        # ... rest of test ...
+    finally:
+        napari_mcp_server._state = original_state

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_external_viewer.py` around lines 263 - 299, The test test_detect_viewers_with_external mutates the global napari_mcp_server._state to a new ServerState(StartupMode.AUTO_DETECT) but does not restore the original value; to fix, save the current napari_mcp_server._state at the start of the test, assign the new ServerState for the test, and then restore the saved state in a finally/teardown block (or use a pytest fixture) so napari_mcp_server._state is reset regardless of test outcome; reference the test function name test_detect_viewers_with_external and the symbols ServerState, StartupMode, and napari_mcp_server._state when making the change.

tests/test_qt_helpers.py (2)
27-31: Test doesn't exercise actual failure path. The test patches the `TEST_QT_FAILURE` environment variable, but `ensure_qt_app` in `qt_helpers.py` doesn't check for this variable. The test passes because `ensure_qt_app` succeeds regardless of this env var. If testing error handling in `setQuitOnLastWindowClosed` is intended, a more direct mock would be needed.

♻️ Optional: Mock the actual failure path

 def test_qt_app_creation(make_napari_viewer):
     """Test Qt application creation and error handling."""
     viewer = make_napari_viewer()  # noqa: F841
     app = ensure_qt_app(napari_mcp_server._state)
     assert app is not None
-    with patch.dict(os.environ, {"TEST_QT_FAILURE": "1"}):
-        # Should not raise exception, just continue
-        app = ensure_qt_app(napari_mcp_server._state)
-        assert app is not None
+    # Test that exception in setQuitOnLastWindowClosed is handled gracefully
+    with patch.object(app, "setQuitOnLastWindowClosed", side_effect=RuntimeError("test")):
+        napari_mcp_server._state.qt_app = None  # Force re-initialization path
+        app2 = ensure_qt_app(napari_mcp_server._state)
+        assert app2 is not None

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_qt_helpers.py` around lines 27 - 31, The test sets TEST_QT_FAILURE but ensure_qt_app never reads that, so update the test to mock the actual Qt call that could fail: patch QtWidgets.QApplication.setQuitOnLastWindowClosed (or the method used inside ensure_qt_app) to raise an exception when invoked, then call ensure_qt_app(napari_mcp_server._state) and assert it returns an app (and/or that the failure was handled) instead of relying on the unused TEST_QT_FAILURE env var; reference ensure_qt_app and setQuitOnLastWindowClosed to locate where to apply the mock.
91-103: Test simulates callback behavior rather than triggering the actual signal. Lines 95-96 manually set `viewer = None` and `window_close_connected = False`, simulating what `_on_destroyed` does. While this tests the reconnection logic correctly, it doesn't verify that the signal actually fires `_on_destroyed`. Consider adding a comment clarifying this is intentional simulation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_qt_helpers.py` around lines 91 - 103, The test currently simulates the destroyed callback by directly setting napari_mcp_server._state.viewer = None and napari_mcp_server._state.window_close_connected = False instead of emitting the actual signal; update the test to add a short clarifying comment immediately above those lines stating this is an intentional simulation of the _on_destroyed behavior (rather than triggering the Qt signal) so readers know connect_window_destroyed_signal and _on_destroyed are exercised indirectly; reference the functions and state names in the comment (_on_destroyed, connect_window_destroyed_signal, napari_mcp_server._state.viewer, napari_mcp_server._state.window_close_connected and make_napari_viewer) to make the intent explicit.

tests/test_server_tools.py (1)
326-345: Mode change may persist beyond test if exception occurs. Line 334 sets `napari_mcp_server._state.mode = StartupMode.AUTO_DETECT`, but if the test fails before completion, the mode won't be restored. Consider using a context manager or try/finally pattern.

♻️ Optional: Use fixture or try/finally for mode changes

 @pytest.fixture
 def auto_detect_mode():
     """Temporarily set AUTO_DETECT mode."""
     original_mode = napari_mcp_server._state.mode
     napari_mcp_server._state.mode = StartupMode.AUTO_DETECT
     yield
     napari_mcp_server._state.mode = original_mode

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_server_tools.py` around lines 326 - 345, The test sets napari_mcp_server._state.mode = StartupMode.AUTO_DETECT inside test_session_information_fallback_on_exception which can leak state if the test errors; wrap the mode change in a brief restore pattern (e.g., save original = napari_mcp_server._state.mode, set to StartupMode.AUTO_DETECT, then in a finally block restore napari_mcp_server._state.mode = original) or use a temporary fixture to set and restore mode so the test leaves napari_mcp_server._state.mode unchanged on failure.

src/napari_mcp/server.py (2)
181-185: Silent exception swallowing may hide connection issues. The bare `except Exception: pass` swallows all exceptions when attempting to connect to an external viewer in AUTO_DETECT mode. While the fallback behavior is intentional, logging at DEBUG level would help troubleshoot connection issues without affecting normal operation.

💡 Proposed improvement

 if state.mode == StartupMode.AUTO_DETECT:
     try:
         return await state.external_session_information()
     except Exception:
-        pass
+        logger.debug("AUTO_DETECT: external viewer not available, falling back to local")

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/napari_mcp/server.py` around lines 181 - 185, Replace the bare "except Exception: pass" when calling state.external_session_information() in the AUTO_DETECT branch with an exception handler that logs the caught exception at DEBUG level and then continues; specifically, change it to "except Exception as e:" and emit a debug log including context (e.g., "Failed to auto-detect external session: %s", e) and include exc_info=True so tracebacks are available, using the module logger (logger) or creating one via logging.getLogger(__name__) if not present, then continue to the fallback behavior.
892-896: `Image.BILINEAR` is deprecated in Pillow 9.1.0+. In Pillow 9.1.0, `Image.BILINEAR` was deprecated in favor of `Image.Resampling.BILINEAR`, and the old constant was removed in Pillow 10.0.0.

♻️ Proposed fix for Pillow compatibility

 if new_w != img.width or new_h != img.height:
-    img = img.resize((new_w, new_h), resample=Image.BILINEAR)
+    img = img.resize((new_w, new_h), resample=Image.Resampling.BILINEAR)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/napari_mcp/server.py` around lines 892 - 896, Replace the deprecated Image.BILINEAR usage in the img.resize call with the newer Pillow API: use Image.Resampling.BILINEAR, but keep compatibility with older Pillow by resolving the resampling enum first (e.g., resample = getattr(Image, "Resampling", Image).BILINEAR) and then call img.resize((new_w, new_h), resample=resample); update the resize invocation in the block that checks downsample_factor and computes new_w/new_h where img.resize is called.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/getting-started/installation.md`:
- Line 183: The page still contains outdated installer and pip examples; remove
all instances of the old installer command pattern "napari-mcp-install
<application>/<app>" and replace them with the new canonical command
"napari-mcp-install install <application> --persistent" (e.g., update the
example at the changed line and the other occurrences), and update any pip
install examples that use the removed extra—change occurrences of 'pip install
-e ".[test,dev]"' and 'uv pip install -e ".[test,dev]"' to the current valid
extra(s) (e.g., 'pip install -e ".[dev]"' or simply 'pip install -e
".[dev,extras]"' depending on project's pyproject.toml) so all examples match
the current pyproject.toml and no invalid commands remain.
In `@docs/getting-started/zero-install.md`:
- Line 43: The document still contains mixed CLI forms; standardize all
installer examples to use the new form "napari-mcp-install install <app>" (e.g.,
"napari-mcp-install install claude-desktop") and update any variants that omit
"install" to this canonical form; also ensure flags like "--persistent" are
attached to the same command (e.g., "napari-mcp-install install <app>
--persistent") and replace the older shorthand occurrences (instances around the
sections currently showing "napari-mcp-install <app>" or missing "install") so
every example consistently uses "napari-mcp-install install <app> [flags]".
In `@docs/guides/troubleshooting.md`:
- Line 75: The page mixes old and new CLI usage: replace every instance of the
removed command shape "napari-mcp-install <app>" with the new subcommand form
"napari-mcp install <app>" (e.g., change "napari-mcp-install foo" to "napari-mcp
install foo" and ensure the example "napari-mcp install claude-desktop" is the
canonical form); scan the document for any flag or argument differences tied to
the old tool name and update those examples to match the new CLI syntax so all
examples are consistent.
In `@docs/index.md`:
- Around line 58-63: The page still contains the old CLI form
"napari-mcp-install <app-name>" which conflicts with the new command surface;
update all instances (including the examples like "napari-mcp-install install
claude-desktop" section and the stale occurrences mentioned around the examples
and the landing-line that reads `napari-mcp-install <app-name>`) to use the new
form "napari-mcp-install install <app-name>" so examples and any copy-paste
commands (e.g., the installer examples and the single-line usage on the landing
content) consistently show the correct "install" subcommand.
In `@src/napari_mcp/bridge_server.py`:
- Around line 233-236: The bridge sets self.state.exec_globals["napari"] to None
so executed code cannot access the napari module; change this to inject the
actual module (i.e., import or retrieve the napari module and assign
self.state.exec_globals["napari"] = napari) similar to the approach used in
server.py; ensure the import is present (or lazily imported) before the
assignment and keep the existing assignments to "__builtins__", "viewer" and
"np" intact so code run via state.exec_globals can call napari functions.
In `@src/napari_mcp/cli/install/codex_cli.py`:
- Around line 121-123: The current code opens config_path directly and calls
toml_writer.dump(config, f) without honoring self.backup, risking truncation;
update both places that write the TOML (the blocks using open(config_path, "w")
and toml_writer.dump) to: if self.backup create a backup copy first, write to a
temporary file (use encoding="utf-8" on open), fsync/close, then atomically
replace the original (e.g., os.replace) so a failed write won't destroy the
existing Codex config, and ensure the flow mirrors the JSON installer
backup+temp-file replace pattern used elsewhere.
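The backup + temp-file + atomic-replace flow this finding describes can be sketched as follows. This is a stdlib-only illustration using plain text rather than TOML; `atomic_write_with_backup` and its signature are hypothetical helpers for exposition, not the installer's API:

```python
import os
import shutil
import tempfile
from pathlib import Path


def atomic_write_with_backup(path: Path, data: str, backup: bool = True) -> None:
    """Write `data` to `path` atomically, optionally keeping a .bak copy first."""
    if backup and path.exists():
        # Preserve the existing config before touching anything
        shutil.copy2(path, path.with_suffix(path.suffix + ".bak"))
    # Write to a temp file in the same directory so os.replace stays atomic
    fd, tmp = tempfile.mkstemp(dir=str(path.parent), suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # ensure bytes hit disk before the swap
        os.replace(tmp, path)  # atomic rename; a failed write never truncates path
    except BaseException:
        os.unlink(tmp)
        raise
```

With this shape, a crash mid-write leaves the original file untouched and at worst a stray `.tmp` file behind.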
In `@src/napari_mcp/cli/main.py`:
- Around line 156-173: When handling InstallTarget.ALL, don't hard-code
global_install=True for specific installers; instead preserve and forward the
caller's scope flags to _create_installer (pass the caller's project_dir and the
global_install flag derived from the CLI flags) for each app_target in
_INSTALLER_CLASS_NAMES, or explicitly reject/raise if mixing --project/--global
with target==ALL; update the call site that currently sets
global_install=app_target in PROJECT_TARGETS to use the caller-provided global
flag and ensure project_dir is forwarded so project-capable installers receive
the correct scope.
In `@src/napari_mcp/state.py`:
- Around line 120-136: The code in the block starting with if hasattr(result,
"content"): reads content[0].type without verifying content is non-empty; add a
bounds check (e.g., if not content or len(content) == 0) before accessing
content[0] and handle the empty-case consistently (return content, or return an
error dict like {"status":"error","message":"Empty content"}) so the subsequent
logic that inspects content[0].type and builds response/tries
json.loads(response) only runs when content has at least one element; ensure the
hasattr(content[0], "text") check remains inside that guarded branch.
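A minimal sketch of the empty-content guard this finding asks for; `first_text_block` is an illustrative helper, not code from state.py:

```python
def first_text_block(content):
    """Return the text of the first content block, or None when there is none."""
    if not content:  # guards both None and an empty list before content[0]
        return None
    block = content[0]
    # Mirror the hasattr(content[0], "text") check inside the guarded branch
    return block.text if hasattr(block, "text") else str(block)
```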
- Around line 172-204: external_session_information currently can raise
connection/JSON errors while its siblings (detect_external_viewer,
proxy_to_external) handle exceptions and return graceful fallbacks; wrap the
existing Client call/processing logic in a try/except that catches Exception,
log or capture the exception message, and return the same error-shaped dict used
at the end (e.g., {"status":"error","message":...}) so callers get a consistent
fallback; update the exception handler to reference the same context (function
external_session_information, the Client instantiation using self.bridge_port
and the result parsing / json.loads steps) and include the exception message in
the returned "message" for easier debugging.
In `@tests/test_cli_utils.py`:
- Around line 434-441: The test for read_json_config accepts false positives
because it only asserts isinstance(result, dict), so implementations that
swallow PermissionError and return {} pass; update the test to distinguish the
readable vs permission-denied cases by checking actual read access (e.g.,
os.access or attempting an open) before calling read_json_config, and in the
readable branch assert the exact expected payload (not just type) returned by
read_json_config(config_file), while leaving the PermissionError branch to
expect/allow the exception; reference read_json_config and the test block in
tests/test_cli_utils.py to locate the change.
In `@tests/test_performance.py`:
- Around line 112-128: The test
TestExecGlobalsPersistence::test_exec_globals_persist_across_calls sets
napari_mcp_server._state.viewer but doesn't restore it on exit; update the test
to wrap the viewer assignment and subsequent execute_code calls in a try/finally
(or use pytest's context) and in the finally block set
napari_mcp_server._state.viewer back to None (or to its previous value saved
before assignment) to ensure cleanup even on failure; reference the test
function name test_exec_globals_persist_across_calls, the fixture
make_napari_viewer, and the state field napari_mcp_server._state.viewer when
making the change.
In `@tests/test_timelapse.py`:
- Around line 148-163: The test test_timelapse_screenshot_negative_index only
asserts a single result; update it to verify that passing "-1" truly selects the
last frame by calling the same tool.fn with the explicit last index (e.g., "4"
for a 5-frame image) and asserting the two results are equal; additionally, for
the 2D/returned-image path assert the actual image/content matches the expected
last frame (compare to img[-1] or its encoded bytes) rather than only checking
len(result)==1; apply the same stronger assertions to the companion 2D-path test
that exercises tool.fn with the 2D option so both tests validate correct frame
selection and returned image content.
In `@tests/test_tools.py`:
- Around line 154-210: In test_mcp_tool_dispatch_local: remove the redundant
re-import "from napari_mcp import server as napari_mcp_server" and instead use
the module already imported earlier, and ensure state cleanup by resetting
napari_mcp_server._state.viewer to None after the test (use a try/finally or
teardown pattern around the usage of server/_state.viewer so the viewer is
cleared even on assertion failures); reference the test function name
test_mcp_tool_dispatch_local, the napari_mcp_server._state.viewer symbol, and
the server variable to locate where to apply these changes.
---
Outside diff comments:
In `@docs/integrations/claude-code.md`:
- Line 44: The Markdown heading "###Plugin Development" is missing a space after
the hashes; update the heading text to "### Plugin Development" (locate the
heading string "###Plugin Development" and insert a space after "###") so it
renders correctly in Markdown parsers.
In `@docs/integrations/cline.md`:
- Line 144: Replace the malformed JSON key "always Allow" with the correct
camelCase key "alwaysAllow" in the example JSON (find the string "always Allow"
and update it to "alwaysAllow") so the sample configuration is valid and will be
recognized by the parser.
In `@docs/integrations/index.md`:
- Around line 53-59: Replace the outdated CLI examples that use the old syntax
"napari-mcp-install <app-name>" and "napari-mcp-install <app-name> --dry-run"
with the updated form that includes the install subcommand (e.g.,
"napari-mcp-install install <app-name>" and "napari-mcp-install install
<app-name> --dry-run") so the Installation Workflow examples match the rest of
the document; update both occurrences that show the preview/dry-run example and
the basic install example.
- Line 14: Update the inconsistent CLI example by replacing the old invocation
string "napari-mcp-install <application-name>" with the new subcommand form
"napari-mcp-install install <target>" so it matches other examples (see
occurrences using "napari-mcp-install install <target>"); ensure the example
text and any surrounding explanatory text refer to the "install" subcommand and
use the same placeholder name ("<target>") used elsewhere.
In `@src/napari_mcp/cli/main.py`:
- Around line 199-275: Uninstall and list currently force global_install=True
for PROJECT_TARGETS and don't accept a project/global scope option; update the
CLI commands to accept the same scope flags as install (e.g., a
--project/--global or --scope option) and thread them into _create_installer and
_get_installer_class instead of hardcoding global_install. In the uninstall
function (uninstall) replace the use of global_install=app_target in
PROJECT_TARGETS with global_install=(not project_flag) or whatever boolean
derives from the new flag you add, and when iterating _INSTALLER_CLASS_NAMES
pass the chosen scope into _create_installer; likewise in list_installations use
kwargs derived from the new scope option (rather than forcing
global_install=True) when constructing installers via _get_installer_class.
Ensure option names match the install command and update help text so
project-scoped configs are discoverable and removable.
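As an illustration of threading a shared scope flag through subcommands, here is a minimal argparse skeleton. The real CLI may use typer or click, and the option and dest names are assumptions, not the project's actual flags:

```python
import argparse


def build_scope_parser(prog: str) -> argparse.ArgumentParser:
    """CLI skeleton: one mutually exclusive --project/--global pair shared by
    install, uninstall, and list, defaulting to the global scope."""
    p = argparse.ArgumentParser(prog=prog)
    p.add_argument("target", help="application target, e.g. cursor")
    scope = p.add_mutually_exclusive_group()
    scope.add_argument(
        "--project",
        dest="global_install",
        action="store_false",
        help="operate on the project-scoped config",
    )
    scope.add_argument(
        "--global",
        dest="global_install",
        action="store_true",
        help="operate on the global config (default)",
    )
    p.set_defaults(global_install=True)
    return p
```

The parsed `global_install` boolean would then be forwarded to the installer factory instead of hardcoding `global_install=True`.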
---
Nitpick comments:
In `@docs/integrations/other-llms.md`:
- Line 208: Change the table row label "maturity" to "Maturity" to match
capitalization of other table headers; locate the markdown table row containing
the literal cell value `maturity` (the row shown as "| **maturity** | Beta |
Stable |") and update it to "| **Maturity** | Beta | Stable |".
In `@src/napari_mcp/output.py`:
- Around line 21-24: The try/except that converts line_limit to int uses a broad
except; change it to catch the specific conversion errors (e.g., ValueError and
TypeError) around the int(line_limit) conversion in the same block so only
invalid input types/values fall back to the default of 30, leaving other
unexpected exceptions to propagate; update the except to "except (ValueError,
TypeError):" and keep the existing fallback assignment to line_limit = 30.
In `@src/napari_mcp/qt_helpers.py`:
- Around line 29-37: When QtWidgets.QApplication.instance() returns an existing
application, state.qt_app is not being updated which can lead to inconsistent
state; update state.qt_app to reference the existing app (i.e., after calling
QtWidgets.QApplication.instance() and assigning to app, set state.qt_app = app)
before calling app.setQuitOnLastWindowClosed and returning app so all consumers
use the same stored QApplication reference.
- Around line 40-54: The try/except in connect_window_destroyed_signal currently
swallows all errors (bare except) when accessing viewer.window._qt_window or
connecting qt_win.destroyed, which hides failures; replace the silent pass with
logging the exception at debug (or warning) level using the module logger (or
state/process logger if available), e.g., catch Exception as e and call
logger.debug or logger.warning with context including "connect_window_destroyed_signal",
the viewer or qt_win info, and the exception, while preserving the existing
behavior of not re-raising and still setting state.window_close_connected only
on successful connect; ensure references to viewer.window._qt_window,
qt_win.destroyed.connect, state.viewer and state.window_close_connected are used
in the message to aid debugging.
In `@src/napari_mcp/server.py`:
- Around line 181-185: Replace the bare "except Exception: pass" when calling
state.external_session_information() in the AUTO_DETECT branch with an exception
handler that logs the caught exception at DEBUG level and then continues;
specifically, change it to "except Exception as e:" and emit a debug log
including context (e.g., "Failed to auto-detect external session: %s", e) and
include exc_info=True so tracebacks are available, using the module logger
(logger) or creating one via logging.getLogger(__name__) if not present, then
continue to the fallback behavior.
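A sketch of the debug-logging pattern this finding suggests. The `auto_detect_session` wrapper is hypothetical; only the logging call mirrors the suggested change:

```python
import logging

logger = logging.getLogger(__name__)


def auto_detect_session(probe):
    """Run a detection probe; on failure, log at DEBUG with the traceback
    (exc_info=True) and fall back to None instead of silently passing."""
    try:
        return probe()
    except Exception as e:
        logger.debug("Failed to auto-detect external session: %s", e, exc_info=True)
        return None
```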
- Around line 892-896: Replace the deprecated Image.BILINEAR usage in the
img.resize call with the newer Pillow API: use Image.Resampling.BILINEAR, but
keep compatibility with older Pillow by resolving the resampling enum first
(e.g., resample = getattr(Image, "Resampling", Image).BILINEAR) and then call
img.resize((new_w, new_h), resample=resample); update the resize invocation in
the block that checks downsample_factor and computes new_w/new_h where
img.resize is called.
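The version-tolerant resampling lookup can be isolated into a one-liner. It is shown here against stand-in classes so it runs without Pillow; with Pillow installed you would pass `PIL.Image`:

```python
def bilinear_resample(image_module):
    """Resolve the BILINEAR constant for both Pillow APIs.

    Pillow >= 9.1 exposes Image.Resampling.BILINEAR; older releases only have
    the legacy Image.BILINEAR constant. getattr with a fallback handles both.
    """
    return getattr(image_module, "Resampling", image_module).BILINEAR
```

Usage would then be `img.resize((new_w, new_h), resample=bilinear_resample(Image))`.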
In `@tests/test_bridge_server.py`:
- Around line 183-186: Extract the repeated local helper execute_directly into a
module-level function (e.g., _execute_directly) and update tests to reuse it;
specifically, define _execute_directly(func, **kwargs) at the top of the test
module and replace local defs of execute_directly and any occurrences like
mock_run.side_effect = execute_directly with mock_run.side_effect =
_execute_directly to remove duplication while preserving behavior.
In `@tests/test_external_viewer.py`:
- Around line 263-299: The test test_detect_viewers_with_external mutates the
global napari_mcp_server._state to a new ServerState(StartupMode.AUTO_DETECT)
but does not restore the original value; to fix, save the current
napari_mcp_server._state at the start of the test, assign the new ServerState
for the test, and then restore the saved state in a finally/teardown block (or
use a pytest fixture) so napari_mcp_server._state is reset regardless of test
outcome; reference the test function name test_detect_viewers_with_external and
the symbols ServerState, StartupMode, and napari_mcp_server._state when making
the change.
In `@tests/test_property_based.py`:
- Around line 59-69: The test's line-counting logic doesn't match
_truncate_output's use of splitlines(keepends=True), causing edge-case miscounts
(whitespace-only or consecutive empty lines); update
test_truncation_respects_limit to compute actual_line_count by using
result.splitlines(keepends=True) (e.g., actual_line_count =
len(result.splitlines(keepends=True)) if result else 0) so the test counts lines
the same way _truncate_output does and references the function _truncate_output
and the test_truncation_respects_limit name to locate the change.
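The keepends-consistent counting suggested above, as a sketch; `count_lines_like_truncate` is an illustrative name mirroring how `_truncate_output` is described to count lines:

```python
def count_lines_like_truncate(text: str) -> int:
    """Count lines the same way as splitlines(keepends=True), so whitespace-only
    and consecutive empty lines are tallied identically to the implementation."""
    return len(text.splitlines(keepends=True)) if text else 0
```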
In `@tests/test_qt_helpers.py`:
- Around line 27-31: The test sets TEST_QT_FAILURE but ensure_qt_app never reads
that, so update the test to mock the actual Qt call that could fail: patch
QtWidgets.QApplication.setQuitOnLastWindowClosed (or the method used inside
ensure_qt_app) to raise an exception when invoked, then call
ensure_qt_app(napari_mcp_server._state) and assert it returns an app (and/or
that the failure was handled) instead of relying on the unused TEST_QT_FAILURE
env var; reference ensure_qt_app and setQuitOnLastWindowClosed to locate where
to apply the mock.
- Around line 91-103: The test currently simulates the destroyed callback by
directly setting napari_mcp_server._state.viewer = None and
napari_mcp_server._state.window_close_connected = False instead of emitting the
actual signal; update the test to add a short clarifying comment immediately
above those lines stating this is an intentional simulation of the _on_destroyed
behavior (rather than triggering the Qt signal) so readers know
connect_window_destroyed_signal and _on_destroyed are exercised indirectly;
reference the functions and state names in the comment (_on_destroyed,
connect_window_destroyed_signal, napari_mcp_server._state.viewer,
napari_mcp_server._state.window_close_connected and make_napari_viewer) to make
the intent explicit.
In `@tests/test_server_tools.py`:
- Around line 326-345: The test sets napari_mcp_server._state.mode =
StartupMode.AUTO_DETECT inside test_session_information_fallback_on_exception
which can leak state if the test errors; wrap the mode change in a brief restore
pattern (e.g., save original = napari_mcp_server._state.mode, set to
StartupMode.AUTO_DETECT, then in a finally block restore
napari_mcp_server._state.mode = original) or use a temporary fixture to set and
restore mode so the test leaves napari_mcp_server._state.mode unchanged on
failure.
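The save/set/restore pattern recommended in several of these findings generalizes to a small context manager. This is an illustrative sketch, not an existing fixture in the test suite:

```python
import contextlib


@contextlib.contextmanager
def temporary_attr(obj, name, value):
    """Set obj.<name> = value for the duration of the block, then restore the
    original value even if the body raises (the try/finally pattern the review
    asks for, packaged for reuse across tests)."""
    original = getattr(obj, name)
    setattr(obj, name, value)
    try:
        yield obj
    finally:
        setattr(obj, name, original)
```

A test could then write `with temporary_attr(napari_mcp_server._state, "mode", StartupMode.AUTO_DETECT): ...` and leave the state clean on any outcome.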
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e06852d4-c283-4a38-8753-c620038998eb
⛔ Files ignored due to path filters (1)
uv.lock is excluded by !**/*.lock
📒 Files selected for processing (51)
- .github/workflows/test.yml
- CONTRIBUTING.md
- README.md
- docs/getting-started/index.md
- docs/getting-started/installation.md
- docs/getting-started/quickstart.md
- docs/getting-started/zero-install.md
- docs/guides/napari-plugin.md
- docs/guides/troubleshooting.md
- docs/index.md
- docs/integrations/chatgpt.md
- docs/integrations/claude-code.md
- docs/integrations/claude-desktop.md
- docs/integrations/cline.md
- docs/integrations/cursor.md
- docs/integrations/index.md
- docs/integrations/other-llms.md
- pyproject.toml
- src/napari_mcp/__init__.py
- src/napari_mcp/bridge_server.py
- src/napari_mcp/cli/install/base.py
- src/napari_mcp/cli/install/codex_cli.py
- src/napari_mcp/cli/install/utils.py
- src/napari_mcp/cli/main.py
- src/napari_mcp/output.py
- src/napari_mcp/qt_helpers.py
- src/napari_mcp/server.py
- src/napari_mcp/state.py
- src/napari_mcp/viewer_protocol.py
- tests/conftest.py
- tests/fixtures/__init__.py
- tests/fixtures/async_utils.py
- tests/test_bridge_server.py
- tests/test_bridge_simplified.py
- tests/test_cli_installer.py
- tests/test_cli_installers/test_platform_installers.py
- tests/test_cli_utils.py
- tests/test_coverage.py
- tests/test_edge_cases.py
- tests/test_external_viewer.py
- tests/test_integration.py
- tests/test_napari_server_coverage.py
- tests/test_output_storage.py
- tests/test_performance.py
- tests/test_property_based.py
- tests/test_qt_helpers.py
- tests/test_server_tools.py
- tests/test_state.py
- tests/test_timelapse.py
- tests/test_tools.py
- tests/test_widget.py
💤 Files with no reviewable changes (6)
- tests/fixtures/__init__.py
- tests/test_coverage.py
- tests/test_edge_cases.py
- tests/test_napari_server_coverage.py
- tests/test_bridge_simplified.py
- tests/fixtures/async_utils.py
  # Then retry
- napari-mcp-install claude-desktop
+ napari-mcp-install install claude-desktop
The CLI migration is only partially applied on this page.
Line 75 uses the new install subcommand, but Lines 49, 87, 125, 128, and 138 still show napari-mcp-install <app> .... Those examples now point to a removed command shape, so readers will still hit broken instructions here.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/guides/troubleshooting.md` at line 75, The page mixes old and new CLI
usage: replace every instance of the removed command shape "napari-mcp-install
<app>" with the new subcommand form "napari-mcp install <app>" (e.g., change
"napari-mcp-install foo" to "napari-mcp install foo" and ensure the example
"napari-mcp install claude-desktop" is the canonical form); scan the document
for any flag or argument differences tied to the old tool name and update those
examples to match the new CLI syntax so all examples are consistent.
async def external_session_information(self) -> dict[str, Any]:
    """Get session information from the external viewer."""
    from fastmcp import Client

    test_client = Client(f"http://localhost:{self.bridge_port}/mcp")
    async with test_client:
        result = await test_client.call_tool("session_information")
        if hasattr(result, "content"):
            content = result.content
            if isinstance(content, list) and len(content) > 0:
                info = (
                    content[0].text
                    if hasattr(content[0], "text")
                    else str(content[0])
                )
                info_dict = json.loads(info) if isinstance(info, str) else info
                if info_dict.get("session_type") == "napari_bridge_session":
                    return {
                        "status": "ok",
                        "viewer_type": "external",
                        "title": info_dict.get("viewer", {}).get(
                            "title", "External Viewer"
                        ),
                        "layers": info_dict.get("viewer", {}).get(
                            "layer_names", []
                        ),
                        "port": info_dict.get("bridge_port", self.bridge_port),
                    }

    return {
        "status": "error",
        "message": "Failed to get session information from external viewer",
    }
external_session_information doesn't catch exceptions like its siblings.
Unlike detect_external_viewer and proxy_to_external which catch exceptions and return graceful fallbacks, external_session_information will propagate exceptions (connection errors, JSON decode errors, etc.) to callers. This inconsistency could cause unhandled exceptions in AUTO_DETECT fallback paths.
🐛 Proposed fix: Add exception handling
async def external_session_information(self) -> dict[str, Any]:
"""Get session information from the external viewer."""
- from fastmcp import Client
+ try:
+ from fastmcp import Client
- test_client = Client(f"http://localhost:{self.bridge_port}/mcp")
- async with test_client:
- result = await test_client.call_tool("session_information")
- if hasattr(result, "content"):
- content = result.content
- if isinstance(content, list) and len(content) > 0:
- info = (
- content[0].text
- if hasattr(content[0], "text")
- else str(content[0])
- )
- info_dict = json.loads(info) if isinstance(info, str) else info
- if info_dict.get("session_type") == "napari_bridge_session":
- return {
- "status": "ok",
- "viewer_type": "external",
- "title": info_dict.get("viewer", {}).get(
- "title", "External Viewer"
- ),
- "layers": info_dict.get("viewer", {}).get(
- "layer_names", []
- ),
- "port": info_dict.get("bridge_port", self.bridge_port),
- }
+ test_client = Client(f"http://localhost:{self.bridge_port}/mcp")
+ async with test_client:
+ result = await test_client.call_tool("session_information")
+ if hasattr(result, "content"):
+ content = result.content
+ if isinstance(content, list) and len(content) > 0:
+ info = (
+ content[0].text
+ if hasattr(content[0], "text")
+ else str(content[0])
+ )
+ info_dict = json.loads(info) if isinstance(info, str) else info
+ if info_dict.get("session_type") == "napari_bridge_session":
+ return {
+ "status": "ok",
+ "viewer_type": "external",
+ "title": info_dict.get("viewer", {}).get(
+ "title", "External Viewer"
+ ),
+ "layers": info_dict.get("viewer", {}).get(
+ "layer_names", []
+ ),
+ "port": info_dict.get("bridge_port", self.bridge_port),
+ }
+    except Exception:
+        pass
     return {
         "status": "error",
         "message": "Failed to get session information from external viewer",
     }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
async def external_session_information(self) -> dict[str, Any]:
    """Get session information from the external viewer."""
    try:
        from fastmcp import Client

        test_client = Client(f"http://localhost:{self.bridge_port}/mcp")
        async with test_client:
            result = await test_client.call_tool("session_information")
            if hasattr(result, "content"):
                content = result.content
                if isinstance(content, list) and len(content) > 0:
                    info = (
                        content[0].text
                        if hasattr(content[0], "text")
                        else str(content[0])
                    )
                    info_dict = json.loads(info) if isinstance(info, str) else info
                    if info_dict.get("session_type") == "napari_bridge_session":
                        return {
                            "status": "ok",
                            "viewer_type": "external",
                            "title": info_dict.get("viewer", {}).get(
                                "title", "External Viewer"
                            ),
                            "layers": info_dict.get("viewer", {}).get(
                                "layer_names", []
                            ),
                            "port": info_dict.get("bridge_port", self.bridge_port),
                        }
    except Exception:
        pass
    return {
        "status": "error",
        "message": "Failed to get session information from external viewer",
    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/napari_mcp/state.py` around lines 172 - 204, external_session_information
currently can raise connection/JSON errors while its siblings
(detect_external_viewer, proxy_to_external) handle exceptions and return
graceful fallbacks; wrap the existing Client call/processing logic in a
try/except that catches Exception, log or capture the exception message, and
return the same error-shaped dict used at the end (e.g.,
{"status":"error","message":...}) so callers get a consistent fallback; update
the exception handler to reference the same context (function
external_session_information, the Client instantiation using self.bridge_port
and the result parsing / json.loads steps) and include the exception message in
the returned "message" for easier debugging.
 try:
     with patch("napari_mcp.cli.install.utils.console"):
-        result = read_json_config(config_file)
-        # Might succeed on some systems
-        assert isinstance(result, dict)
+        try:
+            result = read_json_config(config_file)
+            # Might succeed on some systems (e.g., root)
+            assert isinstance(result, dict)
+        except PermissionError:
+            pass  # Expected on most systems
This permission test now accepts false positives.
The readable branch only checks isinstance(result, dict), so an implementation that incorrectly swallows PermissionError and returns {} would still pass. If you need to support elevated environments, branch on actual read access and assert the exact payload in the readable case.
Suggested tightening
try:
with patch("napari_mcp.cli.install.utils.console"):
- try:
- result = read_json_config(config_file)
- # Might succeed on some systems (e.g., root)
- assert isinstance(result, dict)
- except PermissionError:
- pass # Expected on most systems
+ if os.access(config_file, os.R_OK):
+ assert read_json_config(config_file) == {"test": "value"}
+ else:
+ with pytest.raises(PermissionError):
+ read_json_config(config_file)
finally:
        config_file.chmod(0o644)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_cli_utils.py` around lines 434 - 441, The test for
read_json_config accepts false positives because it only asserts
isinstance(result, dict), so implementations that swallow PermissionError and
return {} pass; update the test to distinguish the readable vs permission-denied
cases by checking actual read access (e.g., os.access or attempting an open)
before calling read_json_config, and in the readable branch assert the exact
expected payload (not just type) returned by read_json_config(config_file),
while leaving the PermissionError branch to expect/allow the exception;
reference read_json_config and the test block in tests/test_cli_utils.py to
locate the change.
-    @pytest.mark.asyncio
-    async def test_high_frequency_operations(self):
-        """Test system under high frequency operations."""
-        mock_viewer = Mock()
-        mock_viewer.layers = []
-
-        operation_count = 1000
-
-        async def rapid_operation(index):
-            # Simulate rapid fire operations
-            if index % 2 == 0:
-                mock_viewer.layers.append(Mock(name=f"layer_{index}"))
-            elif mock_viewer.layers:
-                mock_viewer.layers.pop()
-
-        with measure_time(f"{operation_count}_rapid_operations") as timer:
-            tasks = [rapid_operation(i) for i in range(operation_count)]
-            await asyncio.gather(*tasks)
-
-        # Should handle 1000 operations in reasonable time
-        assert timer["duration"] < 5.0
-
-        # System should remain stable
-        assert isinstance(mock_viewer.layers, list)
-
-    def test_memory_stability_under_load(self):
-        """Test memory stability under sustained load."""
-        import gc
-        import tracemalloc
-
-        tracemalloc.start()
-        initial_snapshot = tracemalloc.take_snapshot()
-
-        mock_viewer = Mock()
-
-        # Simulate sustained load
-        for _ in range(10):
-            layers = []
-
-            # Add many layers
-            for _ in range(100):
-                layer = Mock(data=np.zeros((100, 100)))
-                layers.append(layer)
-
-            mock_viewer.layers = layers
-
-            # Clear and force garbage collection
-            mock_viewer.layers.clear()
-            gc.collect()
-
-        final_snapshot = tracemalloc.take_snapshot()
-        stats = final_snapshot.compare_to(initial_snapshot, "lineno")
-
-        # Calculate leak
-        leak_mb = (
-            sum(stat.size_diff for stat in stats if stat.size_diff > 0) / 1024 / 1024
-        )
-
-        tracemalloc.stop()
-
-        # Should not leak more than 10MB after cycles
-        assert leak_mb < 10, f"Memory leak detected: {leak_mb:.2f}MB"
+class TestExecGlobalsPersistence:
+    """Test that the execution namespace persists across calls."""
+
+    @pytest.mark.asyncio
+    async def test_exec_globals_persist_across_calls(self, make_napari_viewer):
+        """Variables set in one execute_code call are available in the next."""
+        from napari_mcp import server as napari_mcp_server
+
+        viewer = make_napari_viewer()
+        napari_mcp_server._state.viewer = viewer
+
+        res = await napari_mcp_server.execute_code("my_var = 42")
+        assert res["status"] == "ok"
+
+        res = await napari_mcp_server.execute_code("my_var * 2")
+        assert res["status"] == "ok"
+        assert res["result_repr"] == "84"
Missing cleanup of server state after test.
The test sets napari_mcp_server._state.viewer but does not reset it in a finally block. If the test fails or other tests depend on a clean state, this could cause test pollution. Compare with test_regression_async_operations which properly cleans up.
🧹 Proposed fix to add cleanup
`@pytest.mark.asyncio`
async def test_exec_globals_persist_across_calls(self, make_napari_viewer):
"""Variables set in one execute_code call are available in the next."""
from napari_mcp import server as napari_mcp_server
viewer = make_napari_viewer()
napari_mcp_server._state.viewer = viewer
-
- res = await napari_mcp_server.execute_code("my_var = 42")
- assert res["status"] == "ok"
-
- res = await napari_mcp_server.execute_code("my_var * 2")
- assert res["status"] == "ok"
- assert res["result_repr"] == "84"
+ try:
+ res = await napari_mcp_server.execute_code("my_var = 42")
+ assert res["status"] == "ok"
+
+ res = await napari_mcp_server.execute_code("my_var * 2")
+ assert res["status"] == "ok"
+ assert res["result_repr"] == "84"
+ finally:
+        napari_mcp_server._state.viewer = None

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_performance.py` around lines 112 - 128, The test
TestExecGlobalsPersistence::test_exec_globals_persist_across_calls sets
napari_mcp_server._state.viewer but doesn't restore it on exit; update the test
to wrap the viewer assignment and subsequent execute_code calls in a try/finally
(or use pytest's context) and in the finally block set
napari_mcp_server._state.viewer back to None (or to its previous value saved
before assignment) to ensure cleanup even on failure; reference the test
function name test_exec_globals_persist_across_calls, the fixture
make_napari_viewer, and the state field napari_mcp_server._state.viewer when
making the change.
async def test_timelapse_screenshot_negative_index(make_napari_viewer):
    """Test timelapse_screenshot with negative index (last frame)."""
    viewer = make_napari_viewer()
    from napari_mcp import server as napari_mcp_server

    napari_mcp_server._state.viewer = viewer

    img = np.linspace(0, 255, 5 * 32 * 32, dtype=np.uint8).reshape(5, 32, 32)
    viewer.add_image(img, name="timelapse")

    tool = await napari_mcp_server.server.get_tool("timelapse_screenshot")

    # "-1" should select only the last frame
    result = await tool.fn(0, "-1", True, False)
    assert isinstance(result, list)
    assert len(result) == 1
Strengthen these edge-case assertions.
These tests currently prove only that a single screenshot is returned. They do not prove that -1 resolves to the last frame or that the 2D path returns the expected image. A regression that coerces -1 to 0 would still pass here. Please compare the -1 result against an explicit last-index request, and make the 2D case assert on the returned image/content rather than just len(result) == 1.
Also applies to: 167-182
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_timelapse.py` around lines 148 - 163, The test
test_timelapse_screenshot_negative_index only asserts a single result; update it
to verify that passing "-1" truly selects the last frame by calling the same
tool.fn with the explicit last index (e.g., "4" for a 5-frame image) and
asserting the two results are equal; additionally, for the 2D/returned-image
path assert the actual image/content matches the expected last frame (compare to
img[-1] or its encoded bytes) rather than only checking len(result)==1; apply
the same stronger assertions to the companion 2D-path test that exercises
tool.fn with the 2D option so both tests validate correct frame selection and
returned image content.
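As a concrete illustration of the stronger assertion (pure NumPy, no napari required; `parse_frame_index` is a hypothetical stand-in for the index handling inside `timelapse_screenshot`):

```python
import numpy as np

def parse_frame_index(spec: str, n_frames: int) -> int:
    """Resolve a frame spec like '-1' against the number of frames."""
    idx = int(spec)
    return idx % n_frames  # negative indices wrap, Python-style

# Same synthetic timelapse as the test under review
img = np.linspace(0, 255, 5 * 32 * 32, dtype=np.uint8).reshape(5, 32, 32)

neg = parse_frame_index("-1", img.shape[0])
last = parse_frame_index("4", img.shape[0])

assert neg == last == 4                      # "-1" and the explicit last index agree
assert np.array_equal(img[neg], img[-1])     # the content is really the last frame
assert not np.array_equal(img[neg], img[0])  # a regression coercing -1 to 0 fails here
```

Comparing the `-1` result against an explicit last-index request, plus a content check, is what makes a coerce-to-zero regression detectable.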
```python
@pytest.mark.asyncio
async def test_mcp_tool_dispatch_local(make_napari_viewer) -> None:
    """Integration test: verify local server tools are registered and callable via MCP dispatch."""
    viewer = make_napari_viewer()
    from napari_mcp import server as napari_mcp_server

    napari_mcp_server._state.viewer = viewer

    server = napari_mcp_server.server

    # Verify all expected tools are registered
    expected_tools = {
        "add_points",
        "add_image",
        "add_labels",
        "list_layers",
        "remove_layer",
        "set_active_layer",
        "set_camera",
        "set_ndisplay",
        "set_grid",
        "set_dims_current_step",
        "set_layer_properties",
        "reorder_layer",
        "reset_view",
        "screenshot",
        "timelapse_screenshot",
        "execute_code",
        "session_information",
        "close_viewer",
        "detect_viewers",
        "init_viewer",
        "install_packages",
        "read_output",
    }

    # get_tool should succeed for all expected tools
    for tool_name in expected_tools:
        tool = await server.get_tool(tool_name)
        assert tool is not None, f"Tool '{tool_name}' not registered"
        assert callable(tool.fn), f"Tool '{tool_name}' fn is not callable"

    # Exercise a few tools through MCP dispatch (not direct function calls)
    tool = await server.get_tool("add_points")
    result = await tool.fn([[5, 5], [10, 10]], name="mcp_dispatch_pts", size=3)
    assert result["status"] == "ok"
    assert result["n_points"] == 2

    tool = await server.get_tool("remove_layer")
    result = await tool.fn("mcp_dispatch_pts")
    assert result["status"] == "removed"

    tool = await server.get_tool("session_information")
    result = await tool.fn()
    assert result["status"] == "ok"
    assert "viewer" in result
```
Missing state cleanup and redundant import.
- The test does not clean up `napari_mcp_server._state.viewer` after completion, which could affect subsequent tests.
- Line 158 re-imports `napari_mcp_server`, which is already imported at line 6.
🧹 Proposed fix
```diff
 @pytest.mark.asyncio
 async def test_mcp_tool_dispatch_local(make_napari_viewer) -> None:
     """Integration test: verify local server tools are registered and callable via MCP dispatch."""
     viewer = make_napari_viewer()
-    from napari_mcp import server as napari_mcp_server
     napari_mcp_server._state.viewer = viewer
-    server = napari_mcp_server.server
-
-    # Verify all expected tools are registered
-    expected_tools = {
-        # ... tools ...
-    }
-
-    # ... assertions ...
+    try:
+        server = napari_mcp_server.server
+        # ... rest of the test ...
+        tool = await server.get_tool("session_information")
+        result = await tool.fn()
+        assert result["status"] == "ok"
+        assert "viewer" in result
+    finally:
+        napari_mcp_server._state.viewer = None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_tools.py` around lines 154 - 210, In test_mcp_tool_dispatch_local:
remove the redundant re-import "from napari_mcp import server as
napari_mcp_server" and instead use the module already imported earlier, and
ensure state cleanup by resetting napari_mcp_server._state.viewer to None after
the test (use a try/finally or teardown pattern around the usage of
server/_state.viewer so the viewer is cleared even on assertion failures);
reference the test function name test_mcp_tool_dispatch_local, the
napari_mcp_server._state.viewer symbol, and the server variable to locate where
to apply these changes.
API consolidation:
- Merge add_image/add_labels/add_points into unified add_layer (all 7 types)
- Merge set_camera/set_ndisplay/set_dims_current_step/set_grid/reset_view into configure_viewer
- Absorb set_active_layer into set_layer_properties (active param)
- Absorb timelapse_screenshot into screenshot (axis/slice_range params)
- New tools: get_layer (metadata+data+slicing), apply_to_layers (batch ops), save_layer_data (npy/tiff/csv export)

Validation and error handling:
- Input validation for opacity (0-1), gamma (>0), zoom (>0), ndisplay (2-3)
- contrast_limits must be exactly [min, max] (reject wrong lengths)
- Invalid colormap/blending return error instead of crashing
- configure_viewer validates before mutating (no partial state on error)
- Dims values clamped to valid range with warning
- add_layer rejects conflicting data sources (data + data_var + path)

New shared helpers module (_helpers.py):
- parse_bool, resolve_layer_type, build_layer_detail, create_layer_on_viewer
- run_code (exec with stdout/stderr capture), build_truncated_response
- Eliminates duplication between server.py and bridge_server.py

Bridge server:
- Exclude lifecycle tools (init_viewer, close_viewer) in bridge mode
- Updated for add_layer API

Tests (454 passing, 80.71% coverage):
- test_server_tools.py rewritten: class-per-tool, ~100 tests
- test_helpers.py: unit tests for shared helpers
- test_widget.py: port change UI, server failure handling
- test_bridge_server.py: lifecycle exclusion, add_layer via bridge
- test_tools.py: tool completeness checks, README sync test

Other:
- Delete tox.ini (uv-based workflow)
- Add _helpers.py to ruff per-file ignores
- Update docs, CONTRIBUTING.md, codecov.yml

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
✅ Documentation builds successfully! The site will be updated when this PR is merged.
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
docs/integrations/other-llms.md (1)
161-172: ⚠️ Potential issue | 🟡 Minor — Use the documented Codex config path in the TOML validation step.
Line 164 validates `.codex/config.toml`, but the doc says the Codex config lives at `~/.codex/config.toml` on all platforms. As written, this troubleshooting command can fail for users who copy/paste it outside a directory containing a local `.codex` folder.
Suggested doc fix (`open()` does not expand `~`, so resolve it explicitly):

```diff
- 3. Validate TOML syntax: `python -c "import toml; toml.load(open('.codex/config.toml'))"`
+ 3. Validate TOML syntax: `python -c "import toml, pathlib; toml.load(open(pathlib.Path('~/.codex/config.toml').expanduser()))"`
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/integrations/other-llms.md` around lines 161 - 172, Update the TOML validation step so it points to the documented Codex config at ~/.codex/config.toml (or explicitly resolve the home directory) instead of a relative .codex/config.toml; specifically change the line that runs the Python toml load (the "Validate TOML syntax" step) to open the file at ~/.codex/config.toml or use os.path.expanduser to resolve '~' when loading the file.

.github/workflows/release.yml (1)
82-89: ⚠️ Potential issue | 🟡 Minor — Redundant and misleading dry-run output.
The label "Wheel contents:" suggests inspecting inside the wheel, but it just lists `dist/` again. Lines 87 and 89 both list the same directory with slightly different flags (`-la` vs `-lh`), which is redundant.
Proposed fix

```diff
 - name: Dry run notification
   if: inputs.dry_run
   run: |
     echo "🔍 Dry run complete - artifacts built but not published"
     echo "Built artifacts:"
-    ls -la dist/
-    echo "Wheel contents:"
     ls -lh dist/
```

If you actually want to show wheel contents, use:

```shell
echo "Wheel contents:"
unzip -l dist/*.whl | head -30
```
Verify each finding against the current code and only fix it if needed. In @.github/workflows/release.yml around lines 82 - 89, The "Dry run notification" step prints redundant listings and mislabels the second listing as "Wheel contents:" while both commands list dist/; replace the duplicate ls -lh dist/ with either a proper wheel-inspection command (e.g., run unzip -l dist/*.whl | head -30 to show wheel contents) or change the label to reflect what's actually listed and remove one of the duplicate ls lines so only one directory listing remains; update the echo label(s) and the commands in the step so the output matches the action (refer to the step name "Dry run notification" and the existing ls commands).

.github/workflows/claude-code-review.yml (1)
26-42: ⚠️ Potential issue | 🟠 Major — Gate this job for fork PRs, or the new comment flow will fail.
This job now depends on both `${{ secrets.ANTHROPIC_API_KEY }}` and `pull-requests: write`, but workflows triggered by `pull_request` events from forks have repository secrets withheld and `GITHUB_TOKEN` forced to read-only, regardless of requested permissions. The action will fail silently for external PRs without a fork-safe guard.
Suggested guard

```diff
-    if: github.event.pull_request.draft == false
+    if: |
+      github.event.pull_request.draft == false &&
+      github.event.pull_request.head.repo.full_name == github.repository
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/claude-code-review.yml around lines 26 - 42, The Run Claude Code Review job (id: claude-review) must be gated to skip forked PRs so it won't try to use the withheld secret (anthropic_api_key) or pull-requests: write permission for external contributors; add a fork-safe guard on the job (or that specific step) using a conditional that only runs when the PR source repo equals the target repo (e.g., check github.event.pull_request.head.repo.full_name == github.repository or similar) so the job/step is skipped for fork PRs and the action won't attempt to read secrets for those events.

src/napari_mcp/cli/main.py (1)
199-255: ⚠️ Potential issue | 🟡 Minor — `uninstall` lacks `--global`/`--project` options, creating asymmetry with `install`.
The `install` command accepts `--global` and `--project` for Cursor/Gemini, but `uninstall` hard-codes `global_install=target in PROJECT_TARGETS` (lines 233, 250). If a user installed with `--project /repo`, they cannot uninstall from that same location.
Consider adding the same scope options to `uninstall` for consistency, or document that uninstall always targets the global configuration.
♻️ Duplicate comments (3)
src/napari_mcp/state.py (2)
120-136: ⚠️ Potential issue | 🟡 Minor — Check that `content` is non-empty before `content[0]`.
Line 122 assumes the bridge returned at least one content item. An empty response currently falls into the broad `except` and gets reported as "no external viewer" instead of a malformed response.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/napari_mcp/state.py` around lines 120 - 136, The code accesses content[0] without verifying content is non-empty; update the block that handles hasattr(result, "content") to first check that content is truthy and len(content) > 0 (or content != []) before inspecting content[0]. If content is empty, return a clear error dict (e.g., {"status":"error","message":"Empty content from bridge"}) instead of letting the call fall through to the outer exception handling; keep the existing JSON parsing logic for non-empty content so functions like result.content and the local variable content are still used.
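A minimal sketch of that guard, using fake result objects in place of the real fastmcp call result (class names here are illustrative, not the project's actual API):

```python
import json

def parse_bridge_result(result):
    """Parse a bridge call result, returning an error dict on empty/bad payloads."""
    content = getattr(result, "content", None)
    if not content:  # None or empty list: malformed/empty bridge response
        return {"status": "error", "message": "Empty content from bridge"}
    try:
        return json.loads(content[0].text)
    except (json.JSONDecodeError, AttributeError) as exc:
        return {"status": "error", "message": f"Bad bridge payload: {exc}"}

class FakeItem:
    def __init__(self, text):
        self.text = text

class FakeResult:
    def __init__(self, content):
        self.content = content

assert parse_bridge_result(FakeResult([]))["status"] == "error"
assert parse_bridge_result(FakeResult([FakeItem('{"status": "ok"}')])) == {"status": "ok"}
```

The key point is that the empty-list case is distinguished from a transport failure instead of both collapsing into the broad `except`.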
179-210: ⚠️ Potential issue | 🟠 Major — Make `external_session_information()` return an error dict on transport/parse failures.
Unlike `detect_external_viewer()` and `proxy_to_external()`, this method still lets connection and JSON errors escape. That makes the AUTO_DETECT fallback path much less tolerant than the rest of the state API.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/napari_mcp/state.py` around lines 179 - 210, external_session_information currently allows transport (Client/call_tool) and JSON parsing errors to bubble up; wrap the client creation/async-with block and the json.loads() call in a try/except that catches network/transport exceptions and json.JSONDecodeError (and a general Exception fallback), and on any caught error return the standard error dict {"status": "error", "message": "<brief error>"} instead of raising; keep the existing logic using test_client, bridge_port, and info_dict intact and include the caught exception message in the returned "message" to aid debugging.

tests/test_tools.py (1)
47-136: ⚠️ Potential issue | 🟡 Minor — Restore `napari_mcp_server._state.viewer` in `finally`.
These tests overwrite the module-global viewer and only clean up on the happy path. If an assertion fails before the last step, later tests inherit a live viewer and the suite becomes order-dependent.
Also applies to: 140-171
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_tools.py` around lines 47 - 136, The test overwrites the module-global napari_mcp_server._state.viewer and only resets it on the happy path; capture the original viewer at the start of test_all_tools_end_to_end (and the similar test at lines ~140-171) into a local variable, wrap the existing test body in try/finally, and in finally restore napari_mcp_server._state.viewer = original_viewer (and optionally call napari_mcp_server.close_viewer() if the original was None) so the global viewer is always restored even when assertions fail; locate the test by the function name test_all_tools_end_to_end to make the change.
🧹 Nitpick comments (8)
docs/integrations/python.md (1)
309-309: Make `call_tool` examples consistently use the `arguments=` parameter.
Most examples in this guide use `await session.call_tool("...", arguments={...})`, but lines 309, 318, 321, and 353 pass the arguments dict positionally. While positional arguments are supported, this inconsistency makes the documentation harder to follow and the examples less safe to copy-paste. Align all examples with the documented pattern by adding the `arguments=` keyword.
Suggested changes

```diff
-await session.call_tool("add_layer", {"layer_type": "image", "path": path})
+await session.call_tool("add_layer", arguments={"layer_type": "image", "path": path})

-await session.call_tool("execute_code", {"code": code})
+await session.call_tool("execute_code", arguments={"code": code})

-await session.call_tool("screenshot", {"canvas_only": True})
+await session.call_tool("screenshot", arguments={"canvas_only": True})

-result = await session.call_tool("add_layer", {"layer_type": "image", "path": "/invalid/path.tif"})
+result = await session.call_tool("add_layer", arguments={"layer_type": "image", "path": "/invalid/path.tif"})
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/integrations/python.md` at line 309, The examples using session.call_tool pass the arguments dict positionally; update each to use the keyword parameter so they match the documented pattern—replace calls like session.call_tool("add_layer", {"layer_type": "image", "path": path}) with session.call_tool("add_layer", arguments={"layer_type": "image", "path": path}) and do the same for the other positional usages of session.call_tool in this file to ensure consistent, copy-pastable examples.

.github/workflows/docs.yml (1)
44-48: Pin `uv` to an explicit version for reproducible CI.
Dropping the explicit version pin makes the deploy and preview jobs float to whatever `uv` release is current, potentially breaking docs CI without any repo change. Add `version: "X.Y.Z"` to `astral-sh/setup-uv@v6` and upgrade it intentionally.
Also applies to: 97-100
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/docs.yml around lines 44 - 48, The workflow step using the GitHub Action astral-sh/setup-uv@v6 (the "Install uv" step) must be pinned to a specific tool version for reproducible CI; update that step to include the input key version with an explicit semantic version string (e.g. version: "X.Y.Z") instead of relying on the action alone, and apply the same change to the other Install uv step that also uses astral-sh/setup-uv@v6 so both occurrences pass a fixed version.

docs/scripts/gen_ref_pages.py (1)
107-142: Preserve keyword-only and variadic parameters in rendered signatures.
`_format_signature()` drops the bare `*` separator plus any `*args`/`**kwargs`, so a valid source signature can be rendered as a different call shape in the generated docs. If any tool uses those forms, readers will get the wrong API contract.

src/napari_mcp/viewer_protocol.py (1)
16-21: Model `window` as an optional capability, not a required protocol member.
In a `Protocol`, `window` is required for structural typing and runtime checks, but the comment says it's optional. Any backend without that attribute won't actually satisfy `ViewerProtocol`. Either guarantee `window` exists (for example as `None`) or split the Qt-specific piece into a narrower protocol.
Verify each finding against the current code and only fix it if needed. In `@src/napari_mcp/viewer_protocol.py` around lines 16 - 21, The ViewerProtocol currently declares `window: Any` but the comment says it's optional; update the protocol so `window` is modeled as optional or moved to a narrower Qt-specific protocol: either change the `window` attribute type to `Optional[Any]` (importing `Optional` from typing) in `ViewerProtocol` to allow backends without a Qt window, or remove `window` from `ViewerProtocol` and create a new `QtViewerProtocol` that extends `ViewerProtocol` and adds `window: Any` for Qt-specific implementations; adjust any usages or isinstance/issubclass checks to reference the appropriate protocol (`ViewerProtocol` vs `QtViewerProtocol`).

tests/test_helpers.py (1)
509-541: Test verifies field expectations rather than actual integration.
The test `test_list_layers_returns_build_layer_detail_fields` documents expected fields but doesn't actually call `list_layers`. Consider renaming to `test_build_layer_detail_includes_expected_fields` to clarify intent.
Verify each finding against the current code and only fix it if needed. In `@tests/test_helpers.py` around lines 509 - 541, The test is named test_list_layers_returns_build_layer_detail_fields but never calls list_layers; rename the test function to clearly reflect its intent (e.g., test_build_layer_detail_includes_expected_fields) and update any references or test discovery expectations accordingly; locate the test function named test_list_layers_returns_build_layer_detail_fields in the TestListLayersBuildLayerDetailConsistency class and change its name while leaving the calls to build_layer_detail and the asserted keys unchanged so the test intent matches its implementation.

docs/index.md (1)
64-64: Consider clarifying the help command.
Line 64 uses `napari-mcp-install --help` which shows all subcommands, while the context suggests install options. Consider either:
- Changing to `napari-mcp-install install --help` for install-specific options
- Updating the comment to "See all subcommands"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/index.md` at line 64, The help example currently uses "napari-mcp-install --help" which shows all subcommands; update the example to either use the install-specific help by replacing it with "napari-mcp-install install --help" or change the trailing comment to "See all subcommands" so the intent matches the command; look for the literal "napari-mcp-install --help" in docs/index.md and adjust the command or its comment accordingly.

src/napari_mcp/_helpers.py (1)
223-232: Surface data indexing assumes tuple structure.
Lines 225-226 assume `resolved_data` is indexable with at least 2 elements. While `viewer.add_surface` would fail first with a clearer error, adding a guard would improve error messages.
This is minor since the caller wraps this in try/except.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/napari_mcp/_helpers.py` around lines 223 - 232, The surface branch currently assumes resolved_data[0] and resolved_data[1] exist; before calling viewer.add_surface (and before using verts/faces), validate resolved_data is a sequence/tuple/list with length >= 2 and that the first two elements are array-like (e.g., numpy-convertible) — if the check fails, raise a clear ValueError or return an error dict indicating "surface data must be a (vertices, faces) pair"; then proceed to call viewer.add_surface and compute n_vertices/n_faces from verts/faces as before.

tests/test_bridge_server.py (1)
70-92: Prefer FastMCP's public tool lookup API here.
`server.server._tool_manager._tools` is an internal structure. A FastMCP refactor can break this test even when the exposed tool set is unchanged; `await server.server.get_tools()` keeps the assertion on the supported surface.
Verify each finding against the current code and only fix it if needed. In `@tests/test_bridge_server.py` around lines 70 - 92, The test test_lifecycle_tools_excluded is reading the internal structure server.server._tool_manager._tools; change it to use the public API by calling server.server.get_tools() (await if async) and base assertions on the returned tool names instead of the private _tools dict; locate the usage in the test and replace the internal access with the get_tools() result while keeping the same exclusion/assertion checks for "close_viewer", "init_viewer" and presence checks for "session_information", "execute_code", "list_layers", "screenshot".
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/test.yml:
- Around line 191-198: The CI job currently runs the "Safety check" step without
installing the project's dependencies so it only scans the scanner tools; update
the workflow to install the project environment before running safety (e.g., add
a step to run pip install -e . or otherwise install/export the project's
dependency set) so that the "Safety check" step audits napari-mcp's actual
runtime/dev dependencies rather than only bandit/safety's dependencies.
In `@CONTRIBUTING.md`:
- Around line 246-257: The example uses the old module-level coroutine
napari_mcp.server.add_layer which no longer exists; update it to instantiate the
new factory-based server via create_server(state) and call the handler on that
instance. Specifically, import or construct a minimal server state, call
create_server(state) to get the server object, then await server.add_layer(...)
(or the instance method name used by create_server) instead of
napari_mcp_server.add_layer so the snippet targets the current API.
In `@docs/scripts/gen_ref_pages.py`:
- Around line 169-170: The script currently ignores failed AST matches from
_extract_tool_functions and continues, so change the logic after calling tools =
_extract_tool_functions(source) (and the similar extraction points around the
184-210 block) to detect extraction failures and abort the build: if
_extract_tool_functions returns None, an empty list, or otherwise indicates a
missing/partial extraction for the given src_file, log a clear error via the
existing logger and exit with a non-zero status (e.g., raise SystemExit/exit(1))
so the docs build fails when tools cannot be extracted; ensure you check for
both None and empty results and include the source file name in the error
message for easier debugging.
In `@src/napari_mcp/_helpers.py`:
- Around line 374-378: The code converts line_limit to int without validation
which can raise ValueError; update the block around line_limit conversion to
catch ValueError/TypeError when calling int(line_limit) (referencing the
variable line_limit and function truncate_output) and handle it by falling back
to no truncation (assign stdout_full/stderr_full directly to response["stdout"]
/ response["stderr"]) or a sensible default limit; ensure you still set
stdout_was_truncated/stderr_was_truncated appropriately and avoid letting the
exception propagate from this helper.
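The fallback behaviour the prompt asks for can be sketched like this (`safe_line_limit`/`truncate_output` are illustrative stand-ins, not the module's real signatures):

```python
def safe_line_limit(value, default=None):
    """Parse a user-supplied line limit; bad input means no truncation."""
    try:
        limit = int(value)
    except (TypeError, ValueError):
        return default  # None: leave output untruncated rather than raising
    return limit if limit > 0 else default

def truncate_output(text: str, line_limit):
    """Return (possibly truncated text, was_truncated flag)."""
    limit = safe_line_limit(line_limit)
    lines = text.splitlines()
    if limit is None or len(lines) <= limit:
        return text, False
    return "\n".join(lines[:limit]), True

assert truncate_output("a\nb\nc", "2") == ("a\nb", True)
assert truncate_output("a\nb\nc", "oops") == ("a\nb\nc", False)  # graceful fallback
```

The truncation flag is returned alongside the text so callers can still set `stdout_was_truncated`/`stderr_was_truncated` correctly on the fallback path.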
- Around line 214-221: The code assumes arr has at least 2 dimensions and uses
arr[:,0], which can IndexError for 1D or empty arrays; after creating arr from
resolved_data (in the lt == "tracks" branch where viewer.add_tracks is called),
defensively handle shapes: if arr.size == 0 set n_tracks = 0; elif arr.ndim == 1
compute n_tracks = int(len(np.unique(arr))); else compute n_tracks =
int(len(np.unique(arr[:, 0]))). Keep the existing viewer.add_tracks(arr,
name=name) and return the same dict with "status", "name", and the computed
n_tracks.
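The shape handling described above, as a standalone sketch (`count_tracks` is a hypothetical helper; the real code inlines this in the `tracks` branch):

```python
import numpy as np

def count_tracks(resolved_data) -> int:
    """Shape-defensive track count: handles empty, 1D, and 2D track arrays."""
    arr = np.asarray(resolved_data)
    if arr.size == 0:
        return 0
    if arr.ndim == 1:  # a bare column of track ids
        return int(len(np.unique(arr)))
    return int(len(np.unique(arr[:, 0])))  # standard (track_id, t, y, x) rows

assert count_tracks([]) == 0
assert count_tracks([1, 1, 2]) == 2
assert count_tracks([[1, 0, 0, 0], [1, 1, 0, 0], [2, 0, 5, 5]]) == 2
```

The empty and 1D branches are exactly the cases where the original `arr[:, 0]` would raise an `IndexError`.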
In `@src/napari_mcp/bridge_server.py`:
- Around line 119-122: Replace direct access to the private tool storage
self.server._tool_manager._tools with the public API: call the public
remove_tool() on self.server for each lifecycle tool name ("close_viewer",
"init_viewer") instead of popping from _tool_manager._tools; update the loop
that currently references _tool_manager._tools to use
self.server.remove_tool(name) so the code uses the supported remove_tool API.
In `@src/napari_mcp/cli/install/base.py`:
- Around line 194-198: The uninstall() flow should be idempotent: when
check_existing_server(config, self.server_name) returns False, log a warning via
console.print (mentioning self.server_name and self.app_name) and return success
instead of failure; specifically change the current branch in uninstall() that
returns (False, ...) to a no-op success return (e.g., True with an explanatory
message) so repeated uninstall calls do not error while still informing the user
that the server was already absent.
In `@src/napari_mcp/qt_helpers.py`:
- Around line 12-27: The import of QtWidgets in ensure_qt_app can raise
qtpy.QtBindingsNotFoundError or ImportError and bypass the stub fallback; wrap
"from qtpy import QtWidgets" in a try/except that catches
qtpy.QtBindingsNotFoundError and ImportError (or a broad Exception if qtpy type
isn't available) and sets QtWidgets = None on failure so the existing _StubApp
logic and state.qt_app assignment in ensure_qt_app work as intended.
In `@src/napari_mcp/state.py`:
- Around line 39-42: The env parsing for NAPARI_MCP_BRIDGE_PORT in the
ServerState constructor can raise on malformed input; wrap the
int(os.environ.get(...)) logic in a guard similar to how max_output_items is
handled: attempt to parse the env value to int inside a try/except ValueError
(and TypeError), fall back to the default 9999 when parsing fails, and emit a
warning/log via the existing logger rather than letting the exception propagate;
update the assignment to self.bridge_port accordingly in the ServerState (or
relevant class) initializer.
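The guarded parse can be sketched as a small helper (the default of 9999 is from the comment above; `env_int` is an illustrative name):

```python
import logging
import os

logger = logging.getLogger(__name__)

def env_int(name: str, default: int) -> int:
    """Parse an integer environment variable, warning and falling back on bad values."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    try:
        return int(raw)
    except ValueError:
        logger.warning("Ignoring invalid %s=%r; using %d", name, raw, default)
        return default

# e.g. in ServerState.__init__:
#     self.bridge_port = env_int("NAPARI_MCP_BRIDGE_PORT", 9999)
```

This mirrors the existing `max_output_items` guard, so both env-derived settings fail soft the same way.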
In `@tests/test_external_viewer.py`:
- Around line 267-295: Before calling create_server(state) or
init_viewer(detect_only=True), capture the original module-level server surface
and state (e.g., save napari_mcp_server.server and napari_mcp_server._state into
locals) and after the test restore them (use a try/finally or test teardown) so
the AUTO_DETECT ServerState does not leak; specifically, wrap the
create_server(state)/init_viewer call in a block that reassigns
napari_mcp_server.server and napari_mcp_server._state back to their saved values
at the end of the test.
In `@tests/test_qt_helpers.py`:
- Around line 71-106: The test is bypassing the real destroyed callback by
directly setting _state.viewer and window_close_connected; instead trigger the
actual Qt destroyed signal so connect_window_destroyed_signal and the module's
_on_destroyed are exercised: after connecting the signal to viewer, invoke the
viewer's destroyed event (e.g. call viewer.window.destroyed.emit() or
viewer.close() so the Qt signal fires) and then assert that
napari_mcp_server._state.viewer is None and
napari_mcp_server._state.window_close_connected is False before creating viewer2
and reconnecting; update references to connect_window_destroyed_signal,
_on_destroyed, _state.viewer and window_close_connected accordingly.
---
Outside diff comments:
In @.github/workflows/claude-code-review.yml:
- Around line 26-42: The Run Claude Code Review job (id: claude-review) must be
gated to skip forked PRs so it won’t try to use the withheld secret
(anthropic_api_key) or pull-requests: write permission for external
contributors; add a fork-safe guard on the job (or that specific step) using a
conditional that only runs when the PR source repo equals the target repo (e.g.,
check github.event.pull_request.head.repo.full_name == github.repository or
similar) so the job/step is skipped for fork PRs and the action won’t attempt to
read secrets for those events.
In @.github/workflows/release.yml:
- Around line 82-89: The "Dry run notification" step prints redundant listings
and mislabels the second listing as "Wheel contents:" while both commands list
dist/; replace the duplicate ls -lh dist/ with either a proper wheel-inspection
command (e.g., run unzip -l dist/*.whl | head -30 to show wheel contents) or
change the label to reflect what's actually listed and remove one of the
duplicate ls lines so only one directory listing remains; update the echo
label(s) and the commands in the step so the output matches the action (refer to
the step name "Dry run notification" and the existing ls commands).
In `@docs/integrations/other-llms.md`:
- Around line 161-172: Update the TOML validation step so it points to the
documented Codex config at ~/.codex/config.toml (or explicitly resolve the home
directory) instead of a relative .codex/config.toml; specifically change the
line that runs the Python toml load (the "Validate TOML syntax" step) to open
the file at ~/.codex/config.toml or use os.path.expanduser to resolve '~' when
loading the file.
---
Duplicate comments:
In `@src/napari_mcp/state.py`:
- Around line 120-136: The code accesses content[0] without verifying content is
non-empty; update the block that handles hasattr(result, "content") to first
check that content is truthy and len(content) > 0 (or content != []) before
inspecting content[0]. If content is empty, return a clear error dict (e.g.,
{"status":"error","message":"Empty content from bridge"}) instead of letting the
call fall through to the outer exception handling; keep the existing JSON
parsing logic for non-empty content so functions like result.content and the
local variable content are still used.
- Around line 179-210: external_session_information currently allows transport
(Client/call_tool) and JSON parsing errors to bubble up; wrap the client
creation/async-with block and the json.loads() call in a try/except that catches
network/transport exceptions and json.JSONDecodeError (and a general Exception
fallback), and on any caught error return the standard error dict {"status":
"error", "message": "<brief error>"} instead of raising; keep the existing logic
using test_client, bridge_port, and info_dict intact and include the caught
exception message in the returned "message" to aid debugging.
In `@tests/test_tools.py`:
- Around line 47-136: The test overwrites the module-global
napari_mcp_server._state.viewer and only resets it on the happy path; capture
the original viewer at the start of test_all_tools_end_to_end (and the similar
test at lines ~140-171) into a local variable, wrap the existing test body in
try/finally, and in finally restore napari_mcp_server._state.viewer =
original_viewer (and optionally call napari_mcp_server.close_viewer() if the
original was None) so the global viewer is always restored even when assertions
fail; locate the test by the function name test_all_tools_end_to_end to make the
change.
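The try/finally restore pattern reads roughly like this; `SimpleNamespace` stands in for the real `napari_mcp_server` module, and `with_restored_attr` is a hypothetical helper, not project code:

```python
from types import SimpleNamespace


def with_restored_attr(module, attr, body):
    """Run body(), guaranteeing module.<attr> is restored even on failure."""
    original = getattr(module, attr)
    try:
        return body()
    finally:
        setattr(module, attr, original)
```

In the actual tests the same shape is inlined: capture `original_viewer` at the top, run the assertions inside `try`, and reassign the module global in `finally`.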
---
Nitpick comments:
In @.github/workflows/docs.yml:
- Around line 44-48: The workflow step using the GitHub Action
astral-sh/setup-uv@v6 (the "Install uv" step) must be pinned to a specific tool
version for reproducible CI; update that step to include the input key version
with an explicit semantic version string (e.g. version: "X.Y.Z") instead of
relying on the action alone, and apply the same change to the other Install uv
step that also uses astral-sh/setup-uv@v6 so both occurrences pass a fixed
version.
In `@docs/index.md`:
- Line 64: The help example currently uses "napari-mcp-install --help" which
shows all subcommands; update the example to either use the install-specific
help by replacing it with "napari-mcp-install install --help" or change the
trailing comment to "See all subcommands" so the intent matches the command;
look for the literal "napari-mcp-install --help" in docs/index.md and adjust the
command or its comment accordingly.
In `@docs/integrations/python.md`:
- Line 309: The examples using session.call_tool pass the arguments dict
positionally; update each to use the keyword parameter so they match the
documented pattern—replace calls like session.call_tool("add_layer",
{"layer_type": "image", "path": path}) with session.call_tool("add_layer",
arguments={"layer_type": "image", "path": path}) and do the same for the other
positional usages of session.call_tool in this file to ensure consistent,
copy-pastable examples.
In `@src/napari_mcp/_helpers.py`:
- Around line 223-232: The surface branch currently assumes resolved_data[0] and
resolved_data[1] exist; before calling viewer.add_surface (and before using
verts/faces), validate resolved_data is a sequence/tuple/list with length >= 2
and that the first two elements are array-like (e.g., numpy-convertible) — if
the check fails, raise a clear ValueError or return an error dict indicating
"surface data must be a (vertices, faces) pair"; then proceed to call
viewer.add_surface and compute n_vertices/n_faces from verts/faces as before.
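A minimal sketch of the suggested validation; the `validate_surface_data` helper is hypothetical (the real code would inline an equivalent check just before calling `viewer.add_surface`):

```python
import numpy as np


def validate_surface_data(resolved_data):
    """Validate that data is a (vertices, faces) pair; raise ValueError otherwise."""
    if not isinstance(resolved_data, (list, tuple)) or len(resolved_data) < 2:
        raise ValueError("surface data must be a (vertices, faces) pair")
    try:
        verts = np.asarray(resolved_data[0], dtype=float)
        faces = np.asarray(resolved_data[1], dtype=int)
    except (TypeError, ValueError) as exc:
        raise ValueError("surface data elements must be array-like") from exc
    return verts, faces
```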
In `@src/napari_mcp/viewer_protocol.py`:
- Around line 16-21: The ViewerProtocol currently declares `window: Any` but the
comment says it's optional; update the protocol so `window` is modeled as
optional or moved to a narrower Qt-specific protocol: either change the `window`
attribute type to `Optional[Any]` (importing `Optional` from typing) in
`ViewerProtocol` to allow backends without a Qt window, or remove `window` from
`ViewerProtocol` and create a new `QtViewerProtocol` that extends
`ViewerProtocol` and adds `window: Any` for Qt-specific implementations; adjust
any usages or isinstance/issubclass checks to reference the appropriate protocol
(`ViewerProtocol` vs `QtViewerProtocol`).
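The second option (splitting the protocol) might look like this sketch; the `title` member is illustrative, only `window` comes from the review comment:

```python
from typing import Any, Protocol, runtime_checkable


@runtime_checkable
class ViewerProtocol(Protocol):
    """Backend-agnostic viewer surface; no Qt window required."""

    title: str


@runtime_checkable
class QtViewerProtocol(ViewerProtocol, Protocol):
    """Qt-backed viewers additionally expose a window attribute."""

    window: Any
```

With `runtime_checkable`, `isinstance` checks only attribute presence, so a headless backend satisfies `ViewerProtocol` without satisfying `QtViewerProtocol`.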
In `@tests/test_bridge_server.py`:
- Around line 70-92: The test test_lifecycle_tools_excluded is reading the
internal structure server.server._tool_manager._tools; change it to use the
public API by calling server.server.get_tools() (await if async) and base
assertions on the returned tool names instead of the private _tools dict; locate
the usage in the test and replace the internal access with the get_tools()
result while keeping the same exclusion/assertion checks for "close_viewer",
"init_viewer" and presence checks for "session_information", "execute_code",
"list_layers", "screenshot".
In `@tests/test_helpers.py`:
- Around line 509-541: The test is named
test_list_layers_returns_build_layer_detail_fields but never calls list_layers;
rename the test function to clearly reflect its intent (e.g.,
test_build_layer_detail_includes_expected_fields) and update any references or
test discovery expectations accordingly; locate the test function named
test_list_layers_returns_build_layer_detail_fields in the
TestListLayersBuildLayerDetailConsistency class and change its name while
leaving the calls to build_layer_detail and the asserted keys unchanged so the
test intent matches its implementation.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 299784e4-2781-497b-86c6-3d4fce5cf9d6
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (42)
- .github/workflows/claude-code-review.yml
- .github/workflows/docs.yml
- .github/workflows/release.yml
- .github/workflows/test.yml
- CONTRIBUTING.md
- README.md
- codecov.yml
- docs/examples/README.md
- docs/examples/anthropic_integration.py
- docs/getting-started/installation.md
- docs/getting-started/zero-install.md
- docs/guides/troubleshooting.md
- docs/index.md
- docs/integrations/cline.md
- docs/integrations/index.md
- docs/integrations/other-llms.md
- docs/integrations/python.md
- docs/scripts/gen_ref_pages.py
- mkdocs.yml
- pyproject.toml
- src/napari_mcp/__init__.py
- src/napari_mcp/_helpers.py
- src/napari_mcp/bridge_server.py
- src/napari_mcp/cli/install/base.py
- src/napari_mcp/cli/install/utils.py
- src/napari_mcp/cli/main.py
- src/napari_mcp/qt_helpers.py
- src/napari_mcp/server.py
- src/napari_mcp/state.py
- src/napari_mcp/viewer_protocol.py
- src/napari_mcp/widget.py
- tests/test_bridge_server.py
- tests/test_external_viewer.py
- tests/test_helpers.py
- tests/test_property_based.py
- tests/test_qt_helpers.py
- tests/test_server_tools.py
- tests/test_state.py
- tests/test_timelapse.py
- tests/test_tools.py
- tests/test_widget.py
- tox.ini
💤 Files with no reviewable changes (1)
- tox.ini
✅ Files skipped from review due to trivial changes (1)
- codecov.yml
🚧 Files skipped from review as they are similar to previous changes (1)
- docs/integrations/cline.md
```yaml
- name: Validate version matches tag
  if: contains(github.ref, 'tags')
  run: |
    WHEEL=$(ls dist/*.whl | head -1)
    WHEEL_VERSION=$(echo "$WHEEL" | sed -n 's/.*-\([0-9][0-9.]*\)-.*/\1/p')
    TAG_VERSION="${GITHUB_REF_NAME#v}"
    echo "Wheel version: $WHEEL_VERSION"
    echo "Tag version: $TAG_VERSION"
    if [ "$WHEEL_VERSION" != "$TAG_VERSION" ]; then
      echo "ERROR: Version mismatch! Wheel=$WHEEL_VERSION Tag=$TAG_VERSION"
      exit 1
    fi
```
Version regex doesn't handle PEP 440 pre-release versions.
The sed pattern [0-9][0-9.]* only matches digits and dots, but PEP 440 versions can include alpha/beta/rc/dev/post suffixes (e.g., 0.1.0a1, 0.1.0.dev1, 0.1.0rc1). For a wheel named napari_mcp-0.1.0a1-py3-none-any.whl, the regex extracts 0.1.0 instead of 0.1.0a1, causing a false mismatch with tag v0.1.0a1.
Proposed fix to handle PEP 440 versions
```diff
 - name: Validate version matches tag
   if: contains(github.ref, 'tags')
   run: |
     WHEEL=$(ls dist/*.whl | head -1)
-    WHEEL_VERSION=$(echo "$WHEEL" | sed -n 's/.*-\([0-9][0-9.]*\)-.*/\1/p')
+    # Extract version: matches digits, dots, and PEP 440 suffixes (a, b, rc, dev, post, local)
+    WHEEL_VERSION=$(echo "$WHEEL" | sed -n 's/.*-\([0-9][0-9.]*[a-z0-9.+]*\)-py.*/\1/p')
     TAG_VERSION="${GITHUB_REF_NAME#v}"
     echo "Wheel version: $WHEEL_VERSION"
     echo "Tag version: $TAG_VERSION"
```

Alternatively, consider using Python to extract the version reliably:
```python
from pathlib import Path
import re

wheel = next(Path("dist").glob("*.whl"))
version = re.match(r"[^-]+-([^-]+)-", wheel.name).group(1)
```

```yaml
- name: Install security tools
  run: pip install bandit "safety<4.0"

- name: Bandit scan
  run: bandit -r src/ --skip B110,B101 -f json

- name: Safety check
  run: safety check
```
🧩 Analysis chain
🏁 Script executed:
cat -n .github/workflows/test.yml | sed -n '170,210p'

Repository: royerlab/napari-mcp
Length of output: 1472
🌐 Web query:
safety CLI tool how does it scan for vulnerabilities requirements
💡 Result:
How Safety CLI scans for vulnerabilities
- Discovers dependencies automatically (Safety CLI 3+): running `safety scan` from a project root recursively searches the directory for Python dependency/lock files and virtual environments, then indexes the packages it finds. [1], [2]
- Matches package name + version against Safety's data: it then performs a security analysis against known vulnerability records and "malicious package" lists in Safety DB, reporting affected packages, locations/versions, and remediation guidance. [1], [3]

Requirements / inputs it can scan

- Python runtime: Safety CLI 3 supports Python >= 3.7 (older versions can use a Docker option per docs). [3]
- Install: `pip install safety` (then run `safety scan`). [3]
- Auth: scans typically require you to authenticate (it will prompt on first use; `safety auth` shows/starts auth). [3]
- What it can scan (CLI 3 `scan`): finds and supports common dependency sources including `requirements.txt`, `poetry.lock`, `Pipfile.lock`, and virtualenv directories without you having to point to each file. [2]
- Legacy CLI 2 `check` inputs (still documented): can scan the current environment, a specific requirements file (`-r`), or stdin (`--stdin`). [4]
Sources: Safety CLI docs and PyPI project page. [1], [2], [3], [4]
🏁 Script executed:
fd -t f "(pyproject\.toml|requirements\.txt|Pipfile|setup\.py|setup\.cfg|poetry\.lock)" --max-depth 2

Repository: royerlab/napari-mcp
Length of output: 77
🏁 Script executed:
git ls-files | grep -E "(pyproject|requirements|Pipfile|setup)" | head -20

Repository: royerlab/napari-mcp
Length of output: 99
🏁 Script executed:
cat pyproject.toml | head -50

Repository: royerlab/napari-mcp
Length of output: 1741
🌐 Web query:
safety check command behavior pyproject.toml auto-discover CLI 2.x versus safety scan
💡 Result:
safety check (Safety CLI 2.x) behavior

- No auto-discovery: it scans either the current Python environment (`site-packages`), an explicitly provided requirements file (`-r ...`), or requirements via stdin (`--stdin`). [1]
- `pyproject.toml` is not an input for `safety check` in 2.x (the documented inputs are environment / requirements file / stdin), so it will not discover or parse dependencies from `pyproject.toml` by itself. [1]

safety scan (Safety CLI 3.x) behavior (vs check)

- `safety scan` is the newer primary command and recursively searches the target directory to automatically find dependency manifests/environments, i.e. it does auto-discover within the scanned directory tree. [2]
- Its native manifest support is documented as requirements.txt, poetry.lock, and Pipfile.lock, plus virtualenv directories. It does not list `pyproject.toml` itself as a supported manifest to scan directly. [2]
- `safety check` is now considered deprecated / replaced by `scan`, but still exists. [3]

Practical implication for pyproject.toml

- If your project only has `pyproject.toml` without a lockfile, `safety scan` generally won't have a lock/requirements manifest to consume; you typically need to generate a lockfile (e.g., `poetry.lock`) or export to requirements.txt before scanning, because `scan` auto-discovers the supported manifests and `pyproject.toml` isn't one of them. [2]
Sources: [1] [2] [3]
Install the project environment before running safety check.
The job runs safety check in an environment containing only bandit and safety itself, so it scans only these scanner tools' dependencies and misses all vulnerabilities in napari-mcp's actual runtime/dev dependency set. Add pip install -e . (or export the dependency set to a requirements file) before the safety scan step to audit the project's actual dependencies.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/test.yml around lines 191 - 198, The CI job currently runs
the "Safety check" step without installing the project's dependencies so it only
scans the scanner tools; update the workflow to install the project environment
before running safety (e.g., add a step to run pip install -e . or otherwise
install/export the project's dependency set) so that the "Safety check" step
audits napari-mcp's actual runtime/dev dependencies rather than only
bandit/safety's dependencies.
```diff
 from napari_mcp import server as napari_mcp_server

 @pytest.mark.asyncio
-async def test_add_image_success():
-    """Test successful image addition."""
+async def test_add_layer_success():
+    """Test successful layer addition."""
     # Arrange
     test_image_path = "test_data/sample.png"

     # Act
-    result = await add_image(test_image_path, name="test_image")
+    result = await napari_mcp_server.add_layer(
+        layer_type="image", path=test_image_path, name="test_image"
+    )
```
Update this example to the new factory-based tool surface.
add_layer is no longer documented elsewhere as a module-level coroutine on napari_mcp.server; this refactor moved tool handlers behind create_server(state). As written, contributors copying this snippet will target a stale API and likely hit AttributeError.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@CONTRIBUTING.md` around lines 246 - 257, The example uses the old
module-level coroutine napari_mcp.server.add_layer which no longer exists;
update it to instantiate the new factory-based server via create_server(state)
and call the handler on that instance. Specifically, import or construct a
minimal server state, call create_server(state) to get the server object, then
await server.add_layer(...) (or the instance method name used by create_server)
instead of napari_mcp_server.add_layer so the snippet targets the current API.
```python
source = src_file.read_text()
tools = _extract_tool_functions(source)
```
Fail the docs build when a tool can't be extracted.
A missed AST match currently becomes a placeholder in category pages and simply disappears from the complete reference, but the script still exits successfully. That makes API-doc drift easy to miss after future renames or decorator changes.
🔧 Suggested guard
```diff
 source = src_file.read_text()
 tools = _extract_tool_functions(source)
+missing_tools = [name for name in ALL_TOOLS if name not in tools]
+if missing_tools:
+    raise RuntimeError(
+        f"Failed to extract API docs for: {', '.join(missing_tools)}"
+    )
```

Also applies to: 184-210
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/scripts/gen_ref_pages.py` around lines 169 - 170, The script currently
ignores failed AST matches from _extract_tool_functions and continues, so change
the logic after calling tools = _extract_tool_functions(source) (and the similar
extraction points around the 184-210 block) to detect extraction failures and
abort the build: if _extract_tool_functions returns None, an empty list, or
otherwise indicates a missing/partial extraction for the given src_file, log a
clear error via the existing logger and exit with a non-zero status (e.g., raise
SystemExit/exit(1)) so the docs build fails when tools cannot be extracted;
ensure you check for both None and empty results and include the source file
name in the error message for easier debugging.
```python
elif lt == "tracks":
    arr = np.asarray(resolved_data, dtype=float)
    layer = viewer.add_tracks(arr, name=name)
    return {
        "status": "ok",
        "name": layer.name,
        "n_tracks": int(len(np.unique(arr[:, 0]))),
    }
```
Potential IndexError if tracks array has unexpected shape.
Line 220 assumes arr has at least 2 dimensions with column 0 containing track IDs. If passed a 1D array or empty array, arr[:, 0] will raise an IndexError.
🛡️ Proposed defensive fix
```diff
 elif lt == "tracks":
     arr = np.asarray(resolved_data, dtype=float)
+    if arr.ndim < 2 or arr.size == 0:
+        return {
+            "status": "error",
+            "message": "Cannot add tracks layer: data must be 2D array with columns [track_id, time, coords...].",
+        }
     layer = viewer.add_tracks(arr, name=name)
     return {
         "status": "ok",
         "name": layer.name,
         "n_tracks": int(len(np.unique(arr[:, 0]))),
     }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/napari_mcp/_helpers.py` around lines 214 - 221, The code assumes arr has
at least 2 dimensions and uses arr[:,0], which can IndexError for 1D or empty
arrays; after creating arr from resolved_data (in the lt == "tracks" branch
where viewer.add_tracks is called), defensively handle shapes: if arr.size == 0
set n_tracks = 0; elif arr.ndim == 1 compute n_tracks =
int(len(np.unique(arr))); else compute n_tracks = int(len(np.unique(arr[:,
0]))). Keep the existing viewer.add_tracks(arr, name=name) and return the same
dict with "status", "name", and the computed n_tracks.
```python
if not check_existing_server(config, self.server_name):
    console.print(
        f"[yellow]napari-mcp server '{self.server_name}' not found in {self.app_name} configuration[/yellow]"
    )
    return False, f"Server '{self.server_name}' not found in configuration"
```
Keep uninstall() idempotent.
Returning failure when the server entry is already absent makes repeated cleanup scripts fail even though the target state is already satisfied. This should be a warning plus a no-op success instead of an error.
Suggested fix

```diff
 if not check_existing_server(config, self.server_name):
     console.print(
         f"[yellow]napari-mcp server '{self.server_name}' not found in {self.app_name} configuration[/yellow]"
     )
-    return False, f"Server '{self.server_name}' not found in configuration"
+    return True, (
+        f"Server '{self.server_name}' is already absent from configuration"
+    )
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/napari_mcp/cli/install/base.py` around lines 194 - 198, The uninstall()
flow should be idempotent: when check_existing_server(config, self.server_name)
returns False, log a warning via console.print (mentioning self.server_name and
self.app_name) and return success instead of failure; specifically change the
current branch in uninstall() that returns (False, ...) to a no-op success
return (e.g., True with an explanatory message) so repeated uninstall calls do
not error while still informing the user that the server was already absent.
```python
def ensure_qt_app(state: ServerState) -> Any:
    """Return the Qt application, creating one if necessary, or a no-op stub."""
    from qtpy import QtWidgets

    if QtWidgets is None:

        class _StubApp:
            def processEvents(self, *_: Any) -> None:  # noqa: N802
                pass

            def setQuitOnLastWindowClosed(self, *_: Any) -> None:  # noqa: N802
                pass

        if state.qt_app is None:
            state.qt_app = _StubApp()
        return state.qt_app
```
🧩 Analysis chain
🏁 Script executed:
cat -n src/napari_mcp/qt_helpers.py

Repository: royerlab/napari-mcp
Length of output: 3080
🏁 Script executed:
# Search for other qtpy imports and patterns in the codebase
rg "from qtpy|import qtpy" -A 3 -B 1

Repository: royerlab/napari-mcp
Length of output: 3144
🌐 Web query:
qtpy QtWidgets import behavior when Qt not available exception or None
💡 Result:
With QtPy installed but no Qt bindings (PyQt5/PySide2/PyQt6/PySide6) available, doing:

```python
from qtpy import QtWidgets
# or
import qtpy.QtWidgets
```

does not return None; it raises `qtpy.QtBindingsNotFoundError` with the message "No Qt bindings could be found". QtPy defines that exception and raises it when it fails to import the last fallback binding (PySide6) during its binding-selection step. [1][2]
QtPy also explicitly documents that you must have one of those bindings installed; QtPy itself “doesn't install any binding by itself.” [3]
Sources: [1] [2] [3]
Wrap the qtpy import in a try/except to enable the stub fallback.
When Qt bindings are unavailable, from qtpy import QtWidgets on line 14 raises qtpy.QtBindingsNotFoundError (or similar exceptions like ImportError), bypassing the if QtWidgets is None: check on line 16 entirely. The stub app fallback never executes, breaking the headless/no-Qt path that the docstring promises. Wrap the import in a try/except to set QtWidgets = None on failure, allowing the stub logic to work as intended.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/napari_mcp/qt_helpers.py` around lines 12 - 27, The import of QtWidgets
in ensure_qt_app can raise qtpy.QtBindingsNotFoundError or ImportError and
bypass the stub fallback; wrap "from qtpy import QtWidgets" in a try/except that
catches qtpy.QtBindingsNotFoundError and ImportError (or a broad Exception if
qtpy type isn't available) and sets QtWidgets = None on failure so the existing
_StubApp logic and state.qt_app assignment in ensure_qt_app work as intended.
```python
self.mode: StartupMode = mode
self.bridge_port: int = bridge_port or int(
    os.environ.get("NAPARI_MCP_BRIDGE_PORT", "9999")
)
```
Guard NAPARI_MCP_BRIDGE_PORT parsing.
A malformed env value here raises during ServerState() construction, which can abort startup/import before any fallback logic runs. max_output_items below already handles the same class of config error gracefully.
Suggested fix

```diff
-        self.bridge_port: int = bridge_port or int(
-            os.environ.get("NAPARI_MCP_BRIDGE_PORT", "9999")
-        )
+        if bridge_port is not None:
+            self.bridge_port = bridge_port
+        else:
+            try:
+                self.bridge_port = int(os.environ.get("NAPARI_MCP_BRIDGE_PORT", "9999"))
+            except ValueError:
+                self.bridge_port = 9999
```
+ self.bridge_port = 9999🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/napari_mcp/state.py` around lines 39 - 42, The env parsing for
NAPARI_MCP_BRIDGE_PORT in the ServerState constructor can raise on malformed
input; wrap the int(os.environ.get(...)) logic in a guard similar to how
max_output_items is handled: attempt to parse the env value to int inside a
try/except ValueError (and TypeError), fall back to the default 9999 when
parsing fails, and emit a warning/log via the existing logger rather than
letting the exception propagate; update the assignment to self.bridge_port
accordingly in the ServerState (or relevant class) initializer.
```diff
 from napari_mcp import server as napari_mcp_server
 from napari_mcp.server import create_server

 # Create state in AUTO_DETECT mode
 state = ServerState(mode=StartupMode.AUTO_DETECT)
 napari_mcp_server._state = state
 create_server(state)

 mock_client = AsyncMock()
 mock_client_class.return_value = mock_client
 mock_client.__aenter__ = AsyncMock(return_value=mock_client)
 mock_client.__aexit__ = AsyncMock(return_value=None)

 mock_result = Mock()
 mock_result.content = [
     Mock(
         text=json.dumps(
             {
                 "session_type": "napari_bridge_session",
                 "bridge_port": 9999,
                 "viewer": {"title": "External"},
             }
         )
     )
 ]
 mock_client.call_tool.return_value = mock_result

-result = await detect_viewers()
+result = await napari_mcp_server.init_viewer(detect_only=True)
```
Restore the module server/state after this test.
create_server(state) rebinds the module-level server surface, not just _state. Leaving the AUTO_DETECT instance installed here can contaminate later tests that rely on the conftest STANDALONE state.
Suggested fix
from napari_mcp import server as napari_mcp_server
from napari_mcp.server import create_server
- # Create state in AUTO_DETECT mode
- state = ServerState(mode=StartupMode.AUTO_DETECT)
- napari_mcp_server._state = state
- create_server(state)
+ original_state = napari_mcp_server._state
+ try:
+ state = ServerState(mode=StartupMode.AUTO_DETECT)
+ create_server(state)
- mock_client = AsyncMock()
- mock_client_class.return_value = mock_client
- mock_client.__aenter__ = AsyncMock(return_value=mock_client)
- mock_client.__aexit__ = AsyncMock(return_value=None)
+ mock_client = AsyncMock()
+ mock_client_class.return_value = mock_client
+ mock_client.__aenter__ = AsyncMock(return_value=mock_client)
+ mock_client.__aexit__ = AsyncMock(return_value=None)
- mock_result = Mock()
- mock_result.content = [
- Mock(
- text=json.dumps(
- {
- "session_type": "napari_bridge_session",
- "bridge_port": 9999,
- "viewer": {"title": "External"},
- }
- )
- )
- ]
- mock_client.call_tool.return_value = mock_result
+ mock_result = Mock()
+ mock_result.content = [
+ Mock(
+ text=json.dumps(
+ {
+ "session_type": "napari_bridge_session",
+ "bridge_port": 9999,
+ "viewer": {"title": "External"},
+ }
+ )
+ )
+ ]
+ mock_client.call_tool.return_value = mock_result
- result = await napari_mcp_server.init_viewer(detect_only=True)
+ result = await napari_mcp_server.init_viewer(detect_only=True)
- assert result["status"] == "ok"
- assert result["viewers"]["external"]["available"] is True
- assert result["viewers"]["external"]["type"] == "napari_bridge"
- assert result["viewers"]["external"]["port"] == 9999
+ assert result["status"] == "ok"
+ assert result["viewers"]["external"]["available"] is True
+ assert result["viewers"]["external"]["type"] == "napari_bridge"
+ assert result["viewers"]["external"]["port"] == 9999
+ finally:
+    create_server(original_state)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_external_viewer.py` around lines 267 - 295, Before calling
create_server(state) or init_viewer(detect_only=True), capture the original
module-level server surface and state (e.g., save napari_mcp_server.server and
napari_mcp_server._state into locals) and after the test restore them (use a
try/finally or test teardown) so the AUTO_DETECT ServerState does not leak;
specifically, wrap the create_server(state)/init_viewer call in a block that
reassigns napari_mcp_server.server and napari_mcp_server._state back to their
saved values at the end of the test.
- Normalize line_limit from string to int with fallback to 30 in execute_code, install_packages, and build_truncated_response
- Remove _parse_bool/_truncate_output backward-compat aliases from server.py; tests now import directly from _helpers/output modules
- Add tests: numpy slicing edge cases, max_elements cap, line_limit normalization (string/invalid/None), save_layer_data format coverage (png, jpg, csv tracks, labels tiff), security package name validation (pip flag injection, URL specifiers, newline injection)

474 tests, 81.05% coverage.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
✅ Documentation builds successfully! The site will be updated when this PR is merged. |
Actionable comments posted: 4
♻️ Duplicate comments (2)
tests/test_external_viewer.py (1)
270-295:⚠️ Potential issue | 🟠 MajorRestore the module-level server surface after rebinding it in this test.
This test swaps
napari_mcp_server._stateand re-registers module-level tool closures withcreate_server(state), but it never restores the fixture-managed STANDALONE surface. That can leak AUTO_DETECT behavior into later teardown/test code.Suggested fix
from napari_mcp import server as napari_mcp_server from napari_mcp.server import create_server - # Create state in AUTO_DETECT mode - state = ServerState(mode=StartupMode.AUTO_DETECT) - napari_mcp_server._state = state - create_server(state) + original_state = napari_mcp_server._state + try: + state = ServerState(mode=StartupMode.AUTO_DETECT) + napari_mcp_server._state = state + create_server(state) - mock_client = AsyncMock() - mock_client_class.return_value = mock_client - mock_client.__aenter__ = AsyncMock(return_value=mock_client) - mock_client.__aexit__ = AsyncMock(return_value=None) + mock_client = AsyncMock() + mock_client_class.return_value = mock_client + mock_client.__aenter__ = AsyncMock(return_value=mock_client) + mock_client.__aexit__ = AsyncMock(return_value=None) - mock_result = Mock() - mock_result.content = [ - Mock( - text=json.dumps( - { - "session_type": "napari_bridge_session", - "bridge_port": 9999, - "viewer": {"title": "External"}, - } - ) - ) - ] - mock_client.call_tool.return_value = mock_result + mock_result = Mock() + mock_result.content = [ + Mock( + text=json.dumps( + { + "session_type": "napari_bridge_session", + "bridge_port": 9999, + "viewer": {"title": "External"}, + } + ) + ) + ] + mock_client.call_tool.return_value = mock_result - result = await napari_mcp_server.init_viewer(detect_only=True) + result = await napari_mcp_server.init_viewer(detect_only=True) - assert result["status"] == "ok" - assert result["viewers"]["external"]["available"] is True - assert result["viewers"]["external"]["type"] == "napari_bridge" - assert result["viewers"]["external"]["port"] == 9999 + assert result["status"] == "ok" + assert result["viewers"]["external"]["available"] is True + assert result["viewers"]["external"]["type"] == "napari_bridge" + assert result["viewers"]["external"]["port"] == 9999 + finally: + napari_mcp_server._state = original_state + create_server(original_state)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_external_viewer.py` around lines 270 - 295, Save the original module-level server state before rebinding (e.g., original_state = napari_mcp_server._state), then after you set napari_mcp_server._state = state and call create_server(state) (and after awaiting init_viewer), restore napari_mcp_server._state = original_state and call create_server(original_state) to re-register the fixture-managed STANDALONE tool closures so AUTO_DETECT changes don't leak into later teardown/tests.

src/napari_mcp/_helpers.py (1)
214-220: ⚠️ Potential issue | 🟡 Minor — Validate tracks input shape before indexing the ID column.
`arr[:, 0]` still assumes a 2D tracks array. A 1D or malformed payload will raise `IndexError` here, so the tool returns an exception instead of a structured error response.

Suggested fix
  elif lt == "tracks":
      arr = np.asarray(resolved_data, dtype=float)
+     if arr.ndim != 2 or arr.shape[1] < 2:
+         return {
+             "status": "error",
+             "message": (
+                 "Cannot add tracks layer: data must be a 2D array with "
+                 "columns [track_id, time, coords...]."
+             ),
+         }
      layer = viewer.add_tracks(arr, name=name)
      return {
          "status": "ok",
          "name": layer.name,
-         "n_tracks": int(len(np.unique(arr[:, 0]))),
+         "n_tracks": 0 if arr.size == 0 else int(len(np.unique(arr[:, 0]))),
      }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/napari_mcp/_helpers.py` around lines 214 - 220, The tracks branch assumes a 2D array and indexes arr[:,0], which will raise IndexError for 1D/malformed payloads; before computing n_tracks or indexing the ID column, validate arr.ndim >= 2 and arr.shape[1] >= 1 (the local variable arr created from resolved_data) and if invalid return a structured error dict (e.g. {"status":"error","message":"invalid tracks shape, expected 2D array with at least one column"}) instead of proceeding to viewer.add_tracks/arr[:,0]; update the block that handles lt == "tracks" to perform this shape check and return the error response when the check fails.
🧹 Nitpick comments (1)
tests/test_external_viewer.py (1)
160-183: Make these proxy tests independent of `ServerState`'s default port.

Both tests assert `fastmcp.Client("http://localhost:9999/mcp")`, but the setup never sets `state.bridge_port`. That couples the tests to the constructor default rather than the behavior under test. Set `state.bridge_port = 9999` explicitly, or assert against `state.bridge_port` instead.

Also applies to: 198-216
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_external_viewer.py` around lines 160 - 183, The tests rely on ServerState's default bridge port when asserting the fastmcp.Client URL; make them explicit by setting state.bridge_port = 9999 before calling state.proxy_to_external (or change the assertion to use state.bridge_port) so the test no longer depends on ServerState's constructor default; update both occurrences around proxy_to_external and the mock_client_class.assert_called_once_with call to use the explicit state.bridge_port value.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/napari_mcp/_helpers.py`:
- Around line 159-160: The code currently coerces inputs like channel_axis with
int(channel_axis) (and similar coercions at the other sites) which can raise
ValueError/TypeError or silently overwrite explicit falsy values with the `or`
pattern; replace these direct coercions with guarded parsing helpers that (a)
only apply defaults when the value is None (not using `value or default`), (b)
try to cast inside a try/except and return the module's structured error payload
on ValueError/TypeError, and (c) preserve explicit 0/""-intent by checking `if
value is None` before defaulting; update the places mentioning `channel_axis`,
the other numeric options referenced on the diff (the blocks around lines 190,
199, 209), and add a small parse_int/parse_float helper used by those sites to
centralize the try/except and error construction so malformed inputs produce the
same `"status": "error"` response instead of raising.
- Around line 368-383: After parsing line_limit, clamp any negative values less
than -1 to -1 so they are treated as the explicit "unlimited" case before
calling truncate_output; update the branch that checks line_limit (and the
handling of response["warning"], response["stdout"], response["stderr"]) so that
values like -2 don't bypass the unlimited-warning logic, and ensure
truncate_output is only called for non-unlimited (line_limit != -1) cases
(references: variable line_limit, truncate_output(), response["warning"],
stdout_full, stderr_full).
In `@tests/test_helpers.py`:
- Around line 545-573: The test
test_list_layers_returns_build_layer_detail_fields currently only calls
build_layer_detail; modify it to actually exercise list_layers by constructing a
mock viewer containing the mock_layer, call list_layers(mock_viewer) (or the
actual function signature for listing layers), find the corresponding layer
entry in the returned list, and assert that its keys/values match the detail
produced by build_layer_detail(mock_layer); reference the test function name,
the list_layers function, and build_layer_detail so the change replaces the
existing direct assertions with assertions comparing the list_layers output to
build_layer_detail output.
- Around line 344-350: Update test_eval_label_derived_from_exec_label to not
only call run_code but also assert that the label used for the evaluated
expression was rewritten from "<bridge-exec>" to "<bridge-eval>"; specifically,
call run_code("42", ns, source_label="<bridge-exec>") and then assert the
recorded/evaluated filename/label (the value your runner records for the last
expression, e.g., a field or return value exposed by run_code or a tracking
variable in ns) equals "<bridge-eval>" so the test will fail if the
"-exec"→"-eval" rewrite stops working.
---
Duplicate comments:
In `@src/napari_mcp/_helpers.py`:
- Around line 214-220: The tracks branch assumes a 2D array and indexes
arr[:,0], which will raise IndexError for 1D/malformed payloads; before
computing n_tracks or indexing the ID column, validate arr.ndim >= 2 and
arr.shape[1] >= 1 (the local variable arr created from resolved_data) and if
invalid return a structured error dict (e.g.
{"status":"error","message":"invalid tracks shape, expected 2D array with at
least one column"}) instead of proceeding to viewer.add_tracks/arr[:,0]; update
the block that handles lt == "tracks" to perform this shape check and return the
error response when the check fails.
In `@tests/test_external_viewer.py`:
- Around line 270-295: Save the original module-level server state before
rebinding (e.g., original_state = napari_mcp_server._state), then after you set
napari_mcp_server._state = state and call create_server(state) (and after
awaiting init_viewer), restore napari_mcp_server._state = original_state and
call create_server(original_state) to re-register the fixture-managed STANDALONE
tool closures so AUTO_DETECT changes don’t leak into later teardown/tests.
---
Nitpick comments:
In `@tests/test_external_viewer.py`:
- Around line 160-183: The tests rely on ServerState's default bridge port when
asserting the fastmcp.Client URL; make them explicit by setting
state.bridge_port = 9999 before calling state.proxy_to_external (or change the
assertion to use state.bridge_port) so the test no longer depends on
ServerState's constructor default; update both occurrences around
proxy_to_external and the mock_client_class.assert_called_once_with call to use
the explicit state.bridge_port value.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 982cd303-be03-46d9-be05-b348472aba6e
📒 Files selected for processing (7)
src/napari_mcp/_helpers.py
src/napari_mcp/server.py
tests/test_external_viewer.py
tests/test_helpers.py
tests/test_output_storage.py
tests/test_property_based.py
tests/test_server_tools.py
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/test_server_tools.py
- tests/test_property_based.py
if channel_axis is not None:
    kwargs["channel_axis"] = int(channel_axis)
Return a structured error for invalid numeric options.
These coercions can still raise ValueError/TypeError on malformed tool inputs like "" or "abc", which turns a bad request into an exception instead of the "status": "error" payload this helper uses elsewhere. size or 10.0 also overwrites an explicit 0.
Suggested fix
if channel_axis is not None:
- kwargs["channel_axis"] = int(channel_axis)
+ try:
+ kwargs["channel_axis"] = int(channel_axis)
+ except (TypeError, ValueError):
+ return {
+ "status": "error",
+ "message": f"Invalid channel_axis {channel_axis!r}; expected an integer.",
+ }
@@
- layer = viewer.add_points(arr, name=name, size=float(size or 10.0))
+ try:
+ point_size = 10.0 if size is None else float(size)
+ except (TypeError, ValueError):
+ return {
+ "status": "error",
+ "message": f"Invalid size {size!r}; expected a number.",
+ }
+ layer = viewer.add_points(arr, name=name, size=point_size)
@@
if edge_width is not None:
- kwargs["edge_width"] = float(edge_width)
+ try:
+ kwargs["edge_width"] = float(edge_width)
+ except (TypeError, ValueError):
+ return {
+ "status": "error",
+ "message": f"Invalid edge_width {edge_width!r}; expected a number.",
+ }
@@
if edge_width is not None:
- kwargs["edge_width"] = float(edge_width)
+ try:
+ kwargs["edge_width"] = float(edge_width)
+ except (TypeError, ValueError):
+ return {
+ "status": "error",
+ "message": f"Invalid edge_width {edge_width!r}; expected a number.",
+             }

Also applies to: 190-190, 199-200, 209-210
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/napari_mcp/_helpers.py` around lines 159 - 160, The code currently
coerces inputs like channel_axis with int(channel_axis) (and similar coercions
at the other sites) which can raise ValueError/TypeError or silently overwrite
explicit falsy values with the `or` pattern; replace these direct coercions with
guarded parsing helpers that (a) only apply defaults when the value is None (not
using `value or default`), (b) try to cast inside a try/except and return the
module's structured error payload on ValueError/TypeError, and (c) preserve
explicit 0/""-intent by checking `if value is None` before defaulting; update
the places mentioning `channel_axis`, the other numeric options referenced on
the diff (the blocks around lines 190, 199, 209), and add a small
parse_int/parse_float helper used by those sites to centralize the try/except
and error construction so malformed inputs produce the same `"status": "error"`
response instead of raising.
try:
    line_limit = int(line_limit)
except (ValueError, TypeError):
    line_limit = 30

if line_limit == -1:
    response["warning"] = (
        "Unlimited output requested. This may consume a large number "
        "of tokens. Consider using read_output for large outputs."
    )
    response["stdout"] = stdout_full
    response["stderr"] = stderr_full
else:
    limit = line_limit
    stdout_truncated, stdout_was_truncated = truncate_output(stdout_full, limit)
    stderr_truncated, stderr_was_truncated = truncate_output(stderr_full, limit)
Clamp negative limits before delegating to truncate_output.
truncate_output() treats any value < -1 as unlimited, but this helper only special-cases -1. Passing -2 currently returns full output without the unlimited-output warning, so callers can silently bypass the guard with any other negative value.
Suggested fix
try:
line_limit = int(line_limit)
except (ValueError, TypeError):
line_limit = 30
+ if line_limit < -1:
+ line_limit = -1
  if line_limit == -1:

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
  try:
      line_limit = int(line_limit)
  except (ValueError, TypeError):
      line_limit = 30
+ if line_limit < -1:
+     line_limit = -1
  if line_limit == -1:
      response["warning"] = (
          "Unlimited output requested. This may consume a large number "
          "of tokens. Consider using read_output for large outputs."
      )
      response["stdout"] = stdout_full
      response["stderr"] = stderr_full
  else:
      limit = line_limit
      stdout_truncated, stdout_was_truncated = truncate_output(stdout_full, limit)
      stderr_truncated, stderr_was_truncated = truncate_output(stderr_full, limit)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/napari_mcp/_helpers.py` around lines 368 - 383, After parsing line_limit,
clamp any negative values less than -1 to -1 so they are treated as the explicit
"unlimited" case before calling truncate_output; update the branch that checks
line_limit (and the handling of response["warning"], response["stdout"],
response["stderr"]) so that values like -2 don't bypass the unlimited-warning
logic, and ensure truncate_output is only called for non-unlimited (line_limit
!= -1) cases (references: variable line_limit, truncate_output(),
response["warning"], stdout_full, stderr_full).
def test_eval_label_derived_from_exec_label(self):
    """The eval label should be derived from the exec label by replacing -exec with -eval."""
    ns = {}
    # This should use <bridge-eval> for the last expression
    run_code("42", ns, source_label="<bridge-exec>")
    # Just verify it doesn't crash - label is internal
Make the eval-label test assert the label rewrite.
This currently only checks that run_code() does not crash, so it will not catch a regression in the -exec → -eval filename rewrite.
Suggested fix
def test_eval_label_derived_from_exec_label(self):
"""The eval label should be derived from the exec label by replacing -exec with -eval."""
ns = {}
- # This should use <bridge-eval> for the last expression
- run_code("42", ns, source_label="<bridge-exec>")
- # Just verify it doesn't crash - label is internal
+ _, stderr, _, error = run_code("1 / 0", ns, source_label="<bridge-exec>")
+ assert isinstance(error, ZeroDivisionError)
+ assert "<bridge-eval>" in stderr

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_helpers.py` around lines 344 - 350, Update
test_eval_label_derived_from_exec_label to not only call run_code but also
assert that the label used for the evaluated expression was rewritten from
"<bridge-exec>" to "<bridge-eval>"; specifically, call run_code("42", ns,
source_label="<bridge-exec>") and then assert the recorded/evaluated
filename/label (the value your runner records for the last expression, e.g., a
field or return value exposed by run_code or a tracking variable in ns) equals
"<bridge-eval>" so the test will fail if the "-exec"→"-eval" rewrite stops
working.
@pytest.mark.asyncio
async def test_list_layers_returns_build_layer_detail_fields(self):
    """list_layers should include all fields from build_layer_detail."""

    # The conftest fixture creates fresh state; set up a mock viewer
    mock_layer = MagicMock()
    mock_layer.name = "test_layer"
    mock_layer.__class__.__name__ = "Image"
    mock_layer.visible = True
    mock_layer.opacity = 0.8
    mock_layer.data.shape = (100, 100)
    mock_layer.data.dtype = np.dtype("float32")
    mock_layer.blending = "translucent"
    mock_layer.colormap.name = "viridis"
    mock_layer.gamma = 1.5

    # Compare: what build_layer_detail returns vs what list_layers would build
    detail = build_layer_detail(mock_layer)

    # build_layer_detail should include all these keys
    assert "name" in detail
    assert "type" in detail
    assert "visible" in detail
    assert "opacity" in detail
    assert "data_shape" in detail
    assert "data_dtype" in detail
    assert "colormap" in detail
    assert "blending" in detail
    assert "gamma" in detail
This consistency test never exercises list_layers.
The name/docstring says it is checking list_layers, but the body only asserts keys from build_layer_detail(). As written, list_layers can drift and this test will still pass.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_helpers.py` around lines 545 - 573, The test
test_list_layers_returns_build_layer_detail_fields currently only calls
build_layer_detail; modify it to actually exercise list_layers by constructing a
mock viewer containing the mock_layer, call list_layers(mock_viewer) (or the
actual function signature for listing layers), find the corresponding layer
entry in the returned list, and assert that its keys/values match the detail
produced by build_layer_detail(mock_layer); reference the test function name,
the list_layers function, and build_layer_detail so the change replaces the
existing direct assertions with assertions comparing the list_layers output to
build_layer_detail output.
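The drift-proof version of that test can be sketched with toy stand-ins (the real `build_layer_detail` and `list_layers` live in the project and need a viewer object; everything below is a self-contained illustration of the pattern, not the project's code):

```python
from unittest.mock import MagicMock


def build_layer_detail(layer):
    # Toy stand-in for the project's detail builder.
    return {"name": layer.name, "visible": layer.visible, "opacity": layer.opacity}


def list_layers(viewer):
    # Toy stand-in; a real implementation can drift from build_layer_detail,
    # which is exactly what the test should catch.
    return [build_layer_detail(layer) for layer in viewer.layers]


def test_list_layers_matches_build_layer_detail():
    mock_layer = MagicMock()
    mock_layer.name = "test_layer"
    mock_layer.visible = True
    mock_layer.opacity = 0.8

    mock_viewer = MagicMock()
    mock_viewer.layers = [mock_layer]

    detail = build_layer_detail(mock_layer)
    listed = list_layers(mock_viewer)
    entry = next(e for e in listed if e["name"] == "test_layer")

    # Every key/value produced by build_layer_detail must appear in the listing,
    # so the test fails the moment the two outputs diverge.
    assert entry == detail
```

Deriving the expectation from `build_layer_detail` itself, rather than hard-coding key names, means adding a field to the detail builder automatically tightens the listing check.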
Server shutdown:
- Add ServerState.request_shutdown() — schedules delayed loop.stop(),
safe to call from any thread (Qt main thread on viewer destroy)
- Wire _on_destroyed signal and close_viewer to request_shutdown
- Capture event loop via FastMCP lifespan context manager
- Clean up shutdown state in conftest reset_server_state fixture
Bridge server fixes:
- Reject 'path' param for non-image/labels layer types
- Require 'data_var' for surface layers (no inline data support)
- Inject real napari module in exec namespace (was None)
Other:
- Fix cline_cursor install command hint (was missing 'install')
- Fix codecov threshold comment (said 10%, was actually 2%)
- Remove CHANGELOG.md from MANIFEST.in (doesn't exist)
- Rename CI step labels ("Test with tox" -> proper names)
- Make test_expected_tools_matches_server properly async
- Add benchmark marker to pyproject.toml
Tests (487 passing, 81.20% coverage):
- TestRequestShutdown: lifecycle, idempotency, closed loop, delayed stop
- TestBridgeAddLayerValidation: path rejection, surface, conflicts, unknown type
- TestBridgeExecNamespace: napari module injection, numpy availability
- test_close_viewer_requests_shutdown, test_on_destroyed_requests_shutdown
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
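The thread-safe delayed-stop idea behind `request_shutdown` can be sketched as follows. This is an illustration of the mechanism (capture the loop at startup, then schedule `loop.stop()` via `call_soon_threadsafe` from any thread), not the project's exact implementation; the `ShutdownRequester` class and its method names are assumptions:

```python
import asyncio
import threading


class ShutdownRequester:
    """Sketch: thread-safe, idempotent delayed shutdown of an asyncio loop."""

    def __init__(self) -> None:
        self.loop: asyncio.AbstractEventLoop | None = None
        self._requested = False

    def capture_loop(self) -> None:
        # Called from inside the running loop (e.g. a lifespan startup hook).
        self.loop = asyncio.get_running_loop()

    def request_shutdown(self, delay: float = 0.1) -> None:
        # Safe from any thread (e.g. Qt main thread on viewer destroy):
        # idempotent, and tolerant of a missing or already-closed loop.
        if self._requested or self.loop is None or self.loop.is_closed():
            return
        self._requested = True
        loop = self.loop
        # call_soon_threadsafe is the only loop method safe to call off-thread;
        # the delayed stop gives in-flight callbacks a moment to finish.
        loop.call_soon_threadsafe(lambda: loop.call_later(delay, loop.stop))


async def main(req: ShutdownRequester) -> None:
    req.capture_loop()
    # Simulate a viewer-destroyed callback firing on another thread.
    t = threading.Thread(target=req.request_shutdown)
    t.start()
    t.join()
    await asyncio.sleep(0.05)


if __name__ == "__main__":
    req = ShutdownRequester()
    loop = asyncio.new_event_loop()
    loop.create_task(main(req))
    loop.run_forever()  # returns once the delayed loop.stop() fires
    loop.close()
```

The `_requested` flag plus the `is_closed()` guard make repeated calls (and calls during teardown) harmless, which matches the idempotency and closed-loop cases the test list above exercises.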
✅ Documentation builds successfully! The site will be updated when this PR is merged.
Actionable comments posted: 1
♻️ Duplicate comments (4)
src/napari_mcp/state.py (3)
208-240: ⚠️ Potential issue | 🟠 Major — `external_session_information` doesn't catch exceptions like its siblings.

Unlike `detect_external_viewer` (line 205) and `proxy_to_external` (line 170), which catch exceptions and return graceful fallbacks, `external_session_information` will propagate connection errors, JSON decode errors, etc. to callers. This inconsistency could cause unhandled exceptions in AUTO_DETECT fallback paths.

🐛 Proposed fix: Wrap in try/except
  async def external_session_information(self) -> dict[str, Any]:
      """Get session information from the external viewer."""
-     from fastmcp import Client
+     try:
+         from fastmcp import Client

-     test_client = Client(f"http://localhost:{self.bridge_port}/mcp")
-     async with test_client:
-         result = await test_client.call_tool("session_information")
-         # ... rest of processing ...
+         test_client = Client(f"http://localhost:{self.bridge_port}/mcp")
+         async with test_client:
+             result = await test_client.call_tool("session_information")
+             # ... rest of processing ...
+     except Exception:
+         pass
      return {
          "status": "error",
          "message": "Failed to get session information from external viewer",
      }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/napari_mcp/state.py` around lines 208 - 240, Wrap the body of async def external_session_information(self) in a try/except to match the other helpers (e.g., detect_external_viewer and proxy_to_external): catch broad exceptions (at least Exception), log the error via the instance logger (self.logger) including the exception and context (bridge_port and the Client call), and return the same graceful error dict currently returned at the end on failure; ensure JSON decoding (json.loads on info) and network/client errors from Client(f"http://localhost:{self.bridge_port}/mcp") / test_client.call_tool are covered so no exception propagates out of external_session_information.
149-165: ⚠️ Potential issue | 🟡 Minor — Missing bounds check before accessing `content[0]`.

Line 151 accesses `content[0].type` after checking `hasattr(result, "content")`, but doesn't verify that `content` is a non-empty list. If `content` is an empty list, this will raise an `IndexError`.

🐛 Proposed fix
  if hasattr(result, "content"):
      content = result.content
-     if content[0].type == "text":
+     if isinstance(content, list) and len(content) > 0 and content[0].type == "text":
          response = (

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/napari_mcp/state.py` around lines 149 - 165, The code accesses content[0].type without ensuring content is a non-empty sequence; update the block that checks hasattr(result, "content") to also verify content is a list-like and len(content) > 0 (or truthy) before accessing content[0]; if content is empty, return a sensible value (e.g., {"status":"error","message":"Empty content"} or simply return content) instead of letting an IndexError propagate, and keep the existing handling for the text branch (response/json.loads) and the non-text branch intact (use the same variable names result, content, response).
40-42: ⚠️ Potential issue | 🟡 Minor — Guard `NAPARI_MCP_BRIDGE_PORT` parsing.

A malformed environment value will raise `ValueError` during `ServerState()` construction, potentially aborting startup before any fallback logic runs. The `max_output_items` parsing below (lines 61-66) already handles this gracefully with a try/except.

🐛 Proposed fix
- self.bridge_port: int = bridge_port or int(
-     os.environ.get("NAPARI_MCP_BRIDGE_PORT", "9999")
- )
+ if bridge_port is not None:
+     self.bridge_port = bridge_port
+ else:
+     try:
+         self.bridge_port = int(os.environ.get("NAPARI_MCP_BRIDGE_PORT", "9999"))
+     except ValueError:
+         self.bridge_port = 9999

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/napari_mcp/state.py` around lines 40 - 42, The ServerState constructor currently does int(os.environ.get("NAPARI_MCP_BRIDGE_PORT", "9999")) which will raise ValueError for malformed env values; wrap the parsing used for self.bridge_port in a try/except (like the existing max_output_items handling) to catch ValueError/TypeError, fall back to the default 9999 (or the provided bridge_port arg), and optionally log a warning; update the code that sets self.bridge_port in ServerState (the bridge_port parsing line) to use this guarded parsing approach.

.github/workflows/test.yml (1)
190-197: ⚠️ Potential issue | 🟠 Major — Install the project environment before running `safety check`.

The security job installs only the `bandit` and `safety` tools but does not install the project's dependencies. The `safety check` command will only scan the environment's currently installed packages, which means it will audit the scanner tools' dependencies rather than `napari-mcp`'s actual runtime/dev dependencies. Consider adding a step to install the project before running safety:
  - name: Install security tools
-   run: pip install bandit "safety<4.0"
+   run: |
+     pip install -e .
+     pip install bandit "safety<4.0"

Alternatively, export dependencies to a requirements file and use
`safety check -r requirements.txt`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/test.yml around lines 190 - 197, The workflow currently installs only the security tools ("Install security tools") and then runs "Safety check", which scans the environment rather than the project's dependencies; add a step that installs the project dependencies before running safety (e.g., run pip install -e . or pip install -r requirements.txt) or generate/export a requirements file and change the "Safety check" step to run safety check -r requirements.txt so safety audits the project's actual runtime/dev dependencies instead of the scanner tools.
🧹 Nitpick comments (2)
tests/test_bridge_server.py (1)
118-124: Consider using `qtbot.waitUntil` for deterministic waiting.

The fixed `qtbot.wait(100)` may cause flaky tests on slower CI runners. Consider using `qtbot.waitUntil` with a condition:

qtbot.waitUntil(lambda: "executed" in results, timeout=1000)

This is a minor reliability improvement.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_bridge_server.py` around lines 118 - 124, Replace the fixed sleep-style wait (qtbot.wait(100)) with a deterministic waitUntil that watches the actual condition; specifically, modify the test around Thread/ run_from_thread to use qtbot.waitUntil checking that "executed" is in results (and optionally that "test_result" is present) with a larger timeout (e.g., 1000ms) so the thread.join and assertions become reliable on slower CI runners.

tests/conftest.py (1)
50-55: Consider potential side effects of calling `create_server` per test.

Each test creates a new `ServerState` and calls `create_server(fresh_state)`, which overwrites the module-level `_state` and re-registers tool functions via `setattr`. While this ensures clean state isolation, be aware that:

- The module-level `server` instance is replaced on each `create_server` call
- Tool function references are overwritten as module attributes
This is likely intentional for test isolation, but worth documenting if other fixtures or test utilities cache references to the old server/tools.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/conftest.py` around lines 50 - 55, create_server(fresh_state) replaces napari_mcp_server._state and re-registers tool functions via setattr, which can surprise other fixtures that cached references; either (a) add a clear comment in tests/conftest.py next to the ServerState/create_server call documenting that create_server will replace the module-level server and overwrite tool function attributes (ServerState, create_server, napari_mcp_server._state, setattr), or (b) modify create_server to accept an optional flag (e.g., rebind_tools=False) and call it here only when rebind is desired, so existing cached references aren’t clobbered—apply one of these changes and update any affected fixtures accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pyproject.toml`:
- Around line 205-210: The dependency constraint for pytest-asyncio is too low
for the configured option; update the version specifier "pytest-asyncio>=0.23.0"
in the dependency list to "pytest-asyncio>=0.24.0" so it matches the pytest
configuration option asyncio_default_fixture_loop_scope = "function" which
requires pytest-asyncio v0.24+.
---
Duplicate comments:
In @.github/workflows/test.yml:
- Around line 190-197: The workflow currently installs only the security tools
("Install security tools") and then runs "Safety check", which scans the
environment rather than the project's dependencies; add a step that installs the
project dependencies before running safety (e.g., run pip install -e . or pip
install -r requirements.txt) or generate/export a requirements file and change
the "Safety check" step to run safety check -r requirements.txt so safety audits
the project's actual runtime/dev dependencies instead of the scanner tools.
In `@src/napari_mcp/state.py`:
- Around line 208-240: Wrap the body of async def
external_session_information(self) in a try/except to match the other helpers
(e.g., detect_external_viewer and proxy_to_external): catch broad exceptions (at
least Exception), log the error via the instance logger (self.logger) including
the exception and context (bridge_port and the Client call), and return the same
graceful error dict currently returned at the end on failure; ensure JSON
decoding (json.loads on info) and network/client errors from
Client(f"http://localhost:{self.bridge_port}/mcp") / test_client.call_tool are
covered so no exception propagates out of external_session_information.
- Around line 149-165: The code accesses content[0].type without ensuring
content is a non-empty sequence; update the block that checks hasattr(result,
"content") to also verify content is a list-like and len(content) > 0 (or
truthy) before accessing content[0]; if content is empty, return a sensible
value (e.g., {"status":"error","message":"Empty content"} or simply return
content) instead of letting an IndexError propagate, and keep the existing
handling for the text branch (response/json.loads) and the non-text branch
intact (use the same variable names result, content, response).
- Around line 40-42: The ServerState constructor currently does
int(os.environ.get("NAPARI_MCP_BRIDGE_PORT", "9999")) which will raise
ValueError for malformed env values; wrap the parsing used for self.bridge_port
in a try/except (like the existing max_output_items handling) to catch
ValueError/TypeError, fall back to the default 9999 (or the provided bridge_port
arg), and optionally log a warning; update the code that sets self.bridge_port
in ServerState (the bridge_port parsing line) to use this guarded parsing
approach.
---
Nitpick comments:
In `@tests/conftest.py`:
- Around line 50-55: create_server(fresh_state) replaces
napari_mcp_server._state and re-registers tool functions via setattr, which can
surprise other fixtures that cached references; either (a) add a clear comment
in tests/conftest.py next to the ServerState/create_server call documenting that
create_server will replace the module-level server and overwrite tool function
attributes (ServerState, create_server, napari_mcp_server._state, setattr), or
(b) modify create_server to accept an optional flag (e.g., rebind_tools=False)
and call it here only when rebind is desired, so existing cached references
aren’t clobbered—apply one of these changes and update any affected fixtures
accordingly.
In `@tests/test_bridge_server.py`:
- Around line 118-124: Replace the fixed sleep-style wait (qtbot.wait(100)) with
a deterministic waitUntil that watches the actual condition; specifically,
modify the test around Thread/ run_from_thread to use qtbot.waitUntil checking
that "executed" is in results (and optionally that "test_result" is present)
with a larger timeout (e.g., 1000ms) so the thread.join and assertions become
reliable on slower CI runners.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 55c60e06-adbc-421d-9e81-ac513ff4321b
📒 Files selected for processing (14)
- .github/workflows/test.yml
- MANIFEST.in
- codecov.yml
- pyproject.toml
- src/napari_mcp/bridge_server.py
- src/napari_mcp/cli/install/cline_cursor.py
- src/napari_mcp/qt_helpers.py
- src/napari_mcp/server.py
- src/napari_mcp/state.py
- tests/conftest.py
- tests/test_bridge_server.py
- tests/test_qt_helpers.py
- tests/test_state.py
- tests/test_tools.py
💤 Files with no reviewable changes (1)
- MANIFEST.in
🚧 Files skipped from review as they are similar to previous changes (3)
- src/napari_mcp/qt_helpers.py
- codecov.yml
- tests/test_qt_helpers.py
```toml
"pytest-asyncio>=0.23.0",
"pytest-cov>=4.0.0",
"pytest-qt>=4.0.0",
"pytest-asyncio>=0.23.0",
"pytest-xdist>=3.5.0",
"pytest-timeout>=2.2.0",
"pytest-benchmark>=4.0.0",
```
🧩 Analysis chain
🌐 Web query:
pytest-asyncio 0.23 asyncio_default_fixture_loop_scope configuration
💡 Result:
asyncio_default_fixture_loop_scope is not a pytest-asyncio 0.23 setting—it was introduced later (notably mentioned as introduced in v0.24) to control which event loop scope async fixtures run in. [2]
What it does (when available)
- Sets the default event-loop scope for async fixtures.
- Valid values: `function`, `class`, `module`, `package`, `session`. [5]
- If unset, it defaults to the fixture's own scope (but the docs note a future default change to `function` when unset). [5]
How to configure it
pytest.ini:

```ini
[pytest]
asyncio_default_fixture_loop_scope = session
```

pyproject.toml:

```toml
[tool.pytest.ini_options]
asyncio_default_fixture_loop_scope = "session"
```

[3]
Why you might be looking for this in 0.23
In pytest-asyncio 0.23, loops are effectively attached per test-suite item (session/package/module/class/function), and async fixture behavior was tied to fixture scope, which could break some suites that expect fixtures/tests to share a single loop. [4]
If you’re on 0.23.x and need this control, you’ll need to upgrade (or explicitly set per-fixture loop_scope after upgrading, as described in the migration guidance). [1]
Sources: [1] [2] [3] [4] [5]
🏁 Script executed:
```shell
# First, find the pyproject.toml file
fd -n "pyproject.toml" -x cat {} \; | head -250
```

Repository: royerlab/napari-mcp
Length of output: 293
🏁 Script executed:
```shell
# Find and display the pyproject.toml file
fd "pyproject.toml" -x cat {}
```

Repository: royerlab/napari-mcp
Length of output: 6834
Update pytest-asyncio version constraint to match configuration.
The asyncio_default_fixture_loop_scope = "function" configuration at line 92 requires pytest-asyncio 0.24 or later, but the dependency specifies pytest-asyncio>=0.23.0. This option was not introduced until v0.24, so the version constraint should be updated to pytest-asyncio>=0.24.0 to ensure compatibility.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pyproject.toml` around lines 205 - 210, The dependency constraint for
pytest-asyncio is too low for the configured option; update the version
specifier "pytest-asyncio>=0.23.0" in the dependency list to
"pytest-asyncio>=0.24.0" so it matches the pytest configuration option
asyncio_default_fixture_loop_scope = "function" which requires pytest-asyncio
v0.24+.
|
✅ Documentation builds successfully! The site will be updated when this PR is merged. |
♻️ Duplicate comments (3)
src/napari_mcp/cli/install/codex_cli.py (1)
117-119: ⚠️ Potential issue | 🟠 Major: Preserve Codex config with backup + atomic replace writes.

Both install and uninstall still overwrite `config.toml` in place, so a failed/interrupted write can destroy the existing file, and `self.backup` is not respected in this TOML path.

Suggested fix (backup + temp-file + atomic replace):

```diff
+import os
+import shutil
+import tempfile
+import time
 ...
-    # Write TOML configuration
-    with open(config_path, "w") as f:
-        toml_writer.dump(config, f)
+    # Write TOML configuration safely
+    if self.backup and config_path.exists():
+        backup_path = config_path.with_name(
+            f"{config_path.stem}.backup_{int(time.time())}{config_path.suffix}"
+        )
+        shutil.copy2(config_path, backup_path)
+
+    with tempfile.NamedTemporaryFile(
+        mode="w",
+        encoding="utf-8",
+        delete=False,
+        dir=str(config_path.parent),
+        suffix=".tmp",
+    ) as f:
+        toml_writer.dump(config, f)
+        f.flush()
+        os.fsync(f.fileno())
+        tmp_name = f.name
+
+    os.replace(tmp_name, config_path)
```

The same hunk applies to the second `toml_writer.dump` write site.

Also applies to: 168-169
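The backup + temp-file + atomic-replace pattern suggested here can be sketched as a self-contained function, independent of the Codex installer (the function name and writing plain text instead of TOML are simplifications for illustration):

```python
import os
import shutil
import tempfile
import time
from pathlib import Path


def write_config_safely(config_path: Path, text: str, backup: bool = True) -> None:
    """Write `text` to `config_path` without ever leaving a partial file.

    1. Optionally copy the existing file to a timestamped backup.
    2. Write to a temp file in the same directory (same filesystem,
       so os.replace is atomic).
    3. fsync, then atomically swap the temp file into place.
    """
    if backup and config_path.exists():
        backup_path = config_path.with_name(
            f"{config_path.stem}.backup_{int(time.time())}{config_path.suffix}"
        )
        shutil.copy2(config_path, backup_path)

    with tempfile.NamedTemporaryFile(
        mode="w",
        encoding="utf-8",
        delete=False,
        dir=str(config_path.parent),
        suffix=".tmp",
    ) as f:
        f.write(text)
        f.flush()
        os.fsync(f.fileno())
        tmp_name = f.name

    os.replace(tmp_name, config_path)  # atomic on POSIX and Windows
```

A crash between the temp write and `os.replace` leaves the original file untouched, which is the whole point of the pattern.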
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/napari_mcp/cli/install/codex_cli.py` around lines 117 - 119: the TOML write currently overwrites config.toml in place; update the write logic in the method containing the lines that open config_path (the block using toml_writer.dump(config, f)) to first honor self.backup by copying the existing config_path to a timestamped or .bak file when self.backup is truthy, then write the new TOML to a temporary file in the same directory (e.g., config_path + ".tmp"), flush and fsync the temp file, close it, and atomically replace the original with os.replace(temp_path, config_path); apply the same backup + temp + atomic-replace pattern to the other identical write at the later occurrence (the block around lines 168-169) so both install and uninstall preserve and atomically update config.toml.

src/napari_mcp/cli/install/base.py (1)
205-209: ⚠️ Potential issue | 🟠 Major: Make uninstall() idempotent when the server is already absent.

Returning failure here breaks repeated cleanup flows even though the desired state is already satisfied.

Suggested fix:

```diff
 if not check_existing_server(config, self.server_name):
     console.print(
         f"[yellow]napari-mcp server '{self.server_name}' not found in {self.app_name} configuration[/yellow]"
     )
-    return False, f"Server '{self.server_name}' not found in configuration"
+    return True, (
+        f"Server '{self.server_name}' is already absent from configuration"
+    )
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/napari_mcp/cli/install/base.py` around lines 205 - 209: in uninstall(), when check_existing_server(config, self.server_name) is False, treat this as success (idempotent): replace the current branch that prints a warning and returns (False, message) with an informational console.print indicating the server is already absent and return success (e.g., return True, "Server already absent" or return True, None). Update references inside uninstall() that rely on the return value to accept this success tuple; keep the console output using self.server_name and self.app_name to provide context.

tests/test_cli_utils.py (1)
501-513: ⚠️ Potential issue | 🟡 Minor: Tighten permission-denied test to avoid false positives.

This branch still accepts `{}` as a passing result in readable contexts, so an implementation that swallows `PermissionError` can incorrectly pass.

Suggested tightening:

```diff
 try:
     with patch("napari_mcp.cli.install.utils.console"):
-        try:
-            result = read_json_config(config_file)
-            # Might succeed on some systems (e.g., root)
-            assert isinstance(result, dict)
-        except PermissionError:
-            pass  # Expected on most systems
+        if os.access(config_file, os.R_OK):
+            assert read_json_config(config_file) == {"test": "value"}
+        else:
+            with pytest.raises(PermissionError):
+                read_json_config(config_file)
 finally:
     config_file.chmod(0o644)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/test_cli_utils.py` around lines 501 - 513: the test currently accepts any dict result and thus can mask implementations that swallow PermissionError; update the test around read_json_config(config_file) to assert the exact expected content when the call succeeds (e.g., result == {"test": "value"}) and only allow PermissionError via except; also ensure you restore permissions on config_file in a finally block so tmp cleanup won't fail. Refer to the config_file variable and the read_json_config function to locate and change the assertions and add the permission reset.
🧹 Nitpick comments (2)
src/napari_mcp/cli/main.py (2)
3-4: Consider using the module-level `_sys` consistently.

The module imports `sys as _sys` at line 3, but `list_installations` imports `sys` locally at line 317. Using `_sys.version_info` instead would be more consistent.

♻️ Proposed fix

```diff
-    import sys
-
-    if sys.version_info >= (3, 11):
+    if _sys.version_info >= (3, 11):
```

Also applies to: 317-318
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/napari_mcp/cli/main.py` around lines 3 - 4, The module imports sys as _sys but list_installations still does a local import of sys; remove the local import inside list_installations and replace usages of the local sys (e.g., sys.version_info) with the module-level _sys.version_info to keep imports consistent; ensure any other references in functions like list_installations use _sys instead of importing sys locally.
279-285: Uninstall always targets global scope for project-capable installers.

Line 284 sets `global_install=target in PROJECT_TARGETS` unconditionally. Unlike `install`, `uninstall` lacks `--global`/`--project` options, so users cannot uninstall from a specific project directory for Cursor/Gemini.

If this is intentional, consider documenting it. Otherwise, consider adding scope options to `uninstall`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/napari_mcp/cli/main.py` around lines 279 - 285, The uninstall command currently always sets global_install based on target in PROJECT_TARGETS when calling _create_installer, preventing users from choosing scope; update the uninstall CLI to accept the same scope flags as install (e.g., add --global / --project or a --scope option to the uninstall click command), parse those flags and pass the chosen boolean into _create_installer via global_install instead of computing it solely from target, and ensure the uninstall handler uses the same target variable and PROJECT_TARGETS reference so uninstall can operate in project or global scope as requested (or alternatively document this behavior if intentional).
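The suggested scope flags can be sketched with argparse (the real CLI framework used by napari-mcp may differ, e.g. click or typer; flag names follow the review's suggestion):

```python
import argparse


def build_uninstall_parser() -> argparse.ArgumentParser:
    """Sketch of an `uninstall` command with the same scope flags as `install`."""
    parser = argparse.ArgumentParser(prog="napari-mcp uninstall")
    parser.add_argument("--target", required=True, help="target application")
    scope = parser.add_mutually_exclusive_group()
    scope.add_argument(
        "--global",
        dest="global_install",
        action="store_true",
        help="remove from the user-wide configuration",
    )
    scope.add_argument(
        "--project",
        dest="global_install",
        action="store_false",
        help="remove from the current project's configuration",
    )
    # Default preserves today's behavior: global scope unless --project given.
    parser.set_defaults(global_install=True)
    return parser
```

The parsed `global_install` boolean would then be passed to `_create_installer` instead of computing it from `target in PROJECT_TARGETS`.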
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@src/napari_mcp/cli/install/base.py`:
- Around line 205-209: In uninstall(), when check_existing_server(config,
self.server_name) is False, treat this as success (idempotent): replace the
current branch that prints a warning and returns (False, message) with an
informational console.print indicating the server is already absent and return
success (e.g., return True, "Server already absent" or return True, None).
Update references inside uninstall() that rely on the return value to accept
this success tuple; keep the console output using self.server_name and
self.app_name to provide context.
In `@src/napari_mcp/cli/install/codex_cli.py`:
- Around line 117-119: The TOML write currently overwrites config.toml in place;
update the write logic in the method containing the lines that open config_path
(the block using toml_writer.dump(config, f)) to first honor self.backup by
copying the existing config_path to a timestamped or .bak file when self.backup
is truthy, then write the new TOML to a temporary file in the same directory
(e.g., config_path + ".tmp"), flush and fsync the temp file, close it, and
atomically replace the original with os.replace(temp_path, config_path); apply
the same backup+temp+atomic-replace pattern to the other identical write at the
later occurrence (the block around lines 168-169) so both install and uninstall
preserve and atomically update config.toml.
In `@tests/test_cli_utils.py`:
- Around line 501-513: The test currently accepts any dict result and thus can
mask implementations that swallow PermissionError; update the test around
read_json_config(config_file) to assert the exact expected content when the call
succeeds (e.g., result == {"test": "value"}) and only allow PermissionError via
except; also ensure you restore permissions on config_file in a finally block so
tmp cleanup won't fail—refer to the config_file variable and the
read_json_config function to locate and change the assertions and add the
permission reset.
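The idempotent-uninstall behavior requested above can be sketched as a minimal stand-alone function (the `mcpServers` key and the `(bool, message)` return shape are assumptions drawn from the review text, not the actual base.py code):

```python
def uninstall_server(config: dict, server_name: str) -> tuple[bool, str]:
    """Remove `server_name` from config; succeed even if already absent.

    Treating "already absent" as success makes repeated cleanup runs
    (and retries after partial failures) safe to script.
    """
    servers = config.setdefault("mcpServers", {})
    if server_name not in servers:
        return True, f"Server '{server_name}' is already absent from configuration"
    del servers[server_name]
    return True, f"Server '{server_name}' removed"
```

Running it twice in a row returns success both times, which is the property the review is asking for.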
---
Nitpick comments:
In `@src/napari_mcp/cli/main.py`:
- Around line 3-4: The module imports sys as _sys but list_installations still
does a local import of sys; remove the local import inside list_installations
and replace usages of the local sys (e.g., sys.version_info) with the
module-level _sys.version_info to keep imports consistent; ensure any other
references in functions like list_installations use _sys instead of importing
sys locally.
- Around line 279-285: The uninstall command currently always sets
global_install based on target in PROJECT_TARGETS when calling
_create_installer, preventing users from choosing scope; update the uninstall
CLI to accept the same scope flags as install (e.g., add --global / --project or
a --scope option to the uninstall click command), parse those flags and pass the
chosen boolean into _create_installer via global_install instead of computing it
solely from target, and ensure the uninstall handler uses the same target
variable and PROJECT_TARGETS reference so uninstall can operate in project or
global scope as requested (or alternatively document this behavior if
intentional).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5ac9ca36-a4cf-413b-b0fd-997838dd8cf9
📒 Files selected for processing (11)
- README.md
- docs/getting-started/installation.md
- src/napari_mcp/cli/install/base.py
- src/napari_mcp/cli/install/codex_cli.py
- src/napari_mcp/cli/install/cursor.py
- src/napari_mcp/cli/install/gemini_cli.py
- src/napari_mcp/cli/install/utils.py
- src/napari_mcp/cli/main.py
- tests/test_base_installer.py
- tests/test_cli_installer.py
- tests/test_cli_utils.py
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/test_cli_installer.py
|
superseded by #36 |
Summary
Major refactoring that extracts a modular architecture from the monolithic server.py, reorganizes the test suite to eliminate duplication, and strengthens CI.

Architecture refactor
- state.py: New `ServerState` class replaces module-level globals. Encapsulates viewer, locks, exec_globals, output storage. Introduces `StartupMode` enum (STANDALONE vs AUTO_DETECT)
- qt_helpers.py: Pure Qt management functions (`ensure_qt_app`, `ensure_viewer`, `process_events`, `qt_event_pump`, `connect_window_destroyed_signal`) — all take a `state` parameter for testability
- output.py: Pure `truncate_output()` utility
- viewer_protocol.py: `ViewerProtocol` structural typing for viewer backends
- server.py: `create_server(state) -> FastMCP` factory. Tools defined as closures over state. 37% line reduction (2203 → 1375 lines)
- bridge_server.py: Creates its own `ServerState` and calls `create_server()`, overriding only 3 tools (session_info, add_image, execute_code). Shares 95%+ code with the standalone server
- CLI: a single `install --target <name>` command replaces 7 subcommands (43% reduction)
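The `create_server(state)` factory pattern described above can be sketched without FastMCP (the `ToyServer` registry stands in for FastMCP's tool registration, which is simplified here; class and tool names follow the PR's description):

```python
class ServerState:
    """Holds what used to be module-level globals."""

    def __init__(self):
        self.viewer = None
        self.exec_globals = {}


class ToyServer:
    """Minimal stand-in for FastMCP: just a registry of named tools."""

    def __init__(self):
        self.tools = {}

    def tool(self, fn):
        self.tools[fn.__name__] = fn
        return fn


def create_server(state: ServerState) -> ToyServer:
    """Factory: every tool is a closure over `state`, so tests can build
    isolated servers instead of sharing module globals."""
    server = ToyServer()

    @server.tool
    def session_info():
        return {"has_viewer": state.viewer is not None}

    @server.tool
    def execute_code(code: str):
        # Persist definitions across calls via the state's exec_globals.
        exec(code, state.exec_globals)
        return state.exec_globals.get("result")

    return server
```

Because tools close over the state they were built with, a bridge server can construct its own `ServerState`, call the same factory, and override only the tools it needs.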
Test suite reorganization

- Deleted: test_coverage.py, test_napari_server_coverage.py, test_edge_cases.py, test_bridge_simplified.py, fixtures/ (all duplicated or obsolete)
- test_server_tools.py: canonical tool-by-tool tests + AUTO_DETECT proxy fallback tests + invalid input edge cases
- test_qt_helpers.py: renamed from test_edge_cases.py, focused on Qt helpers
- test_state.py: ServerState, env var fallbacks, ViewerProtocol
CI/build improvements

- Bump `--cov-fail-under` from 70% to 80%
- test_all_tools_end_to_end only)
Test plan
- uv run pytest tests/ -q --timeout=60
- --cov-fail-under=80
- ruff check passes (0 errors)
- ruff format --check passes (all files formatted)

🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Documentation
Chores