feat: HuggingFace local LLM inference via Transformers.js (ENG-687)#705
⛔ Snyk checks have failed. 10 issues have been found so far.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
📝 Walkthrough

Adds a HuggingFace Local provider: model download/management, a local Transformers.js-based inference server with OpenAI-compatible endpoints, IPC and preload bindings, UI components, storage/migration support, and removal of legacy browser recovery code. Also updates provider types, exports, and dependencies.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant UI as User (Web)
    participant Pre as Preload IPC
    participant Main as Main Process
    participant MM as Model Manager
    participant Hub as HuggingFace Hub API
    participant Cache as Cache Dir
    participant Server as Local Server
    UI->>Pre: downloadHuggingFaceModel(modelId)
    Pre->>Main: huggingface-local:download-model
    Main->>MM: downloadModel(modelId, progressCB)
    MM->>Hub: request model files (q4 → fallback fp32)
    Hub-->>MM: stream model data (progress)
    MM->>Cache: write model files
    MM-->>Main: {success}
    Main->>Pre: emit progress/completion
    Pre-->>UI: progress events
```

```mermaid
sequenceDiagram
    participant App as Electron App
    participant Main as Main Process
    participant Storage as App Settings
    participant IPC as IPC Handler
    participant Server as Local Server
    App->>Main: app ready
    Main->>Storage: getHuggingFaceLocalConfig()
    Storage-->>Main: {enabled: true, selectedModelId}
    alt enabled && selectedModelId
        Main->>IPC: huggingface-local:start-server
        IPC->>Server: startServer(modelId)
        Server-->>IPC: {success, port}
        IPC-->>Main: server started
    end
    App->>Main: before-quit
    Main->>Server: stopServer()
    Server-->>Main: stopped
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/desktop/src/main/ipc/task-callbacks.ts (1)
156-192: ⚠️ Potential issue | 🟠 Major

Mid-session browser recovery removed but failure detection kept — is this intentional?
The code still detects repeated browser connection failures (lines 156-185) but no longer recovers automatically. The warning at lines 186-189 only logs to the console — the user has no way to know the browser needs restarting.
Consider either:
- Removing the unused failure detection infrastructure if recovery won't be re-implemented
- Adding a user-facing notification (e.g., via forwardToRenderer, which is already used in this file for other user-facing events) so users know to restart

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/main/ipc/task-callbacks.ts` around lines 156 - 192, The repeated-browser-failure detection in onToolCallComplete is left active but no recovery path is implemented; instead of silently logging, send a user-facing notification via the existing forwardToRenderer mechanism when the threshold is reached (inside the branch where browserFailureCount >= BROWSER_FAILURE_THRESHOLD and before/after setting browserRecoveryInFlight), including a descriptive event name (e.g. "dev-browser-restart-required") and the reason string; ensure forwardToRenderer is referenced/imported consistent with other uses in this file, keep resetBrowserFailureState() and the browserFailure* state updates as-is, and you may keep the console.warns but add the forwardToRenderer call so the renderer can show a visible prompt to the user to restart the browser.
🧹 Nitpick comments (10)
apps/desktop/__tests__/unit/main/ipc/handlers.unit.test.ts (1)
504-511: LGTM on registration assertions.

The test correctly verifies all 7 HuggingFace Local IPC channels are registered, and the channel names match the implementation in handlers.ts.

Consider adding a dedicated describe('HuggingFace Local Handlers', ...) block with behavior tests (e.g., verifying huggingface-local:start-server calls the mock with correct arguments, error handling for invalid model IDs, etc.) to match the coverage pattern of other handler groups in this file. This can be deferred to a follow-up.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/__tests__/unit/main/ipc/handlers.unit.test.ts` around lines 504 - 511, Add a new describe block named "HuggingFace Local Handlers" that contains behavior tests for the registered channels (e.g., 'huggingface-local:start-server', 'huggingface-local:stop-server', 'huggingface-local:server-status', 'huggingface-local:test-connection', 'huggingface-local:download-model', 'huggingface-local:list-models', 'huggingface-local:delete-model'); for each channel verify the corresponding handler calls the mocked implementation with correct arguments and that error paths are handled (e.g., invalid model IDs for 'huggingface-local:download-model' and failed start for 'huggingface-local:start-server'), using the same mock helpers and assertion style used in other handler groups in this test file to keep consistency.

apps/web/src/client/components/settings/providers/HuggingFaceProviderForm.tsx (2)
85-87: Consider logging the error for debuggability.

The empty catch block swallows errors silently. While the comment notes it's non-fatal, logging would help with debugging if model listing consistently fails.
♻️ Suggested improvement
```diff
       .catch(() => {
-        // Non-fatal: suggested models will still be shown
+        // Non-fatal: suggested models will still be shown
+        console.warn('[HuggingFace] Failed to list models');
       });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/src/client/components/settings/providers/HuggingFaceProviderForm.tsx` around lines 85 - 87, The empty catch block in HuggingFaceProviderForm.tsx swallows errors; update the promise rejection handler (the .catch() following the suggested models fetch) to log the error for debuggability by capturing the error parameter and calling a logger (e.g., console.error or the app's logging utility) with a clear contextual message like "Failed to fetch Hugging Face suggested models" plus the error object; ensure you reference the same promise/fetch call used to load suggested models so the log is emitted when that request fails.
174-176: Consider logging disconnect errors.

While ignoring disconnect errors is reasonable, a warning log would aid debugging without affecting user experience.
♻️ Suggested improvement
```diff
-    } catch {
-      // Ignore errors during disconnect
+    } catch (err) {
+      console.warn('[HuggingFace] Disconnect error (ignored):', err);
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/src/client/components/settings/providers/HuggingFaceProviderForm.tsx` around lines 174 - 176, In the HuggingFaceProviderForm component update the empty catch block that follows the disconnect attempt to log the error (e.g., using console.warn or the app's logger) so disconnect failures are surfaced for debugging; locate the catch in the disconnect flow inside HuggingFaceProviderForm (the try/catch around the disconnect request) and replace the silent catch with a warning that includes a short message and the caught error object.

apps/desktop/src/main/index.ts (1)
355-358: Shutdown may not complete before process exit.
stopHuggingFaceServer() is async but called without await in the synchronous before-quit handler. The process may exit before the server fully closes or the model disposes. This is generally acceptable for app quit, but if you need guaranteed cleanup (e.g., to release the port for a quick restart), consider using app.on('will-quit') with event.preventDefault() and resuming quit after cleanup.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/main/index.ts` around lines 355 - 358, Current shutdown calls stopHuggingFaceServer() without awaiting it in the 'before-quit' handler so cleanup may not finish before process exit; change the quit flow to perform async cleanup by moving the logic to app.on('will-quit'), call event.preventDefault(), await stopHuggingFaceServer() (and any model disposal) and only then resume quitting (e.g., call app.quit() or remove the preventDefault) so the HF server port reliably gets released; reference stopHuggingFaceServer and the Electron events 'before-quit'/'will-quit' when updating the handler.

packages/agent-core/src/storage/repositories/appSettings.ts (1)
146-154: Add braces to the new guard clause.

Line 148 introduces a braceless if, which goes against the repo's curly rule.

♻️ Proposed fix
```diff
 export function getHuggingFaceLocalConfig(): HuggingFaceLocalConfig | null {
   const row = getRow();
-  if (!row.huggingface_local_config) return null;
+  if (!row.huggingface_local_config) {
+    return null;
+  }
   try {
     return JSON.parse(row.huggingface_local_config) as HuggingFaceLocalConfig;
   } catch {
```

As per coding guidelines, "Always use braces for if/else/for/while statements — no single-line braceless statements (enforced by curly ESLint rule)".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/agent-core/src/storage/repositories/appSettings.ts` around lines 146 - 154, The function getHuggingFaceLocalConfig contains a braceless guard if (!row.huggingface_local_config) return null; which violates the project's curly rule; update the guard in getHuggingFaceLocalConfig to use braces (e.g., if (!row.huggingface_local_config) { return null; }) so the early-return is wrapped in braces and the function complies with the coding guidelines.

apps/desktop/src/main/providers/huggingface-local/index.ts (1)
1-6: Remove the file-header banner.

This block just restates that the file is the HuggingFace local barrel and what it exports.
As per coding guidelines, "Do NOT add unnecessary comments — comments should explain why, not what the code does".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/main/providers/huggingface-local/index.ts` around lines 1 - 6, Remove the top-of-file banner comment block (the /** HuggingFace Local Provider ... */ block at the very beginning of apps/desktop/src/main/providers/huggingface-local/index.ts) — delete that redundant "what it exports" header so the file only contains code and any purposeful "why" comments, keeping comments minimal and focused on rationale rather than restating functionality.

apps/desktop/src/main/providers/huggingface-local/model-manager.ts (3)
78-95: Consider validating modelId format before download attempt.

The function accepts any string as modelId and passes it directly to from_pretrained. Malformed IDs (empty strings, special characters, etc.) will only fail after network calls are attempted. Basic validation could provide faster feedback.

♻️ Add modelId validation
```diff
 export async function downloadModel(
   modelId: string,
   onProgress?: ProgressCallback,
   cachePath?: string,
 ): Promise<{ success: boolean; error?: string }> {
+  // Basic validation: expect "org/model" format
+  if (!modelId || !modelId.includes('/') || modelId.trim() !== modelId) {
+    return { success: false, error: 'Invalid model ID format. Expected "org/model".' };
+  }
+
   const cacheDir = ensureCacheDir(cachePath);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/main/providers/huggingface-local/model-manager.ts` around lines 78 - 95, Add upfront validation for the modelId in downloadModel: before calling ensureCacheDir/from_pretrained and before registering in activeDownloads, verify modelId is a non-empty string and matches the expected pattern (e.g., allowed characters/namespace format your app expects) and return { success: false, error: '...' } immediately for invalid values; reference the downloadModel function, the modelId parameter, the from_pretrained call, and the activeDownloads registration to locate where to insert this check so malformed IDs are rejected synchronously with a clear error message.
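A standalone sketch of such a guard, assuming the "org/model" naming convention. The helper name and allowed-character regex are illustrative, not taken from the PR:

```typescript
// Hypothetical helper: validate a HuggingFace model ID of the form "org/model"
// before any network call. The permitted character set is an assumption.
const MODEL_ID_PATTERN = /^[\w.-]+\/[\w.-]+$/;

export function isValidModelId(modelId: string): boolean {
  // Reject empty strings, surrounding whitespace, and anything not "org/model"
  return modelId.trim() === modelId && MODEL_ID_PATTERN.test(modelId);
}
```

Calling this at the top of downloadModel would reject malformed IDs synchronously, before any cache or network work starts.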
84-87: Unused AbortController adds misleading API.

The AbortController is created and stored but, as the comment notes, Transformers.js doesn't support aborting downloads. This creates a misleading cancelDownload function that appears to work but doesn't actually cancel the download. Consider either removing the dead code or making the limitation explicit to callers.

♻️ Option A: Document limitation in cancelDownload
```diff
 /**
  * Cancel an active download.
  *
- * Note: Transformers.js does not currently support aborting in-progress downloads.
- * This function marks the download as cancelled but the underlying network request
- * will continue until completion. The downloaded files will remain in the cache.
+ * ⚠️ WARNING: Transformers.js does not support aborting in-progress downloads.
+ * This function removes the download from tracking but does NOT stop the actual
+ * network transfer. Downloaded files will remain in cache.
+ *
+ * @returns false if no active download found, true if marked as cancelled (but not stopped)
  */
-export function cancelDownload(modelId: string): void {
+export function cancelDownload(modelId: string): boolean {
   const download = activeDownloads.get(modelId);
-  if (download) {
-    download.abort.abort();
-    activeDownloads.delete(modelId);
+  if (!download) {
+    return false;
   }
+  activeDownloads.delete(modelId);
+  return true;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/main/providers/huggingface-local/model-manager.ts` around lines 84 - 87, The AbortController created and stored in activeDownloads for model downloads (created near the from_pretrained call) is useless because Transformers.js doesn't support aborting; update the code to avoid the misleading API by either removing the AbortController and any use of activeDownloads entries for cancellation, or change cancelDownload (and any callers referencing activeDownloads/modelId) to explicitly indicate cancellation is not supported (e.g., return false or throw/log a “not supported” message) and keep a comment referencing from_pretrained’s lack of abort support; ensure you update references to AbortController, activeDownloads, cancelDownload, and the from_pretrained usage so the behaviour is clear to callers.
177-215: Synchronous filesystem operations may block the main process.
listCachedModels uses synchronous fs.readdirSync and fs.statSync (via getDirSize). For large model caches or slow storage, this could block the Electron main process event loop. Consider making this async if performance issues arise with many cached models.

This is acceptable for typical usage (few models), but worth noting for future optimization if needed.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/main/providers/huggingface-local/model-manager.ts` around lines 177 - 215, listCachedModels currently performs blocking I/O (readdirSync + getDirSize which uses statSync), which can block the Electron main process; convert listCachedModels to an async function that returns Promise<HuggingFaceLocalModelInfo[]>, replace synchronous fs.readdirSync/fs.statSync calls with fs.promises equivalents (or use asynchronous helpers) and make getDirSize async (e.g., async getDirSize that uses fs.promises.stat and async recursion), iterate with await for directory entries, and update any callers to await listCachedModels; keep the same output shape and preserve error handling but use try/catch around await calls.

apps/web/src/client/lib/accomplish.ts (1)
467-484: Redundant method wiring in getAccomplish().

These explicit HuggingFace method overrides are redundant since getAccomplish() already spreads window.accomplish at the start. The methods would work identically without these explicit declarations, unlike the Bedrock/Vertex methods above which transform parameters (e.g., JSON.stringify).

Consider removing these pass-through wrappers for consistency:
♻️ Remove redundant wrappers
```diff
   listVertexProjects: () => window.accomplish!.listVertexProjects(),
-
-  listHuggingFaceModels: () => window.accomplish!.listHuggingFaceModels(),
-
-  downloadHuggingFaceModel: (modelId: string) =>
-    window.accomplish!.downloadHuggingFaceModel(modelId),
-
-  startHuggingFaceServer: (modelId: string) => window.accomplish!.startHuggingFaceServer(modelId),
-
-  stopHuggingFaceServer: () => window.accomplish!.stopHuggingFaceServer(),
-
-  onHuggingFaceDownloadProgress: (
-    callback: (progress: {
-      modelId: string;
-      status: 'downloading' | 'complete' | 'error';
-      progress: number;
-      error?: string;
-    }) => void,
-  ) => window.accomplish!.onHuggingFaceDownloadProgress(callback),
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/src/client/lib/accomplish.ts` around lines 467 - 484, The explicit pass-through wrappers listHuggingFaceModels, downloadHuggingFaceModel, startHuggingFaceServer, stopHuggingFaceServer and onHuggingFaceDownloadProgress in getAccomplish are redundant because getAccomplish already spreads window.accomplish; remove these method declarations so the spreaded window.accomplish implementations are used directly (leave transformed wrappers like Bedrock/Vertex intact). Ensure no other code depends on these explicit names being re-declared—if any consumer expects these exact properties, rely on the spread to provide them from window.accomplish instead.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/desktop/src/main/ipc/handlers.ts`:
- Around line 1420-1440: The handler for 'huggingface-local:set-config'
currently only validates selectedModelId, serverPort and enabled; add explicit
validation for the quantization and devicePreference fields on the incoming
config (HuggingFaceLocalConfig) before calling
storage.setHuggingFaceLocalConfig: ensure quantization, if non-null, is one of
the supported enum values (e.g., the allowed quantization strings used elsewhere
in the codebase) and ensure devicePreference, if non-null, matches the permitted
device identifiers (e.g., "cpu", "cuda", "mps" or whatever your app expects); if
either field is invalid, throw an Error('Invalid HuggingFace config: unexpected
field types') (or a clearer message) so malformed renderer payloads cannot be
persisted.
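A minimal sketch of the extra validation described above. The allowed value lists are assumptions (the walkthrough mentions q4/fp32 quantization; the prompt suggests cpu/cuda/mps devices) and may not match the real HuggingFaceLocalConfig enums:

```typescript
// Hypothetical guard for the two fields called out in the review.
// The permitted values below are illustrative, not taken from the PR.
const ALLOWED_QUANTIZATIONS: readonly string[] = ['q4', 'fp32'];
const ALLOWED_DEVICES: readonly string[] = ['cpu', 'cuda', 'mps'];

export function validateHuggingFaceConfigFields(config: {
  quantization?: string | null;
  devicePreference?: string | null;
}): void {
  // Null/undefined means "not set" and passes; anything else must be whitelisted
  if (config.quantization != null && !ALLOWED_QUANTIZATIONS.includes(config.quantization)) {
    throw new Error('Invalid HuggingFace config: unexpected quantization value');
  }
  if (config.devicePreference != null && !ALLOWED_DEVICES.includes(config.devicePreference)) {
    throw new Error('Invalid HuggingFace config: unexpected devicePreference value');
  }
}
```

The IPC handler would call this before persisting, so a malformed renderer payload fails fast instead of being written to storage.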
In `@apps/desktop/src/main/opencode/electron-options.ts`:
- Around line 160-170: The hfBaseUrl can be undefined when hfProvider is true
but the HF server isn't running; update the logic around hfBaseUrl/openAiBaseUrl
so openAiBaseUrl uses configuredOpenAiBaseUrl as a fallback when hfBaseUrl is
undefined (or explicitly decide HF should always override and surface an error),
by checking getHuggingFaceServerStatus() and only assigning openAiBaseUrl =
hfBaseUrl when hfBaseUrl is non-null; reference the hfProvider/hfBaseUrl
variables and the getHuggingFaceServerStatus() call, and ensure openAiBaseUrl
and configuredOpenAiBaseUrl are used correctly to avoid assigning undefined.
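The fallback described above can be sketched as a pure function. Names follow the variables mentioned in the comment; treat the exact wiring in electron-options.ts as an assumption:

```typescript
// Only adopt the HuggingFace base URL when the local server actually reports
// one; otherwise fall back to the configured OpenAI-compatible base URL,
// so openAiBaseUrl is never assigned undefined by the HF branch.
export function resolveOpenAiBaseUrl(
  hfProvider: boolean,
  hfBaseUrl: string | undefined,
  configuredOpenAiBaseUrl: string | undefined,
): string | undefined {
  if (hfProvider && hfBaseUrl) {
    return hfBaseUrl;
  }
  return configuredOpenAiBaseUrl;
}
```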
In `@apps/desktop/src/main/providers/huggingface-local/server.ts`:
- Around line 552-558: startServer() currently binds to port 0 and sets
state.port but never persists it, so on restart getHuggingFaceLocalConfig() has
serverPort: null; after the server successfully starts (in the server.listen
callback where state.server and state.port are set) persist the chosen port into
HuggingFaceLocalConfig (e.g., call the existing config save/update function used
by getHuggingFaceLocalConfig or IPC handlers to set serverPort) so the IPC
handler and clients can construct the base URL; alternatively you can choose to
switch to a fixed/configurable port or expose the port via getServerStatus(),
but the immediate fix is to write state.port into HuggingFaceLocalConfig in the
server.listen callback (the startServer / state.server / state.port locations
referenced in the diff).
In `@apps/desktop/src/preload/index.ts`:
- Around line 479-484: The preload API getHuggingFaceServerStatus returns a
richer shape ({ running, port, loadedModel, isLoading }) than the AccomplishAPI
declaration; update the AccomplishAPI interface in accomplish.ts to include the
missing fields so types align with the preload: add loadedModel: string | null
and isLoading: boolean to the server status return type (keep port optional or
as number to match existing usage) and ensure the method signature for
getHuggingFaceServerStatus in AccomplishAPI matches this full shape.
- Line 471: The preload layer no longer exposes getUserSkillsPath but
CreateSkillModal.tsx still calls accomplish.getUserSkillsPath(), causing a
runtime error; fix by either restoring the method in the preload API to forward
an IPC invoke to the main process or by updating the frontend to call the new
IPC channel directly. If restoring, add a preload export named getUserSkillsPath
in apps/desktop/src/preload/index.ts that uses
ipcRenderer.invoke('get-user-skills-path') and ensure there is a corresponding
handler in apps/desktop/src/main/ipc/handlers.ts that calls
SkillsManager.getUserSkillsPath; if updating the frontend, replace
accomplish.getUserSkillsPath() in CreateSkillModal.tsx with the appropriate
ipcRenderer.invoke call or the new accomplish wrapper name that matches the
existing handler. Ensure the IPC channel name and method symbol
(getUserSkillsPath, SkillsManager.getUserSkillsPath, CreateSkillModal.tsx) match
between preload, main handler, and frontend.
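For the server-status bullet above, a sketch of the aligned type plus a runtime guard. Field names come from the review comment; the exact interface in AccomplishAPI is an assumption:

```typescript
// The richer status shape the preload layer returns, per the review comment.
export interface HuggingFaceServerStatus {
  running: boolean;
  port: number | null;
  loadedModel: string | null;
  isLoading: boolean;
}

// Optional runtime check, useful at the IPC boundary where values are untyped.
export function isHuggingFaceServerStatus(value: unknown): value is HuggingFaceServerStatus {
  if (typeof value !== 'object' || value === null) {
    return false;
  }
  const v = value as Record<string, unknown>;
  return (
    typeof v.running === 'boolean' &&
    (typeof v.port === 'number' || v.port === null) &&
    (typeof v.loadedModel === 'string' || v.loadedModel === null) &&
    typeof v.isLoading === 'boolean'
  );
}
```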
In
`@apps/web/src/client/components/settings/providers/HuggingFaceProviderForm.tsx`:
- Line 15: Remove the absolute-path constant huggingfaceLogo and replace it with
an ES module image import at the top of the HuggingFaceProviderForm component
file (e.g. import huggingfaceLogo from '...';), then update any usages of the
huggingfaceLogo variable to use the imported symbol; delete the original const
declaration on line 15 so the component uses the module import instead of an
absolute string path.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ea8feaf2-1c28-4625-aada-c43dc5aada33
⛔ Files ignored due to path filters (2)
- apps/desktop/public/assets/ai-logos/huggingface.svg is excluded by !**/*.svg
- apps/web/public/assets/ai-logos/huggingface.svg is excluded by !**/*.svg
📒 Files selected for processing (31)
- .gitignore
- apps/desktop/__tests__/unit/main/ipc/handlers.unit.test.ts
- apps/desktop/package.json
- apps/desktop/src/main/index.ts
- apps/desktop/src/main/ipc/handlers.ts
- apps/desktop/src/main/ipc/task-callbacks.ts
- apps/desktop/src/main/opencode/electron-options.ts
- apps/desktop/src/main/opencode/index.ts
- apps/desktop/src/main/providers/huggingface-local/index.ts
- apps/desktop/src/main/providers/huggingface-local/model-manager.ts
- apps/desktop/src/main/providers/huggingface-local/server.ts
- apps/desktop/src/preload/index.ts
- apps/web/src/client/components/settings/ProviderGrid.tsx
- apps/web/src/client/components/settings/ProviderSettingsPanel.tsx
- apps/web/src/client/components/settings/providers/HuggingFaceProviderForm.tsx
- apps/web/src/client/components/settings/providers/index.ts
- apps/web/src/client/components/ui/ProviderIcon.tsx
- apps/web/src/client/lib/accomplish.ts
- apps/web/src/client/lib/provider-logos.ts
- packages/agent-core/src/common.ts
- packages/agent-core/src/common/index.ts
- packages/agent-core/src/common/types/provider.ts
- packages/agent-core/src/common/types/providerSettings.ts
- packages/agent-core/src/factories/storage.ts
- packages/agent-core/src/index.ts
- packages/agent-core/src/storage/index.ts
- packages/agent-core/src/storage/migrations/index.ts
- packages/agent-core/src/storage/migrations/v009-huggingface-local.ts
- packages/agent-core/src/storage/repositories/appSettings.ts
- packages/agent-core/src/storage/repositories/index.ts
- packages/agent-core/src/types/storage.ts
💤 Files with no reviewable changes (1)
- apps/desktop/src/main/opencode/index.ts
Actionable comments posted: 1
🧹 Nitpick comments (2)
apps/web/src/client/components/settings/providers/HuggingFaceProviderForm.tsx (1)
296-304: Model selector for switching uses `onModelChange` but lacks download/server-restart logic.

When the user selects a different model via `ModelSelector` while connected (line 301), it calls `onModelChange(modelId)`. However, this doesn't trigger a model download or a server restart with the new model. The user might expect selecting a new model to switch to it, but this only updates the parent component's state without loading the new model into the inference server.

Consider either:

- Disabling model switching while connected (simpler UX)
- Adding logic to download and restart the server with the new model

💡 Option 1: Disable model switching while connected

```diff
 {/* Model selector for switching */}
-{allModels.length > 1 && (
+{/* Model switching requires disconnect/reconnect - show as read-only */}
+{allModels.length > 1 && false && (
   <ModelSelector
     models={allModels}
     value={connectedModelId ?? null}
     onChange={onModelChange}
     error={showModelError && !connectedModelId}
   />
 )}
```

Or better, add a comment explaining the intended behavior if this is by design.
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/web/src/client/components/settings/providers/HuggingFaceProviderForm.tsx` around lines 296-304: The ModelSelector in HuggingFaceProviderForm currently calls onModelChange(modelId) but does not trigger any model download or server restart, so selecting a new model only updates state (symbols: ModelSelector, onModelChange, connectedModelId, allModels); either disable the selector while connected by guarding it with connectedModelId (e.g., render disabled or remove onChange when a connection exists) or implement the full switch flow: when onModelChange is invoked and connectedModelId exists, invoke the existing model download + server restart routine (the functions that perform modelInstall/download and restart the inference server used elsewhere in this component) and only update connectedModelId after the restart completes; if keeping current behavior by design, add a short comment in HuggingFaceProviderForm explaining that model selection does not auto-load or restart the server.

apps/desktop/src/main/providers/huggingface-local/server.ts (1)
360-374: Consider handling model load failure more explicitly when the server is already running.

When the server is already running (lines 363-374), if `loadModel` fails, the server continues running with the previously loaded model. This is acceptable behavior, but the error message returned doesn't indicate that the previous model is still loaded. Consider enhancing the error response to clarify the server state.

💡 Optional: Enhanced error message

```diff
 try {
   await loadModel(modelId);
   return { success: true, port: state.port! };
 } catch (error) {
   return {
     success: false,
-    error: error instanceof Error ? error.message : 'Failed to load model',
+    error: error instanceof Error
+      ? `Failed to load model: ${error.message}. Previous model "${state.loadedModelId}" remains active.`
+      : 'Failed to load model',
   };
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/main/providers/huggingface-local/server.ts` around lines 360-374: When startServer detects an already-running server and loadModel(modelId) throws, update the catch path to return a clearer error that the server is still running with the previously loaded model and include identifying info (e.g., state.modelId and state.port) so callers know the server/port remain active; modify the error string returned in startServer's catch to mention the previous model id and port (and still return port where possible) instead of the generic 'Failed to load model'.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/desktop/src/main/opencode/electron-options.ts`:
- Around line 71-73: Tests still reference the old export
isOpenCodeCliAvailable; update all test imports and mocks to use the renamed
export isOpenCodeBundled (replace any jest.mock or manual mock references and
import statements), ensure the mock implementations export isOpenCodeBundled
instead of isOpenCodeCliAvailable and update any test usages/calls (there are
multiple occurrences) to the new name so the exported symbol matches the
implementation in isOpenCodeBundled.
---
Nitpick comments:
In `@apps/desktop/src/main/providers/huggingface-local/server.ts`:
- Around line 360-374: When startServer detects an already-running server and
loadModel(modelId) throws, update the catch path to return a clearer error that
the server is still running with the previously loaded model and include
identifying info (e.g., state.modelId and state.port) so callers know the
server/port remain active; modify the error string returned in startServer's
catch to mention the previous model id and port (and still return port where
possible) instead of the generic 'Failed to load model'.
In
`@apps/web/src/client/components/settings/providers/HuggingFaceProviderForm.tsx`:
- Around line 296-304: The ModelSelector in HuggingFaceProviderForm currently
calls onModelChange(modelId) but does not trigger any model download or server
restart, so selecting a new model only updates state (symbols: ModelSelector,
onModelChange, connectedModelId, allModels); either disable the selector while
connected by guarding it with connectedModelId (e.g., render disabled or remove
onChange when a connection exists) or implement the full switch flow: when
onModelChange is invoked and connectedModelId exists, invoke the existing model
download + server restart routine (the functions that perform
modelInstall/download and restart the inference server used elsewhere in this
component) and only update connectedModelId after the restart completes; if
keeping current behavior by design, add a short comment in
HuggingFaceProviderForm explaining that model selection does not auto-load or
restart the server.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f853cf63-37fa-4a17-98e1-3730a32c6803
⛔ Files ignored due to path filters (1)
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (6)
- apps/desktop/src/main/ipc/handlers.ts
- apps/desktop/src/main/opencode/electron-options.ts
- apps/desktop/src/main/providers/huggingface-local/server.ts
- apps/desktop/src/preload/index.ts
- apps/web/src/client/components/settings/providers/HuggingFaceProviderForm.tsx
- apps/web/src/client/lib/accomplish.ts
```ts
export function isOpenCodeBundled(): boolean {
  return coreIsCliAvailable(getCliResolverConfig());
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Verify test files reference the old function name
echo "=== Searching for isOpenCodeCliAvailable in test files ==="
rg -n 'isOpenCodeCliAvailable' apps/desktop/__tests__/
echo ""
echo "=== Searching for isOpenCodeBundled to confirm new name ==="
rg -n 'isOpenCodeBundled' apps/desktop/
```

Repository: accomplish-ai/accomplish
Length of output: 1342
🏁 Script executed:

```shell
#!/bin/bash
# Check all exports from electron-options.ts
echo "=== All exports from electron-options.ts ==="
rg -n 'export' apps/desktop/src/main/opencode/electron-options.ts | head -20
echo ""
echo "=== Check actual imports in test files ==="
echo "--- task-manager.unit.test.ts imports ---"
rg -B5 -A5 'isOpenCodeCliAvailable' apps/desktop/__tests__/unit/main/opencode/task-manager.unit.test.ts | head -20
echo ""
echo "--- adapter.unit.test.ts imports ---"
rg -B5 -A5 'isOpenCodeCliAvailable' apps/desktop/__tests__/unit/main/opencode/adapter.unit.test.ts | head -20
echo ""
echo "--- cli-path.integration.test.ts imports ---"
rg -B2 -A2 'import.*isOpenCodeCliAvailable' apps/desktop/__tests__/integration/main/opencode/cli-path.integration.test.ts
```

Repository: accomplish-ai/accomplish
Length of output: 1984
🏁 Script executed:

```shell
#!/bin/bash
# Final confirmation: check test file line counts to ensure we have the full picture
echo "=== Confirming test files contain old function references ==="
echo "task-manager.unit.test.ts - isOpenCodeCliAvailable mocks:"
rg -c 'isOpenCodeCliAvailable' apps/desktop/__tests__/unit/main/opencode/task-manager.unit.test.ts
echo "adapter.unit.test.ts - isOpenCodeCliAvailable mocks:"
rg -c 'isOpenCodeCliAvailable' apps/desktop/__tests__/unit/main/opencode/adapter.unit.test.ts
echo "cli-path.integration.test.ts - isOpenCodeCliAvailable usage:"
rg -c 'isOpenCodeCliAvailable' apps/desktop/__tests__/integration/main/opencode/cli-path.integration.test.ts
echo ""
echo "=== Check if old function still exists anywhere ==="
rg 'function isOpenCodeCliAvailable' apps/desktop/
```

Repository: accomplish-ai/accomplish
Length of output: 360
Update test files to use the renamed function isOpenCodeBundled().

The function isOpenCodeCliAvailable() was renamed to isOpenCodeBundled(), but test files still reference the old name:

- apps/desktop/__tests__/unit/main/opencode/task-manager.unit.test.ts (mock)
- apps/desktop/__tests__/unit/main/opencode/adapter.unit.test.ts (mock)
- apps/desktop/__tests__/integration/main/opencode/cli-path.integration.test.ts (import and 4 usages)

These will fail with export-not-found errors. Update mocks and imports to use the new function name.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/main/opencode/electron-options.ts` around lines 71 - 73,
Tests still reference the old export isOpenCodeCliAvailable; update all test
imports and mocks to use the renamed export isOpenCodeBundled (replace any
jest.mock or manual mock references and import statements), ensure the mock
implementations export isOpenCodeBundled instead of isOpenCodeCliAvailable and
update any test usages/calls (there are multiple occurrences) to the new name so
the exported symbol matches the implementation in isOpenCodeBundled.
Actionable comments posted: 3
🧹 Nitpick comments (1)
packages/agent-core/src/providers/huggingface-local.ts (1)
1-9: Move PR provenance out of the source file.

Git history already captures this, and keeping contributor/PR attribution in the module will drift as the implementation changes. As per coding guidelines, `**/*.{ts,tsx,js,jsx}`: Do NOT add unnecessary comments — comments should explain why, not what the code does.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/agent-core/src/providers/huggingface-local.ts` around lines 1 - 9, Remove the PR/contributor provenance header comment and leave a concise module header that explains the purpose only; delete the lines listing contributors and PR numbers (the block mentioning HF_LOCAL_DEFAULT_URL, HF_RECOMMENDED_MODELS, testHuggingFaceLocalConnection, fetchHuggingFaceLocalModels, HuggingFaceHubModel, searchHuggingFaceHubModels, getHuggingFaceRecommendedModels) so the file contains a short comment describing the module intent instead of per-PR attribution.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/agent-core/src/providers/huggingface-local.ts`:
- Line 93: The single-line guard "if (!response.ok) throw new Error(`HuggingFace
Hub API error: ${response.status}`);" and the other single-line conditional
later in this module must be converted to use braces to satisfy the repo's curly
rule; locate the occurrences that reference the response variable (the "if
(!response.ok)" check and the second guard around line 135) and replace each
single-line if with a braced block (e.g., if (condition) { throw ... } ) so all
if/else/for/while statements use braces.
- Around line 66-73: The Hub response is being cast directly to
HuggingFaceHubModel which can leave required fields like modelId undefined;
instead, locate where the raw Hub payload is converted (the code that currently
casts the raw response to HuggingFaceHubModel) and map each field into a new
HuggingFaceHubModel object explicitly, setting modelId to raw.modelId || raw.id
and providing safe defaults for optional fields (description, tags, downloads,
quantizations) before returning or pushing into results so callers always
receive a valid contract.
- Around line 125-141: The function fetchHuggingFaceLocalModels currently claims
to "Fetch available models" but the server /v1/models only returns the currently
loaded model (or empty), and the function is unused; change the implementation
and docs to match reality or remove it: either (A) rename and document
fetchHuggingFaceLocalModels -> fetchLoadedHuggingFaceLocalModel (update JSDoc
accordingly), keep the current fetch to `${serverUrl}/v1/models` and return at
most one model (preserve HF_LOCAL_API_TIMEOUT_MS usage), or (B) implement true
model discovery by calling a server endpoint that enumerates cached models (add
new endpoint on the server and call it here), or (C) delete the unused export
entirely; pick one and make corresponding code and JSDoc changes and remove any
dead imports/usages.
---
Nitpick comments:
In `@packages/agent-core/src/providers/huggingface-local.ts`:
- Around line 1-9: Remove the PR/contributor provenance header comment and leave
a concise module header that explains the purpose only; delete the lines listing
contributors and PR numbers (the block mentioning HF_LOCAL_DEFAULT_URL,
HF_RECOMMENDED_MODELS, testHuggingFaceLocalConnection,
fetchHuggingFaceLocalModels, HuggingFaceHubModel, searchHuggingFaceHubModels,
getHuggingFaceRecommendedModels) so the file contains a short comment describing
the module intent instead of per-PR attribution.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 026b3897-8869-48de-928f-3e77706fd3a0
📒 Files selected for processing (4)
- apps/web/locales/en/settings.json
- packages/agent-core/src/common/constants/model-display.ts
- packages/agent-core/src/providers/huggingface-local.ts
- packages/agent-core/src/providers/index.ts
```ts
export interface HuggingFaceHubModel {
  id: string;
  modelId: string;
  description?: string;
  tags?: string[];
  downloads?: number;
  quantizations?: string[];
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
python - <<'PY'
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "search": "phi-3",
    "filter": "onnx",
    "limit": "1",
    "sort": "downloads",
})
url = f"https://huggingface.co/api/models?{params}"
with urllib.request.urlopen(url, timeout=15) as resp:
    item = json.load(resp)[0]
print("keys:", sorted(item.keys()))
print("id:", item.get("id"))
print("modelId:", item.get("modelId"))
PY
```

Repository: accomplish-ai/accomplish
Length of output: 2650
🏁 Script executed:

```shell
cat -n packages/agent-core/src/providers/huggingface-local.ts | head -100
```

Repository: accomplish-ai/accomplish
Length of output: 4029
🏁 Script executed:

```shell
cd packages/agent-core && git ls-files src/providers | head -20
```

Repository: accomplish-ai/accomplish
Length of output: 428
Map the Hub payload into your own contract before returning it.
HuggingFaceHubModel.modelId is required, but line 94 casts the raw Hub response directly without mapping. If the upstream API payload only provides id without modelId, this will hand callers modelId: undefined while still type-checking as valid.
Suggested fix
```diff
 export async function searchHuggingFaceHubModels(
   query: string,
   limit = 20,
 ): Promise<HuggingFaceHubModel[]> {
   const controller = new AbortController();
   const timeout = setTimeout(() => controller.abort(), HF_HUB_API_TIMEOUT_MS);
   try {
     const url = new URL('https://huggingface.co/api/models');
     url.searchParams.set('search', query);
     url.searchParams.set('filter', 'onnx');
     url.searchParams.set('limit', String(limit));
     url.searchParams.set('sort', 'downloads');
     const response = await fetch(url.toString(), { signal: controller.signal });
-    if (!response.ok) throw new Error(`HuggingFace Hub API error: ${response.status}`);
-    const data = (await response.json()) as HuggingFaceHubModel[];
-    return data;
+    if (!response.ok) {
+      throw new Error(`HuggingFace Hub API error: ${response.status}`);
+    }
+
+    const data = (await response.json()) as Array<{
+      id: string;
+      tags?: string[];
+      downloads?: number;
+      cardData?: { summary?: string };
+    }>;
+
+    return data.map((model) => ({
+      id: model.id,
+      modelId: model.id,
+      description: model.cardData?.summary,
+      tags: Array.isArray(model.tags) ? model.tags : undefined,
+      downloads: typeof model.downloads === 'number' ? model.downloads : undefined,
+    }));
   } finally {
     clearTimeout(timeout);
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/agent-core/src/providers/huggingface-local.ts` around lines 66 - 73,
The Hub response is being cast directly to HuggingFaceHubModel which can leave
required fields like modelId undefined; instead, locate where the raw Hub
payload is converted (the code that currently casts the raw response to
HuggingFaceHubModel) and map each field into a new HuggingFaceHubModel object
explicitly, setting modelId to raw.modelId || raw.id and providing safe defaults
for optional fields (description, tags, downloads, quantizations) before
returning or pushing into results so callers always receive a valid contract.
```ts
url.searchParams.set('sort', 'downloads');

const response = await fetch(url.toString(), { signal: controller.signal });
if (!response.ok) throw new Error(`HuggingFace Hub API error: ${response.status}`);
```
🧩 Analysis chain
🏁 Script executed:

```shell
cat -n packages/agent-core/src/providers/huggingface-local.ts | sed -n '90,96p;132,138p'
```

Repository: accomplish-ai/accomplish
Length of output: 872
Wrap these single-line if statements in braces.
Lines 93 and 135 both have single-line guards without braces, violating the repo's curly ESLint rule. Use braces for all if/else/for/while statements.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/agent-core/src/providers/huggingface-local.ts` at line 93, The
single-line guard "if (!response.ok) throw new Error(`HuggingFace Hub API error:
${response.status}`);" and the other single-line conditional later in this
module must be converted to use braces to satisfy the repo's curly rule; locate
the occurrences that reference the response variable (the "if (!response.ok)"
check and the second guard around line 135) and replace each single-line if with
a braced block (e.g., if (condition) { throw ... } ) so all if/else/for/while
statements use braces.
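For reference, the braced form the `curly` rule asks for can be sketched like this (a generic illustration, not the repo's exact code — `ensureOk` is a hypothetical helper, not part of the PR):

```typescript
// Hypothetical helper showing the braced guard style required by the
// `curly` ESLint rule: every `if` body is wrapped in braces, even when
// it is a single `throw` statement.
function ensureOk(status: number): void {
  if (status < 200 || status >= 300) {
    throw new Error(`HuggingFace Hub API error: ${status}`);
  }
}

ensureOk(200); // in-range status: passes silently

try {
  ensureOk(503);
} catch (err) {
  console.log((err as Error).message); // → HuggingFace Hub API error: 503
}
```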
Adds a HuggingFace Local inference provider enabling fully offline LLM inference via @huggingface/transformers, without relying on external APIs. The provider runs an OpenAI-compatible HTTP server inside the Electron main process and integrates with the existing provider settings. Key changes: - New huggingface-local provider with model manager and inference server - HuggingFaceProviderForm UI component in settings - v009 migration adding huggingface_local_config column - IPC handlers for model download/management - Provider icon and logo assets Credit: GunaPalanivel (#650), nancysangani (#488), SaaiAravindhRaja (#604) Closes #183
- From #604 (SaaiAravindhRaja): i18n locale strings for HuggingFace Local settings panel (provider name, labels, server URL hint, model selection), 'huggingface-local/' prefix in PROVIDER_PREFIXES, testHuggingFaceLocalConnection(), fetchHuggingFaceLocalModels(), and curated HF_RECOMMENDED_MODELS list
- From #488 (nancysangani): HuggingFaceHubModel type, searchHuggingFaceHubModels() for discovering ONNX-compatible models via HuggingFace Hub API, and additional Xenova mirror models in HF_RECOMMENDED_MODELS
…formers

- Override tar to >=7.5.8 to fix CVE-2026-26960 (directory traversal, CVSS 8.4)
- Add sharp to ignoredOptionalDependencies to drop LGPL-3.0 @img/sharp-libvips-* packages (sharp is not needed — only text tokenization/generation is used)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
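For context, overrides like the ones this commit describes are typically declared under the `pnpm` field of the root package.json. A sketch (field names are pnpm's; the exact values are taken from the commit message, and placement in this repo is assumed):

```json
{
  "pnpm": {
    "overrides": {
      "tar": ">=7.5.8"
    },
    "ignoredOptionalDependencies": [
      "sharp"
    ]
  }
}
```

`overrides` forces every transitive `tar` to a patched version, while `ignoredOptionalDependencies` prevents pnpm from installing `sharp` (and with it the LGPL `@img/sharp-libvips-*` binaries) at all.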
Branch force-pushed from d1f2ead to 0be3b7e.
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
apps/desktop/src/main/opencode/electron-options.ts (1)
62-68: ⚠️ Potential issue | 🟠 Major

Fail fast when the bundled CLI cannot be resolved.

When `resolveCliPath()` misses, lines 67-68 still return bare `opencode` while lines 71-72 report the CLI as unavailable. That split makes availability checks disagree with the actual command path, and on Windows it just defers the failure until the PTY layer rejects the non-.exe command.

Suggested fix

```diff
 export function getOpenCodeCliPath(): { command: string; args: string[] } {
   const resolved = resolveCliPath(getCliResolverConfig());
   if (resolved) {
     return { command: resolved.cliPath, args: [] };
   }
-  console.log('[CLI Path] Falling back to opencode command on PATH');
-  return { command: 'opencode', args: [] };
+  throw new Error('[CLI Path] OpenCode CLI could not be resolved');
 }
```

Based on learnings: Windows now requires CLI commands to resolve to a real .exe path, and desktop CLI resolution fails fast when no concrete CLI path is found instead of falling back to bare `opencode`.

Also applies to: 71-73
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/main/opencode/electron-options.ts` around lines 62-68: getOpenCodeCliPath currently falls back to the bare "opencode" command when resolveCliPath(getCliResolverConfig()) returns null, which causes mismatched availability checks and Windows failures; instead, change getOpenCodeCliPath to fail fast by throwing a descriptive Error (e.g., "OpenCode CLI not found; expected bundled .exe") when resolved is falsy, and remove the fallback to 'opencode'; apply the same change to the other fallback branch in this file that currently returns 'opencode' so both availability checks and command resolution are consistent with Windows .exe requirements.

apps/web/src/client/components/settings/providers/index.ts (1)
1-1: ⚠️ Potential issue | 🟡 Minor

Stale path comment.

The comment references `apps/desktop/src/renderer/...` but this file is located at `apps/web/src/client/...`.

🧹 Proposed fix

```diff
-// apps/desktop/src/renderer/components/settings/providers/index.ts
+// apps/web/src/client/components/settings/providers/index.ts
```
Verify each finding against the current code and only fix it if needed. In `@apps/web/src/client/components/settings/providers/index.ts` at line 1: Update the stale top-of-file path comment that currently reads "// apps/desktop/src/renderer/components/settings/providers/index.ts": either remove the incorrect comment or replace it with the correct path comment for this file (or a neutral description) so the header no longer points to the desktop renderer location; locate the comment string in apps/web/src/client/components/settings/providers/index.ts and edit the single-line comment accordingly.

apps/desktop/src/main/index.ts (1)
352-361: ⚠️ Potential issue | 🟡 Minor

Async `stopHuggingFaceServer` may not complete before process exit.

The `before-quit` event handler calls `stopHuggingFaceServer()` without awaiting it. Since `before-quit` is synchronous, Electron may terminate the process before the server fully shuts down, potentially leaving the port bound or causing resource leaks.

Consider converting the handler to properly await async cleanup, or use `will-quit` with `event.preventDefault()` to delay exit until cleanup completes.

🔧 Option: Block quit until cleanup completes

```diff
-app.on('before-quit', () => {
+app.on('before-quit', async (event) => {
+  event.preventDefault();
   disposeTaskManager(); // Also cleans up proxies internally
   cleanupVertexServiceAccountKey();
   oauthBrowserFlow.dispose();
   closeStorage();
   shutdownLogCollector();
-  // Stop the HF inference server so its port is released before process exit
-  stopHuggingFaceServer().catch((err: unknown) => {
-    console.warn('[Main] Failed to stop HuggingFace server on quit:', err);
-  });
+  try {
+    await stopHuggingFaceServer();
+  } catch (err: unknown) {
+    console.warn('[Main] Failed to stop HuggingFace server on quit:', err);
+  }
+  app.exit();
 });
```

Note: This pattern requires care to avoid re-triggering `before-quit`. You may need a guard flag or use `will-quit` instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/main/index.ts` around lines 352 - 361, The before-quit handler is calling stopHuggingFaceServer() without awaiting it, so the process may exit before shutdown completes; change to an async shutdown flow by moving this cleanup into a will-quit handler (or make before-quit async) that calls event.preventDefault(), awaits stopHuggingFaceServer() and other async cleanup (disposeTaskManager/cleanupVertexServiceAccountKey/oauthBrowserFlow.dispose/closeStorage/shutdownLogCollector), then calls app.quit() (or resumes exit) after completion; add a simple guard flag (e.g., isQuitting) to avoid re-entrancy so you don't re-trigger the quit loop.
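The guard-flag idea mentioned in that note can be sketched independently of Electron. The names here (`shutdownOnce`, `cleanup`, `quitting`) are illustrative stand-ins for the app's quit handler and `stopHuggingFaceServer`, not code from the PR:

```typescript
// Re-entrancy guard: the first quit request runs async cleanup exactly once;
// any quit event fired while cleanup is pending is ignored.
let quitting = false;
let cleanupRuns = 0;

async function cleanup(): Promise<void> {
  cleanupRuns += 1; // stands in for stopHuggingFaceServer(), closeStorage(), etc.
}

async function shutdownOnce(): Promise<boolean> {
  if (quitting) {
    return false; // re-entrant quit event: cleanup already in progress
  }
  quitting = true; // set synchronously, before the first await
  await cleanup();
  return true;
}

async function main(): Promise<void> {
  // Simulate before-quit firing twice in quick succession.
  const results = await Promise.all([shutdownOnce(), shutdownOnce()]);
  console.log(JSON.stringify({ results, cleanupRuns }));
  // → {"results":[true,false],"cleanupRuns":1}
}

main();
```

The key detail is that the flag is set before the first `await`, so a second event arriving during the async cleanup sees `quitting === true` and bails out instead of starting cleanup again.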
♻️ Duplicate comments (1)
packages/agent-core/src/providers/huggingface-local.ts (1)
92-95: ⚠️ Potential issue | 🟠 Major

Map the Hub response into `HuggingFaceHubModel` before returning it.

Line 94 casts the raw JSON straight to `HuggingFaceHubModel[]`, but `modelId` is required by that interface. If the Hub payload only carries `id`, callers get an invalid contract at runtime while TypeScript treats it as safe.

🛠️ Suggested mapping fix

```diff
-    if (!response.ok) throw new Error(`HuggingFace Hub API error: ${response.status}`);
-    const data = (await response.json()) as HuggingFaceHubModel[];
-    return data;
+    if (!response.ok) {
+      throw new Error(`HuggingFace Hub API error: ${response.status}`);
+    }
+
+    const data = (await response.json()) as Array<{
+      id?: string;
+      modelId?: string;
+      tags?: string[];
+      downloads?: number;
+      cardData?: { summary?: string };
+    }>;
+
+    return data
+      .filter((model) => typeof (model.modelId ?? model.id) === 'string')
+      .map((model) => ({
+        id: model.modelId ?? model.id!,
+        modelId: model.modelId ?? model.id!,
+        description: model.cardData?.summary,
+        tags: Array.isArray(model.tags) ? model.tags : undefined,
+        downloads: typeof model.downloads === 'number' ? model.downloads : undefined,
+      }));
```

Run this to confirm the raw Hub payload shape:

```shell
#!/bin/bash
set -euo pipefail
python - <<'PY'
import json, urllib.parse, urllib.request
params = urllib.parse.urlencode({
    "search": "phi-3",
    "filter": "onnx",
    "limit": "1",
    "sort": "downloads",
})
url = f"https://huggingface.co/api/models?{params}"
with urllib.request.urlopen(url, timeout=15) as resp:
    item = json.load(resp)[0]
print("keys:", sorted(item.keys()))
print("id:", item.get("id"))
print("modelId:", item.get("modelId"))
PY
```

As per coding guidelines, `**/*.{ts,tsx,js,jsx}`: Always use braces for `if`/`else`/`for`/`while` statements — no single-line braceless statements (enforced by the `curly` ESLint rule).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/agent-core/src/providers/huggingface-local.ts` around lines 92 - 95, The code currently casts the raw Hub JSON directly to HuggingFaceHubModel[] (via response.json() -> data) which can miss the required modelId property; replace the direct cast with an explicit map over the parsed JSON: parse json into a generic any[] (e.g., const raw = await response.json()), then return raw.map(item => ({ ...item, modelId: item.modelId ?? item.id })) ensuring every returned object conforms to HuggingFaceHubModel, and keep braces on any if/else per the project's curly rule when adding this mapping logic (reference the variables response, response.json(), data and the HuggingFaceHubModel type/modelId field).
🧹 Nitpick comments (5)
apps/desktop/src/main/providers/huggingface-local/model-manager.ts (3)
11-11: Import from package root instead of deep path.

Per project conventions, types should be imported from the package root `@accomplish_ai/agent-core` rather than deep paths like `@accomplish_ai/agent-core/common`.

♻️ Proposed fix

```diff
-import type { HuggingFaceLocalModelInfo } from '@accomplish_ai/agent-core/common';
+import type { HuggingFaceLocalModelInfo } from '@accomplish_ai/agent-core';
```

Based on learnings: "Enforce importing Sandbox types (e.g., SandboxConfig) from the package root 'accomplish_ai/agent-core' instead of deep paths like 'accomplish_ai/agent-core/common'."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/main/providers/huggingface-local/model-manager.ts` at line 11, The import is using a deep path; change the import of HuggingFaceLocalModelInfo to come from the package root instead of '@accomplish_ai/agent-core/common' — update the import statement that references HuggingFaceLocalModelInfo in model-manager.ts to import from '@accomplish_ai/agent-core' so types are loaded from the package root per project conventions.
267-284: Consider guarding against symlink traversal.
`getDirSize` recursively traverses directories. If an attacker could place a symlink pointing outside the cache directory, the function would follow it. While the cache directory is app-controlled, consider using `lstatSync` to detect symlinks.

🛡️ Optional hardening

```diff
 function getDirSize(dirPath: string): number {
   let total = 0;
   try {
     const entries = fs.readdirSync(dirPath, { withFileTypes: true });
     for (const entry of entries) {
       const fullPath = path.join(dirPath, entry.name);
-      if (entry.isFile()) {
+      if (entry.isSymbolicLink()) {
+        // Skip symlinks to avoid following links outside cache
+        continue;
+      } else if (entry.isFile()) {
         total += fs.statSync(fullPath).size;
       } else if (entry.isDirectory()) {
         total += getDirSize(fullPath);
       }
     }
   } catch {
     // Ignore errors (permission issues etc.)
   }
   return total;
 }
```
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/main/providers/huggingface-local/model-manager.ts` around lines 267 - 284, The getDirSize function currently follows symlinks during recursion; modify it to detect and skip symbolic links by using fs.lstatSync (or Dirent.isSymbolicLink()) on each fullPath and skip any entries where isSymbolicLink() is true so the function does not traverse or count files outside the intended cache directory; keep the existing error handling and recursion in getDirSize but add this symlink guard before treating entries as files or directories.
84-87: AbortController is ineffective — consider removing or clarifying the UX impact.

The code stores an `AbortController` and provides a `cancelDownload` function, but the comments acknowledge that Transformers.js doesn't support aborting downloads. This creates a misleading API where calling `cancelDownload` has no real effect — the download continues and files remain in cache.

Consider either:

- Removing the cancellation API entirely until upstream support exists
- Making the limitation more visible in the function signature/return type (e.g., returning `{ cancelled: false, reason: 'not_supported' }`)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/main/providers/huggingface-local/model-manager.ts` around lines 84 - 87, The AbortController stored in activeDownloads and the cancelDownload API are misleading because transformers.js's from_pretrained doesn't support aborting; either remove the cancellation machinery (drop AbortController usage and activeDownloads entries) or make cancelDownload explicitly indicate no-op by returning a structured result like { cancelled: false, reason: 'not_supported' } and update callers accordingly; locate references to AbortController, activeDownloads, cancelDownload, and from_pretrained in model-manager.ts (including the later block around lines ~159-172) and implement one of the two approaches consistently across both places.apps/desktop/src/main/ipc/task-callbacks.ts (1)
181-191: Consider removing dead code left over from browser recovery removal.

Setting `browserRecoveryInFlight = true` on line 181 and building the `reason` string on lines 182-184 are now dead code since recovery is no longer performed. These assignments are immediately cleared on lines 190-191 without being used.

♻️ Proposed cleanup

```diff
   if (browserFailureCount < BROWSER_FAILURE_THRESHOLD || browserRecoveryInFlight) {
     return;
   }
-  browserRecoveryInFlight = true;
-  const reason = `Detected repeated browser connection failures (${browserFailureCount} in ${Math.ceil(
-    (now - browserFailureWindowStart) / 1000,
-  )}s). Reconnecting browser...`;
-
-  console.warn(`[TaskCallbacks] ${reason}`);
   console.warn(
-    `[TaskCallbacks] recoverDevBrowserServer is currently unavailable. Please restart the browser.`,
+    `[TaskCallbacks] Detected repeated browser connection failures (${browserFailureCount} in ${Math.ceil(
+      (now - browserFailureWindowStart) / 1000,
+    )}s). recoverDevBrowserServer is currently unavailable. Please restart the browser.`,
   );
-  browserRecoveryInFlight = false;
   resetBrowserFailureState();
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/main/ipc/task-callbacks.ts` around lines 181 - 191, Remove the dead browser-recovery code: delete the unused assignments and calculations around browserRecoveryInFlight and reason (including the interpolation that uses browserFailureCount and browserFailureWindowStart) since recovery is no longer performed and those values are immediately reset; keep the remaining logs/messages that are still relevant and ensure you do not remove the call to resetBrowserFailureState() or references to recoverDevBrowserServer if they are used elsewhere (locate these symbols: browserRecoveryInFlight, reason, browserFailureCount, browserFailureWindowStart, resetBrowserFailureState, recoverDevBrowserServer).apps/desktop/__tests__/unit/main/ipc/handlers.unit.test.ts (1)
535-542: Cover the config IPC channels here as well.

Preload now exposes `huggingface-local:get-config` and `huggingface-local:set-config`, but Lines 535-542 only lock down the lifecycle/model-management channels. A missing config handler would still leave this test green.

🧪 Suggested assertion additions

```diff
 expect(handlers.has('huggingface-local:server-status')).toBe(true);
 expect(handlers.has('huggingface-local:test-connection')).toBe(true);
+expect(handlers.has('huggingface-local:get-config')).toBe(true);
+expect(handlers.has('huggingface-local:set-config')).toBe(true);
 expect(handlers.has('huggingface-local:download-model')).toBe(true);
 expect(handlers.has('huggingface-local:list-models')).toBe(true);
 expect(handlers.has('huggingface-local:delete-model')).toBe(true);
```

Based on learnings, IPC handlers in `apps/desktop/src/main/ipc/handlers.ts` must match `window.accomplish` API in preload.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/__tests__/unit/main/ipc/handlers.unit.test.ts` around lines 535 - 542, Add assertions to the test that verify the config IPC channels are registered: ensure handlers.has('huggingface-local:get-config') and handlers.has('huggingface-local:set-config') are asserted true alongside the existing lifecycle/model-management checks; update the test containing the existing expectations (the block that checks handlers.has for 'huggingface-local:start-server', 'stop-server', 'server-status', 'test-connection', 'download-model', 'list-models', 'delete-model') to include these two config channel names so the preload API (window.accomplish) mapping is fully covered.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.gitignore:
- Line 77: Remove the duplicate .worktrees ignore entry from .gitignore by
deleting either the `.worktrees` or `.worktrees/` line so only one remains;
locate the redundant pattern (the duplicate `.worktrees`/`.worktrees/` entries)
and remove the second occurrence to keep the file clean and avoid duplicate
patterns.
In `@apps/desktop/src/main/ipc/handlers.ts`:
- Around line 1447-1449: Rename the IPC handler string from
'get-user-skills-path' to 'skills:get-user-skills-path' to match the other
skills handlers and tests; update the corresponding preload invocation that
registers or invokes this handler to use 'skills:get-user-skills-path' as well,
leaving the implementation (skillsManager.getUserSkillsPath) unchanged so the
handler still returns the same value.
In `@apps/desktop/src/main/opencode/electron-options.ts`:
- Around line 160-169: The code currently falls back to the configured
OpenAI-compatible URL when the selected provider is huggingface-local but the
local server is not running; instead, in the block that evaluates hfProvider
(using activeModel, selectedModel, getHuggingFaceServerStatus, and hfBaseUrl),
do not assign or reuse configuredOpenAiBaseUrl for hfBaseUrl when
hfStatus.running is false or hfStatus.port is missing; instead surface a clear
failure path (throw or return an error/notification indicating "HuggingFace
Local server not running") so callers can handle it safely and avoid leaking
prompts to a remote OpenAI-compatible endpoint; update callers that expect
hfBaseUrl to handle this explicit error/undefined state rather than silently
using configuredOpenAiBaseUrl.
In `@apps/desktop/src/main/providers/huggingface-local/server.ts`:
- Around line 360-385: startServer can run concurrently and both callers may
pass the state.server null check, causing two http.Server instances where the
last one overwrites state.server and the first is orphaned; fix by serializing
starts: add a per-process "starting" lock/promise (e.g., state.starting) that
any startServer call awaits so only one start proceeds at a time, set
state.starting before calling loadModel/create the server, assign state.server
(and state.port) as soon as you create the http.Server (or resolve the starting
promise) so other callers wait on that single instance, and ensure you clear
state.starting and perform cleanup on errors; apply the same guard around the
listen callback logic that currently assigns state.server/state.port so
overlapping starts cannot orphan a server.
- Around line 82-150: The race occurs because stopServer() clears
state.isStopping while loadModelPromise (the async loader in
loadModel()/loadModelPromise) is still running, allowing the loader to pass its
stop check and assign state.tokenizer/state.model/state.loadedModelId after
shutdown; fix by changing stopServer() to wait for the in-flight
loadModelPromise to finish before clearing state.isStopping (i.e., set
state.isStopping = true, trigger disposal, then if loadModelPromise exists await
loadModelPromise.catch(() => {}) or otherwise wait until loadModelPromise ===
null before setting state.isStopping = false and completing shutdown), or
alternatively hold a local stoppedAtStart token in stopServer() and avoid
clearing the global flag until loadModelPromise resolves; reference:
stopServer(), loadModelPromise, state.isStopping, state.model, state.tokenizer,
state.loadedModelId.
- Around line 102-115: The loader currently hardcodes dtype values when calling
AutoModelForCausalLM.from_pretrained (using modelId and cacheDir) and ignores
the model's metadata.quantization; update the load sequence to first read the
model metadata (the quantization field) and pass that value as the dtype to
AutoModelForCausalLM.from_pretrained, falling back only if that specific dtype
fails; preserve the existing local_files_only and cache_dir options, and ensure
error handling still attempts a safe fallback (e.g., metadata dtype -> other
metadata-compatible dtypes -> 'fp32') while logging the attempted dtype and
errors for debugging.
---
Outside diff comments:
In `@apps/desktop/src/main/index.ts`:
- Around line 352-361: The before-quit handler is calling
stopHuggingFaceServer() without awaiting it, so the process may exit before
shutdown completes; change to an async shutdown flow by moving this cleanup into
a will-quit handler (or make before-quit async) that calls
event.preventDefault(), awaits stopHuggingFaceServer() and other async cleanup
(disposeTaskManager/cleanupVertexServiceAccountKey/oauthBrowserFlow.dispose/closeStorage/shutdownLogCollector),
then calls app.quit() (or resumes exit) after completion; add a simple guard
flag (e.g., isQuitting) to avoid re-entrancy so you don't re-trigger the quit
loop.
In `@apps/desktop/src/main/opencode/electron-options.ts`:
- Around line 62-68: getOpenCodeCliPath currently falls back to the bare
"opencode" command when resolveCliPath(getCliResolverConfig()) returns null,
which causes mismatched availability checks and Windows failures; instead,
change getOpenCodeCliPath to fail fast by throwing a descriptive Error (e.g.,
"OpenCode CLI not found; expected bundled .exe") when resolved is falsy, and
remove the fallback to 'opencode'; apply the same change to the other fallback
branch in this file that currently returns 'opencode' so both availability
checks and command resolution are consistent with Windows .exe requirements.
In `@apps/web/src/client/components/settings/providers/index.ts`:
- Line 1: Update the stale top-of-file path comment that currently reads "//
apps/desktop/src/renderer/components/settings/providers/index.ts": either remove
the incorrect comment or replace it with the correct path comment for this file
(or a neutral description) so the header no longer points to the desktop
renderer location; locate the comment string in
apps/web/src/client/components/settings/providers/index.ts and edit the
single-line comment accordingly.
---
Duplicate comments:
In `@packages/agent-core/src/providers/huggingface-local.ts`:
- Around line 92-95: The code currently casts the raw Hub JSON directly to
HuggingFaceHubModel[] (via response.json() -> data) which can miss the required
modelId property; replace the direct cast with an explicit map over the parsed
JSON: parse json into a generic any[] (e.g., const raw = await response.json()),
then return raw.map(item => ({ ...item, modelId: item.modelId ?? item.id }))
ensuring every returned object conforms to HuggingFaceHubModel, and keep braces
on any if/else per the project's curly rule when adding this mapping logic
(reference the variables response, response.json(), data and the
HuggingFaceHubModel type/modelId field).
---
Nitpick comments:
In `@apps/desktop/__tests__/unit/main/ipc/handlers.unit.test.ts`:
- Around line 535-542: Add assertions to the test that verify the config IPC
channels are registered: ensure handlers.has('huggingface-local:get-config') and
handlers.has('huggingface-local:set-config') are asserted true alongside the
existing lifecycle/model-management checks; update the test containing the
existing expectations (the block that checks handlers.has for
'huggingface-local:start-server', 'stop-server', 'server-status',
'test-connection', 'download-model', 'list-models', 'delete-model') to include
these two config channel names so the preload API (window.accomplish) mapping is
fully covered.
In `@apps/desktop/src/main/ipc/task-callbacks.ts`:
- Around line 181-191: Remove the dead browser-recovery code: delete the unused
assignments and calculations around browserRecoveryInFlight and reason
(including the interpolation that uses browserFailureCount and
browserFailureWindowStart) since recovery is no longer performed and those
values are immediately reset; keep the remaining logs/messages that are still
relevant and ensure you do not remove the call to resetBrowserFailureState() or
references to recoverDevBrowserServer if they are used elsewhere (locate these
symbols: browserRecoveryInFlight, reason, browserFailureCount,
browserFailureWindowStart, resetBrowserFailureState, recoverDevBrowserServer).
In `@apps/desktop/src/main/providers/huggingface-local/model-manager.ts`:
- Line 11: The import is using a deep path; change the import of
HuggingFaceLocalModelInfo to come from the package root instead of
'@accomplish_ai/agent-core/common' — update the import statement that references
HuggingFaceLocalModelInfo in model-manager.ts to import from
'@accomplish_ai/agent-core' so types are loaded from the package root per
project conventions.
- Around line 267-284: The getDirSize function currently follows symlinks during
recursion; modify it to detect and skip symbolic links by using fs.lstatSync (or
Dirent.isSymbolicLink()) on each fullPath and skip any entries where
isSymbolicLink() is true so the function does not traverse or count files
outside the intended cache directory; keep the existing error handling and
recursion in getDirSize but add this symlink guard before treating entries as
files or directories.
- Around line 84-87: The AbortController stored in activeDownloads and the
cancelDownload API are misleading because transformers.js's from_pretrained
doesn't support aborting; either remove the cancellation machinery (drop
AbortController usage and activeDownloads entries) or make cancelDownload
explicitly indicate no-op by returning a structured result like { cancelled:
false, reason: 'not_supported' } and update callers accordingly; locate
references to AbortController, activeDownloads, cancelDownload, and
from_pretrained in model-manager.ts (including the later block around lines
~159-172) and implement one of the two approaches consistently across both
places.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c871ac75-a235-4a6e-a1c1-3cb916a40bd0
⛔ Files ignored due to path filters (3)
- `apps/desktop/public/assets/ai-logos/huggingface.svg` is excluded by `!**/*.svg`
- `apps/web/public/assets/ai-logos/huggingface.svg` is excluded by `!**/*.svg`
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (36)
- `.gitignore`
- `apps/desktop/__tests__/unit/main/ipc/handlers.unit.test.ts`
- `apps/desktop/package.json`
- `apps/desktop/src/main/index.ts`
- `apps/desktop/src/main/ipc/handlers.ts`
- `apps/desktop/src/main/ipc/task-callbacks.ts`
- `apps/desktop/src/main/opencode/electron-options.ts`
- `apps/desktop/src/main/opencode/index.ts`
- `apps/desktop/src/main/providers/huggingface-local/index.ts`
- `apps/desktop/src/main/providers/huggingface-local/model-manager.ts`
- `apps/desktop/src/main/providers/huggingface-local/server.ts`
- `apps/desktop/src/preload/index.ts`
- `apps/web/locales/en/settings.json`
- `apps/web/src/client/components/settings/ProviderGrid.tsx`
- `apps/web/src/client/components/settings/ProviderSettingsPanel.tsx`
- `apps/web/src/client/components/settings/providers/HuggingFaceProviderForm.tsx`
- `apps/web/src/client/components/settings/providers/index.ts`
- `apps/web/src/client/components/ui/ProviderIcon.tsx`
- `apps/web/src/client/lib/accomplish.ts`
- `apps/web/src/client/lib/provider-logos.ts`
- `package.json`
- `packages/agent-core/src/common.ts`
- `packages/agent-core/src/common/constants/model-display.ts`
- `packages/agent-core/src/common/index.ts`
- `packages/agent-core/src/common/types/provider.ts`
- `packages/agent-core/src/common/types/providerSettings.ts`
- `packages/agent-core/src/factories/storage.ts`
- `packages/agent-core/src/index.ts`
- `packages/agent-core/src/providers/huggingface-local.ts`
- `packages/agent-core/src/providers/index.ts`
- `packages/agent-core/src/storage/index.ts`
- `packages/agent-core/src/storage/migrations/index.ts`
- `packages/agent-core/src/storage/migrations/v010-huggingface-local.ts`
- `packages/agent-core/src/storage/repositories/appSettings.ts`
- `packages/agent-core/src/storage/repositories/index.ts`
- `packages/agent-core/src/types/storage.ts`
💤 Files with no reviewable changes (1)
- apps/desktop/src/main/opencode/index.ts
🚧 Files skipped from review as they are similar to previous changes (12)
- apps/desktop/package.json
- packages/agent-core/src/types/storage.ts
- apps/web/src/client/components/settings/ProviderGrid.tsx
- apps/web/src/client/lib/provider-logos.ts
- packages/agent-core/src/common/index.ts
- apps/web/locales/en/settings.json
- packages/agent-core/src/storage/repositories/appSettings.ts
- apps/web/src/client/lib/accomplish.ts
- apps/web/src/client/components/ui/ProviderIcon.tsx
- packages/agent-core/src/storage/repositories/index.ts
- packages/agent-core/src/factories/storage.ts
- packages/agent-core/src/common/types/provider.ts
```
# Test local agent Chrome profile
.accomplish-test-local-agent-chrome/
.worktrees
```
Remove duplicate .worktrees entry.
Line 77 duplicates the .worktrees/ pattern already present on line 69. In .gitignore, both .worktrees and .worktrees/ are functionally equivalent for ignoring directories.
🧹 Proposed fix to remove duplicate
```diff
-.worktrees
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.gitignore at line 77, Remove the duplicate .worktrees ignore entry from
.gitignore by deleting either the `.worktrees` or `.worktrees/` line so only one
remains; locate the redundant pattern (the duplicate `.worktrees`/`.worktrees/`
entries) and remove the second occurrence to keep the file clean and avoid
duplicate patterns.
```typescript
handle('get-user-skills-path', async () => {
  return skillsManager.getUserSkillsPath();
});
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Verify the IPC channel name consistency and usage
rg -n "get-user-skills-path|skills:get-user-skills-path" --type=ts
```

Repository: accomplish-ai/accomplish
Length of output: 599
🏁 Script executed:
```shell
sed -n '1390,1450p' apps/desktop/src/main/ipc/handlers.ts
```

Repository: accomplish-ai/accomplish
Length of output: 1781
🏁 Script executed:
```shell
cat -n apps/desktop/__tests__/integration/preload/preload.integration.test.ts | sed -n '300,310p'
```

Repository: accomplish-ai/accomplish
Length of output: 512
Rename handler to use consistent 'skills:' prefix to match the test expectations and other skills handlers.
The handler at line 1447 is named 'get-user-skills-path', but the test at line 305 expects 'skills:get-user-skills-path'. Additionally, all other skills handlers (lines 1390-1444) consistently use the 'skills:' prefix (e.g., 'skills:list', 'skills:set-enabled', 'skills:delete'). Update the handler name and preload invocation to use 'skills:get-user-skills-path'.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/main/ipc/handlers.ts` around lines 1447 - 1449, Rename the
IPC handler string from 'get-user-skills-path' to 'skills:get-user-skills-path'
to match the other skills handlers and tests; update the corresponding preload
invocation that registers or invokes this handler to use
'skills:get-user-skills-path' as well, leaving the implementation
(skillsManager.getUserSkillsPath) unchanged so the handler still returns the
same value.
```typescript
// Determine HuggingFace Local server URL
const hfProvider =
  activeModel?.provider === 'huggingface-local' ||
  selectedModel?.provider === 'huggingface-local';
let hfBaseUrl: string | undefined;
if (hfProvider) {
  const hfStatus = getHuggingFaceServerStatus();
  if (hfStatus.running && hfStatus.port) {
    hfBaseUrl = `http://127.0.0.1:${hfStatus.port}/v1`;
  }
```
Don't silently fail over HuggingFace Local to the configured OpenAI-compatible URL.
If huggingface-local is selected but the local server is down, Line 198 now reuses configuredOpenAiBaseUrl. For users who also have a remote OpenAI-compatible endpoint configured, that turns an offline/local request into a request to a different backend and can leak prompts off-box. Prefer surfacing a clear “local server not running” error here instead of automatic failover.
Safer routing change

```diff
   if (hfProvider) {
     const hfStatus = getHuggingFaceServerStatus();
-    if (hfStatus.running && hfStatus.port) {
-      hfBaseUrl = `http://127.0.0.1:${hfStatus.port}/v1`;
-    }
+    if (!hfStatus.running || !hfStatus.port) {
+      throw new Error('HuggingFace Local server is not running');
+    }
+    hfBaseUrl = `http://127.0.0.1:${hfStatus.port}/v1`;
   }
@@
-    openAiBaseUrl: hfProvider && hfBaseUrl ? hfBaseUrl : configuredOpenAiBaseUrl || undefined,
+    openAiBaseUrl: hfProvider ? hfBaseUrl : configuredOpenAiBaseUrl || undefined,
```

Also applies to: 198-198
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/main/opencode/electron-options.ts` around lines 160 - 169,
The code currently falls back to the configured OpenAI-compatible URL when the
selected provider is huggingface-local but the local server is not running;
instead, in the block that evaluates hfProvider (using activeModel,
selectedModel, getHuggingFaceServerStatus, and hfBaseUrl), do not assign or
reuse configuredOpenAiBaseUrl for hfBaseUrl when hfStatus.running is false or
hfStatus.port is missing; instead surface a clear failure path (throw or return
an error/notification indicating "HuggingFace Local server not running") so
callers can handle it safely and avoid leaking prompts to a remote
OpenAI-compatible endpoint; update callers that expect hfBaseUrl to handle this
explicit error/undefined state rather than silently using
configuredOpenAiBaseUrl.
```typescript
loadModelPromise = (async () => {
  state.isLoading = true;
  // Capture stop flag at start so we can detect a concurrent stopServer() call
  const stoppedAtStart = state.isStopping;
  console.log(`[HF Server] Loading model: ${modelId}`);

  try {
    const { env, AutoTokenizer, AutoModelForCausalLM } =
      await import('@huggingface/transformers');

    const cacheDir = path.join(app.getPath('userData'), 'hf-models');
    env.cacheDir = cacheDir;
    env.allowLocalModels = true;

    // Stage new model and tokenizer
    const tokenizer = await AutoTokenizer.from_pretrained(modelId, {
      cache_dir: cacheDir,
      local_files_only: true,
    });

    let model;
    try {
      model = await AutoModelForCausalLM.from_pretrained(modelId, {
        cache_dir: cacheDir,
        dtype: 'q4',
        local_files_only: true,
      });
    } catch (err) {
      console.warn(`[HF Server] Failed to load q4 model, trying fp32: ${err}`);
      model = await AutoModelForCausalLM.from_pretrained(modelId, {
        cache_dir: cacheDir,
        dtype: 'fp32',
        local_files_only: true,
      });
    }

    // If stopServer() was called while we were loading, dispose the freshly
    // created resources and skip state mutation to avoid stale references.
    if (state.isStopping || stoppedAtStart) {
      console.log(`[HF Server] Stop requested during load of ${modelId}; discarding.`);
      try {
        await model.dispose?.();
      } catch {
        // Ignore dispose errors
      }
      return;
    }

    // Successfully loaded new model, safe to dispose old one
    if (state.model) {
      try {
        await state.model.dispose?.();
      } catch {
        // Ignore dispose errors
      }
    }

    state.tokenizer = tokenizer;
    state.model = model;

    state.loadedModelId = modelId;
    console.log(`[HF Server] Model loaded: ${modelId}`);
  } catch (error) {
    console.error(`[HF Server] Failed to load model: ${modelId}`, error);
    throw error;
  } finally {
    state.isLoading = false;
    loadModelPromise = null;
  }
```
Don't clear isStopping before the current load finishes.
If stopServer() runs while the initial loadModel() is still in flight, Line 621 resets isStopping before Line 120 makes its stop check. The canceled startup can then still assign state.tokenizer, state.model, and state.loadedModelId after shutdown, leaving a stopped server with a live model in memory.
🧯 Minimal race fix
```diff
 export async function stopServer(): Promise<void> {
   // Signal any in-flight loadModel IIFE to abort state mutation
   state.isStopping = true;
+  const pendingLoad = loadModelPromise;
   if (state.server) {
     await new Promise<void>((resolve) => {
       ...
     });
   }
   // Dispose model only after HTTP server is fully closed
   if (state.model) {
     try {
       await state.model.dispose?.();
     } catch {
       // Ignore dispose errors
     }
   }
   state.server = null;
   state.port = null;
   state.loadedModelId = null;
   state.pipeline = null;
   state.tokenizer = null;
   state.model = null;
   state.isLoading = false;
-  state.isStopping = false;
+  if (pendingLoad) {
+    await pendingLoad.catch(() => undefined);
+  }
+  state.isStopping = false;
 }
```

Also applies to: 585-621
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/main/providers/huggingface-local/server.ts` around lines 82
- 150, The race occurs because stopServer() clears state.isStopping while
loadModelPromise (the async loader in loadModel()/loadModelPromise) is still
running, allowing the loader to pass its stop check and assign
state.tokenizer/state.model/state.loadedModelId after shutdown; fix by changing
stopServer() to wait for the in-flight loadModelPromise to finish before
clearing state.isStopping (i.e., set state.isStopping = true, trigger disposal,
then if loadModelPromise exists await loadModelPromise.catch(() => {}) or
otherwise wait until loadModelPromise === null before setting state.isStopping =
false and completing shutdown), or alternatively hold a local stoppedAtStart
token in stopServer() and avoid clearing the global flag until loadModelPromise
resolves; reference: stopServer(), loadModelPromise, state.isStopping,
state.model, state.tokenizer, state.loadedModelId.
```typescript
let model;
try {
  model = await AutoModelForCausalLM.from_pretrained(modelId, {
    cache_dir: cacheDir,
    dtype: 'q4',
    local_files_only: true,
  });
} catch (err) {
  console.warn(`[HF Server] Failed to load q4 model, trying fp32: ${err}`);
  model = await AutoModelForCausalLM.from_pretrained(modelId, {
    cache_dir: cacheDir,
    dtype: 'fp32',
    local_files_only: true,
  });
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
set -euo pipefail
echo '== Quantization metadata in this PR =='
rg -n -C1 'quantization' packages/agent-core/src/providers/huggingface-local.ts apps/desktop/src/main/providers/huggingface-local
echo
echo '== Loader dtype selection =='
sed -n '102,115p' apps/desktop/src/main/providers/huggingface-local/server.ts
```

Repository: accomplish-ai/accomplish
Length of output: 1996
Loader must honor the quantization from model metadata instead of hard-coding q4/fp32 fallback.
The model metadata includes quantizations like `q4f16` and `fp16`, but the loader at lines 102-115 only attempts `dtype: 'q4'` followed by `dtype: 'fp32'`. This causes:

- Models marked as `fp16` or `q4f16` to fail loading with their intended quantization
- Silent fallback to a larger weight set than the user selected
- Already-downloaded models in the correct format to still fail to load

The `quantization` field is defined in the model metadata but ignored by the loader. Update the loader to read and respect the quantization from model metadata instead of guessing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/main/providers/huggingface-local/server.ts` around lines 102
- 115, The loader currently hardcodes dtype values when calling
AutoModelForCausalLM.from_pretrained (using modelId and cacheDir) and ignores
the model's metadata.quantization; update the load sequence to first read the
model metadata (the quantization field) and pass that value as the dtype to
AutoModelForCausalLM.from_pretrained, falling back only if that specific dtype
fails; preserve the existing local_files_only and cache_dir options, and ensure
error handling still attempts a safe fallback (e.g., metadata dtype -> other
metadata-compatible dtypes -> 'fp32') while logging the attempted dtype and
errors for debugging.
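One way to structure the suggested fallback chain is sketched below; `loadWithDtype` stands in for the real `AutoModelForCausalLM.from_pretrained` call, and the helper name and signature are assumptions for illustration, not the project's code:

```typescript
// Try the metadata-declared quantization first, then progressively safer
// dtypes; rethrow the last error only if every candidate fails.
async function loadWithFallback(
  metadataDtype: string | undefined,
  loadWithDtype: (dtype: string) => Promise<unknown>,
): Promise<{ model: unknown; dtype: string }> {
  const candidates = [metadataDtype, 'q4', 'fp32'].filter(
    (d): d is string => typeof d === 'string',
  );
  const unique = [...new Set(candidates)];
  let lastError: unknown;
  for (const dtype of unique) {
    try {
      const model = await loadWithDtype(dtype);
      return { model, dtype };
    } catch (err) {
      console.warn(`[HF Server] dtype ${dtype} failed, trying next`);
      lastError = err;
    }
  }
  throw lastError;
}
```

A model whose metadata says `q4f16` is then attempted as `q4f16` first, and the loader logs each attempted dtype so failures remain debuggable.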
```typescript
export async function startServer(
  modelId: string,
): Promise<{ success: boolean; port?: number; error?: string }> {
  if (state.server) {
    // Server already running - just load the new model
    try {
      await loadModel(modelId);
      return { success: true, port: state.port! };
    } catch (error) {
      return {
        success: false,
        error: error instanceof Error ? error.message : 'Failed to load model',
      };
    }
  }

  try {
    await loadModel(modelId);
  } catch (error) {
    return {
      success: false,
      error: error instanceof Error ? error.message : 'Failed to load model',
    };
  }

  return new Promise((resolve) => {
```
Serialize startServer() so overlapping starts can't orphan a server.
Line 363 only checks state.server, and that field stays null until Line 556 inside the listen() callback. Two concurrent starts can both pass the guard, both finish loadModel(), and each create their own http.Server. The last callback wins state.server, leaving the first server running without any reference or shutdown path.
🔒 Suggested lifecycle guard
```diff
+let startServerPromise:
+  | Promise<{ success: boolean; port?: number; error?: string }>
+  | null = null;
+
 export async function startServer(
   modelId: string,
 ): Promise<{ success: boolean; port?: number; error?: string }> {
-  if (state.server) {
-    // Server already running - just load the new model
-    try {
-      await loadModel(modelId);
-      return { success: true, port: state.port! };
-    } catch (error) {
-      return {
-        success: false,
-        error: error instanceof Error ? error.message : 'Failed to load model',
-      };
-    }
-  }
+  if (startServerPromise) {
+    return startServerPromise;
+  }
+
+  startServerPromise = (async () => {
+    if (state.server) {
+      try {
+        await loadModel(modelId);
+        return { success: true, port: state.port! };
+      } catch (error) {
+        return {
+          success: false,
+          error: error instanceof Error ? error.message : 'Failed to load model',
+        };
+      }
+    }
-  try {
-    await loadModel(modelId);
-  } catch (error) {
-    return {
-      success: false,
-      error: error instanceof Error ? error.message : 'Failed to load model',
-    };
-  }
+    try {
+      await loadModel(modelId);
+    } catch (error) {
+      return {
+        success: false,
+        error: error instanceof Error ? error.message : 'Failed to load model',
+      };
+    }
-  return new Promise((resolve) => {
-    const server = http.createServer(async (req, res) => {
-      ...
-    });
-    ...
-  });
+    return await new Promise((resolve) => {
+      const server = http.createServer(async (req, res) => {
+        ...
+      });
+      ...
+    });
+  })();
+
+  try {
+    return await startServerPromise;
+  } finally {
+    startServerPromise = null;
+  }
 }
```

Also applies to: 552-579
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/main/providers/huggingface-local/server.ts` around lines 360
- 385, startServer can run concurrently and both callers may pass the
state.server null check, causing two http.Server instances where the last one
overwrites state.server and the first is orphaned; fix by serializing starts:
add a per-process "starting" lock/promise (e.g., state.starting) that any
startServer call awaits so only one start proceeds at a time, set state.starting
before calling loadModel/create the server, assign state.server (and state.port)
as soon as you create the http.Server (or resolve the starting promise) so other
callers wait on that single instance, and ensure you clear state.starting and
perform cleanup on errors; apply the same guard around the listen callback logic
that currently assigns state.server/state.port so overlapping starts cannot
orphan a server.
- Rename IPC channel get-user-skills-path → skills:get-user-skills-path in both preload and handlers to match test expectations - Make getOpenCodeCliPath() throw 'OpenCode CLI executable not found' instead of silently falling back to system opencode - Export isOpenCodeCliAvailable() from electron-options - Guard getBundledOpenCodeVersion() against the new throw Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
♻️ Duplicate comments (2)
apps/desktop/src/main/opencode/electron-options.ts (2)
124-146: ⚠️ Potential issue | 🟠 Major: Fail packaged startup when the bundled Node runtime is missing.
This now keeps going when `getBundledNodePaths()` returns nothing. Line 204 and Line 278 then pass `undefined` into the CLI/browser-server config, so packaged behavior becomes machine-dependent instead of failing immediately. As per coding guidelines, "Do NOT spawn `npx`/`node` without adding bundled Node.js bin to PATH (see [architecture.md](docs/architecture.md#spawning-npxnode-in-main-process))".

🔒 Suggested hard fail
+function getRequiredBundledNodePaths() {
+  const bundledNode = getBundledNodePaths();
+  if (app.isPackaged && !bundledNode) {
+    throw new Error('Bundled Node.js runtime not found');
+  }
+  return bundledNode;
+}
+
 export async function buildEnvironment(taskId: string): Promise<NodeJS.ProcessEnv> {
@@
-  const bundledNode = getBundledNodePaths();
+  const bundledNode = getRequiredBundledNodePaths();
@@
-  const bundledNode = getBundledNodePaths();
+  const bundledNode = getRequiredBundledNodePaths();
@@
 function getBrowserServerConfig(): BrowserServerConfig {
-  const bundledPaths = getBundledNodePaths();
+  const bundledPaths = getRequiredBundledNodePaths();
   return {
     mcpToolsPath: getMcpToolsPath(),
     bundledNodeBinPath: bundledPaths?.binDir,
     devBrowserPort: DEV_BROWSER_PORT,
   };
 }

Also applies to: 152-153, 204-205, 274-279, 291-292
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/main/opencode/electron-options.ts` around lines 124 - 146, When app.isPackaged is true and getBundledNodePaths() returns undefined, fail fast instead of continuing with a missing bundled Node runtime: after calling getBundledNodePaths() (used in the packaged startup branch) check for a falsy return and throw or process.exit(1) with a clear error via processLogger/console (so env.PATH and any subsequent CLI/browser-server config never receive undefined); update the same guardpoints where bundledNode is consumed (the packaged startup block that sets env.PATH and the places that pass bundledNode into CLI/browser-server config) to enforce this hard fail.
168-177: ⚠️ Potential issue | 🟠 Major: Resolve HuggingFace Local routing from the effective model, and fail closed.
`buildCliArgs()` later uses the effective model (`activeModel || storage.getSelectedModel()`), but this block treats either slot being `huggingface-local` as enough to override `openAiBaseUrl`. A stale `selectedModel` can hijack another active provider, and Line 206 silently falls back to the configured remote URL if the local server is down.

🛡️ Safer routing change
   const activeModel = storage.getActiveProviderModel();
   const selectedModel = storage.getSelectedModel();
+  const effectiveModel = activeModel || selectedModel;
 @@
-  const hfProvider =
-    activeModel?.provider === 'huggingface-local' ||
-    selectedModel?.provider === 'huggingface-local';
+  const hfProvider = effectiveModel?.provider === 'huggingface-local';
   let hfBaseUrl: string | undefined;
   if (hfProvider) {
     const hfStatus = getHuggingFaceServerStatus();
-    if (hfStatus.running && hfStatus.port) {
-      hfBaseUrl = `http://127.0.0.1:${hfStatus.port}/v1`;
-    }
+    if (!hfStatus.running || !hfStatus.port) {
+      throw new Error('HuggingFace Local server is not running');
+    }
+    hfBaseUrl = `http://127.0.0.1:${hfStatus.port}/v1`;
   }
 @@
-    openAiBaseUrl: hfProvider && hfBaseUrl ? hfBaseUrl : configuredOpenAiBaseUrl || undefined,
+    openAiBaseUrl: hfProvider ? hfBaseUrl : configuredOpenAiBaseUrl || undefined,

Also applies to: 206-206
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/main/opencode/electron-options.ts` around lines 168 - 177, The code incorrectly decides to route to a local HuggingFace server if either activeModel or selectedModel has provider 'huggingface-local' and silently falls back to remote when the local server is down; change the logic to derive the effective model the same way buildCliArgs does (use activeModel || storage.getSelectedModel()), then check that model's provider for 'huggingface-local', call getHuggingFaceServerStatus() only when that effective model is local, set hfBaseUrl only if hfStatus.running and hfStatus.port, and otherwise leave openAiBaseUrl unchanged (fail closed) so a stale selectedModel cannot hijack routing; update references to hfProvider, hfBaseUrl, activeModel, selectedModel, getHuggingFaceServerStatus, and buildCliArgs accordingly.
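As a self-contained illustration of the fail-closed routing described here, the selection logic might look like the sketch below. The types and the `resolveOpenAiBaseUrl` helper are hypothetical stand-ins for the real storage and server-status helpers.

```typescript
// Hypothetical stand-ins for the real storage/server types.
type Model = { provider: string; id: string };
type ServerStatus = { running: boolean; port?: number };

// Route to the local server only when the *effective* model is local,
// and fail closed (throw) instead of silently using the remote URL.
function resolveOpenAiBaseUrl(
  activeModel: Model | undefined,
  selectedModel: Model | undefined,
  status: ServerStatus,
  configuredUrl?: string
): string | undefined {
  const effective = activeModel ?? selectedModel;
  if (effective?.provider === 'huggingface-local') {
    if (!status.running || !status.port) {
      throw new Error('HuggingFace Local server is not running');
    }
    return `http://127.0.0.1:${status.port}/v1`;
  }
  return configuredUrl;
}

console.log(
  resolveOpenAiBaseUrl(
    { provider: 'huggingface-local', id: 'm' },
    undefined,
    { running: true, port: 9000 }
  )
); // http://127.0.0.1:9000/v1
```

Because only the effective model is consulted, a stale `selectedModel` pointing at `huggingface-local` can no longer override the base URL for another active provider.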
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d7b1a81b-5404-410a-a71b-c7b7cc8b5dd4
📒 Files selected for processing (3)
- apps/desktop/src/main/ipc/handlers.ts
- apps/desktop/src/main/opencode/electron-options.ts
- apps/desktop/src/preload/index.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/desktop/src/preload/index.ts
Summary
Adds a HuggingFace Local inference provider enabling fully offline LLM inference via `@huggingface/transformers`, without relying on external APIs. The provider runs an OpenAI-compatible HTTP server inside the Electron main process and integrates with the existing provider settings and opencode CLI.
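A minimal sketch of what such an OpenAI-compatible endpoint looks like (illustrative only, not this PR's implementation): `echoGenerate` is a stub standing in for the Transformers.js text-generation pipeline, and the response mirrors the shape of OpenAI's `chat.completion` object.

```typescript
import * as http from 'node:http';

// Stub generator: the real provider would call the loaded
// Transformers.js pipeline here instead.
function echoGenerate(prompt: string): string {
  return `echo: ${prompt}`;
}

const server = http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/v1/chat/completions') {
    let body = '';
    req.on('data', (chunk) => (body += chunk));
    req.on('end', () => {
      const { model, messages } = JSON.parse(body);
      const last = messages[messages.length - 1]?.content ?? '';
      res.writeHead(200, { 'Content-Type': 'application/json' });
      // Mirror the shape of OpenAI's chat.completion response object.
      res.end(
        JSON.stringify({
          id: 'chatcmpl-local',
          object: 'chat.completion',
          model,
          choices: [
            {
              index: 0,
              message: { role: 'assistant', content: echoGenerate(last) },
              finish_reason: 'stop',
            },
          ],
        })
      );
    });
  } else {
    res.writeHead(404);
    res.end();
  }
});

// Bind to loopback only, on an OS-assigned port.
server.listen(0, '127.0.0.1', () => {
  const address = server.address() as { port: number };
  console.log(`listening on http://127.0.0.1:${address.port}/v1`);
});
```

Because the server speaks the OpenAI wire format on loopback, an existing OpenAI-compatible client (such as the opencode CLI) can be pointed at `http://127.0.0.1:<port>/v1` without any client-side changes.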
Changes
- `huggingface-local` provider with model manager and inference server
- `HuggingFaceProviderForm` UI component in settings
- `huggingface_local_config` column
This PR consolidates work from:
Fixes #183
Fixes ENG-687
Summary by CodeRabbit