feat: add HuggingFace Transformers.js local inference provider #604
Conversation
…plish-ai#183) Add full-stack support for running local HuggingFace models via Transformers.js with ONNX Runtime. Includes provider types, IPC handlers, config-builder wiring, DB migration v009, UI form component, and logo assets.
✅ Snyk checks have passed. No issues have been found so far.
Caution: Review failed. An error occurred during the review process. Please try again later.

📝 Walkthrough

Adds HuggingFace Local provider support across desktop IPC/preload, web settings UI, core provider implementation (connection, model fetch, validation), storage layer (types, migration, accessors), and OpenCode config builder integration.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Renderer as Renderer (Settings UI)
    participant IPC as Main Process (IPC)
    participant HFServer as HuggingFace Local Server
    participant Storage as AppSettings Storage
    Renderer->>IPC: huggingface-local:test-connection(url)
    IPC->>HFServer: GET /v1/models (with timeout)
    alt Success
        HFServer-->>IPC: 200 + models JSON
        IPC-->>Renderer: { success: true, models: [...] }
        Renderer->>IPC: huggingface-local:set-config(config)
        IPC->>Storage: setHuggingFaceLocalConfig(config)
        Storage-->>IPC: saved
        IPC-->>Renderer: void
    else Failure
        HFServer-->>IPC: error / timeout
        IPC-->>Renderer: { success: false, error: "..." }
    end
```

```mermaid
sequenceDiagram
    participant OpenCode as OpenCode Config Builder
    participant Storage as AppSettings Storage
    participant Core as Provider Core Logic
    OpenCode->>Storage: getHuggingFaceLocalConfig()
    alt Config present
        Storage-->>OpenCode: { serverUrl, enabled, models, ... }
        OpenCode->>Core: map & validate models
        Core-->>OpenCode: provider entry (modelId, toolSupport)
        OpenCode->>OpenCode: add to config.providers
    else No config
        Storage-->>OpenCode: null
        OpenCode->>OpenCode: skip registration
    end
```
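The success/failure branches of the first diagram amount to shaping the `/v1/models` response into a result object for the renderer. The following is a hypothetical sketch of that shaping step; the names `shapeModelsResponse` and `ConnectionTestResult` are illustrative assumptions, not the PR's actual identifiers:

```typescript
// Illustrative sketch: turn an HTTP status plus parsed body from an
// OpenAI-compatible GET /v1/models call into the { success, ... } result
// the renderer receives over IPC. The real handler also applies a timeout.
interface ConnectionTestResult {
  success: boolean;
  models?: string[];
  error?: string;
}

// OpenAI-compatible servers return { data: [{ id: string }, ...] }.
function shapeModelsResponse(status: number, body: unknown): ConnectionTestResult {
  if (status !== 200) {
    return { success: false, error: `Server responded with HTTP ${status}` };
  }
  const data = (body as { data?: Array<{ id?: unknown }> } | null)?.data;
  if (!Array.isArray(data)) {
    return { success: false, error: 'Malformed /v1/models response' };
  }
  const models = data
    .map((m) => m.id)
    .filter((id): id is string => typeof id === 'string');
  return { success: true, models };
}
```

Separating the pure shaping logic from the network call keeps the IPC handler easy to unit-test without a running local server.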
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~30 minutes

Suggested reviewers
🚥 Pre-merge checks: ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 2
🧹 Nitpick comments (2)
apps/web/src/client/components/settings/providers/HuggingFaceLocalProviderForm.tsx (2)
90-101: Complex type mapping could be simplified.

The model mapping logic handles two different shapes (ConnectedProvider's `availableModels` and local `HuggingFaceLocalModel[]`) with verbose type assertions. Consider normalizing the model shape earlier in `handleConnect` to avoid runtime type checks here. The current implementation works, but the `as { name?: string }` and `as { toolSupport?: ToolSupportStatus }` casts reduce type safety. Since you control both paths, consider ensuring a consistent shape at the source.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/src/client/components/settings/providers/HuggingFaceLocalProviderForm.tsx` around lines 90 - 101, Normalize the model shape in handleConnect so connectedProvider.availableModels matches the HuggingFaceLocalModel shape and you can drop ad-hoc casts in HuggingFaceLocalProviderForm.tsx: in handleConnect map each incoming model to an object with id (strip /^huggingface-local\//), displayName (prefer displayName, fall back to name or id), size, and toolSupport (ensure a ToolSupportStatus value) and assign that to connectedProvider.availableModels; then update the models computation in HuggingFaceLocalProviderForm (the const models mapping) to assume HuggingFaceLocalModel fields exist and remove the `as { name?: string }` and `as { toolSupport?: ToolSupportStatus }` casts.
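The normalization the prompt above describes can be sketched as a small pure helper. This is illustrative only; the type and field names are assumptions drawn from the review text, not the component's actual code:

```typescript
// Sketch: normalize incoming models to one shape so the form no longer
// needs `as { name?: string }`-style casts. Field names follow the review
// comment; the real component may differ.
type ToolSupportStatus = 'supported' | 'unsupported' | 'unknown';

interface HuggingFaceLocalModel {
  id: string;
  displayName: string;
  size: number;
  toolSupport: ToolSupportStatus;
}

// Loosely-typed input as it may arrive from either source path.
interface RawModel {
  id: string;
  name?: string;
  displayName?: string;
  size?: number;
  toolSupport?: ToolSupportStatus;
}

function normalizeModel(raw: RawModel): HuggingFaceLocalModel {
  // Strip the provider prefix, prefer an explicit display name,
  // and default missing fields.
  const id = raw.id.replace(/^huggingface-local\//, '');
  return {
    id,
    displayName: raw.displayName ?? raw.name ?? id,
    size: raw.size ?? 0,
    toolSupport: raw.toolSupport ?? 'unknown',
  };
}
```

Applying this once in `handleConnect` would let the rendering code assume a single shape downstream.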
20-25: Consider importing `HuggingFaceLocalModel` from agent-core instead of duplicating.

This local interface duplicates `HuggingFaceLocalModel` from `packages/agent-core/src/providers/huggingface-local.ts`. Consider importing it to maintain a single source of truth.

```diff
+import type { HuggingFaceLocalModel } from '@accomplish_ai/agent-core';
 import type {
   ConnectedProvider,
   HuggingFaceLocalCredentials,
   ToolSupportStatus,
 } from '@accomplish_ai/agent-core/common';
-
-interface HuggingFaceLocalModel {
-  id: string;
-  displayName: string;
-  size: number;
-  toolSupport?: ToolSupportStatus;
-}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/src/client/components/settings/providers/HuggingFaceLocalProviderForm.tsx` around lines 20 - 25, Remove the duplicated local interface definition of HuggingFaceLocalModel and instead import HuggingFaceLocalModel from the agent-core provider module where it is defined; update this file to import the shared HuggingFaceLocalModel (and ToolSupportStatus if not already imported) and use that type for props/variables so there is a single source of truth for the model shape.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/agent-core/src/opencode/config-builder.ts`:
- Around line 455-501: The HuggingFace Local branches add entries to
providerConfigs but never append 'huggingface-local' to enabledProviders; update
both the connected-provider branch (where hfLocalProvider is used and
providerConfigs.push is called) and the legacy branch (where
getHuggingFaceLocalConfig(), hfLocalModels and providerConfigs.push are used) to
also push 'huggingface-local' into the enabledProviders array (same
place/approach used for Azure Foundry) so enabledProviders stays in sync with
providerConfigs.
In `@packages/agent-core/src/providers/huggingface-local.ts`:
- Around line 88-93: Sanitized URL may contain a trailing slash causing a double
slash when building `${sanitizedUrl}/v1/models`; update the logic around the
fetch call (the code that computes sanitizedUrl used by the fetchWithTimeout
call) to normalize/remove any trailing slash from sanitizedUrl before
concatenation (or join paths safely) so the request URL becomes
`${sanitizedUrl}/v1/models` without a double slash; adjust the code near the
fetchWithTimeout invocation (and any helper that produces sanitizedUrl) to trim
trailing '/' from sanitizedUrl.
---
Nitpick comments:
In
`@apps/web/src/client/components/settings/providers/HuggingFaceLocalProviderForm.tsx`:
- Around line 90-101: Normalize the model shape in handleConnect so
connectedProvider.availableModels matches the HuggingFaceLocalModel shape and
you can drop ad-hoc casts in HuggingFaceLocalProviderForm.tsx: in handleConnect
map each incoming model to an object with id (strip /^huggingface-local\//),
displayName (prefer displayName, fall back to name or id), size, and toolSupport
(ensure a ToolSupportStatus value) and assign that to
connectedProvider.availableModels; then update the models computation in
HuggingFaceLocalProviderForm (the const models mapping) to assume
HuggingFaceLocalModel fields exist and remove the `as { name?: string }` and `as
{ toolSupport?: ToolSupportStatus }` casts.
- Around line 20-25: Remove the duplicated local interface definition of
HuggingFaceLocalModel and instead import HuggingFaceLocalModel from the
agent-core provider module where it is defined; update this file to import the
shared HuggingFaceLocalModel (and ToolSupportStatus if not already imported) and
use that type for props/variables so there is a single source of truth for the
model shape.
```typescript
// HuggingFace Local provider
const hfLocalProvider = providerSettings.connectedProviders['huggingface-local'];
if (
  hfLocalProvider?.connectionStatus === 'connected' &&
  hfLocalProvider.credentials.type === 'huggingface-local' &&
  hfLocalProvider.selectedModelId
) {
  const modelId = hfLocalProvider.selectedModelId.replace(/^huggingface-local\//, '');
  const modelInfo = hfLocalProvider.availableModels?.find(
    (m) => m.id === hfLocalProvider.selectedModelId || m.id === modelId,
  );
  const supportsTools = (modelInfo as { toolSupport?: string })?.toolSupport === 'supported';
  providerConfigs.push({
    id: 'huggingface-local',
    npm: '@ai-sdk/openai-compatible',
    name: 'HuggingFace Local',
    options: {
      baseURL: `${hfLocalProvider.credentials.serverUrl}/v1`,
    },
    models: {
      [modelId]: { name: modelId, tools: supportsTools },
    },
  });
  console.log(
    `[OpenCode Config Builder] HuggingFace Local configured: ${modelId} (tools: ${supportsTools})`,
  );
} else {
  const hfLocalConfig = getHuggingFaceLocalConfig();
  const hfLocalModels = hfLocalConfig?.models;
  if (hfLocalConfig?.enabled && hfLocalModels && hfLocalModels.length > 0) {
    const models: Record<string, ProviderModelConfig> = {};
    for (const model of hfLocalModels) {
      models[model.id] = { name: model.displayName, tools: model.toolSupport === 'supported' };
    }
    providerConfigs.push({
      id: 'huggingface-local',
      npm: '@ai-sdk/openai-compatible',
      name: 'HuggingFace Local',
      options: { baseURL: `${hfLocalConfig.serverUrl}/v1` },
      models,
    });
    console.log(
      '[OpenCode Config Builder] HuggingFace Local (legacy) configured:',
      Object.keys(models),
    );
  }
}
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Check where enabledProviders is consumed
rg -n -C3 "enabledProviders" packages/agent-core/src -g '*.ts'
```

Repository: accomplish-ai/accomplish
Length of output: 7295

🏁 Script executed:

```bash
#!/bin/bash
# Confirm provider ID mapping exists
rg -n -C3 "PROVIDER_ID_TO_OPENCODE" packages/agent-core/src/common -g '*.ts'
```

Repository: accomplish-ai/accomplish
Length of output: 1681

🏁 Script executed:

```bash
#!/bin/bash
# Search for huggingface-local references in the codebase
rg -n -C3 "huggingface-local" packages/agent-core/src -g '*.ts'
```

Repository: accomplish-ai/accomplish
Length of output: 8387
Ensure HuggingFace Local is added to enabledProviders when configured.
The connected and legacy HuggingFace Local paths push provider configs without explicitly adding 'huggingface-local' to enabledProviders. This creates an inconsistency with how Azure Foundry handles the same scenario (lines 519–521, 539–541), where it explicitly maintains the sync. While the provider ID is correctly mapped in PROVIDER_ID_TO_OPENCODE, the missing enabledProviders update should be added to ensure consistent behavior and prevent potential issues with downstream logic that depends on this field.
Suggested fix
```diff
   if (
     hfLocalProvider?.connectionStatus === 'connected' &&
     hfLocalProvider.credentials.type === 'huggingface-local' &&
     hfLocalProvider.selectedModelId
   ) {
     const modelId = hfLocalProvider.selectedModelId.replace(/^huggingface-local\//, '');
     const modelInfo = hfLocalProvider.availableModels?.find(
       (m) => m.id === hfLocalProvider.selectedModelId || m.id === modelId,
     );
     const supportsTools = (modelInfo as { toolSupport?: string })?.toolSupport === 'supported';
     providerConfigs.push({
       id: 'huggingface-local',
       npm: '@ai-sdk/openai-compatible',
       name: 'HuggingFace Local',
       options: {
         baseURL: `${hfLocalProvider.credentials.serverUrl}/v1`,
       },
       models: {
         [modelId]: { name: modelId, tools: supportsTools },
       },
     });
+    if (!enabledProviders.includes('huggingface-local')) {
+      enabledProviders.push('huggingface-local');
+    }
     console.log(
       `[OpenCode Config Builder] HuggingFace Local configured: ${modelId} (tools: ${supportsTools})`,
     );
   } else {
     const hfLocalConfig = getHuggingFaceLocalConfig();
     const hfLocalModels = hfLocalConfig?.models;
     if (hfLocalConfig?.enabled && hfLocalModels && hfLocalModels.length > 0) {
       const models: Record<string, ProviderModelConfig> = {};
       for (const model of hfLocalModels) {
         models[model.id] = { name: model.displayName, tools: model.toolSupport === 'supported' };
       }
       providerConfigs.push({
         id: 'huggingface-local',
         npm: '@ai-sdk/openai-compatible',
         name: 'HuggingFace Local',
         options: { baseURL: `${hfLocalConfig.serverUrl}/v1` },
         models,
       });
+      if (!enabledProviders.includes('huggingface-local')) {
+        enabledProviders.push('huggingface-local');
+      }
       console.log(
         '[OpenCode Config Builder] HuggingFace Local (legacy) configured:',
         Object.keys(models),
       );
     }
   }
```
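The guard used in the suggested fix can be factored into a tiny idempotent helper. This is a sketch, not code from the PR:

```typescript
// Sketch: idempotent registration keeps enabledProviders in sync with
// providerConfigs without duplicate entries, mirroring the
// `if (!arr.includes(id)) arr.push(id)` guard from the suggested fix.
function enableProvider(enabledProviders: string[], id: string): string[] {
  if (!enabledProviders.includes(id)) {
    enabledProviders.push(id);
  }
  return enabledProviders;
}
```

A helper like this makes it harder for future provider branches to repeat the same omission the review flagged.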
```typescript
try {
  const response = await fetchWithTimeout(
    `${sanitizedUrl}/v1/models`,
    { method: 'GET' },
    HF_LOCAL_API_TIMEOUT_MS,
  );
```
Handle potential trailing slash in URL.

If `sanitizedUrl` ends with `/`, the constructed URL becomes `http://localhost:8787//v1/models` with a double slash. While most servers handle this gracefully, it's cleaner to normalize the URL.
🛡️ Proposed fix
```diff
   try {
+    const normalizedUrl = sanitizedUrl.replace(/\/+$/, '');
     const response = await fetchWithTimeout(
-      `${sanitizedUrl}/v1/models`,
+      `${normalizedUrl}/v1/models`,
       { method: 'GET' },
       HF_LOCAL_API_TIMEOUT_MS,
     );
```
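The trailing-slash handling from the proposed fix can be isolated into a helper and exercised on a few edge cases. This is an illustrative sketch; the suggestion itself inlines the regex rather than extracting a function:

```typescript
// Sketch: trim any run of trailing slashes before appending a path,
// so 'http://localhost:8787/' + '/v1/models' never yields a '//' segment.
function joinApiPath(baseUrl: string, path: string): string {
  return `${baseUrl.replace(/\/+$/, '')}${path}`;
}
```

Extracting the join makes the normalization trivially testable and reusable anywhere the server URL is concatenated with an endpoint.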
@coderabbitai review

✅ Actions performed: Review triggered.

@coderabbitai, do you have any minor suggestions or nitpicks?
Tip: For best results, initiate chat on the files or code changes.

🧩 Analysis chain
🏁 Script executed: Length of output: 18382
1. Hardcoded default URL in the form component

The default server URL is hardcoded as `'http://localhost:8787'`. Consider importing the shared constant instead:

```typescript
import { HF_LOCAL_DEFAULT_URL } from '@accomplish_ai/agent-core';

// Then use it:
const [serverUrl, setServerUrl] = useState(HF_LOCAL_DEFAULT_URL);
```

Also on line 173 where it's used as a fallback.

2. Missing internationalization (i18n)

The OllamaProviderForm uses `useTranslation` for its user-facing strings, while this form hardcodes English text.

3. Simplify model display name extraction

The current implementation uses a simple split:

```typescript
displayName: m.id.split('/').pop() || m.id,
```

Consider using the existing `getModelDisplayName` utility.

4. Simplify model mapping logic

The model mapping has some complex type casting. Consider simplifying by normalizing the types earlier or using a type guard function:

```typescript
const models: HuggingFaceLocalModel[] = (
  connectedProvider?.availableModels || availableModels
).map((m) => ({
  id: m.id.replace(/^huggingface-local\//, ''),
  displayName: m.name || m.displayName || m.id,
  size: 0,
  toolSupport: m.toolSupport || 'unknown',
}));
```

5. Console.log statements

There are console.log statements for HuggingFace Local configuration. While these match the pattern used for other providers, consider whether they should remain or if you want to use a proper logging utility for production.

6. Missing JSDoc for public function

The `fetchHuggingFaceLocalModels` function lacks JSDoc documentation.

These are all minor polish items; the core implementation looks solid! The functionality follows the established patterns from Ollama and LM Studio providers nicely. 🎉
…SDoc

- Replace hardcoded 'http://localhost:8787' with imported HF_LOCAL_DEFAULT_URL
- Simplify model display name extraction in provider form
- Add comprehensive JSDoc to fetchHuggingFaceLocalModels
@coderabbitai, does everything look good now? I have worked on the minor suggestions too!
🧩 Analysis chain
🏁 Script executed: Length of output: 15739
✅ What's been fixed:
📝 Minor items still present:
If items 1-3 are intentional choices or planned for future work, everything looks good to merge! The core functionality is solid and follows the established patterns well. Let me know if you'd like me to take another look after any additional changes, or if you're ready to proceed as-is. 👍
…vider

Replace hardcoded English strings with useTranslation keys matching the pattern used by all other provider forms. Use the shared getModelDisplayName utility for consistent model name formatting.
@coderabbitai, does everything look good now? I have worked on the minor suggestions too!
🧩 Analysis chain
🏁 Script executed: Length of output: 12217
I can confirm that you've addressed all the remaining suggestions: ✅ Recently Fixed:
✅ Previously Fixed (still good):
The implementation is solid, consistent with the rest of the codebase, and ready to merge! Great job on the thorough implementation and addressing all the feedback. 🚀
@coderabbitai Thanks so much for confirming!

It was a pleasure reviewing this feature. You did an excellent job implementing the HuggingFace Transformers.js local inference provider, and the way you addressed all the feedback shows great attention to detail. The integration follows the established patterns in the codebase perfectly and adds a valuable capability for users who want to run models locally without cloud APIs. Best of luck with the merge, and feel free to tag me if you need any future reviews or have questions! 🚀
🧹 Nitpick comments (2)
apps/web/src/client/components/settings/providers/HuggingFaceLocalProviderForm.tsx (2)
22-27: Import `HuggingFaceLocalModel` from agent-core instead of redefining locally.

This interface duplicates the one already exported from `@accomplish_ai/agent-core` (at `packages/agent-core/src/providers/huggingface-local.ts`), which also includes a `quantization?: string` field that's missing here. Using the shared type avoids drift and ensures consistency.

Suggested fix

```diff
 import type {
   ConnectedProvider,
   HuggingFaceLocalCredentials,
   ToolSupportStatus,
 } from '@accomplish_ai/agent-core/common';
-import { HF_LOCAL_DEFAULT_URL } from '@accomplish_ai/agent-core';
+import { HF_LOCAL_DEFAULT_URL, type HuggingFaceLocalModel } from '@accomplish_ai/agent-core';
 import { ConnectButton, ... } from '../shared';
 import huggingfaceLogo from '/assets/ai-logos/huggingface.png';

-interface HuggingFaceLocalModel {
-  id: string;
-  displayName: string;
-  size: number;
-  toolSupport?: ToolSupportStatus;
-}
-
 interface HuggingFaceLocalProviderFormProps {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/src/client/components/settings/providers/HuggingFaceLocalProviderForm.tsx` around lines 22 - 27, The local HuggingFaceLocalModel interface duplicates the shared type and omits the quantization field; remove the local definition in HuggingFaceLocalProviderForm.tsx and instead import HuggingFaceLocalModel from `@accomplish_ai/agent-core`, updating any usages in the file to rely on the imported type (e.g., props, state, and functions that reference HuggingFaceLocalModel) so the shared quantization?: string and other fields are preserved and type drift is prevented.
117-117: Use i18n for provider name consistency.

The provider name is hardcoded as `"HuggingFace Local"` while other strings in this component use `useTranslation`. The localization key `providers.huggingface-local` already exists in `settings.json`.

Suggested fix

```diff
-      <ProviderFormHeader logoSrc={huggingfaceLogo} providerName="HuggingFace Local" />
+      <ProviderFormHeader logoSrc={huggingfaceLogo} providerName={t('providers.huggingface-local')} />
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/src/client/components/settings/providers/HuggingFaceLocalProviderForm.tsx` at line 117, Replace the hardcoded provider name passed to ProviderFormHeader with the i18n key: call the existing useTranslation hook's t function and pass t('providers.huggingface-local') as the providerName prop to ProviderFormHeader (ensure the component uses the t function already imported/initialized in this file and reference ProviderFormHeader and the localization key providers.huggingface-local).
Import HuggingFaceLocalModel from agent-core instead of redefining locally. Use translation key for provider name in form header.
@coderabbitai I've worked on it now! Thank you :)
Jira ticket: https://accomplish-ai.atlassian.net/browse/ENG-687
Adds a HuggingFace Local inference provider enabling fully offline LLM inference via @huggingface/transformers, without relying on external APIs. The provider runs an OpenAI-compatible HTTP server inside the Electron main process and integrates with the existing provider settings.

Key changes:
- New huggingface-local provider with model manager and inference server
- HuggingFaceProviderForm UI component in settings
- v009 migration adding huggingface_local_config column
- IPC handlers for model download/management
- Provider icon and logo assets

Credit: GunaPalanivel (#650), nancysangani (#488), SaaiAravindhRaja (#604)

Closes #183
Thank you for your contribution! 🙏 This feature has been consolidated into PR #705 which includes work from multiple contributors. Your implementation was a key reference for the final solution. Closing in favor of the consolidated PR.
- From #604 (SaaiAravindhRaja): i18n locale strings for HuggingFace Local settings panel (provider name, labels, server URL hint, model selection), 'huggingface-local/' prefix in PROVIDER_PREFIXES, testHuggingFaceLocalConnection(), fetchHuggingFaceLocalModels(), and curated HF_RECOMMENDED_MODELS list
- From #488 (nancysangani): HuggingFaceHubModel type, searchHuggingFaceHubModels() for discovering ONNX-compatible models via HuggingFace Hub API, and additional Xenova mirror models in HF_RECOMMENDED_MODELS
Description
Add HuggingFace Transformers.js as a local inference provider, enabling users to run ONNX models locally via a lightweight inference server.
Demo
demo-issue-183.mov
Changes
- `huggingface-local` provider type in agent-core with connection testing and model fetching
- `huggingface_local_config` column added to provider settings
- `testHuggingFaceLocalConnection` and `fetchHuggingFaceLocalModels`
Checklist
Related Issues
Fixes #183
Summary by CodeRabbit
New Features
Chores