What this PR does
Before this PR:
The Poe provider relied on `model.endpoint_type` to determine which SDK backend to use, but Poe models fetched from the API never have this field set, causing all models to fall back to the OpenAI-compatible Chat path. Claude models on Poe couldn't use the Anthropic SDK, and GPT/O-series models couldn't use the Responses API.
After this PR:
The Poe provider routes models to the correct AI SDK backend based on model ID:

- `AnthropicMessagesLanguageModel` (Anthropic-compatible API)
- `OpenAIResponsesLanguageModel` (Responses API)
- `OpenAICompatibleChatLanguageModel` (Chat Completions API)

Also adds proper web search support (`extra_body.web_search` for chat-path models, native SDK web search for Claude/GPT) and removes redundant Poe-specific reasoning logic now handled by the standard pipeline.
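The model-ID routing described above can be sketched as follows. This is a hypothetical illustration, not the actual code from `poe-provider.ts`; the `resolvePoeBackend` helper, the backend string labels, and the exact regex for O-series IDs are assumptions:

```typescript
// Illustrative sketch of routing Poe models to an SDK backend by model ID.
// Names and details are hypothetical; the real logic lives in poe-provider.ts.
type PoeBackend = 'anthropic-messages' | 'openai-responses' | 'openai-chat';

// Per the alternatives section of this PR: these models only support
// Chat Completions even though their IDs look like O-series models.
const CHAT_ONLY_PREFIXES = ['o1-mini', 'o1-preview'];

function resolvePoeBackend(modelId: string): PoeBackend {
  const id = modelId.toLowerCase();
  // Lightweight prefix check instead of importing isAnthropicModel,
  // avoiding heavy renderer dependency chains at module init.
  if (id.startsWith('claude')) return 'anthropic-messages';
  // Chat-only exceptions take precedence over the GPT/O-series check.
  if (
    CHAT_ONLY_PREFIXES.some((p) => id.startsWith(p)) ||
    id.includes('search-preview')
  ) {
    return 'openai-chat';
  }
  if (id.startsWith('gpt') || /^o\d/.test(id)) return 'openai-responses';
  // Everything else falls back to the OpenAI-compatible chat path.
  return 'openai-chat';
}
```

A prefix check like this trades precision for isolation: it cannot benefit from fixes to the shared model-detection utilities, but it keeps the provider module free of their import graph.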
Fixes #
Why we need it and why it was done in this way
The following tradeoffs were made:

- `modelId.toLowerCase().startsWith('claude')` instead of importing the `isAnthropicModel` utility, to avoid pulling in heavy renderer dependency chains during module initialization.
- Direct import (`@renderer/config/models/openai`) instead of the barrel import (`@renderer/config/models`), for the same reason.

The following alternatives were considered:

- `o1-mini`, `o1-preview`, and search-preview models only support Chat Completions

Breaking changes
None
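As a hedged sketch of the `extra_body.web_search` mechanism mentioned above: the `withWebSearch` helper and the request-body shape here are illustrative assumptions, not Poe's exact API surface.

```typescript
// Hypothetical sketch: enabling web search for chat-path models by
// attaching a non-standard option under extra_body. Field names follow
// this PR's description; the surrounding types are illustrative only.
interface ChatRequestBody {
  model: string;
  messages: Array<{ role: string; content: string }>;
  extra_body?: Record<string, unknown>;
}

function withWebSearch(body: ChatRequestBody, enabled: boolean): ChatRequestBody {
  if (!enabled) return body;
  // Merge rather than overwrite, so other extra_body options survive.
  return { ...body, extra_body: { ...body.extra_body, web_search: true } };
}
```

Claude and GPT models routed to the Anthropic or Responses backends would instead use the SDK's native web search tools, so this extra_body path only applies to models that stay on the Chat Completions route.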
Special notes for your reviewer
- The model-detection approach follows the `aihubmix` provider, which also uses `{ id: modelId } as Model` to call model detection utilities.
- `poe-provider.ts` is registered as a `ProviderExtension` with LRU caching, consistent with other custom providers.

Checklist
Release note