feat(llm): add LiteLLM as a supported client #461
Open
RheagalFire wants to merge 1 commit into MemoriLabs:main
Conversation
Force-pushed from 0060b13 to 4993174
Contributor

Thanks for the update here. The LiteLLM wrapper approach looks reasonable overall, but CI is currently failing because the tests import the real litellm package at module scope while litellm is not part of the test dependencies. In a clean environment this fails during test collection with a ModuleNotFoundError for litellm. Could you either add litellm to the appropriate test/dev dependency path used by CI, or adjust the tests to avoid importing the real package at module scope and use fake module objects instead? Once that's fixed and CI is green, I'm happy to take another look.
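For reference, a minimal sketch of the fake-module approach (names and values here are illustrative, not taken from the PR's tests): register a stand-in for litellm in sys.modules before the code under test imports it.

```python
import sys
import types

# Build a stand-in litellm module with just the attributes Memori touches.
fake_litellm = types.ModuleType("litellm")
fake_litellm.__version__ = "0.0.0"
fake_litellm.completion = lambda *args, **kwargs: None


async def _fake_acompletion(*args, **kwargs):  # async to mirror the real API
    return None


fake_litellm.acompletion = _fake_acompletion

# Install it before anything runs `import litellm`; under pytest this is
# typically monkeypatch.setitem(sys.modules, "litellm", fake_litellm).
sys.modules["litellm"] = fake_litellm
```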
Summary
Adds support for the LiteLLM SDK alongside the existing native client wrappers (Anthropic, OpenAI, Google, xAI, PydanticAi, LangChain, Agno). LiteLLM is functional rather than client-class-based, since users invoke litellm.completion(...) / litellm.acompletion(...) directly. This PR wraps both functions at the module level via memori.llm.register(litellm). After registration, every call into either function flows through Memori's existing invoke pipeline (conversation capture + recall injection), regardless of which of LiteLLM's 100+ backing providers (OpenAI, Anthropic, Bedrock, Vertex AI, Cohere, Mistral, Groq, Perplexity, Together, Fireworks, Cerebras, Databricks, IBM Watsonx, ...) the model spec routes to.
LiteLLM normalizes every backing provider's response to the OpenAI shape, so the existing OpenAI adapter handles the parsed payload without duplication. This PR just decorates OpenaiAdapter with one extra @Registry.register_adapter(llm_is_litellm) line.

Usage
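A minimal usage sketch: memori.llm.register(litellm) is the API described in the summary above, while the exact model string and message payload are illustrative.

```python
import litellm
import memori

# One call wraps litellm.completion and litellm.acompletion at module level.
memori.llm.register(litellm)

# Every subsequent call flows through Memori's invoke pipeline, whichever
# backing provider the "<provider>/<name>" model spec routes to.
response = litellm.completion(
    model="anthropic/claude-sonnet-4-6",
    messages=[{"role": "user", "content": "ping"}],
)

# LiteLLM responses are OpenAI-shaped:
print(response.choices[0].message.content)
```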
The same register call also covers litellm.acompletion for async use.

Files
- memori/llm/clients/direct.py (+76): LiteLLM(BaseClient) with register() that wraps litellm.completion and litellm.acompletion, stores backups on the module (_completion, _acompletion), and sets client._memori_installed for idempotency. SDK version is read from litellm.__version__ or importlib.metadata. (A sketch of this wrapping follows the file list.)
- memori/llm/clients/__init__.py (+5): export LiteLLM.
- memori/llm/_utils.py (+18): client_is_litellm(module) matcher (accepts the litellm module or any submodule); llm_is_litellm(provider, title) adapter matcher.
- memori/llm/_constants.py (+1): LITELLM_LLM_PROVIDER = "litellm".
- memori/llm/adapters/openai/_adapter.py (+3): @Registry.register_adapter(llm_is_litellm) decorator added to the existing OpenaiAdapter (LiteLLM responses are OpenAI-shaped, so no separate adapter file is needed).
- tests/test_litellm_client.py (new, 132 LOC): 9 tests covering matcher behavior, register validation, idempotency, provider-metadata wiring, registry dispatch.
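A minimal sketch of the module-level wrapping described for direct.py, reconstructed from the file summary above rather than taken from the PR's actual code; _invoke_pipeline and _invoke_pipeline_async are hypothetical stand-ins for Memori's internal invoke path.

```python
import functools
import importlib.metadata


def _sdk_version(litellm_module):
    # Per the summary: prefer litellm.__version__, else importlib.metadata.
    version = getattr(litellm_module, "__version__", None)
    return version or importlib.metadata.version("litellm")


def register(litellm_module):
    # Idempotency: a second register() call must not double-wrap.
    if getattr(litellm_module, "_memori_installed", False):
        return

    # Store backups on the module so the originals stay reachable.
    litellm_module._completion = litellm_module.completion
    litellm_module._acompletion = litellm_module.acompletion

    @functools.wraps(litellm_module._completion)
    def completion(*args, **kwargs):
        # Hypothetical helper: route through Memori's invoke pipeline
        # (conversation capture + recall injection), then delegate.
        return _invoke_pipeline(litellm_module._completion, *args, **kwargs)

    @functools.wraps(litellm_module._acompletion)
    async def acompletion(*args, **kwargs):
        # Hypothetical async counterpart of the pipeline helper.
        return await _invoke_pipeline_async(
            litellm_module._acompletion, *args, **kwargs
        )

    litellm_module.completion = completion
    litellm_module.acompletion = acompletion
    litellm_module._memori_installed = True
```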
Tests

Unit tests (9/9 pass)
Live E2E
The wrapped call returned 'pong' and the conversation flowed through Memori's invoke pipeline. The request was routed via Anthropic (proven by model='claude-sonnet-4-6') without any Memori code being aware of which backing provider was selected. That is the LiteLLM value-add.
Why
Today the supported_clients matrix covers Anthropic / OpenAI / Google / xAI / PydanticAi / LangChain / Agno. Adding Cohere, Mistral, Together AI, etc. requires writing another wrapper from scratch. LiteLLM solves this once: it accepts model="<provider>/<name>" and resolves credentials from each backing provider's standard env vars. One wrapper file gives Memori coverage of every provider LiteLLM supports.