
feat(llm): add LiteLLM as a supported client #461

Open

RheagalFire wants to merge 1 commit into MemoriLabs:main from RheagalFire:feat/add-litellm-client

Conversation

@RheagalFire

Summary

Adds support for the LiteLLM SDK alongside the existing native client wrappers (Anthropic, OpenAI, Google, xAI, PydanticAi, LangChain, Agno). LiteLLM is functional rather than
client-class-based, since users invoke litellm.completion(...) / litellm.acompletion(...) directly. This PR wraps both functions at the module level via memori.llm.register(litellm). After registration,
every call into either function flows through Memori's existing invoke pipeline (conversation capture + recall injection) regardless of which of LiteLLM's 100+ backing providers (OpenAI, Anthropic, Bedrock,
Vertex AI, Cohere, Mistral, Groq, Perplexity, Together, Fireworks, Cerebras, Databricks, IBM Watsonx, ...) the model spec routes to.

LiteLLM normalizes every backing's response to the OpenAI shape, so the existing OpenAI adapter handles the parsed payload without duplication. This PR just decorates OpenaiAdapter with one extra
@Registry.register_adapter(llm_is_litellm) line.
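For concreteness, a minimal sketch of what that decoration might look like. The import paths and the pre-existing matcher name llm_is_openai are assumptions; only llm_is_litellm and OpenaiAdapter come from this PR:

# Sketch only: import paths and llm_is_openai are assumed, not taken from the diff.
from memori.llm._registry import Registry                      # assumed path
from memori.llm._utils import llm_is_litellm, llm_is_openai    # llm_is_openai assumed

@Registry.register_adapter(llm_is_litellm)   # the one line this PR adds
@Registry.register_adapter(llm_is_openai)    # pre-existing registration
class OpenaiAdapter:
    def parse(self, response):
        # LiteLLM responses are already OpenAI-shaped, so one parser serves both.
        ...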

Usage

import litellm
from memori import Memori

memori = Memori(...)
memori.llm.register(litellm)   # patches litellm.completion + litellm.acompletion

# Subsequent calls flow through Memori; the routed backing is opaque
response = litellm.completion(
    model="anthropic/claude-sonnet-4-6",
    messages=[{"role": "user", "content": "hello"}],
)

The same register call also covers litellm.acompletion for async use.
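For example, a minimal async sketch, assuming the registration above has already run:

import asyncio

async def main():
    # acompletion was wrapped by the same memori.llm.register(litellm) call
    response = await litellm.acompletion(
        model="anthropic/claude-sonnet-4-6",
        messages=[{"role": "user", "content": "hello"}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())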

Files

  • memori/llm/clients/direct.py (+76): LiteLLM(BaseClient) with register() that wraps litellm.completion and litellm.acompletion, stores backups on the module (_completion, _acompletion), and sets
    client._memori_installed for idempotency. SDK version is read from litellm.__version__ or importlib.metadata. (Sketched after this list.)
  • memori/llm/clients/__init__.py (+5): export LiteLLM.
  • memori/llm/_utils.py (+18): client_is_litellm(module) matcher (accepts the litellm module or any submodule); llm_is_litellm(provider, title) adapter matcher.
  • memori/llm/_constants.py (+1): LITELLM_LLM_PROVIDER = "litellm".
  • memori/llm/adapters/openai/_adapter.py (+3): @Registry.register_adapter(llm_is_litellm) decorator added to the existing OpenaiAdapter (LiteLLM responses are OpenAI-shaped, so no separate adapter file is
    needed).
  • tests/test_litellm_client.py (new, 132 LOC): 9 tests covering matcher behavior, register validation, idempotency, provider-metadata wiring, registry dispatch.
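To make the wiring concrete, a condensed sketch of the direct.py client and the _utils.py matcher described above. The _wrap helper and the exact invoke hookup are illustrative placeholders; the backup attributes, idempotency flag, completion-attr validation, and version fallback follow the file descriptions and tests:

import importlib.metadata

from memori.llm.clients._base import BaseClient   # assumed import path

def client_is_litellm(module):
    # Matcher from _utils.py: accept the litellm module or any submodule.
    name = getattr(module, "__name__", "")
    return name == "litellm" or name.startswith("litellm.")

class LiteLLM(BaseClient):
    def register(self, client):
        if getattr(client, "_memori_installed", False):
            return  # idempotent: module already patched
        if not hasattr(client, "completion"):
            # Validation per test_litellm_register_requires_completion_attr
            # (exact exception type assumed).
            raise TypeError("expected the litellm module")

        # Back up the originals on the module itself.
        client._completion = client.completion
        client._acompletion = client.acompletion

        # Route both entry points through Memori's invoke pipeline.
        client.completion = self._wrap(client._completion)      # _wrap is illustrative
        client.acompletion = self._wrap(client._acompletion)    # _wrap is illustrative

        client._memori_installed = True

    @staticmethod
    def _sdk_version(client):
        # Prefer litellm.__version__, fall back to importlib.metadata.
        return getattr(client, "__version__", None) or importlib.metadata.version("litellm")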

Tests

Unit tests (9 / 9 pass)

$ pytest tests/test_litellm_client.py -v --override-ini="addopts="
test_client_is_litellm_matches_module PASSED
test_client_is_litellm_rejects_other_modules PASSED
test_client_is_litellm_rejects_arbitrary_objects PASSED
test_client_is_litellm_accepts_submodule PASSED
test_litellm_register_requires_completion_attr PASSED
test_litellm_register_wraps_completion_and_acompletion PASSED
test_litellm_register_is_idempotent PASSED
test_litellm_register_sets_provider_metadata PASSED
test_litellm_registered_in_registry PASSED
9 passed in 0.73s

Live E2E

import litellm
from memori._config import Config
from memori.llm.clients import LiteLLM

config = Config()
LiteLLM(config).register(litellm)

resp = litellm.completion(
    model="anthropic/claude-sonnet-4-6",
    messages=[{"role": "user", "content": "Reply with the single word: pong."}],
    max_tokens=20,
)
# content: 'pong'
# model: 'claude-sonnet-4-6'
# usage prompt/completion: 16 / 5

The wrapped call returned 'pong' and the conversation flowed through Memori's invoke pipeline. The request was routed via Anthropic (shown by model='claude-sonnet-4-6') without any Memori code being
aware of which backing was selected; that is the LiteLLM value-add.

Why

Today the supported_clients matrix covers Anthropic / OpenAI / Google / xAI / PydanticAi / LangChain / Agno. Adding Cohere, Mistral, Together AI, and so on would each require writing another wrapper
from scratch. LiteLLM solves this once: it accepts model="<provider>/<name>" and resolves credentials from each backing's standard env vars. One wrapper file gives Memori coverage of every provider
LiteLLM supports.
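For illustration, assuming each provider's standard API-key env var is set (the model names here are examples, not part of the PR), the same wrapped entry point routes anywhere LiteLLM supports:

# Each call goes through the same wrapped litellm.completion; only the model
# prefix changes the backing provider. Credentials come from each provider's
# standard env var (OPENAI_API_KEY, MISTRAL_API_KEY, GROQ_API_KEY, ...).
for model in ("openai/gpt-4o-mini", "mistral/mistral-large-latest", "groq/llama-3.1-8b-instant"):
    resp = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": "hello"}],
    )
    print(model, "->", resp.choices[0].message.content)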

@RheagalFire force-pushed the feat/add-litellm-client branch from 0060b13 to 4993174 on May 1, 2026 at 18:52
devwdave (Contributor) commented May 4, 2026

Thanks for the update here. The LiteLLM wrapper approach looks reasonable overall, but CI is currently failing because tests/test_litellm_client.py imports litellm at module import time and the PR does not add litellm to the project/test dependencies.

In a clean environment this fails during test collection with:

ModuleNotFoundError: No module named 'litellm'

Could you either add litellm to the appropriate test/dev dependency path used by CI, or adjust the tests to avoid importing the real package at module scope and use fake module objects instead? Once that’s fixed and CI is green, I’m happy to take another look.
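One way to do the latter, as a non-prescriptive sketch (the test name is hypothetical; the Config/LiteLLM usage mirrors the Live E2E snippet above): a types.ModuleType stand-in named "litellm" satisfies both the client_is_litellm matcher and register()'s attribute checks, so collection never touches the real SDK.

import types

from memori._config import Config
from memori.llm.clients import LiteLLM

def _fake_litellm():
    # Stand-in for the real SDK exposing only what register() touches.
    mod = types.ModuleType("litellm")
    mod.__version__ = "0.0-test"
    mod.completion = lambda **kwargs: "sync-result"
    mod.acompletion = lambda **kwargs: "async-result"
    return mod

def test_register_wraps_without_real_sdk():
    fake = _fake_litellm()
    LiteLLM(Config()).register(fake)
    assert fake._memori_installed          # idempotency flag was set
    assert fake._completion is not None    # original backed up on the module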

