Description
Check for existing issues
- I have searched the existing issues and checked that my issue is not a duplicate.
What happened?
When using a model that goes through the Chat Completions → Responses API bridge (models with `mode: responses` in the model cost map), LiteLLM's internal `litellm_params` dict — including the proxy-enriched `metadata` — is spread verbatim into the upstream API request body. `additional_drop_params` cannot prevent this.
Two concrete consequences:
- Internal data forwarded to upstream providers: In the proxy path, `metadata` is enriched with fields like `user_api_key` (hashed), `user_api_key_user_email`, `requester_ip_address`, `headers`, budget/spend info, the full `UserAPIKeyAuth` object, etc. For providers that don't explicitly filter metadata (OpenAI, Azure, Manus), this data is included in the HTTP request body sent upstream.
- Requests fail on strict backends: Backends that reject unknown fields (e.g. the ChatGPT Codex backend) return 400 errors because they receive an unexpected `metadata` object (and other `litellm_params` keys like `proxy_server_request`, `model_info`, `preset_cache_key`, etc.). `additional_drop_params: ["metadata"]` cannot prevent this because `metadata` in the bridge path comes from `litellm_params`, not `optional_params`, bypassing the drop mechanism entirely.
Note: Some providers already work around this ad hoc — the chatgpt provider explicitly calls `pop("metadata")` in `transform_responses_api_request()`, and volcengine does the same with the comment "Ensure metadata never reaches provider" — but the filtering is not systematic.
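As a rough illustration of that per-provider pattern (function and field names simplified; the real transform methods take more parameters), the workaround amounts to popping the key just before the request body is serialized:

```python
# Minimal sketch of the ad-hoc per-provider workaround (hypothetical
# names): each affected provider strips "metadata" right before building
# the upstream request body, instead of LiteLLM filtering it centrally.

def transform_responses_api_request(request_data: dict) -> dict:
    body = dict(request_data)      # copy so the caller's dict is untouched
    body.pop("metadata", None)     # "Ensure metadata never reaches provider"
    return body


leaked = {
    "model": "gpt-5.2-codex",
    "input": "Hello",
    "metadata": {"user_api_key": "hashed...", "requester_ip_address": "10.0.0.1"},
}
clean = transform_responses_api_request(leaked)
assert "metadata" not in clean
```

Because each provider has to remember to do this, any provider that doesn't (OpenAI, Azure, Manus in the current code) forwards the internal fields untouched.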
Relationship to metadata → litellm_metadata migration (#6022)
This bug appears to be a direct consequence of the incomplete migration from `metadata` to `litellm_metadata` for LiteLLM-internal parameters.
The proxy already correctly uses `litellm_metadata` for endpoints listed in `LITELLM_METADATA_ROUTES` (including `/responses`, `/v1/messages`, batches, files), keeping the `metadata` field clean for user/OpenAI data. However, the `/v1/chat/completions` endpoint still stores internal proxy data under `metadata` (litellm_pre_call_utils.py:85):
```python
LITELLM_METADATA_ROUTES = ("batches", "/v1/messages", "responses", "files")

def _get_metadata_variable_name(request: Request) -> str:
    # ...
    if any(route in request.url.path for route in LITELLM_METADATA_ROUTES):
        return "litellm_metadata"
    return "metadata"  # <-- /v1/chat/completions hits this path
```

When a chat completion request is then routed through the Responses API bridge, the internal-data-laden `metadata` is spread into the upstream request via `litellm_params` (transformation.py:311). The bridge has no logic to:
- Separate user-provided metadata from LiteLLM-internal metadata
- Filter `litellm_params` keys before spreading into `request_data`
- Ensure `additional_drop_params` is respected in this path
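The failure mode can be reproduced with plain dicts (the key names below mirror the ones mentioned above; the actual bridge code differs):

```python
# Simplified model of the bridge path. additional_drop_params is applied
# to optional_params only, so keys that arrive via litellm_params (like
# the proxy-enriched "metadata") bypass it and land in the request body.

def apply_drop_params(optional_params: dict, drop: list) -> dict:
    # rough stand-in for the additional_drop_params mechanism
    return {k: v for k, v in optional_params.items() if k not in drop}

optional_params = {"temperature": 0.2}
litellm_params = {
    "metadata": {"user_api_key": "hashed", "requester_ip_address": "10.0.0.1"},
    "proxy_server_request": {"url": "/v1/chat/completions"},
    "preset_cache_key": None,
}

request_data = {
    "model": "gpt-5.2-codex",
    "input": "Hello",
    **apply_drop_params(optional_params, drop=["metadata"]),
    **litellm_params,  # spread verbatim: internal keys leak upstream
}

assert "metadata" in request_data            # the drop never saw it
assert "proxy_server_request" in request_data
```

Strict backends reject this body with a 400; permissive ones silently receive the internal fields.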
The infrastructure for separation already exists — `get_litellm_params()` accepts both `metadata` and `litellm_metadata` as independent parameters — but the bridge doesn't leverage it.
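A fix along those lines could split the enriched dict before the spread. The sketch below is hypothetical (it is not the actual `get_litellm_params()` signature, and the internal-key set is illustrative, drawn from the fields listed above):

```python
# Hypothetical separation: keep user-supplied metadata under "metadata"
# and proxy-internal data under "litellm_metadata", so only the
# user-facing part is spread into the upstream request body.

INTERNAL_KEYS = {
    "user_api_key",
    "user_api_key_user_email",
    "requester_ip_address",
    "headers",
}

def split_metadata(metadata: dict) -> tuple[dict, dict]:
    user_md = {k: v for k, v in metadata.items() if k not in INTERNAL_KEYS}
    litellm_md = {k: v for k, v in metadata.items() if k in INTERNAL_KEYS}
    return user_md, litellm_md

merged = {"custom_key": "value", "user_api_key": "hashed", "headers": {}}
user_md, litellm_md = split_metadata(merged)
assert user_md == {"custom_key": "value"}
assert "user_api_key" in litellm_md
```

With the internal half routed to `litellm_metadata`, the existing drop mechanism would also regain the ability to remove the user-facing `metadata` when requested.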
Steps to Reproduce
- Configure a model that triggers the Responses API bridge (any model with `mode: responses`).
- Call via proxy or SDK:
```python
import litellm

response = litellm.completion(
    model="gpt-5.2-codex",
    messages=[{"role": "user", "content": "Hello"}],
    metadata={"custom_key": "value"},
    additional_drop_params=["metadata"],  # has no effect
)
```
- Enable debug logging or inspect the network traffic. Observe that the upstream request body contains `metadata` with internal LiteLLM fields, plus other `litellm_params` keys.
- For backends that reject unknown fields (e.g. the Codex backend), the request fails with a 400 error.
Relevant log output
What part of LiteLLM is this about?
SDK (litellm Python package)
What LiteLLM version are you on ?
v1.81.7
Twitter / LinkedIn details
No response