
[Bug]: Internal metadata in litellm_params leaks to upstream API via Chat → Responses bridge #20419

@ZeroClover

Description


Check for existing issues

  • I have searched the existing issues and checked that my issue is not a duplicate.

What happened?

When using a model that goes through the Chat Completions → Responses API bridge (models with mode: responses in model cost map), LiteLLM's internal litellm_params dict — including the proxy-enriched metadata — is spread verbatim into the upstream API request body. additional_drop_params cannot prevent this.

Two concrete consequences:

  1. Internal data forwarded to upstream providers: In the proxy path, metadata is enriched with fields like user_api_key (hashed), user_api_key_user_email, requester_ip_address, headers, budget/spend info, the full UserAPIKeyAuth object, etc. For providers that don't explicitly filter metadata (OpenAI, Azure, Manus), this data is included in the HTTP request body sent upstream.

  2. Requests fail on strict backends: Backends that reject unknown fields (e.g. ChatGPT Codex backend) return 400 errors because they receive an unexpected metadata object (and other litellm_params keys like proxy_server_request, model_info, preset_cache_key, etc.). additional_drop_params: ["metadata"] cannot prevent this because metadata in the bridge path comes from litellm_params, not optional_params, bypassing the drop mechanism entirely.

Note: Some providers already work around this ad hoc. The chatgpt provider explicitly calls pop("metadata") in transform_responses_api_request(), and volcengine does the same with the comment "Ensure metadata never reaches provider", but the workaround is not systematic.
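The per-provider workaround pattern can be sketched as follows. The key list and function shape here are illustrative, not the actual provider code:

```python
# Sketch of the ad-hoc workaround: defensively pop LiteLLM-internal keys
# in the provider's request transform so they never reach the upstream API.
# INTERNAL_KEYS is an illustrative assumption, not LiteLLM's actual list.
INTERNAL_KEYS = ("metadata", "proxy_server_request", "model_info", "preset_cache_key")

def transform_responses_api_request(request_data: dict) -> dict:
    cleaned = dict(request_data)  # copy, so the caller's dict is untouched
    for key in INTERNAL_KEYS:
        cleaned.pop(key, None)  # "Ensure metadata never reaches provider"
    return cleaned
```

Because each provider must remember to do this in its own transform, any provider that omits the pop still forwards the internal fields.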

Relationship to the metadata → litellm_metadata migration (#6022)

This bug appears to be a direct consequence of the incomplete migration from metadata to litellm_metadata for LiteLLM-internal parameters.

The proxy already correctly uses litellm_metadata for endpoints listed in LITELLM_METADATA_ROUTES (including /responses, /v1/messages, batches, files), keeping the metadata field clean for user/OpenAI data. However, the /v1/chat/completions endpoint still stores internal proxy data under metadata (litellm_pre_call_utils.py:85):

LITELLM_METADATA_ROUTES = ("batches", "/v1/messages", "responses", "files")

def _get_metadata_variable_name(request: Request) -> str:
    # ...
    if any(route in request.url.path for route in LITELLM_METADATA_ROUTES):
        return "litellm_metadata"
    return "metadata"   # <-- /v1/chat/completions hits this path
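The routing above can be reproduced standalone (with a plain path string in place of the FastAPI Request object) to confirm that /v1/chat/completions falls through to the internal-metadata bucket:

```python
# Standalone replica of _get_metadata_variable_name's substring check,
# taking a plain path string instead of a FastAPI Request (illustrative).
LITELLM_METADATA_ROUTES = ("batches", "/v1/messages", "responses", "files")

def get_metadata_variable_name(path: str) -> str:
    if any(route in path for route in LITELLM_METADATA_ROUTES):
        return "litellm_metadata"
    return "metadata"

print(get_metadata_variable_name("/v1/responses"))         # litellm_metadata
print(get_metadata_variable_name("/v1/chat/completions"))  # metadata
```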

When a chat completion request is then routed through the Responses API bridge, the internal-data-laden metadata is spread into the upstream request via litellm_params (transformation.py:311). The bridge has no logic to:

  1. Separate user-provided metadata from LiteLLM-internal metadata
  2. Filter litellm_params keys before spreading into request_data
  3. Ensure additional_drop_params is respected in this path

The infrastructure for separation already exists — get_litellm_params() accepts both metadata and litellm_metadata as independent parameters — but the bridge doesn't leverage it.
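One possible shape of such filtering, as a minimal sketch: allowlist litellm_params keys before spreading them into the outgoing body. The helper name and allowlist below are assumptions for illustration, not LiteLLM's actual API:

```python
# Hypothetical bridge-side filter (not LiteLLM source): spread only
# upstream-relevant keys from litellm_params into the Responses API body,
# instead of spreading the whole dict verbatim. The allowlist is illustrative.
UPSTREAM_SAFE_KEYS = {"temperature", "top_p", "max_output_tokens", "user"}

def build_request_data(base: dict, litellm_params: dict) -> dict:
    safe = {k: v for k, v in litellm_params.items() if k in UPSTREAM_SAFE_KEYS}
    return {**base, **safe}

body = build_request_data(
    base={"model": "gpt-5.2-codex", "input": "Hello"},
    litellm_params={
        "temperature": 0.2,
        "metadata": {"user_api_key": "hashed-key"},  # internal, must not leak
        "proxy_server_request": {},                  # internal, must not leak
    },
)
print(sorted(body))  # ['input', 'model', 'temperature']
```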

Steps to Reproduce

  1. Configure a model that triggers the Responses API bridge (any model with mode: responses).

  2. Call via proxy or SDK:

import litellm

response = litellm.completion(
    model="gpt-5.2-codex",
    messages=[{"role": "user", "content": "Hello"}],
    metadata={"custom_key": "value"},
    additional_drop_params=["metadata"],  # has no effect
)
  3. Enable debug logging or inspect the network. Observe that the upstream request body contains metadata with internal LiteLLM fields, plus other litellm_params keys.

  4. For backends that reject unknown fields (e.g. the Codex backend), the request fails with a 400 error.
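The ineffectiveness of additional_drop_params can be modeled with a toy simulation (a simplification for illustration, not LiteLLM's code paths): the drop list is applied to optional_params, but in the bridge path metadata rides inside litellm_params, which is spread into the body afterwards:

```python
# Toy model of the bypass: drops apply to optional_params, while
# litellm_params (carrying the internal metadata) is spread unfiltered.
def build_body(optional_params, litellm_params, additional_drop_params):
    optional = {k: v for k, v in optional_params.items()
                if k not in additional_drop_params}
    return {**optional, **litellm_params}  # litellm_params spread verbatim

body = build_body(
    optional_params={"temperature": 0.2},
    litellm_params={"metadata": {"user_api_key": "hashed-key"}},
    additional_drop_params=["metadata"],
)
print("metadata" in body)  # True: the drop had no effect
```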

Relevant log output

What part of LiteLLM is this about?

SDK (litellm Python package)

What LiteLLM version are you on?

v1.81.7

Twitter / LinkedIn details

No response
