
LangSmith Prompt Hub: UI Model config shows base_url (OpenRouter) but pullPromptCommit manifest omits base_url / wrong secret id for same commit #37018

@sotarak

Description

Submission checklist

  • This is a bug, not a usage question.
  • I added a clear and descriptive title that summarizes this issue.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
  • This is not related to the langchain-community package.
  • I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.

Package (Required)

  • langchain
  • langchain-openai
  • langchain-anthropic
  • langchain-classic
  • langchain-core
  • langchain-model-profiles
  • langchain-tests
  • langchain-text-splitters
  • langchain-chroma
  • langchain-deepseek
  • langchain-exa
  • langchain-fireworks
  • langchain-groq
  • langchain-huggingface
  • langchain-mistralai
  • langchain-nomic
  • langchain-ollama
  • langchain-openrouter
  • langchain-perplexity
  • langchain-qdrant
  • langchain-xai
  • Other / not sure / general

Related Issues / PRs

No response

Reproduction Steps / Example Code (Python)

"""
Minimal repro: compare LangSmith UI model config vs Hub API manifest.

Prereqs:
  pip install "langsmith>=0.5"

Env:
  export LANGSMITH_API_KEY="lsv2_..."
  # Optional (non-default host):
  # export LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
"""

from __future__ import annotations

import json
import os
import sys
from typing import Any

from langsmith import Client


PROMPT_IDENTIFIER = os.environ.get("PROMPT_IDENTIFIER", "summary-session")
# Use full name if needed, e.g. "my-org/summary-session"


def _walk(obj: Any, path: str = "$") -> None:
    if isinstance(obj, dict):
        lc_id = obj.get("id")
        if lc_id == ["langchain", "chat_models", "openai", "ChatOpenAI"]:
            kwargs = obj.get("kwargs") or {}
            print(f"\n--- Found ChatOpenAI at {path} ---")
            print("model:", kwargs.get("model"))
            print("base_url:", kwargs.get("base_url"))
            print("openai_api_key:", kwargs.get("openai_api_key"))
            print("use_responses_api:", kwargs.get("use_responses_api"))
        for k, v in obj.items():
            _walk(v, f"{path}.{k}")
    elif isinstance(obj, list):
        for i, item in enumerate(obj):
            _walk(item, f"{path}[{i}]")


def main() -> int:
    if not os.environ.get("LANGSMITH_API_KEY"):
        print("Missing LANGSMITH_API_KEY", file=sys.stderr)
        return 2

    client = Client(
        api_key=os.environ["LANGSMITH_API_KEY"],
        api_url=os.environ.get("LANGSMITH_ENDPOINT"),
    )

    commit = client.pull_prompt_commit(
        PROMPT_IDENTIFIER,
        include_model=True,
        skip_cache=True,
    )

    print("prompt_identifier:", PROMPT_IDENTIFIER)
    print("commit_hash:", getattr(commit, "commit_hash", None))

    manifest = getattr(commit, "manifest", None)
    if manifest is None:
        print("No manifest on commit object", file=sys.stderr)
        return 3

    print("\n=== Raw manifest (JSON) ===")
    print(json.dumps(manifest, indent=2, default=str))

    print("\n=== ChatOpenAI kwargs scan ===")
    _walk(manifest)

    return 0


if __name__ == "__main__":
    raise SystemExit(main())

Error Message and Stack Trace (if applicable)

Description

Summary

When a prompt is configured in the LangSmith UI with an OpenAI-compatible provider, the OpenRouter base_url, and OPENROUTER_API_KEY, the Playground opened from that prompt works. However, Client.pullPromptCommit(...) for the same commit hash returns a manifest in which the hydrated ChatOpenAI block omits base_url, and the secret is resolved as OPENAI_API_KEY instead of OPENROUTER_API_KEY. Application code that trusts the hub manifest therefore hits the wrong endpoint or fails with a 401.

This suggests a desync between UI-stored model configuration and the serialized manifest returned by the Hub API for the same commit.


Environment

  • LangSmith region / deployment: (cloud / self-hosted — specify)
  • SDK: langsmith v0.5.23 (or your exact version)
  • LangChain hub pull: langchain/hub/node v1.3.4 (or your exact version)
  • OS / Node: (e.g. Node 22, macOS)

Steps to reproduce

  1. Create or open a private prompt repo, e.g. summary-session.
  2. In Model configuration, set:
    • Provider: OpenAI-compatible / OpenAI endpoint
    • Base URL: https://openrouter.ai/api/v1
    • API key env name: OPENROUTER_API_KEY
    • Model: e.g. google/gemini-3-flash-preview (or any OpenRouter model id)
  3. Save and create a new commit; note the commit hash shown in UI.
  4. In Playground opened from the prompt (not from an old trace), run — confirm it succeeds.
  5. In application code (or a minimal script), call:
import { Client } from "langsmith";

const client = new Client({
  apiKey: process.env.LANGSMITH_API_KEY,
  apiUrl: process.env.LANGSMITH_ENDPOINT, // if applicable
});

const commit = await client.pullPromptCommit("summary-session", {
  includeModel: true,
  skipCache: true,
});

console.log("commit_hash:", commit.commit_hash);
console.log(JSON.stringify(commit.manifest, null, 2));
  6. Inspect the ChatOpenAI / model kwargs inside commit.manifest.

Expected behavior

The manifest returned by pullPromptCommit for that commit should include the same runtime fields as configured in UI, at minimum:

  • base_url (or equivalent) = https://openrouter.ai/api/v1
  • Secret reference consistent with UI (e.g. OPENROUTER_API_KEY), or documented mapping rules if UI uses a different internal representation.
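For concreteness, the hydrated ChatOpenAI block we would expect for this configuration looks roughly like the following (our assumption about the serialization shape, mirroring the actual manifest pasted in the Evidence section):

```json
{
  "lc": 1,
  "type": "constructor",
  "id": ["langchain", "chat_models", "openai", "ChatOpenAI"],
  "kwargs": {
    "model": "google/gemini-3-flash-preview",
    "base_url": "https://openrouter.ai/api/v1",
    "openai_api_key": {
      "lc": 1,
      "type": "secret",
      "id": ["OPENROUTER_API_KEY"]
    }
  }
}
```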

Actual behavior

For commit hash <PASTE_COMMIT_HASH>, the manifest shows e.g.:

  • ChatOpenAI kwargs include model as configured but
  • base_url is missing
  • openai_api_key secret id is OPENAI_API_KEY (not OPENROUTER_API_KEY)

(attached JSON below)


Evidence (required)

1) Commit hash (single source of truth)

  • Commit hash from UI: <PASTE_FULL_HASH>
  • Commit hash from API response (commit.commit_hash): <PASTE_SAME_HASH>
    (They should match; if they differ, note both.)

2) Raw manifest from API

Paste the full redacted JSON from pullPromptCommit (redact sensitive fields if needed; do not paste real API keys):

{
  "lc": 1,
  "type": "constructor",
  "id": [
    "langchain_core",
    "runnables",
    "RunnableSequence"
  ],
  "kwargs": {
    "first": {
      "lc": 1,
      "type": "constructor",
      "id": [
        "langchain_core",
        "prompts",
        "structured",
        "StructuredPrompt"
      ],
      "kwargs": {
        "messages": [
          {
            "lc": 1,
            "type": "constructor",
            "id": [
              "langchain_core",
              "prompts",
              "chat",
              "SystemMessagePromptTemplate"
            ],
            "kwargs": {
              "prompt": {
                "lc": 1,
                "type": "constructor",
                "id": [
                  "langchain_core",
                  "prompts",
                  "prompt",
                  "PromptTemplate"
                ],
                "kwargs": {
                  "input_variables": [],
                  "template_format": "mustache",
                  "template": "You are an expert Customer Support Data Extractor. Your task is to analyze a raw chat transcript between a customer and an agent (or AI) and distill it into a highly concise, structured session summary. This summary acts as the short-term memory (Level 2) for an AI agent.\n\n### INSTRUCTIONS\n1. Analyze Intent: Identify the primary reason the customer initiated the conversation.\n2. Determine Outcome: Evaluate the final state of the conversation (e.g., Resolved, Escalated, Dropped, Pending, Unresolved).\n3. Extract Entities: Identify specific, actionable data points such as order IDs, phone numbers, email addresses, product names, or appointment dates. Exclude conversational filler.\n4. Summarize: Write a chronological, factual overview of the session. Focus purely on the core problem, actions taken, and the resolution. Do not include greetings or pleasantries.\n5. Language Constraint: You MUST translate and write the values for `summary`, `intent`, `outcome`, and `keyEntities` entirely in Vietnamese.\n\n### OUTPUT FORMAT\nYou must respond strictly with a valid JSON object matching the exact schema below. Do not include markdown code blocks (e.g., ```json) or any conversational text.\n\n{\n  \"summary\": \"String (A factual summary of the conversation in Vietnamese)\",\n  \"intent\": \"String (The primary goal of the user in Vietnamese)\",\n  \"outcome\": \"String (The final resolution or state in Vietnamese)\",\n  \"keyEntities\": [\"String\", \"String\"] // Array of strings. Use an empty array [] if no entities are found.\n}"
                }
              }
            }
          },
          {
            "lc": 1,
            "type": "constructor",
            "id": [
              "langchain_core",
              "prompts",
              "chat",
              "HumanMessagePromptTemplate"
            ],
            "kwargs": {
              "prompt": {
                "lc": 1,
                "type": "constructor",
                "id": [
                  "langchain_core",
                  "prompts",
                  "prompt",
                  "PromptTemplate"
                ],
                "kwargs": {
                  "input_variables": [
                    "messages"
                  ],
                  "template_format": "mustache",
                  "template": "Here is the raw chat transcript:\n<transcript>\n{{messages}}\n</transcript>"
                }
              }
            }
          }
        ],
        "input_variables": [
          "messages"
        ],
        "template_format": "mustache",
        "schema_": {
          "title": "SessionSummaryResult",
          "type": "object",
          "properties": {
            "summary": {
              "type": "string",
              "description": "A concise paragraph summarizing the interaction."
            },
            "intent": {
              "type": "string",
              "description": "The primary goal or reason the customer reached out."
            },
            "outcome": {
              "type": "string",
              "description": "The final status or result of the session."
            },
            "keyEntities": {
              "type": "array",
              "items": {
                "type": "string"
              },
              "description": "Important entities like product names, prices, dates, etc."
            }
          },
          "required": [
            "summary",
            "intent",
            "outcome",
            "keyEntities"
          ]
        }
      }
    },
    "last": {
      "lc": 1,
      "type": "constructor",
      "id": [
        "langchain_core",
        "runnables",
        "RunnableBinding"
      ],
      "kwargs": {
        "bound": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain_core",
            "runnables",
            "RunnableSequence"
          ],
          "kwargs": {
            "first": {
              "lc": 1,
              "type": "constructor",
              "id": [
                "langchain",
                "chat_models",
                "openai",
                "ChatOpenAI"
              ],
              "kwargs": {
                "model": "google/gemini-3-flash-preview",
                "use_responses_api": true,
                "openai_api_key": {
                  "lc": 1,
                  "type": "secret",
                  "id": [
                    "OPENAI_API_KEY"
                  ]
                }
              }
            },
            "last": {
              "lc": 1,
              "type": "constructor",
              "id": [
                "langchain_core",
                "output_parsers",
                "JsonOutputParser"
              ],
              "kwargs": {}
            }
          }
        },
        "kwargs": {}
      }
    },
    "metadata": {
      "lc_hub_owner": "-",
      "lc_hub_repo": "summary-session",
      "lc_hub_commit_hash": "d1cff76d6fbadfd146185721658f2bf611a68c64f98c6b6efc8fb70b697ff98e"
    }
  }
}

3) Screenshot of UI for the same commit

Attach screenshot showing:

  • Prompt name / repo
  • Selected commit d1cff7... (or full hash)
  • Model panel with Base URL = https://openrouter.ai/api/v1
  • API key name = OPENROUTER_API_KEY

4) Optional: metadata block from manifest

If present, include kwargs.metadata with lc_hub_commit_hash matching UI.


Impact

  • Manifests pulled with includeModel: true deserialize in the application to OpenAI-default routing unless clients apply manual env overrides (OPENAI_BASE_URL, secret aliasing).
  • Breaks “single source of truth” for model config in LangSmith Hub vs application runtime.
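As an interim client-side workaround (our own sketch, not an official API), the pulled manifest can be patched in place before deserialization, injecting the base_url and remapping the secret id that the manifest currently omits:

```python
"""Sketch of a client-side workaround: patch the pulled manifest dict
before deserializing it. `patch_manifest` is a hypothetical helper, not
part of the langsmith SDK."""

from typing import Any

CHAT_OPENAI_ID = ["langchain", "chat_models", "openai", "ChatOpenAI"]


def patch_manifest(obj: Any, base_url: str, secret_env: str) -> None:
    """Recursively find hydrated ChatOpenAI blocks and fix their kwargs in place."""
    if isinstance(obj, dict):
        if obj.get("id") == CHAT_OPENAI_ID:
            kwargs = obj.setdefault("kwargs", {})
            # Inject the base_url that the Hub manifest omits.
            kwargs.setdefault("base_url", base_url)
            # Remap the secret reference to the env var configured in the UI.
            secret = kwargs.get("openai_api_key")
            if isinstance(secret, dict) and secret.get("type") == "secret":
                secret["id"] = [secret_env]
        for v in obj.values():
            patch_manifest(v, base_url, secret_env)
    elif isinstance(obj, list):
        for item in obj:
            patch_manifest(item, base_url, secret_env)
```

Calling patch_manifest(commit.manifest, "https://openrouter.ai/api/v1", "OPENROUTER_API_KEY") after pullPromptCommit restores the intended routing, but it hard-codes in the client exactly the configuration the Hub was supposed to be the source of truth for.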

Request

Please confirm whether this is:

  • A) Hub API not serializing base_url / custom endpoint into manifest for OpenAI-compatible configs, or
  • B) UI displaying config that is not yet written into the commit manifest, or
  • C) expected behavior with documented workaround.

If (A) or (B), we’d like a fix or explicit contract for how OpenRouter / custom base URL must be stored so pullPromptCommit round-trips correctly.


Checklist before submit

  • Same prompt identifier in UI and in code (summary-session vs owner/summary-session documented)
  • skipCache: true on pullPromptCommit to rule out client cache
  • LangSmith API key is for the same workspace as the UI
  • No PII / secrets in screenshots or JSON

System Info

Runtime

  • OS: macOS Darwin 24.6.0
  • Node.js: v22.14.0
  • Package manager: pnpm 10.12.1

Dependencies (workspace root, pnpm list --depth 0)

  • langsmith: 0.5.23
  • langchain: 1.3.4
  • @langchain/core: 1.1.41

Package.json ranges (reference)

  • langsmith: ^0.5.25
  • langchain: ^1.3.4
  • @langchain/core: ^1.1.41

Service

  • App: service-agent (NestJS), pulls prompts via langchain/hub/node + langsmith Client.pullPromptCommit

LangSmith

  • Deployment: LangSmith Cloud (EU)

Metadata

Assignees

No one assigned

Labels

bug (Related to a bug, vulnerability, unexpected error with an existing feature), external, langchain (`langchain` package issues & PRs)
