---
title: "Trace API POST Requests: Roo Code + Braintrust AI Guide"
slug: how-to-trace-and-log-roo-code-api-post-requests-with-braintrust-ai-proxy
description: Learn how to configure Roo Code with Braintrust AI Proxy for enhanced API request tracing, caching, and OpenTelemetry monitoring - a step-by-step implementation guide.
date: 2025-05-19T09:55:00Z
tags:
  - AI
  - Roo Code
categories:
  - Software Development
thumbnail:
  url: /img/blog/how-to-trace-and-log-roo-code-api-post-requests-with-braintrust-ai-proxy.jpg
  author: d.o. (flux)
draft: false
---

Here’s how to configure **Roo Code**’s UI to route all of your OpenAI-compatible requests through the Braintrust AI Proxy, with transparent caching, built-in tracing, and full OpenTelemetry support, without writing a single line of code. By adjusting the **API Provider**, **Base URL**, and **Custom Headers** fields in Roo Code’s settings panel (shown in the screenshots below), you unlock:

* **Cache modes** (`auto`, `always`, `never`) that let you reuse deterministic outputs when `temperature=0`
* **Tracing headers** (`x-bt-parent: project_id:<YOUR_PROJECT_ID>`) that generate root spans in Braintrust’s monitoring dashboard
* **OpenTelemetry export** via OTLP endpoints, compatible with any observability platform (including Traceloop’s OpenLLMetry)
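The same proxied call works from any OpenAI-compatible client, not just Roo Code. Here is a minimal Python sketch using only the standard library; the model name and the placeholder key/project ID are assumptions, so substitute your own values:

```python
import json
import urllib.request

def build_proxy_request(api_key: str, project_id: str) -> urllib.request.Request:
    """Build a chat-completions POST aimed at the Braintrust AI Proxy."""
    body = json.dumps({
        "model": "gpt-4o-mini",   # any model the proxy supports
        "temperature": 0,          # deterministic -> eligible for caching
        "messages": [{"role": "user", "content": "Hello!"}],
    }).encode("utf-8")
    return urllib.request.Request(
        "https://api.braintrust.dev/v1/proxy/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
            "x-bt-use-cache": "auto",                   # transparent caching
            "x-bt-parent": f"project_id:{project_id}",  # root span in Braintrust
        },
        method="POST",
    )

req = build_proxy_request("<YOUR_BRAINTRUST_API_KEY>", "<YOUR_PROJECT_ID>")
print(req.full_url)
```

Once real credentials are in place, sending it is just `urllib.request.urlopen(req)`; Roo Code does the equivalent internally after the configuration below.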

## Configuring Roo Code’s UI

1. **API Provider**

   * Select **OpenAI Compatible** from the dropdown. This tells Roo Code to treat the endpoint exactly like the OpenAI API, so you can continue using the same SDK calls internally.

2. **Base URL**: `https://api.braintrust.dev/v1/proxy`

   * This single endpoint proxies requests across multiple model providers while adding caching and observability. [List of supported models and providers](https://www.braintrust.dev/docs/guides/proxy#list-of-supported-models-and-providers)

3. **OpenAI API Key**

   * Paste your **Braintrust** API key here.
   * Whenever you include tracing headers, Braintrust uses this key to authenticate and associate spans with your project.

   {{< image class="rounded lightbox" src="/img/blog/RooCodeBraintrustTrace/RooCodeBraintrustTraceCacheConfiguration.png" caption="Click the image to enlarge and view the full configuration: insert the Braintrust proxy endpoint into Roo Code's Base URL field, then add the Braintrust-specific headers to enable tracing and caching." >}}

4. **Custom Headers**

   * Click **+** and add the following headers:

     * `x-bt-use-cache` = `auto`
     * `x-bt-parent` = `project_id:<YOUR_PROJECT_ID>`
   * These instruct the proxy to cache deterministic calls and to start a new trace under your Braintrust project.

   {{< image class="rounded lightbox" src="/img/blog/RooCodeBraintrustTrace/BraintrustProjectId.png" caption="Find your project ID in the Braintrust web UI." >}}

5. **Enable Streaming** (optional)

   * Check **Enable streaming** if you want to stream partial responses back to Roo Code’s editor.

6. **Other toggles**

   * Leave **Use Azure** and **Enable R1 parameters** unchecked unless explicitly needed by your custom provider.

---
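Taken together, the six steps above boil down to a handful of values. Sketched as a plain Python dict for reference (the key names here are illustrative, not Roo Code’s internal settings schema):

```python
# Summary of the Roo Code panel settings described above.
# Key names are illustrative; the placeholder values must be replaced.
roo_code_settings = {
    "api_provider": "OpenAI Compatible",
    "base_url": "https://api.braintrust.dev/v1/proxy",
    "openai_api_key": "<YOUR_BRAINTRUST_API_KEY>",  # Braintrust key, not OpenAI's
    "custom_headers": {
        "x-bt-use-cache": "auto",
        "x-bt-parent": "project_id:<YOUR_PROJECT_ID>",
    },
    "enable_streaming": True,   # optional
    "use_azure": False,
    "enable_r1_params": False,
}
print(sorted(roo_code_settings["custom_headers"]))
```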

## How Caching Works

Braintrust supports three cache modes, set via the `x-bt-use-cache` header:

* **`auto`**: caches only when `temperature=0` or a deterministic `seed` is provided (the default mode)
* **`always`**: caches every supported request (e.g. `/chat/completions`, `/completions`, `/embeddings`) regardless of randomness settings
* **`never`**: bypasses the cache entirely, which is useful for debugging or non-deterministic experiments
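The `auto` rule can be sketched as a tiny predicate. This is only an illustration of the decision described above, not Braintrust’s actual implementation:

```python
def auto_cache_eligible(params: dict) -> bool:
    """Mimic the `auto` cache mode: cache only deterministic requests,
    i.e. temperature == 0 or an explicit seed is supplied."""
    return params.get("temperature") == 0 or params.get("seed") is not None

print(auto_cache_eligible({"temperature": 0}))                # deterministic
print(auto_cache_eligible({"temperature": 0.7}))              # random, not cached
print(auto_cache_eligible({"temperature": 0.7, "seed": 42}))  # seeded, cached
```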

> **Pro tip:** To get a cache hit, set **Temperature** to **0** in your Roo Code run configuration. This makes the output deterministic and eligible for replay.

---

## How Tracing Works

Every request that includes `x-bt-parent` emits a root span into Braintrust’s tracing backend:

* **Header**: `x-bt-parent: project_id:<YOUR_PROJECT_ID>`
* **Key**: your **Braintrust API key** in the **OpenAI API Key** field
* **Outcome**: each API call is logged as an LLM span under your project, visible in the Braintrust dashboard

This gives you full visibility into prompt inputs, response tokens, timings, and cache hits, all without modifying any application code.
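If you ever assemble the header value programmatically, a small helper keeps the `project_id:` prefix consistent. The helper name is ours; only the header name and value format come from the configuration above:

```python
def bt_parent_header(project_id: str) -> dict:
    """Format the x-bt-parent header that roots each trace in a Braintrust project."""
    if not project_id:
        raise ValueError("project_id is required to start a trace")
    return {"x-bt-parent": f"project_id:{project_id}"}

print(bt_parent_header("1234abcd"))
```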

{{< image class="rounded lightbox" src="/img/blog/RooCodeBraintrustTrace/BraintrustRooCodeChatTrace.png" caption="Braintrust chat trace" >}}

## Further Reading

* **Braintrust AI Proxy Guide** (caching, headers, endpoints): {{< link "https://github.com/braintrustdata/braintrust-proxy" >}}Braintrust AI Proxy{{< /link >}}
* **Traceloop OpenLLMetry** (semantic conventions for LLMs): {{< link "https://github.com/traceloop/openllmetry" >}}Traceloop OpenLLMetry{{< /link >}}