Summary
parallel_chat() and parallel_chat_structured() currently return different kinds of objects:
- parallel_chat() returns a list of Chat objects, NULLs, or error objects
- parallel_chat_structured() returns a structured data object, typically a data frame
This makes downstream code more complex than it needs to be, because callers have to branch on which function they used and handle two different result shapes.
Why this matters
The two functions are very close in intent:
- both submit multiple prompts in parallel
- both can fail on individual rows
- both may need per-row recovery logic
But because the return shapes differ, callers cannot treat them as drop-in alternatives.
That makes it harder to:
- write generic post-processing
- salvage parse failures
- retry only failing rows
- keep uniform logging and metadata handling
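To make the asymmetry concrete, here is a sketch of the branching callers currently need, based on the two return shapes described above. The function names are from ellmer; the success/failure handling is illustrative, not a pattern the package itself recommends.

```r
library(ellmer)

chat <- chat_openai()
prompts <- list("Name a prime number.", "Name a planet.")

# Path 1: parallel_chat() yields a list of Chat objects, NULLs, or error
# objects, so failures can be detected per element ...
results <- parallel_chat(chat, prompts)
ok <- !vapply(results, function(res) {
  is.null(res) || inherits(res, "error")
}, logical(1))

# ... while path 2: parallel_chat_structured() yields a data frame, so the
# same "which rows failed?" question needs entirely different code.
type <- type_object(answer = type_string())
df <- parallel_chat_structured(chat, prompts, type = type)
```

Any generic helper has to carry both branches, which is exactly the duplication a uniform return shape would remove.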
Proposed direction
A more uniform design would be for both functions to return a list of Chat objects, with the raw assistant text preserved even when structured parsing fails.
For parallel_chat_structured(), that would mean:
- returning the raw unprocessed string on parse failure
- keeping enough metadata to allow ad hoc parsing or recovery later
- not collapsing the result into a data frame too early
That way callers could inspect and salvage failed turns instead of losing the raw output.
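As a sketch of the recovery flow this would enable: suppose each result carried the parsed value when parsing succeeded and the raw assistant text when it failed. The list-of-results shape and the "structured"/"raw_text" attributes below are hypothetical, not current ellmer behaviour.

```r
library(jsonlite)

salvage <- function(results) {
  lapply(results, function(res) {
    parsed <- attr(res, "structured")    # hypothetical: parsed value, or NULL
    if (!is.null(parsed)) return(parsed)
    raw <- attr(res, "raw_text")         # hypothetical: preserved raw string
    # Ad hoc repair, e.g. strip markdown fences before re-parsing
    cleaned <- trimws(gsub("```json|```", "", raw))
    tryCatch(fromJSON(cleaned), error = function(e) NULL)
  })
}
```

The point is not the specific repair logic but that, with the raw string preserved, a repair step like this is possible at all.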
Even better
An even cleaner API would be to let parallel_chat() accept an optional type argument:
- if type is absent, return normal chat outputs
- if type is present, return structured outputs
That would make the structured and unstructured paths true drop-ins with a single entry point.
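A sketch of what the unified entry point might look like. The type argument on parallel_chat() and the return shapes here are assumptions about the proposed future API, not current ellmer behaviour; only the type_object()/type_string() constructors are existing ellmer functions.

```r
library(ellmer)

chat <- chat_openai()
prompts <- list("Give a prime number.", "Give a planet name.")

# Without type: behaves like today's parallel_chat()
chats <- parallel_chat(chat, prompts)

# With type (proposed): returns structured outputs, but still one result
# per prompt, so failed rows can be inspected and retried individually
type <- type_object(value = type_string())
structured <- parallel_chat(chat, prompts, type = type)
```

With a single entry point, retry logic could simply re-submit the failing prompts through the same function, with or without type.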
Example
A model such as https://openrouter.ai/openai/gpt-oss-20b:free can return JSON-like output that is close to valid but not always perfectly parseable. In those cases, preserving the raw string would let callers recover the response with custom parsing logic.
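A minimal sketch of that kind of recovery, assuming the raw assistant text is available on parse failure. The raw string below is a made-up example of the near-valid output described above (markdown-fenced, with a trailing comma).

```r
library(jsonlite)

raw <- '```json\n{"answer": "42",}\n```'

repair_json <- function(x) {
  x <- gsub("```json|```", "", x)   # strip markdown fences
  x <- gsub(",\\s*}", "}", x)       # drop trailing commas before a brace
  fromJSON(trimws(x))
}

repair_json(raw)                    # -> list(answer = "42")
```

Today, parallel_chat_structured() would surface such a row only as a failure; with the raw string preserved, this two-line repair is enough to keep the row.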