
Commit c4676ce

docs: language model request doc
1 parent e8502c5 commit c4676ce

File tree

4 files changed: +223 -9 lines changed


apps/docs/content/docs/(get-started)/basic-usage.mdx

Lines changed: 3 additions & 3 deletions
```diff
@@ -18,7 +18,7 @@ cargo add aisdk --features openai # anthropic, google or any other provider
 To generate text, you can use the `LanguageModelRequest` builder and call the `generate_text()` method. This method returns a `GenerateTextResponse` which contains the generated text and other information such as token usage statistics, stop reason, and tool call results

-you can find more info about generating text [here](/docs/concepts/generating-text).
+you can find more info about generating text [here](/docs/concepts/language-model-request#generate_text).

 <CustomCodeTabs tabsData={[
 {
@@ -45,7 +45,7 @@ you can find more info about generating text [here](/docs/concepts/generating-te
 To stream text, you can use the `LanguageModelRequest` builder and call the `stream_text()` method. This method returns a `StreamTextResponse` which contains the generated text and other information such as token usage statistics, stop reason, and tool call results.

-You can find more info about streaming text [here](/docs/concepts/generating-text#streaming-text).
+You can find more info about streaming text [here](/docs/concepts/language-model-request#stream_text).

 <CustomCodeTabs tabsData={[
 {
@@ -68,4 +68,4 @@ You can find more info about streaming text [here](/docs/concepts/generating-tex
 },
 ]} />

-These examples cover the basic usage of the AI SDK. To explore more advanced capabilities, dive into the core concepts such as [Tool Calling](/docs/concepts/tools) and [Agents](/docs/concepts/agents) or take a deeper look at text generation features [generating text](/docs/concepts/generating-text) and [streaming text](/docs/concepts/generating-text#streaming-text).
+These examples cover the basic usage of the AI SDK. To explore more advanced capabilities, dive into the core concepts such as [Tool Calling](/docs/concepts/tools) and [Agents](/docs/concepts/agents) or take a deeper look at text generation features [generating text](/docs/concepts/language-model-request#generate_text) and [streaming text](/docs/concepts/language-model-request#stream_text).
```

apps/docs/content/docs/concepts/generating-text.mdx

Lines changed: 0 additions & 5 deletions
This file was deleted.
apps/docs/content/docs/concepts/language-model-request.mdx

Lines changed: 219 additions & 0 deletions
@@ -0,0 +1,219 @@
---
title: Language Model Request
---

The `LanguageModelRequest` is the central interface for interacting with Large Language Models (LLMs) in `aisdk`. It provides a type-safe, fluent API for configuring and executing generation tasks.

## Model Providers

To interact with a model, you first need a **Provider**. Providers act as a bridge to different AI services (like OpenAI, Google Gemini, or Anthropic). You can see the full list of [Available Providers](/docs#model-providers).

### Enable the provider of your choice

For this example, we'll use OpenAI as our provider, but you can enable any provider you want by using the feature flag of the corresponding [Provider](/docs/#model-providers).

```bash
cargo add aisdk --features openai # anthropic, google or any other provider.
```

All providers share a consistent interface and can be initialized using dedicated model methods:

```rust
let openai = OpenAI::gpt_5();
```

This initializes the OpenAI provider with the GPT-5 model and its full range of capabilities (tool calling, structured output, etc.).

### The Capability System

Instead of discovering "Model Unsupported" errors at runtime, AISDK leverages Rust's type system to enforce model-specific constraints at compile time. This Capability System ensures that every request you build is guaranteed to be valid for the selected model before your code even runs.

The Capability System guarantees the following:

- **Tool calling** is only available on models that support it.
- **Reasoning** is only available on models that support it.
- **Structured output** is only available on models that support it.
- **Multimodal I/O**: image, audio, and video input/output are only available on models that support them.

Example:

```rust
// ✅ THIS WORKS: GPT-5 supports tool calls
let request = LanguageModelRequest::builder()
    .model(OpenAI::gpt_5())
    .with_tool(my_tool) // Valid!
    .build();

// ❌ THIS FAILS TO COMPILE: O1 Mini doesn't support tool calls
let request = LanguageModelRequest::builder()
    .model(OpenAI::o1_mini())
    .with_tool(my_tool) // ERROR: the trait `ToolCallSupport` is not implemented...
    .build();
```
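The gating shown above can be reproduced with ordinary marker traits. Here is a minimal, self-contained sketch of the idea; the `ToolCallSupport`, `Gpt5`, and `O1Mini` names mirror the example but are standalone stand-ins, not the real `aisdk` types:

```rust
// Marker trait: only models that support tool calling implement it.
trait ToolCallSupport {}

struct Gpt5;
struct O1Mini;

impl ToolCallSupport for Gpt5 {} // GPT-5 opts in; O1 Mini does not.

struct Builder<M> {
    model: M,
    tools: Vec<String>,
}

impl<M> Builder<M> {
    fn model(model: M) -> Self {
        Builder { model, tools: Vec::new() }
    }
}

// `with_tool` only exists when the model type implements the marker trait,
// so `Builder::model(O1Mini).with_tool(..)` is rejected by the compiler.
impl<M: ToolCallSupport> Builder<M> {
    fn with_tool(mut self, name: &str) -> Self {
        self.tools.push(name.to_string());
        self
    }
}

fn main() {
    let b = Builder::model(Gpt5).with_tool("my_tool");
    assert_eq!(b.tools, vec!["my_tool".to_string()]);
    // let bad = Builder::model(O1Mini).with_tool("my_tool"); // would not compile
}
```

The "capability" is just a trait bound on the method's `impl` block, so unsupported combinations never reach runtime.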

## The Type-State Builder

To ensure requests are constructed correctly, `LanguageModelRequest` uses a **type-state builder pattern**. This catches configuration errors at compile time by enforcing a specific order of operations.

The builder flows through several stages:

1. **ModelStage**: Initialize the builder and specify the model using `.model(M)`.
2. **SystemStage**: (Optional) Provide context or instructions using `.system("...")`.
3. **ConversationStage**: Provide the core input using either `.prompt("...")` or `.messages(msgs)`.
   > These two are **mutually exclusive**. The type-state builder ensures that once you choose one, the other becomes unavailable at compile time.
4. **OptionsStage**: Configure optional parameters like temperature or tools, then call `.build()`.

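The staged flow above is the classic type-state pattern. Here is a minimal, standalone sketch of how choosing `prompt` can make `messages` unavailable; the stage names and types are illustrative stand-ins, not the actual `aisdk` builder:

```rust
use std::marker::PhantomData;

// Zero-sized stage markers.
struct NeedsInput;
struct Ready;

struct Builder<Stage> {
    system: Option<String>,
    input: Option<String>,
    _stage: PhantomData<Stage>,
}

impl Builder<NeedsInput> {
    fn new() -> Self {
        Builder { system: None, input: None, _stage: PhantomData }
    }
    fn system(mut self, s: &str) -> Self {
        self.system = Some(s.to_string());
        self
    }
    // Choosing `prompt` moves the builder to `Ready`, where neither
    // `prompt` nor `messages` exists any more: they are mutually exclusive.
    fn prompt(self, p: &str) -> Builder<Ready> {
        Builder { system: self.system, input: Some(p.to_string()), _stage: PhantomData }
    }
    fn messages(self, history: &[&str]) -> Builder<Ready> {
        Builder { system: self.system, input: Some(history.join("\n")), _stage: PhantomData }
    }
}

impl Builder<Ready> {
    fn build(self) -> String {
        format!("{}|{}", self.system.unwrap_or_default(), self.input.unwrap())
    }
}

fn main() {
    let req = Builder::new().system("pirate").prompt("hello").build();
    assert_eq!(req, "pirate|hello");
    // Builder::new().prompt("a").messages(&["b"]); // does not compile: no `messages` on Ready
}
```

Because each stage is a distinct type, calling methods out of order is a type error rather than a runtime failure.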
## Available Methods

### Input Methods

```rust
// System prompt/instructions
.system("You are a helpful assistant that speaks like a pirate.")

// Simple text prompt
.prompt("What is the capital of France?")

// Full conversation history using Message::builder()
let messages = Message::builder()
    .user("Oh great and wise Borrow Checker, why do you reject my humble reference?")
    .assistant("Your reference's lifespan is shorter than a mayfly's existence in this scope.")
    .user("But I promised to use 'unsafe' only on weekends!")
    .assistant("Safety is a lifestyle, not a part-time job.")
    .build();

builder.messages(messages)
```

- `system(impl Into<String>)`: Sets the system prompt (available in `SystemStage`).
- `prompt(impl Into<String>)`: Sets a simple user prompt.
- `messages(Messages)`: Sets a full conversation history for multi-turn interactions.

### Generation Configuration

Parameters that accept a `u32` (0-100) are automatically scaled: `0` is the minimum and `100` is the maximum. These values are converted to **provider-specific configurations** under the hood.

```rust
let request = LanguageModelRequest::builder()
    .model(OpenAI::gpt_5())
    .prompt("Verify this complex algorithm.")
    .temperature(20) // More deterministic
    .top_p(95)
    .max_retries(5)
    .build();
```

- `temperature(u32)`: Controls randomness (0-100).
- `top_p(u32)`: Nucleus sampling (0-100).
- `top_k(u32)`: Limits the model to the top-K most likely tokens.
- `seed(u32)`: Sets a random seed for deterministic outputs.
- `max_retries(u32)`: Number of times to retry failed requests.
- `frequency_penalty(f32)`: Reduces repetition.
- `stop_sequences(Vec<String>)`: Sequences that trigger early termination.

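The 0-100 scaling can be pictured as a linear map onto each provider's native range. A hedged sketch, assuming for illustration a provider whose native temperature range is 0.0-2.0; the actual conversion used by `aisdk` may differ:

```rust
/// Map a 0-100 user value onto a provider's native [min, max] range.
/// Values above 100 are clamped rather than overflowing the native range.
fn scale(value: u32, min: f32, max: f32) -> f32 {
    let v = value.min(100) as f32 / 100.0;
    min + v * (max - min)
}

fn main() {
    // Hypothetical provider: native temperature 0.0..=2.0, top_p 0.0..=1.0.
    assert_eq!(scale(0, 0.0, 2.0), 0.0);   // minimum
    assert_eq!(scale(50, 0.0, 2.0), 1.0);  // midpoint
    assert_eq!(scale(100, 0.0, 2.0), 2.0); // maximum
    assert_eq!(scale(150, 0.0, 1.0), 1.0); // clamped
}
```

A uniform 0-100 scale lets the same request work across providers whose native parameter ranges differ.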
### Advanced Features

- `with_tool(Tool)`: Registers a tool. See [Tool Calling](/docs/concepts/tools).
- `schema<T: JsonSchema>()`: Configures the model for [structured output](/docs/concepts/structuredoutput).
- `reasoning_effort(ReasoningEffort)`: Sets the reasoning level for supported models.

## Advanced Orchestration (Agentic Systems)

For complex use cases like building [Agents](/docs/concepts/agents), AISDK provides deep hooks into the request lifecycle and granular access to every interaction step.

- **Lifecycle Hooks**: Methods like `on_step_start`, `on_step_finish`, and `stop_when` allow you to intercept and control the model's decision-making loop.
- **Steps**: Every interaction (including intermediate tool calls) is captured as a `Step`, providing its own `usage()`, `messages()`, and `tool_calls()`.

Detailed coverage of these features is available in the [Agents](/docs/concepts/agents#lifecylehooks) documentation.

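These hooks can be pictured as callbacks consulted inside the step loop. Below is a standalone sketch of that control flow; the `Step` struct and hook signatures are illustrative, not `aisdk`'s actual API:

```rust
// Illustrative stand-in for a completed interaction step.
struct Step {
    index: usize,
    tool_calls: usize,
}

/// Run a step loop, invoking a finish hook after every step and
/// stopping early as soon as the `stop_when` predicate fires.
fn run_steps(
    max_steps: usize,
    on_step_finish: &mut dyn FnMut(&Step),
    stop_when: &dyn Fn(&Step) -> bool,
) -> Vec<Step> {
    let mut steps = Vec::new();
    for index in 0..max_steps {
        // A real loop would call the model here; we fabricate a step.
        let step = Step { index, tool_calls: index % 2 };
        on_step_finish(&step);
        let stop = stop_when(&step);
        steps.push(step);
        if stop {
            break;
        }
    }
    steps
}

fn main() {
    let mut finished = 0;
    let steps = run_steps(
        10,
        &mut |_s| finished += 1,
        &|s| s.index == 3, // stop after the fourth step
    );
    assert_eq!(steps.len(), 4);
    assert_eq!(finished, 4);
}
```

The key point is that the hooks observe every intermediate step, not just the final response.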
## Execution

After configuring your options, you **must** call `.build()` to finalize the request before execution.

### `generate_text()`

A non-streaming method that returns the final result after all steps are completed.

```rust
let mut response = request.generate_text().await?;
```

### `stream_text()`

AISDK provides real-time updates via a stream of chunks. Before initiating the stream, it is useful to understand the possible chunk types:

**LanguageModelStreamChunkType:**

- `Start`: Indicates the beginning of the stream.
- `Text(String)`: A partial text delta.
- `Reasoning(String)`: A partial reasoning delta.
- `ToolCall(String)`: A partial tool call argument delta.
- `End(AssistantMessage)`: The final terminal message containing the full result and usage.

Consume the stream using a loop:

```rust
let response = request.stream_text().await?;
let mut stream = response.stream;

while let Some(chunk) = stream.next().await {
    if let LanguageModelStreamChunkType::Text(text) = chunk {
        print!("{}", text);
    }
}
```

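The chunk variants listed above might be modeled as a plain enum. A self-contained sketch that mirrors the documented variants (not the SDK's actual definition; `AssistantMessage` is stubbed as a `String` here):

```rust
// Illustrative stand-in mirroring the documented chunk variants.
enum LanguageModelStreamChunkType {
    Start,
    Text(String),
    Reasoning(String),
    ToolCall(String),
    End(String), // real type: End(AssistantMessage)
}

/// Collect only the text deltas, as the streaming loop in the docs does.
fn collect_text(chunks: Vec<LanguageModelStreamChunkType>) -> String {
    let mut out = String::new();
    for chunk in chunks {
        if let LanguageModelStreamChunkType::Text(text) = chunk {
            out.push_str(&text);
        }
    }
    out
}

fn main() {
    use LanguageModelStreamChunkType::*;
    let chunks = vec![
        Start,
        Text("Hello, ".into()),
        Reasoning("thinking...".into()),
        Text("world!".into()),
        End("Hello, world!".into()),
    ];
    assert_eq!(collect_text(chunks), "Hello, world!");
}
```

Matching on the variant lets you route text, reasoning, and tool-call deltas to different handlers during streaming.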
## Response Types Reference

Both `GenerateTextResponse` and `StreamTextResponse` provide methods to inspect the final state.

> [!NOTE]
> Methods on `StreamTextResponse` are **async** and should be called after the stream is consumed to get final metadata. Detailed information is available in the API reference.

| Method | Description |
| :--- | :--- |
| `text()` | The text content of the last assistant message. |
| `content()` | The content of the last assistant message (excluding reasoning). |
| `usage()` | Aggregated token usage across all steps. |
| `messages()` | Returns all messages in the conversation history. |
| `stop_reason()` | The reason generation stopped (e.g., `Finish`, `Hook`, `Error`). |
| `options()` | Access the configuration options that generated this response. |
| `steps()` | Returns all `Step`s in chronological order. |
| `last_step()` | Returns the most recent `Step`. |
| `step(id)` | Returns a specific `Step` by its ID. |
| `tool_calls()` | All tool calls requested during the entire process. |
| `tool_results()` | All tool results obtained during the entire process. |

### Non-Streaming Example

```rust
let response = LanguageModelRequest::builder()
    .model(OpenAI::gpt_5())
    .prompt("What is 2+2?")
    .build()
    .generate_text()
    .await?;

println!("Text: {}", response.text().unwrap());
println!("Usage: {:?}", response.usage());
println!("Stop Reason: {:?}", response.stop_reason());
println!("Final Content: {:?}", response.content());
```

### Streaming Example

```rust
let response = LanguageModelRequest::builder()
    .model(OpenAI::gpt_5())
    .prompt("Write a short story.")
    .build()
    .stream_text()
    .await?;

let mut stream = response.stream;
while let Some(chunk) = stream.next().await {
    if let LanguageModelStreamChunkType::Text(text) = chunk {
        print!("{}", text);
    }
}

// Access final metadata after stream consumption
let final_usage = response.usage().await;
let steps = response.steps().await;
let reason = response.stop_reason().await;
```

apps/docs/content/docs/concepts/meta.json

Lines changed: 1 addition & 1 deletion
```diff
@@ -2,5 +2,5 @@
   "title": "Concepts",
   "icon": "SquareLibrary",
   "defaultOpen": true,
-  "pages": ["generating-text", "tools", "prompts"]
+  "pages": ["language-model-request", "tools", "prompts"]
 }
```
