Bug Description
When a user sends a text message while the agent is speaking, the interrupted assistant message is missing from
chat_ctx for the next LLM call.
The default text input callback in voice/room_io/types.py calls sess.interrupt() without awaiting the returned future, then immediately calls generate_reply(). Because generate_reply() copies chat_ctx before the interrupt cleanup has a chance to commit the truncated assistant message, the LLM never sees what the assistant was saying when it was interrupted.
def _default_text_input_cb(sess, ev):
    sess.interrupt()  # returns Future[None], not awaited
    sess.generate_reply(user_input=ev.text)  # copies chat_ctx before cleanup commits

Voice interruptions don't have this problem because STT turn detection introduces enough delay for the cleanup to finish.
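To see the ordering problem in isolation, here is a minimal, self-contained asyncio sketch. All names are illustrative stand-ins, not livekit APIs: the "interrupt" commits the truncated message on a later event-loop tick, while "generate_reply" snapshots the context at call time.

import asyncio

# Illustrative stand-ins, not livekit code.
chat_ctx: list[str] = ["user: what's the capital of France?"]

async def _interrupt_cleanup() -> None:
    await asyncio.sleep(0)  # cleanup runs on a later event-loop tick
    chat_ctx.append("assistant: The capital of Fr- [interrupted]")

def interrupt() -> asyncio.Task:
    # Returns a future immediately; the commit has not happened yet.
    return asyncio.create_task(_interrupt_cleanup())

def generate_reply() -> list[str]:
    return list(chat_ctx)  # copies chat_ctx at call time

async def main() -> None:
    fut = interrupt()
    buggy = generate_reply()  # current behavior: copy taken before the commit
    await fut
    fixed = generate_reply()  # proposed behavior: copy taken after the commit
    print("no await:", buggy)  # interrupted assistant message missing
    print("awaited: ", fixed)  # interrupted assistant message present

asyncio.run(main())

Because asyncio tasks don't start until the caller yields control, the copy taken synchronously after interrupt() can never include the commit, which is exactly the window the text input path hits.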
Expected Behavior
After a text interruption, the next LLM call should include the truncated assistant message in chat context, same as voice interruptions.
Reproduction Steps
1. Start a voice agent session with text input enabled
2. Let the agent start speaking a response
3. Send a text message mid-response via the data channel
4. Observe that the next LLM call's chat context is missing the interrupted assistant message (a way to inspect this is sketched below)
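One way to confirm step 4 is to log the context each LLM call actually receives. A sketch using the v1.x Agent node-override API; the llm_node signature and the Agent.default delegation pattern follow the 1.x docs, but treat the exact signature as an assumption:

from livekit.agents import Agent
from livekit.agents.llm import ChatContext

class DebugAgent(Agent):
    # Log the chat context each LLM call receives, then delegate to the
    # default pipeline behavior (pattern assumed from the v1.x docs).
    async def llm_node(self, chat_ctx: ChatContext, tools, model_settings):
        for item in chat_ctx.items:
            print("llm sees:", item)
        return Agent.default.llm_node(self, chat_ctx, tools, model_settings)

After a text interruption, the truncated assistant message never shows up in this log; after a voice interruption, it does.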
Operating System
Linux (also reproducible on macOS)
Models Used
Any LLM/TTS/STT combination, not model-specific.
Package Versions
livekit-agents 1.4.1
Session/Room/Call IDs
No response
Proposed Solution
Make the default callback async and await the interrupt future:
async def _default_text_input_cb(sess, ev):
    await sess.interrupt()
    sess.generate_reply(user_input=ev.text)
Additional Context
No response
Screenshots and Recordings
No response