Python: Feature/fix context provider lifecycle agentic mode #2650
Conversation
Python Test Coverage Report • Python Unit Test Overview
Pull request overview
This PR fixes a critical bug in the context provider lifecycle management that prevented multi-turn conversations when using AzureAISearchContextProvider in agentic mode (Knowledge Base retrieval). The bug caused the second and subsequent queries to fail because the retrieval client connection was being closed after each query.
Key Changes:
- Removed the `async with self.context_provider` wrapper in `_prepare_thread_and_messages` that was incorrectly closing context provider resources after each query
- Added a clarifying comment explaining that the context provider lifecycle should be managed by the user
- Corrected the Azure AI Foundry project endpoint format in the documentation
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| `python/packages/core/agent_framework/_agents.py` | Removed the `async with` wrapper around `context_provider.invoking()` that was causing premature resource cleanup after each query, and added a comment explaining the lifecycle management approach |
| `python/samples/getting_started/context_providers/azure_ai_search/README.md` | Fixed the Azure AI Foundry project endpoint URL format from the incorrect AzureML format to the correct AI Foundry format |
```python
# Note: We don't use 'async with' here because the context provider's lifecycle
# should be managed by the user (via async with) or persist across multiple invocations.
# Using async with here would close resources (like retrieval clients) after each query.
context = await self.context_provider.invoking(input_messages or [], **kwargs)
```
Copilot AI · Dec 5, 2025
Missing test coverage for the multi-turn conversation bug that was fixed. While the fix correctly removes the `async with` wrapper that was closing resources after each query, there's no test verifying that multiple consecutive calls to `invoking()` work correctly with agentic mode context providers.
Consider adding a test that:
- Creates an agent with a context provider that tracks `__aenter__`/`__aexit__` calls
- Makes multiple consecutive queries via `agent.run()`
- Verifies the context provider's `__aexit__` is only called when the agent's context exits, not after each query
Motivation and Context
When using `AzureAISearchContextProvider` in agentic mode (Knowledge Base retrieval), the second and subsequent queries fail. This bug prevents users from having multi-turn conversations with agents that use Knowledge Base-powered RAG. The first query works, but any follow-up questions fail because the `KnowledgeBaseRetrievalClient` connection is prematurely closed.

Root Cause: The `ChatAgent._prepare_thread_and_messages` method wraps the context provider invocation with `async with self.context_provider:`, which calls `__aexit__` after each query. This closes stateful resources (like the retrieval client) after the first query completes.

Why semantic mode was unaffected: The `SearchClient` used in semantic mode is stateless; each search is independent. The agentic mode's `KnowledgeBaseRetrievalClient` maintains a persistent connection that gets closed by the erroneous `async with` wrapper.

Description
Changes in `python/packages/core/agent_framework/_agents.py`:

Removed the `async with self.context_provider:` wrapper in `_prepare_thread_and_messages`. The context provider's lifecycle should be managed by the user (via `async with ChatAgent(...) as agent`) or persist across multiple invocations, not be opened and closed on every query.

Before (incorrect):

After (correct):
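A self-contained sketch contrasting the two patterns, using a hypothetical stand-in `Provider` class rather than the real `ChatAgent`/provider code (in the stand-in, `__aexit__` permanently closes a stateful client, as with `KnowledgeBaseRetrievalClient`):

```python
import asyncio


class Provider:
    """Hypothetical agentic-mode context provider: the retrieval client is
    created once, and __aexit__ permanently closes it."""

    def __init__(self):
        self.client_open = True  # connection created once, up front

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        self.client_open = False  # closes the retrieval client for good

    async def invoking(self, messages):
        if not self.client_open:
            raise RuntimeError("retrieval client connection is closed")
        return "retrieved context"


async def before_pattern(provider, messages):
    # Before (incorrect): each query enters/exits the provider,
    # closing its resources as soon as the query finishes.
    async with provider:
        return await provider.invoking(messages)


async def after_pattern(provider, messages):
    # After (correct): invoke directly; the caller owns the lifecycle.
    return await provider.invoking(messages)


async def main():
    p = Provider()
    await before_pattern(p, ["first question"])  # first query succeeds
    try:
        await before_pattern(p, ["follow-up"])   # second query fails
        second_query_failed = False
    except RuntimeError:
        second_query_failed = True

    p = Provider()
    async with p:  # user-managed lifecycle, entered once
        turn1 = await after_pattern(p, ["first question"])
        turn2 = await after_pattern(p, ["follow-up"])  # multi-turn works
    return second_query_failed, turn1, turn2


second_query_failed, turn1, turn2 = asyncio.run(main())
```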
This aligns with how `AggregateContextProvider` works: it enters providers once in `__aenter__` and exits once in `__aexit__`, while `invoking()` does not use `async with`.

Additional fix in `python/samples/getting_started/context_providers/azure_ai_search/README.md`:

Corrected the Azure AI Foundry project endpoint format in the documentation from `https://myproject.api.azureml.ms` to `https://<resource-name>.services.ai.azure.com/api/projects/<project-name>`.

Contribution Checklist