Title: Add backend unit tests for LangGraph agent
Description:
The backend currently has no automated tests. We need comprehensive unit tests to verify the LangGraph agent behaves correctly.
Requirements:
- **Test infrastructure setup**
- Add pytest with async support for testing async agent operations
- Add respx for mocking HTTP requests to OpenAI-compatible APIs
- Add a `moon backend:test` task to run tests
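With respx, the tests would intercept calls to the provider's `/chat/completions` endpoint and respond with a canned JSON body. The helper below is a hypothetical sketch (the name `mock_chat_completion` and its defaults are not from the codebase) of building such a body in the OpenAI chat-completions response shape:

```python
# Sketch: build a minimal OpenAI-compatible /chat/completions response body.
# The payload shape follows the OpenAI chat completions API; the helper name
# and default model are illustrative, not part of the codebase.

def mock_chat_completion(content: str, model: str = "gpt-4o-mini") -> dict:
    """Return a minimal chat-completion response with one assistant message."""
    return {
        "id": "chatcmpl-test",
        "object": "chat.completion",
        "model": model,
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": content},
                "finish_reason": "stop",
            }
        ],
        "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
    }
```

A respx route could then serve it, roughly `respx.post(f"{base_url}/chat/completions").respond(json=mock_chat_completion("Hello"))`, so no real network traffic occurs.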
- **Test scenarios to cover**
- Basic conversation: Agent receives a user message and returns an AI response
- State maintenance: Conversation history is preserved across multiple invocations within the same thread
- Tool invocation: Agent recognizes when to call the `get_weather` tool, executes it, and incorporates the result into its response
- Tool output verification: The tool returns the expected format and content
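The first two scenarios can be sketched against a stand-in agent. `FakeAgent` below is hypothetical, not the real LangGraph agent: it keeps per-thread message history in memory, mimicking LangGraph's `thread_id`-scoped checkpointing, so the test shape carries over once the real agent is wired in:

```python
# Sketch of the "basic conversation" and "state maintenance" scenarios.
# FakeAgent is a hypothetical in-memory substitute for the LangGraph agent:
# it stores message history per thread_id and returns a canned AI reply.
import asyncio
from collections import defaultdict

class FakeAgent:
    def __init__(self):
        self._threads = defaultdict(list)  # thread_id -> message history

    async def ainvoke(self, message: str, thread_id: str) -> list:
        history = self._threads[thread_id]
        history.append(("user", message))
        history.append(("ai", f"echo: {message}"))  # canned assistant reply
        return list(history)

async def run_scenarios():
    agent = FakeAgent()
    # Basic conversation: one user message yields one AI response.
    first = await agent.ainvoke("hello", thread_id="t1")
    assert first[-1][0] == "ai"
    # State maintenance: a second call on the same thread sees prior turns.
    second = await agent.ainvoke("again", thread_id="t1")
    assert len(second) == 4 and second[0] == ("user", "hello")
    return second

history = asyncio.run(run_scenarios())
```

With pytest's async support, each `run_scenarios`-style body would become its own `async def test_...` function instead of a single driver.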
- **Test parametrization**
- Tests should run against multiple OpenAI-compatible base URLs (OpenAI default, OpenRouter, Ollama) to ensure provider flexibility
- Tests should run with different model names to ensure model configuration works correctly
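A minimal sketch of that matrix with `pytest.mark.parametrize`, assuming the standard base URLs for each provider (the model names are placeholders, and a real test body would build the agent and mock the endpoint rather than just check the config dict):

```python
# Sketch: parametrize tests across OpenAI-compatible providers and models.
# Base URLs are the providers' standard endpoints; model names are placeholders.
import pytest

BASE_URLS = [
    "https://api.openai.com/v1",     # OpenAI default
    "https://openrouter.ai/api/v1",  # OpenRouter
    "http://localhost:11434/v1",     # Ollama's OpenAI-compatible endpoint
]
MODELS = ["gpt-4o-mini", "llama3.1"]  # placeholder model names

@pytest.mark.parametrize("base_url", BASE_URLS)
@pytest.mark.parametrize("model", MODELS)
def test_agent_config(base_url: str, model: str) -> None:
    # A real test would construct the agent with these settings and route its
    # HTTP traffic through respx; here we only check the values are plumbed.
    config = {"base_url": base_url, "model": model}
    assert config["base_url"].endswith("/v1")
    assert config["model"]
```

Stacked `parametrize` decorators produce the full cross product, so every provider is exercised with every model name.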
- **Determinism requirements**
- The `get_weather` tool has a random sleep for realism in production, but tests must be fast and deterministic
- Mock the tool to return immediately with predictable output
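One way to do that with the standard library's `unittest.mock.patch` (the `get_weather` body below is a hypothetical stand-in for the real tool, and the patch target would be wherever the agent actually imports it):

```python
# Sketch: replace the slow, random tool with an instant, predictable fake.
import random
import time
from unittest.mock import patch

def get_weather(city: str) -> str:
    """Hypothetical stand-in for the real tool: random sleep, canned result."""
    time.sleep(random.uniform(0.5, 2.0))  # realistic latency; poison for tests
    return f"Weather in {city}: sunny"

# In tests, patch the tool so it returns immediately with a fixed payload.
with patch(f"{__name__}.get_weather", return_value="Weather in Paris: 21C") as fake:
    result = get_weather("Paris")  # no sleep, fully deterministic
    fake.assert_called_once_with("Paris")
```

In pytest this would typically be the `monkeypatch` fixture or a `patch` targeting the module where the agent binds the tool, so the agent's tool-calling path runs against the fake.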
Acceptance criteria:
- `moon backend:test` passes
- Tests cover all four scenarios above
- Tests are parametrized across providers and models
- No flaky tests due to randomness or timing