A multi-agent AI research assistant for condominium fintech, powered by LangGraph.
Upload your condominium financial reports or ask questions about Brazilian housing regulations — CondoResearch AI routes your query through a parallel agent pipeline:
```
User Query
  ├── 🌐 Web Searcher → Tavily (regulations, market news)
  └── 📄 Doc Reader   → pgvector RAG (uploaded PDFs)
            ↓
  🧠 Synthesizer → OpenRouter / MiniMax 2.5
            ↓
  Structured response with sources
```
Real-time agent status is streamed to the UI via Server-Sent Events and visualized as a live graph.
```
condoresearch-ai/
├── specs/                  # Spec-first: OpenAPI + Pydantic agent contracts
│   ├── openapi/            # REST API spec (api.yaml)
│   └── agents/             # Agent I/O schemas (GraphState, node inputs/outputs)
├── backend/
│   └── src/
│       ├── domain/         # Entities + repository interfaces (no framework deps)
│       ├── application/    # Use cases + ports (pure business logic)
│       ├── infrastructure/ # LangGraph pipeline, pgvector, Tavily, FastAPI impls
│       └── presentation/   # FastAPI routes + dependency injection
└── frontend/               # Next.js 15, NextAuth, React Flow graph visualization
```
Clean Architecture — dependency arrows always point inward. Domain knows nothing about FastAPI. Use cases know nothing about LangGraph.
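An illustration of that rule — the entity, port, and use-case names below are hypothetical, not the repo's actual classes. The domain declares a `Protocol` port; the use case depends only on that port; the pgvector implementation lives in infrastructure:

```python
from dataclasses import dataclass
from typing import Protocol


# domain layer: pure entities + interfaces, zero framework imports
@dataclass(frozen=True)
class DocumentChunk:
    document_id: str
    text: str


class DocumentRepository(Protocol):
    def search(self, query: str, k: int) -> list[DocumentChunk]: ...


# application layer: business logic against the port only;
# the pgvector-backed implementation is injected from infrastructure
class SearchDocumentsUseCase:
    def __init__(self, repo: DocumentRepository) -> None:
        self._repo = repo

    def execute(self, query: str, k: int = 4) -> list[DocumentChunk]:
        return self._repo.search(query, k)
```

Because `DocumentRepository` is structural, unit tests can pass a plain fake object instead of wiring up PostgreSQL.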
| Layer | Choice |
|---|---|
| Backend | Python 3.12, FastAPI, LangGraph, LangChain |
| LLM | OpenRouter → MiniMax 2.5 (configurable) |
| Web Search | Tavily |
| Vector DB | PostgreSQL + pgvector (HNSW index) |
| Tracing | LangSmith |
| Frontend | Next.js 15, NextAuth, React Flow, Tailwind |
| Tests | pytest + pytest-asyncio, Vitest |
| Infra | Docker Compose |
- Docker + Docker Compose
- API keys: OpenRouter, Tavily, LangSmith (optional)
```bash
cp backend/.env.example backend/.env
# Fill in your API keys
cd infra
docker compose up -d
```

This starts PostgreSQL (with pgvector and migrations auto-applied), the backend, and the frontend.
- API: http://localhost:8000
- API docs: http://localhost:8000/docs
- Frontend: http://localhost:3000
```bash
cd backend
pip install -e ".[dev]"
pytest tests/unit -v
```

- Spec-first: OpenAPI + Pydantic agent schemas written before implementation
- All agents always run: the web searcher and doc reader execute in parallel, and the synthesizer merges both outputs
- Human-readable traces: every agent node emits structured traces, stored with messages, visualized live
- S3 batch ingestion: currently file upload only — the architecture supports evolving to S3 bucket indexing with minimal changes (swap the `IngestDocumentUseCase` input source)
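A sketch of what that swap could look like. `IngestDocumentUseCase` comes from the repo, but the `DocumentSource` port, `UploadSource` implementation, and method names here are hypothetical:

```python
from typing import Iterator, Protocol


class DocumentSource(Protocol):
    """Port: where raw documents come from (uploads today, S3 later)."""

    def read_all(self) -> Iterator[bytes]: ...


class UploadSource:
    """Current implementation: documents arrive as uploaded payloads."""

    def __init__(self, payloads: list[bytes]) -> None:
        self._payloads = payloads

    def read_all(self) -> Iterator[bytes]:
        yield from self._payloads


class IngestDocumentUseCase:
    """Depends only on the port; unchanged when the source becomes S3."""

    def __init__(self, source: DocumentSource) -> None:
        self._source = source

    def execute(self) -> int:
        count = 0
        for _raw in self._source.read_all():
            count += 1  # real code: chunk, embed, upsert into pgvector
        return count
```

An S3 source would implement the same `Protocol` by listing a bucket (e.g. via boto3) and yielding object bodies; only the injected dependency changes, not the use case.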