Practical design patterns for distributed multi-agent systems
From a single LangGraph pipeline to enterprise-grade, cloud-deployed agent architectures.
The AI agent landscape today is where microservices were in 2014. Frameworks multiply weekly -- LangGraph, CrewAI, AutoGen -- but few teams have shipped production multi-agent systems. The gap isn't tooling. The gap is architectural knowledge: how do you actually structure, deploy, and operate agents that work together across services, trust boundaries, and cloud environments?
This project closes that gap with nine design patterns, each solving a named architectural problem with working code you can run, study, and adapt.
This is a set of examples -- call them design patterns -- that present practical designs and implementations for multi-agent systems. Each pattern:
- Solves a real architectural problem that the previous pattern cannot handle
- Has clear "when to use / when to avoid" criteria
- Shows trade-offs, not just happy paths
- Builds on the previous pattern -- you experience the limitation before learning the solution
The progression itself tells a story. In Pattern 01 you build a familiar POST /run REST endpoint. By Pattern 05 that same FastAPI server hosts A2A JSON-RPC protocol endpoints. By Pattern 09 agents discover each other dynamically, authenticate via JWT, and deploy as independent cloud services. HTTP is still the transport -- but what travels over it has fundamentally changed.
This is the Software 2.0 → 3.0 transition, demonstrated through code.
Abstract patterns are hard to internalize; concrete stories stick. All nine patterns share a single, evolving domain: a crypto intelligence platform with three specialized teams that emerge as complexity demands them. Each team's arrival creates a genuine architectural challenge that motivates the next pattern:
- Team 1: Intelligence - Fundamentals Research, News Scanner, Project Profiler, Community Analyst, Intelligence Compiler
- Team 2: Technical Analysis - Price Collector, Indicator Calculator, Level Analyst, Technical Reporter
- Team 3: Trading Signals - Signal Synthesizer, Risk Assessor
Patterns 01-04
You are Team 1: Intelligence. Three agents research crypto projects inside a single LangGraph pipeline. It works -- until you realize tools are hardcoded, every request starts from scratch, and memory grows unbounded. Each limitation drives the next pattern: MCP for standardized tools, PostgreSQL-backed checkpointers for persistence, a Memory Refiner for lifecycle management.
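The orchestration idea behind Pattern 01 can be shown without the framework. The repository uses a LangGraph StateGraph, but this framework-agnostic sketch (all names and state fields are illustrative, not the repo's API) captures the core shape: a typed state object flows through specialized agent nodes, each reading what earlier nodes wrote.

```python
from typing import Callable, TypedDict

# Shared state that flows through the pipeline (hypothetical fields,
# loosely mirroring the Team 1: Intelligence agents described above).
class ResearchState(TypedDict):
    query: str
    news: str
    profile: str
    report: str

def news_scanner(state: ResearchState) -> ResearchState:
    # In the real pattern this node calls an LLM with tools;
    # here we just stamp the state to show the data flow.
    return {**state, "news": f"news for {state['query']}"}

def project_profiler(state: ResearchState) -> ResearchState:
    return {**state, "profile": f"profile of {state['query']}"}

def intelligence_compiler(state: ResearchState) -> ResearchState:
    # The compiler node depends on outputs of the earlier nodes.
    return {**state, "report": f"{state['news']} + {state['profile']}"}

# The orchestrator is just an ordered graph of nodes over shared state.
PIPELINE: list[Callable[[ResearchState], ResearchState]] = [
    news_scanner, project_profiler, intelligence_compiler,
]

def run(query: str) -> ResearchState:
    state: ResearchState = {"query": query, "news": "", "profile": "", "report": ""}
    for node in PIPELINE:
        state = node(state)
    return state

if __name__ == "__main__":
    print(run("Arbitrum")["report"])
```

LangGraph adds what this sketch lacks: conditional edges, tool-calling loops, and checkpointing, which is exactly where Patterns 02-04 pick up.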
Patterns 05-06
Team 2: Technical Analysis arrives -- a separate service, separate codebase, separate container. You can't import their code. You need a protocol. A2A enters. Then Team 3: Trading Signals needs data from both teams simultaneously. Sequential calls take 50+ seconds. Async communication and SSE streaming become the only viable path.
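The latency argument above is easy to demonstrate. This sketch stands in for the A2A calls with `asyncio.sleep` (the real pattern uses async HTTP requests to the other teams' servers); the point is that `asyncio.gather` makes total latency the slowest call rather than the sum of all calls.

```python
import asyncio
import time

async def call_agent(team: str, latency: float) -> str:
    # Stand-in for an async A2A request to another team's service.
    await asyncio.sleep(latency)
    return f"{team} result"

async def sequential() -> list[str]:
    # One call at a time: total time is the SUM of latencies.
    first = await call_agent("technical", 0.2)
    second = await call_agent("intelligence", 0.2)
    return [first, second]

async def parallel() -> list[str]:
    # gather issues both requests concurrently: total time is the MAX latency.
    return await asyncio.gather(
        call_agent("technical", 0.2),
        call_agent("intelligence", 0.2),
    )

if __name__ == "__main__":
    t0 = time.perf_counter(); asyncio.run(sequential())
    print(f"sequential: {time.perf_counter() - t0:.2f}s")
    t0 = time.perf_counter(); asyncio.run(parallel())
    print(f"parallel:   {time.perf_counter() - t0:.2f}s")
```

With real agent runs taking tens of seconds each, this is the difference between a 50+ second sequential chain and a response bounded by the slowest team.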
Patterns 07-09
Team 2 moves to an external partner. Implicit trust is gone -- JWT authentication on every call. New agents appear and need dynamic discovery. Three teams deploy to Azure as independent Container Apps with Infrastructure as Code, Managed Identity, and per-team CI/CD pipelines.
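The zero-trust step boils down to each inbound call carrying a verifiable token. Pattern 07 uses Auth0-issued RS256 tokens verified against a JWKS endpoint; the stdlib-only HS256 sketch below is NOT that setup, just an illustration of the checks a receiving agent performs on every request: signature, audience, and expiry.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-shared-secret"  # illustration only; Auth0 M2M uses RS256 key pairs

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _b64decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def issue_token(client_id: str, audience: str, ttl: int = 300) -> str:
    # In Pattern 07 the authorization server (Auth0) does this step.
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps({
        "sub": client_id, "aud": audience, "exp": int(time.time()) + ttl,
    }).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(token: str, audience: str) -> dict:
    # What every receiving agent does before running a task.
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(_b64decode(payload))
    if claims["aud"] != audience or claims["exp"] < time.time():
        raise ValueError("wrong audience or expired")
    return claims
```

The audience check is what scopes trust: a token issued for Team 2's API is rejected by Team 1, even though both trust the same issuer.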
From a single Python file to a cloud-deployed, authenticated, observable, dynamically-discoverable multi-agent system.
| # | Pattern | What It Solves | Key Concepts |
|---|---|---|---|
| Foundation Tier · Single Docker network, one team, agent internals | |||
| 01 | Orchestrator Pipeline | Decomposing tasks across specialized agents | LangGraph StateGraph, tool use, LangSmith tracing |
| 02 | MCP Tool Integration | Standardized tool access for agents & AI clients | MCP servers/clients, Claude Code integration |
| 03 | Persistent Memory | Remembering across conversations | Checkpointer, PostgreSQL, thread management |
| 04 | Memory Lifecycle (optional) | Managing growing knowledge bases | Memory refiner, fact TTL, hierarchical memory |
| Distribution Tier · Multi-service, multi-team, real distributed systems | |||
| 05 | Distributed A2A | Cross-team agent communication | A2A protocol, Agent Cards, JSON-RPC |
| 06 | Async & Streaming | Non-blocking multi-team coordination | Async A2A, SSE streaming, parallel requests |
| 07 | Cross-Network Auth | Securing agents across trust boundaries | Auth0 OIDC, JWT, M2M tokens, zero-trust |
| Enterprise Tier · Production readiness, cloud deployment | |||
| 08 | Discovery & Observability | Finding agents and monitoring the system | Agent registry, OpenTelemetry, distributed tracing |
| 09 | Cloud Deployment | Production infrastructure on Azure | Container Apps, Bicep IaC, Managed Identity, CI/CD |
Every pattern exists because the previous one creates a real limitation:
```text
P01 ─── Hardcoded tools can't be shared ──────────────── P02
P02 ─── Every request starts from scratch ────────────── P03
P03 ─┬─ Memory grows unbounded ──────────────────────── P04 (optional)
     └─ A second team arrives, can't import their code ─ P05
P05 ─── Third team needs both, sequential is too slow ── P06
P06 ─── Team 2 moves to external partner, no trust ───── P07
P07 ─── New agents appear, consumers need code changes ─ P08
P08 ─── Docker Compose doesn't work in production ────── P09
```
```text
┌─────────────────────────┐  ┌─────────────────────────┐  ┌─────────────────────────┐
│  TEAM 1: INTELLIGENCE   │  │  TEAM 2: TECHNICAL      │  │  TEAM 3: TRADING        │
│  (Patterns 01-04)       │  │  ANALYSIS (Pattern 05+) │  │  SIGNALS (Pattern 06+)  │
│                         │  │                         │  │                         │
│  Research Planner       │  │  Price Collector        │  │  Signal Synthesizer     │
│  News Scanner           │  │  Indicator Calculator   │  │  Risk Assessor          │
│  Project Profiler       │  │  Level Analyst          │  │  Trade Advisor          │
│  Community Analyst      │  │  Technical Reporter     │  │                         │
│  Intelligence Compiler  │  │                         │  │                         │
└────────────┬────────────┘  └────────────┬────────────┘  └────────────┬────────────┘
             │                            │                            │
             │        A2A Protocol        │        A2A Protocol        │
             └────────────────────────────┴────────────────────────────┘
```
- Team 1 researches fundamentals -- news, team, roadmap, community sentiment
- Team 2 crunches numbers -- price action, indicators, support/resistance levels
- Team 3 combines both into actionable trading signals with confidence levels
Each team deploys independently, communicates via A2A protocol, and authenticates across trust boundaries.
```bash
git clone https://github.com/ksopyla/agent-patterns-lab.git
cd agent-patterns-lab
cp .env.example .env
# Fill in your API keys (Azure OpenAI or Anthropic, LangSmith)

# Run Pattern 01 from inside the example folder
cd examples/01-orchestrator-pipeline
docker compose up --build

# Verify it's running
curl http://localhost:8000/health

# Submit a research request
curl -X POST http://localhost:8000/run \
  -H "Content-Type: application/json" \
  -d '{"input": "Research the Arbitrum crypto project"}'
```

Prerequisites:

- Python 3.14+ (managed via uv)
- Docker and Docker Compose
- API keys: Azure OpenAI or Anthropic (for LLM), LangSmith (for tracing)
If you prefer running examples from the repository root:
```bash
# Install local dependencies for development
make setup

# Docker shortcut for an example
make example EX=01-orchestrator-pipeline
```

Examples are designed to be run from inside their own folders, but they currently share the repository root `.env` file and the workspace `libs/common` package. Container images are built from `infra/docker/base/Dockerfile.agent` with per-example build arguments set in each `docker-compose.yml`.
```text
agent-patterns-lab/
├── examples/                  # One folder per pattern (self-contained)
│   ├── 01-orchestrator-pipeline/
│   │   ├── README.md          # Pattern documentation (theory + walkthrough)
│   │   ├── pyproject.toml
│   │   ├── docker-compose.yml
│   │   ├── endpoints.http     # Ready-to-run REST requests
│   │   ├── src/
│   │   └── tests/
│   │       ├── unit/
│   │       ├── api/
│   │       └── e2e/
│   ├── 02-mcp-tool-integration/
│   └── ...
├── libs/common/               # Shared utilities (agent_common package)
│   └── src/agent_common/      # LLM config, tracing, MCP, A2A, auth helpers
├── docs/
│   ├── curriculum.md          # Technical pattern-by-pattern breakdown
│   ├── vision.md              # Full narrative, philosophy, and roadmap
│   └── CHANGELOG.md
├── infra/                     # Docker base images, Azure Bicep templates
└── .github/                   # CI/CD workflows, PR templates
```
Every pattern supports `VERBOSE=true` (set in `.env`), which logs:
- Agent reasoning steps with timestamps
- Tool call inputs and outputs
- Inter-agent message payloads
- LangSmith trace IDs for quick lookup
This is a first-class feature, not an afterthought. Reading verbose output is how you learn what agents are actually doing.
Each pattern maintains three test tiers:
- `tests/unit/` -- individual agent nodes with mocked LLM responses
- `tests/api/` -- HTTP/protocol endpoints with mocked agent graph
- `tests/e2e/` -- full pipeline with mocked LLM (graph compilation, state flow)
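The unit tier is possible because agent nodes take their LLM client as a dependency rather than constructing it. A minimal sketch (node name, signature, and `invoke` method are illustrative, not the repo's actual API) of what such a test looks like:

```python
from unittest.mock import Mock

# Hypothetical agent node that accepts an injected LLM client,
# mirroring the tests/unit tier described above.
def news_scanner(state: dict, llm) -> dict:
    summary = llm.invoke(f"Summarize recent news about {state['query']}")
    return {**state, "news": summary}

def test_news_scanner_uses_mocked_llm():
    # No network, no API key: the LLM is a Mock with a canned response.
    fake_llm = Mock()
    fake_llm.invoke.return_value = "ARB listed on a new exchange"
    out = news_scanner({"query": "Arbitrum"}, fake_llm)
    fake_llm.invoke.assert_called_once()
    assert out["news"] == "ARB listed on a new exchange"

test_news_scanner_uses_mocked_llm()
```

The api and e2e tiers apply the same idea one level up: mock the whole agent graph behind the HTTP endpoint, or mock only the LLM and exercise real graph compilation and state flow.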
```bash
# Run full test suite
python scripts/testing/run_test_suite.py

# Install pre-commit hooks
uv run pre-commit install --install-hooks --hook-type pre-commit --hook-type commit-msg
```

| Layer | Technology | Role |
|---|---|---|
| Language | Python 3.14+ / uv | Package and version management |
| Orchestration | LangGraph | Agent state graphs with typed state |
| Server | FastAPI | HTTP/protocol endpoints (REST → MCP → A2A) |
| Infrastructure | Docker Compose | Local multi-container environments |
| Observability | LangSmith | Tracing, debugging, performance monitoring |
| Tools | MCP | Standardized tool access (Pattern 02+) |
| Communication | A2A | Agent-to-agent protocol (Pattern 05+) |
| Authentication | Auth0 | OIDC-based M2M auth (Pattern 07+) |
| Cloud | Azure Container Apps | Production deployment (Pattern 09) |
- Full Curriculum -- detailed technical breakdown of each pattern with architecture diagrams
- Vision & Roadmap -- the complete narrative, architectural philosophy, and future direction
- Blog -- in-depth write-ups for each pattern
- Changelog -- what changed and when
MIT -- built by Krzysztof Sopyła