The companion library for Build a Multi-Agent System — With MCP and A2A (Manning). Learn how LLM agents work by building one yourself, from first principles, step by step.
Available now through Manning's Early Access Program (MEAP) — buy today and get each chapter as it's completed.
Multi-agent systems and the LLM agents that power them are among the most discussed topics in AI today. There are already many capable frameworks out there — the goal of this book isn't to replace them, but to help you deeply understand how they work by having you build one yourself, from scratch.
All the code lives in the book's own hand-rolled agent framework, primarily designed for educational purposes rather than production deployment. It will give you the foundation to work more confidently with any other LLM agent framework of your choosing, or even to build your own specialised solutions.
Each chapter builds on the last, progressively deepening your understanding from core concepts to full multi-agent systems.
| Ch | Title | Notebook |
|---|---|---|
| 1 | What Are LLM Agents and Multi-Agent Systems? | — |
| 2 | Working with Tools | Ch 2 |
| 3 | Working with LLMs | Ch 3 |
| 4 | The LLM Agent Class | Ch 4 |
| 5 | MCP Tools | Ch 5 |
| 6 | Skills | Ch 6 |
| 7 | Memory | — |
| 8 | Human in the Loop | — |
| 9 | Multi-Agent Systems with Agent2Agent | — |
Capstones are larger, end-to-end projects that pull together what you have built in the book and apply it to something closer to a real-world system.
| Capstone | Description | Notebook |
|---|---|---|
| Monte Carlo Estimation of Pi | Orchestrate parallel tool calls to estimate π using the Monte Carlo method. | Open |
| Deep Research Agent | Coming soon. | — |
| OpenClaw Personal Assistant | Coming soon. | — |
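The first capstone rests on a simple idea: sample random points in the unit square and count the fraction that land inside the quarter circle, which approaches π/4. Here is a minimal sketch in plain Python, independent of the book's framework (the function name and sample count are illustrative):

```python
import random

def estimate_pi(n_samples: int, seed: int = 42) -> float:
    """Estimate pi by uniform sampling over the unit square."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        # Point falls inside the quarter circle of radius 1
        if x * x + y * y <= 1.0:
            inside += 1
    # (quarter-circle area) / (square area) = pi / 4
    return 4 * inside / n_samples

print(estimate_pi(100_000))
```

In the capstone, the agent orchestrates this kind of sampling as parallel tool calls, with each call contributing a batch of samples to the final estimate.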
Clone the repository:

```shell
# SSH
git clone git@github.com:nerdai/llm-agents-from-scratch.git

# HTTPS
git clone https://github.com/nerdai/llm-agents-from-scratch.git

cd llm-agents-from-scratch
```

Install dependencies:

```shell
uv sync --all-extras --dev
```

Build and run a minimal agent:

```python
from llm_agents_from_scratch.llms import OllamaLLM
from llm_agents_from_scratch.agent import LLMAgentBuilder
from llm_agents_from_scratch.tools import SimpleFunctionTool

# A plain Python function exposed to the agent as a tool
def add(a: int, b: int) -> int:
    return a + b

llm = OllamaLLM(model="llama3.2")
tool = SimpleFunctionTool(fn=add)

agent = (
    LLMAgentBuilder()
    .with_llm(llm)
    .with_tools([tool])
    .build()
)

# `await` works at the top level of a notebook; in a script,
# wrap this call in asyncio.run(...)
result = await agent.run("What is 3 + 5?")
print(result)
```

Common development tasks are available through the Makefile:

```shell
# Run all tests
make test

# Lint and format
make lint
make format

# Coverage report
make coverage-report
```

See CLAUDE.md for full development guidance.
Bug reports, feature requests, and community project submissions are welcome. See CONTRIBUTING.md for details.
- Found a bug? Open an issue
- Built something cool? Share it on GitHub Discussions
Apache 2.0 — see LICENSE for details.