This guide covers local development setup, editable installs, testing, and linting.
```bash
python3 -m venv .venv
source .venv/bin/activate
python -m pip install -U pip setuptools
```

Bamboo uses editable installs so that plugins register their entry points correctly with `importlib.metadata`.
```bash
# Core server + ATLAS plugin (minimum for development)
pip install -e ./core
pip install -e ./packages/askpanda_atlas

# Additional plugins
pip install -e ./packages/cgsim
pip install -e ./packages/askpanda_verarubin
pip install -e ./packages/askpanda_epic
```

You must re-run `pip install -e` after changing:

- any `pyproject.toml`
- entry-point definitions (`bamboo.tools`)
- plugin dependencies
- package layout / module paths
You do NOT need to reinstall for plain Python source file changes.
Bamboo uses separate requirements files for optional features so the core package stays lightweight.
| File | Install when… |
|---|---|
| `requirements.txt` | Always — base MCP server dependencies |
| `requirements-dev.txt` | Running tests or linting |
| `requirements-mistral.txt` | Using `LLM_DEFAULT_PROVIDER=mistral` |
| `requirements-openai.txt` | Using `LLM_DEFAULT_PROVIDER=openai` or `openai_compat` |
| `requirements-anthropic.txt` | Using `LLM_DEFAULT_PROVIDER=anthropic` |
| `requirements-gemini.txt` | Using `LLM_DEFAULT_PROVIDER=gemini` |
| `requirements-rag.txt` | Using the RAG pipeline (`panda_doc_search`, `panda_doc_bm25`) |
| `requirements-otel.txt` | Exporting traces via OpenTelemetry (`BAMBOO_OTEL_ENDPOINT`) |
| `requirements-textual.txt` | Running the Textual TUI |
| `requirements-ui.txt` | Running the Streamlit UI |
Install dev dependencies first:
```bash
pip install -r requirements-dev.txt
```

This installs pytest, `pytest-asyncio>=0.21`, flake8, pylint, and the circular-import detector. `pytest-asyncio>=0.21` is required — the test suite uses `asyncio_mode = "strict"` (set in `pyproject.toml`) and most tests are `async def`. Without it, every async test fails with "async def functions are not natively supported".
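In strict mode, async tests must carry an explicit marker. A minimal sketch — the helper `call_tool` is hypothetical, standing in for a real tool's `call()`:

```python
import asyncio
import pytest

# Hypothetical async helper standing in for a real tool's call() method.
async def call_tool(arguments: dict) -> dict:
    await asyncio.sleep(0)  # yield to the event loop, as real I/O would
    return {"ok": True, "echo": arguments}

@pytest.mark.asyncio  # required under asyncio_mode = "strict"
async def test_call_tool_returns_result():
    result = await call_tool({"task_id": 123})
    assert result["ok"] is True
```

Without the `@pytest.mark.asyncio` marker, strict mode refuses to run the coroutine as a test.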
Run all tests from the repo root:
```bash
pytest tests/

# or quieter:
pytest -q tests/

# single file:
pytest tests/test_task_status.py

# single test:
pytest tests/test_task_status.py::test_task_status_success_json
```

All 324 tests run fully offline — no API keys, no network, no ChromaDB instance required.
```
tests/
├── conftest.py                      # sys.path setup for non-installed runs
│
├── test_task_status.py              # panda_task_status — BigPanDA HTTP tool
├── test_job_status.py               # panda_job_status
├── test_log_analysis.py             # panda_log_analysis — failure classification
├── test_doc_rag.py                  # panda_doc_search — ChromaDB vector search
├── test_doc_bm25.py                 # panda_doc_bm25 — BM25 keyword search
├── test_topic_guard.py              # two-stage topic guard
├── test_bamboo_answer_helpers.py    # helper functions (_extract_task_id, _compact_json, etc.)
├── test_bamboo_answer_rag.py        # bamboo_answer — routing, follow-up detection, guard bypass
├── test_bamboo_executor.py          # execute_plan — tool resolution, evidence merging, synthesis
├── test_llm_error_handling.py       # friendly LLM error messages, all routes
├── test_planner.py                  # bamboo_plan — LLM planner tool
├── test_context_memory.py           # multi-turn history threading across all routes
├── test_narrow_waist.py             # list[MCPContent] contract enforced by all tools
│
├── test_llm_providers.py            # OpenAI / Anthropic / Gemini / compat clients
│
├── test_tracing.py                  # bamboo.tracing — NDJSON spans, file output
├── test_tracing_otel.py             # bamboo.tracing — OpenTelemetry integration
│
├── test_panda_http_sync.py          # _panda_http / _fallback_http parity
├── test_loader.py                   # plugin entry-point loader
└── test_cli.py                      # bamboo CLI
```
- Unit-test tools by mocking external services — BigPanDA HTTP, LLM providers, ChromaDB, and upstream MCP servers are all mocked with `unittest.mock`.
- No real credentials needed — all async tests use `AsyncMock`; all provider tests mock the vendor SDK at the module level.
- Tracing tests mock `_start_otel_span` and `_get_otel_tracer` directly rather than patching `sys.modules`, which is more reliable across pytest isolation modes.
- `bamboo.tools` rule: tools must always return a result and never raise. Error-handling tests verify this contract end-to-end.
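The mocking pattern and the never-raise contract can be sketched together — everything below is illustrative (the tool body and helper names are not the repo's real code), but it shows the `AsyncMock` shape the tests rely on:

```python
import asyncio
from unittest.mock import AsyncMock

# Illustrative tool: calls an async HTTP helper and never raises —
# errors come back as a structured payload instead.
async def task_status_tool(http_get, task_id: int) -> dict:
    try:
        data = await http_get(f"/tasks/{task_id}")
        return {"status": "ok", "data": data}
    except Exception as exc:
        return {"status": "error", "message": str(exc)}

# Happy path: AsyncMock stands in for the BigPanDA HTTP layer.
http_ok = AsyncMock(return_value={"taskstatus": "done"})
assert asyncio.run(task_status_tool(http_ok, 42))["status"] == "ok"

# Failure path: the tool must still return, not raise.
http_fail = AsyncMock(side_effect=TimeoutError("BigPanDA unreachable"))
assert asyncio.run(task_status_tool(http_fail, 42))["status"] == "error"
```

`AsyncMock` is awaitable out of the box, so no real event-loop plumbing or network access is needed.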
Copy the example file and fill in your credentials:
```bash
cp bamboo_env_example.sh bamboo_env.sh
source bamboo_env.sh
```

Key variables:
| Variable | Purpose |
|---|---|
| `LLM_DEFAULT_PROVIDER` | `mistral`, `openai`, `anthropic`, `gemini`, `openai_compat` |
| `LLM_DEFAULT_MODEL` | Model string for the chosen provider |
| `MISTRAL_API_KEY` / `OPENAI_API_KEY` / … | Provider API keys |
| `ASKPANDA_ENABLE_REAL_PANDA` | `1` to use the real BigPanDA API |
| `BAMBOO_TRACE` | `1` to enable structured tracing |
| `BAMBOO_TRACE_FILE` | Write trace NDJSON to a file (required for the TUI) |
| `BAMBOO_OTEL_ENDPOINT` | OTLP/gRPC endpoint for OpenTelemetry export |
See `bamboo_env_example.sh` for the full list and `docs/tracing.md` for tracing details.
```bash
# List all registered tools
python -m bamboo tools list
python -m bamboo tools list --json

# Start the MCP server (stdio)
python -m bamboo.server

# Inspect with MCP Inspector
npx @modelcontextprotocol/inspector python3 -m bamboo.server
```

```bash
# Flake8 (max line length 200, complexity 15)
flake8 .

# Pylint
pylint core/ interfaces/ packages/

# Type checking
pyright .

# Pre-commit (runs flake8 + circular import detection on changed files)
pre-commit run --all-files
```

The pre-commit hook checks for circular imports using `circular-import-detector==1.0.18`. Run it before opening a PR.
- Create `core/bamboo/llm/providers/<name>_client.py` following the pattern of `mistral_client.py` — lazy SDK import, async semaphore, retry loop; `LLMConfigError`/`LLMRateLimitError` escape the retry loop immediately.
- Register it in `core/bamboo/llm/factory.py` → `_PROVIDER_MAP`.
- Add `requirements-<name>.txt` with the SDK dependency.
- Add tests in `tests/test_llm_providers.py` — happy path, missing API key, missing SDK, rate limit, timeout, retry.
- Document the new env vars in `bamboo_env_example.sh`.
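The client pattern those steps describe can be compressed into a sketch. The exception class names come from the list above; the class name, SDK stand-in, and backoff policy are illustrative assumptions, not `mistral_client.py` itself:

```python
import asyncio

class LLMConfigError(Exception): ...
class LLMRateLimitError(Exception): ...

class ExampleClient:
    def __init__(self, max_concurrency: int = 4, retries: int = 3):
        self._sem = asyncio.Semaphore(max_concurrency)  # bound concurrent SDK calls
        self._retries = retries

    def _sdk(self):
        # Lazy SDK import: the dependency is only paid for when the client is used.
        import json as fake_sdk  # stand-in for the vendor SDK
        return fake_sdk

    async def complete(self, prompt: str) -> str:
        async with self._sem:
            for attempt in range(self._retries):
                try:
                    return await self._call(prompt)
                except (LLMConfigError, LLMRateLimitError):
                    raise  # escape the retry loop immediately
                except Exception:
                    if attempt == self._retries - 1:
                        raise  # retries exhausted
                    await asyncio.sleep(2 ** attempt)  # simple backoff before retry

    async def _call(self, prompt: str) -> str:
        sdk = self._sdk()
        return sdk.dumps({"echo": prompt})  # placeholder for a real SDK call
```

The key detail is the two-tier `except`: configuration and rate-limit errors propagate immediately, while transient failures are retried with backoff.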
- Create `core/bamboo/tools/<name>.py` implementing `get_definition()` and an async `call(arguments)` that always returns a result (never raises).
- Register the singleton in `core/bamboo/core.py` → `TOOLS`.
- Wrap any LLM call sites with the tracing `span()` context manager.
- Add tests.
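A minimal sketch of that tool shape — the `MCPContent` dataclass and its fields are assumptions standing in for the repo's actual content type:

```python
import asyncio
from dataclasses import dataclass

@dataclass
class MCPContent:  # stand-in for the real MCP content type
    type: str
    text: str

def get_definition() -> dict:
    return {
        "name": "panda_example",
        "description": "Illustrative tool skeleton.",
        "inputSchema": {
            "type": "object",
            "properties": {"task_id": {"type": "integer"}},
        },
    }

async def call(arguments: dict) -> list[MCPContent]:
    # Contract: always return list[MCPContent]; never raise.
    try:
        task_id = int(arguments["task_id"])
        return [MCPContent("text", f"task {task_id}: ok")]
    except Exception as exc:
        return [MCPContent("text", f"error: {exc}")]
```

Catching at the top of `call()` and returning an error payload is what lets the error-handling tests verify the never-raise contract end-to-end.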
See `docs/plugins.md`.