chatterr-mcp is a local MCP stdio server that provides one tool, `chatterr`, to run deterministic back-and-forth debates between two local LLMs.
It is packaged to behave like other installable MCP servers (the Playwright/Puppeteer operational flow): install, register the command, use it in chat.
Copilot cannot use it by repo path alone. Like other MCP servers, it needs a configured server command.
This repo now provides everything needed:
- installable Python package (`pyproject.toml`)
- console entrypoint (`chatterr-server`)
- module entrypoint (`python -m chatterr_mcp`)
- MCP sample config (`mcp.sample.json`)
- tests
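For illustration, a console entrypoint like the one above is typically wired up in `pyproject.toml` via `[project.scripts]`. A sketch (the repo's actual file and function names may differ):

```toml
[project]
name = "chatterr-mcp"
version = "0.1.0"

[project.scripts]
# maps the `chatterr-server` command to a main() function (assumed name)
chatterr-server = "chatterr_mcp.server:main"
```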
For a dedicated step-by-step install guide, see INSTALL.md.
Install:

```bash
python3 -m pip install -e .
```

Then the server command is available:

```bash
chatterr-server
```

Or run from source without installing:

```bash
PYTHONPATH=src python3 -m chatterr_mcp
```

Use the same pattern as other MCP servers: configure a command.
Example (installed command):

```json
{
  "servers": {
    "chatterr": {
      "command": "chatterr-server",
      "args": [],
      "env": {
        "CHATTER_MODEL_CMD_TEMPLATE": "ollama run {model}"
      }
    }
  }
}
```

The source-mode template is in `mcp.sample.json`.
Input schema fields:

- `topic` (string)
- `model_a` (string)
- `model_b` (string)
- `max_turns` (integer >= 1)
- `stop_phrase` (optional string)
- `timeout_seconds` (optional integer, default 60)
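As a sketch, the schema can be mirrored with a small validator. This is a hypothetical helper for illustration, not the server's actual validation code:

```python
def validate_args(args: dict) -> dict:
    """Check a chatterr tool-call argument dict against the input schema.

    Hypothetical helper; the server's real validation may be structured
    differently.
    """
    for field in ("topic", "model_a", "model_b"):
        if not isinstance(args.get(field), str):
            raise ValueError(f"{field} must be a string")
    if not isinstance(args.get("max_turns"), int) or args["max_turns"] < 1:
        raise ValueError("max_turns must be an integer >= 1")
    if "stop_phrase" in args and not isinstance(args["stop_phrase"], str):
        raise ValueError("stop_phrase must be a string")
    # timeout_seconds is optional and defaults to 60
    args.setdefault("timeout_seconds", 60)
    return args
```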
Output fields:

- `transcript` (string)
- `turns_completed` (integer)
- `status` (`completed` | `stopped` | `error`)
- `error` (optional string)
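To make the output semantics concrete, here is a minimal sketch of a debate loop producing those fields. The shape is assumed, and `call(model, prompt)` stands in for the real backend invocation:

```python
def run_debate(topic, model_a, model_b, max_turns, call, stop_phrase=None):
    """Alternate turns between two models until max_turns or stop_phrase.

    Illustrative sketch only; the server's real loop may differ.
    """
    transcript = f"Topic: {topic}\n"
    turns_completed = 0
    status = "completed"
    for turn in range(max_turns):
        speaker = model_a if turn % 2 == 0 else model_b
        reply = call(speaker, transcript)
        transcript += f"{speaker}: {reply}\n"
        turns_completed += 1
        if stop_phrase and stop_phrase in reply:
            status = "stopped"
            break
    return {"transcript": transcript,
            "turns_completed": turns_completed,
            "status": status}
```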
Default per-call command:

```bash
ollama run <model>
```

Override the backend command template:

```bash
CHATTER_MODEL_CMD_TEMPLATE='your_command {model}'
```

Each model call receives the full prompt/transcript on stdin and must output plain text on stdout.
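The stdin/stdout contract can be exercised with a small subprocess wrapper. This is a sketch assuming the template is split with `shlex`; the function name is hypothetical:

```python
import os
import shlex
import subprocess

def call_model(model: str, prompt: str, timeout: int = 60) -> str:
    """Run one backend call: full prompt on stdin, plain-text reply on stdout.

    Sketch only; the server's real helper may differ.
    """
    template = os.environ.get("CHATTER_MODEL_CMD_TEMPLATE", "ollama run {model}")
    cmd = shlex.split(template.format(model=model))
    proc = subprocess.run(cmd, input=prompt, capture_output=True,
                          text=True, timeout=timeout)
    if proc.returncode != 0:
        raise RuntimeError(proc.stderr.strip() or "backend command failed")
    return proc.stdout.strip()
```

For a quick local check without a model backend, you can point the template at `cat`, which simply echoes the prompt back.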
This server uses MCP JSON-RPC over stdio with Content-Length framing for compatibility with MCP clients.
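Content-Length framing wraps each JSON-RPC message in an HTTP-style header block followed by the JSON body. A minimal encoder/decoder sketch (not the server's actual code):

```python
import io
import json

def frame(message: dict) -> bytes:
    """Encode one JSON-RPC message with a Content-Length header."""
    body = json.dumps(message).encode("utf-8")
    return f"Content-Length: {len(body)}\r\n\r\n".encode("ascii") + body

def read_framed(stream) -> dict:
    """Decode one framed message from a binary stream."""
    length = 0
    while True:
        line = stream.readline().strip()
        if not line:
            break  # blank line ends the header block
        name, _, value = line.partition(b":")
        if name.lower() == b"content-length":
            length = int(value)
    return json.loads(stream.read(length))
```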
Compile-check the sources and run the tests:

```bash
python3 -m py_compile src/chatterr_mcp/server.py src/chatterr_mcp/__main__.py chatterr_server.py
PYTHONPATH=src python3 -m unittest discover -s tests -v
```