A powerful Model Context Protocol (MCP) server that enables AI collaboration through multiple providers with advanced strategies and comprehensive tooling.
- DeepSeek: Primary provider with optimized performance
- OpenAI: GPT models integration
- Anthropic: Claude models support
- O3: Next-generation model support
- Parallel: Execute requests across multiple providers simultaneously
- Sequential: Chain provider responses for iterative improvement
- Consensus: Build agreement through multiple provider opinions
- Iterative: Refine responses through multiple rounds (see the sketch below)
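The four strategies above correspond to familiar orchestration patterns. As a rough TypeScript sketch of how parallel and sequential scheduling differ (the `callProvider` helper and its signature are hypothetical, not the project's actual API):

```typescript
type ProviderName = "deepseek" | "openai" | "anthropic" | "o3";

// Hypothetical stand-in for the real provider layer.
declare function callProvider(provider: ProviderName, prompt: string): Promise<string>;

// Parallel: fan the same prompt out to every provider at once.
async function parallel(providers: ProviderName[], prompt: string): Promise<string[]> {
  return Promise.all(providers.map((p) => callProvider(p, prompt)));
}

// Sequential: feed each provider's answer into the next provider's prompt.
async function sequential(providers: ProviderName[], prompt: string): Promise<string> {
  let current = prompt;
  for (const p of providers) {
    current = await callProvider(p, `Improve this answer:\n${current}`);
  }
  return current;
}
```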
- collaborate: Multi-provider collaboration with strategy selection
- review: Content analysis and quality assessment
- compare: Side-by-side comparison of multiple items
- refine: Iterative content improvement
- Caching: Memory and Redis-compatible caching system
- Metrics: OpenTelemetry-compatible performance monitoring
- Search: Full-text search with inverted indexing (toy example below)
- Synthesis: Intelligent response aggregation
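As a toy illustration of what "inverted indexing" means here, the classic structure is a map from each term to the documents that contain it (this is not the project's actual search-service code):

```typescript
// Toy inverted index: term -> set of document ids containing that term.
const index = new Map<string, Set<string>>();

function addDocument(id: string, text: string): void {
  for (const term of text.toLowerCase().split(/\W+/).filter(Boolean)) {
    if (!index.has(term)) index.set(term, new Set());
    index.get(term)!.add(id);
  }
}

function search(term: string): string[] {
  return [...(index.get(term.toLowerCase()) ?? [])];
}

addDocument("doc1", "Parallel collaboration across providers");
addDocument("doc2", "Consensus builds agreement across providers");
console.log(search("providers")); // ["doc1", "doc2"]
```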
New to MCP? Check out our Quick Start Guide for a 5-minute setup!
- Node.js 18.0.0 or higher
- pnpm 8.0.0 or higher
- TypeScript 5.3.0 or higher
# Clone the repository
git clone https://github.com/atsuki-sakai/ai_collaboration_mcp_server.git
cd ai_collaboration_mcp_server
# Install dependencies
pnpm install
# Build the project
pnpm run build
# Run tests
pnpm test
Environment Variables:
# Required: Set your API keys
export DEEPSEEK_API_KEY="your-deepseek-api-key"
export OPENAI_API_KEY="your-openai-api-key"
export ANTHROPIC_API_KEY="your-anthropic-api-key"

# Optional: Configure other settings
export MCP_DEFAULT_PROVIDER="deepseek"
export MCP_PROTOCOL="stdio"
Configuration Files:
- config/default.yaml: Default configuration
- config/development.yaml: Development settings
- config/production.yaml: Production settings
# Start with default settings
pnpm start
# Start with specific protocol
node dist/index.js --protocol stdio
# Start with custom providers
node dist/index.js --providers deepseek,openai --default-provider deepseek
# Enable debug mode
NODE_ENV=development LOG_LEVEL=debug pnpm start

To use this MCP server with Claude Code, you need to configure Claude Code to recognize and connect to your server.
Use the automated setup script for easy configuration:
# Navigate to your project directory
cd /Users/atsukisakai/Desktop/ai_collaboration_mcp_server
# Run automated setup with your DeepSeek API key
./scripts/setup-claude-code.sh --api-key "your-deepseek-api-key"
# Or with multiple providers
./scripts/setup-claude-code.sh \
--api-key "your-deepseek-key" \
--openai-key "your-openai-key" \
--anthropic-key "your-anthropic-key"
# Alternative using pnpm
pnpm run setup:claude-code -- --api-key "your-deepseek-key"

The setup script will:
- Build the MCP server
- Create the Claude Code configuration file
- Test the server connection
- Provide next steps
If you prefer manual setup:
# Navigate to your project directory
cd /Users/atsukisakai/Desktop/ai_collaboration_mcp_server
# Install dependencies and build
pnpm install
pnpm run build
# Set your DeepSeek API key
export DEEPSEEK_API_KEY="your-deepseek-api-key"
# Test the server
pnpm run verify-deepseek

Create or update the Claude Code configuration file:
Note: There are two server options:
- simple-server.js: Simple implementation with DeepSeek only (recommended for testing)
- index.js: Full implementation with all providers and features
macOS/Linux:
# Create config directory if it doesn't exist
mkdir -p ~/.config/claude-code
# Create configuration file (simple server - recommended for testing)
cat > ~/.config/claude-code/claude_desktop_config.json << 'EOF'
{
"mcpServers": {
"ai-collaboration": {
"command": "node",
"args": ["/Users/atsukisakai/Desktop/ai_collaboration_mcp_server/dist/simple-server.js"],
"env": {
"DEEPSEEK_API_KEY": "your-deepseek-api-key"
}
}
}
}
EOF
# Or use the full server for all features
# Replace simple-server.js with index.js in the args above

Windows:
# Create config directory
mkdir "%APPDATA%\Claude"
# Create configuration file (use your preferred text editor)
# File: %APPDATA%\Claude\claude_desktop_config.json

{
"mcpServers": {
"ai-collaboration": {
"command": "node",
"args": [
"/Users/atsukisakai/Desktop/ai_collaboration_mcp_server/dist/index.js",
"--default-provider", "deepseek",
"--providers", "deepseek,openai"
],
"env": {
"DEEPSEEK_API_KEY": "your-deepseek-api-key",
"OPENAI_API_KEY": "your-openai-api-key",
"ANTHROPIC_API_KEY": "your-anthropic-api-key",
"NODE_ENV": "production",
"LOG_LEVEL": "info",
"MCP_DISABLE_CACHING": "false",
"MCP_DISABLE_METRICS": "false"
}
}
}
}

After restarting Claude Code, you'll have access to these powerful tools:
- collaborate - Multi-provider AI collaboration
- review - Content analysis and quality assessment
- compare - Side-by-side comparison of multiple items
- refine - Iterative content improvement
# Use DeepSeek for code explanation
Please use the collaborate tool to explain this Python code with DeepSeek
# Review code quality
Use the review tool to analyze the quality of this code
# Compare multiple solutions
Use the compare tool to compare these 3 approaches to solving this problem
# Improve code iteratively
Use the refine tool to make this function more efficient
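Outside Claude Code, you can also drive the server programmatically with the official MCP TypeScript SDK (@modelcontextprotocol/sdk). A minimal smoke-test sketch, assuming the server was built to dist/index.js as described above:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the server over stdio, the same transport Claude Code uses.
const transport = new StdioClientTransport({
  command: "node",
  args: ["dist/index.js"],
  env: { DEEPSEEK_API_KEY: process.env.DEEPSEEK_API_KEY ?? "" },
});

const client = new Client({ name: "smoke-test", version: "1.0.0" });
await client.connect(transport);

// The four tools should be listed; then invoke collaborate once.
console.log(await client.listTools());
const result = await client.callTool({
  name: "collaborate",
  arguments: { prompt: "Say hello", strategy: "parallel" },
});
console.log(result);
await client.close();
```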
Check MCP server connectivity:
# Test if the server starts correctly
DEEPSEEK_API_KEY="your-key" node dist/index.js --helpView logs:
# Check application logs
tail -f logs/application-$(date +%Y-%m-%d).log

Verify Claude Code configuration:
- Restart Claude Code completely
- In a new conversation, ask "What tools are available?"
- You should see the four MCP tools listed
- Test with a simple command like "Use collaborate to say hello"
- macOS: ~/.config/claude-code/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
- Linux: ~/.config/claude-code/claude_desktop_config.json
Execute multi-provider collaboration with strategy selection:
{
"jsonrpc": "2.0",
"id": 1,
"method": "tools/call",
"params": {
"name": "collaborate",
"arguments": {
"prompt": "Explain quantum computing in simple terms",
"strategy": "consensus",
"providers": ["deepseek", "openai"],
"config": {
"timeout": 30000,
"consensus_threshold": 0.7
}
}
}
}

Analyze content quality and provide detailed feedback:
{
"jsonrpc": "2.0",
"id": 2,
"method": "tools/call",
"params": {
"name": "review",
"arguments": {
"content": "Your content here...",
"criteria": ["accuracy", "clarity", "completeness"],
"review_type": "comprehensive"
}
}
}

Compare multiple items with detailed analysis:
{
"jsonrpc": "2.0",
"id": 3,
"method": "tools/call",
"params": {
"name": "compare",
"arguments": {
"items": [
{"id": "1", "content": "Option A"},
{"id": "2", "content": "Option B"}
],
"comparison_dimensions": ["quality", "relevance", "innovation"]
}
}
}

Iteratively improve content quality:
{
"jsonrpc": "2.0",
"id": 4,
"method": "tools/call",
"params": {
"name": "refine",
"arguments": {
"content": "Content to improve...",
"refinement_goals": {
"primary_goal": "clarity",
"target_audience": "general public"
}
}
}
}

- collaboration_history: Access past collaboration results
- provider_stats: Monitor provider performance metrics
- tool_usage: Track tool utilization statistics (example requests below)
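Resources are fetched through the standard MCP resources/list and resources/read methods. A hedged example of the request shapes, written as TypeScript object literals (the URI below is an assumption; check the server's resources/list output for the real URIs):

```typescript
// List all resources the server exposes.
const listResources = { jsonrpc: "2.0", id: 5, method: "resources/list" };

// Read one of them; the URI scheme here is hypothetical.
const readProviderStats = {
  jsonrpc: "2.0",
  id: 6,
  method: "resources/read",
  params: { uri: "stats://provider_stats" },
};
```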
src/
├── core/                      # Core framework components
│   ├── types.ts               # Dependency injection symbols
│   ├── logger.ts              # Structured logging
│   ├── config.ts              # Configuration management
│   ├── container.ts           # DI container setup
│   ├── provider-manager.ts    # AI provider orchestration
│   ├── strategy-manager.ts    # Execution strategy management
│   └── tool-manager.ts        # MCP tool management
├── providers/                 # AI provider implementations
│   ├── base-provider.ts       # Common provider functionality
│   ├── deepseek-provider.ts
│   ├── openai-provider.ts
│   ├── anthropic-provider.ts
│   └── o3-provider.ts
├── strategies/                # Collaboration strategies
│   ├── parallel-strategy.ts
│   ├── sequential-strategy.ts
│   ├── consensus-strategy.ts
│   └── iterative-strategy.ts
├── tools/                     # MCP tool implementations
│   ├── collaborate-tool.ts
│   ├── review-tool.ts
│   ├── compare-tool.ts
│   └── refine-tool.ts
├── services/                  # Enterprise services
│   ├── cache-service.ts
│   ├── metrics-service.ts
│   ├── search-service.ts
│   └── synthesis-service.ts
├── server/                    # MCP server implementation
│   └── mcp-server.ts
└── types/                     # Type definitions
    ├── common.ts
    ├── interfaces.ts
    └── index.ts
- Dependency Injection: Clean architecture with InversifyJS
- Strategy Pattern: Pluggable collaboration strategies
- Provider Abstraction: Unified interface for different AI services (sketched below)
- Performance: Efficient caching and rate limiting
- Observability: Comprehensive metrics and logging
- Extensibility: Easy to add new providers and strategies
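To make the Provider Abstraction and Strategy Pattern points concrete, here is a minimal sketch; the interface members are assumptions, and the real contracts live in src/types/interfaces.ts:

```typescript
// Hypothetical unified provider contract: every provider exposes the same
// surface, so strategies never care which vendor they are talking to.
interface AIProvider {
  readonly name: string;
  complete(prompt: string, timeoutMs?: number): Promise<string>;
}

// A strategy depends only on the abstraction. Naive consensus: the most
// common answer must account for at least `threshold` of all answers.
async function consensus(
  providers: AIProvider[],
  prompt: string,
  threshold: number,
): Promise<string | null> {
  const answers = await Promise.all(providers.map((p) => p.complete(prompt)));
  const counts = new Map<string, number>();
  for (const a of answers) counts.set(a, (counts.get(a) ?? 0) + 1);
  const [best, votes] = [...counts.entries()].sort((a, b) => b[1] - a[1])[0];
  return votes / answers.length >= threshold ? best : null;
}
```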
The server uses YAML configuration files with JSON Schema validation. See config/schema.json for the complete schema.
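As a hedged sketch of how that validation might be wired up, assuming js-yaml and Ajv (the project's actual loader lives in src/core/config.ts):

```typescript
import { readFileSync } from "node:fs";
import yaml from "js-yaml";
import Ajv from "ajv";

// Load the YAML config, then validate it against config/schema.json.
const config = yaml.load(readFileSync("config/default.yaml", "utf8"));
const schema = JSON.parse(readFileSync("config/schema.json", "utf8"));

const validate = new Ajv({ allErrors: true }).compile(schema);
if (!validate(config)) {
  throw new Error(`Invalid config: ${JSON.stringify(validate.errors)}`);
}
```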
- Server: Basic server settings (name, version, protocol)
- Providers: AI provider configurations and credentials
- Strategies: Strategy-specific settings and timeouts
- Cache: Caching behavior (memory, Redis, file)
- Metrics: Performance monitoring settings
- Logging: Log levels and output configuration
| Variable | Description | Default |
|---|---|---|
| `DEEPSEEK_API_KEY` | DeepSeek API key | Required |
| `OPENAI_API_KEY` | OpenAI API key | Optional |
| `ANTHROPIC_API_KEY` | Anthropic API key | Optional |
| `O3_API_KEY` | O3 API key (defaults to OPENAI_API_KEY) | Optional |
| `MCP_PROTOCOL` | Transport protocol | stdio |
| `MCP_DEFAULT_PROVIDER` | Default AI provider | deepseek |
| `NODE_ENV` | Environment mode | production |
| `LOG_LEVEL` | Logging level | info |
- Request Metrics: Response times, success rates, error counts (sketched below)
- Provider Metrics: Individual provider performance
- Tool Metrics: Usage statistics per MCP tool
- Cache Metrics: Hit rates, memory usage
- System Metrics: CPU, memory, and resource utilization
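These map naturally onto standard OpenTelemetry instruments. A hedged sketch of recording two of them by hand with the OpenTelemetry JS API (the meter name and attribute keys are assumptions):

```typescript
import { metrics } from "@opentelemetry/api";

// Counter for request totals, histogram for response times.
const meter = metrics.getMeter("ai-collaboration-mcp");
const requests = meter.createCounter("requests_total", {
  description: "Total tool requests",
});
const latency = meter.createHistogram("response_time_ms", {
  description: "Provider response time in milliseconds",
});

requests.add(1, { provider: "deepseek", tool: "collaborate" });
latency.record(842, { provider: "deepseek" });
```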
The server supports OpenTelemetry for distributed tracing and metrics collection:
metrics:
  enabled: true
  export:
    enabled: true
    format: "opentelemetry"
    endpoint: "http://localhost:4317"

- Unit Tests: 95+ individual component tests
- Integration Tests: End-to-end MCP protocol testing
- E2E Tests: Complete workflow validation
- API Tests: Direct provider API validation
# Run all tests
pnpm test
# Run with coverage
pnpm run test:coverage
# Run specific test suites
pnpm run test:unit
pnpm run test:integration
pnpm run test:e2e
# Verify API connectivity
pnpm run verify-deepseek

# Build image
docker build -t claude-code-ai-collab-mcp .
# Run container
docker run -d \
-e DEEPSEEK_API_KEY=your-key \
-p 3000:3000 \
claude-code-ai-collab-mcp

- Load Balancing: Multiple server instances for high availability
- Caching: Redis for distributed caching
- Monitoring: Prometheus/Grafana for metrics visualization
- Security: API key rotation and rate limiting
- Backup: Regular configuration and data backups
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
# Fork and clone the repository
git clone https://github.com/atsuki-sakai/ai_collaboration_mcp_server.git
cd ai_collaboration_mcp_server
# Install dependencies
pnpm install
# Start development
pnpm run dev
# Run tests
pnpm test
# Lint and format
pnpm run lint
pnpm run lint:fix

- GraphQL API support
- WebSocket transport protocol
- Advanced caching strategies
- Custom strategy plugins
- Multi-tenant support
- Enhanced security features
- Performance optimizations
- Additional AI providers
- Distributed architecture
- Advanced workflow orchestration
- Machine learning optimization
- Enterprise SSO integration
This project is licensed under the MIT License - see the LICENSE file for details.
- Documentation: Wiki
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: support@claude-code-ai-collab.com
- Model Context Protocol for the foundational protocol
- InversifyJS for dependency injection
- TypeScript for type safety
- All AI provider APIs for enabling collaboration
Built with ❤️ by the Claude Code AI Collaboration Team