A production-ready Model Context Protocol (MCP) provider for SQL databases, built using various vibe coding tools. Supports MySQL, MariaDB, PostgreSQL, and SQLite. Features robust connection pooling, secure AES-GCM credential storage, structured JSON logging, comprehensive schema introspection, and a full suite of 21 MCP tools. Built and tested with Go 1.26.0.
```shell
# Clone the repository
git clone https://github.com/guyinwonder168/database-mcp-server.git
cd database-mcp-server

# Build the server
go build -o mcp-server ./cmd/server/main.go

# Run the server
./mcp-server
```

```shell
# Pull the release image
docker pull ghcr.io/guyinwonder168/database-mcp-server:v1.4.0

# Run with stdio transport
docker run --rm -i ghcr.io/guyinwonder168/database-mcp-server:v1.4.0

# Persist config.yaml and logs on host
mkdir -p ./.mcp-data
docker run --rm -i \
  -v "$(pwd)/.mcp-data:/app" \
  ghcr.io/guyinwonder168/database-mcp-server:v1.4.0
```

Package registry: https://github.com/guyinwonder168/database-mcp-server/pkgs/container/database-mcp-server
- Interactive Setup - Auto-creates `config.yaml` if missing; all configuration is managed via MCP actions
- Profile Management - Create, update, delete, or clone database profiles via MCP
- SQL Execution - Run arbitrary SQL queries (with read-only enforcement)
- Schema Introspection - List tables/views with schema info, describe table schemas, list databases, discover joins, and browse schemas
- Multi-Schema Support - Work with tables in any PostgreSQL schema, with automatic detection and schema-qualified queries
- Sample Data Fetching - Fetch sample rows to infer data formats and value ranges
- Automated Join Discovery - Suggest JOIN SQL for building complex queries
- Query Optimization - EXPLAIN-based analysis with findings and performance estimates
- Query Validation - Syntax, logic, and security checks before execution
- Smart Query Builder - Generate SQL queries programmatically (integrated into `analyze-schema` AIQuerySuggestions)
- Read-only Profiles - Prevent write operations on selected profiles
- Secure Credentials - Passwords are encrypted at rest using AES-GCM (256-bit)
- Connection Pooling - Efficient, configurable pooling with a maximum pool size
- Structured Logging & Error Handling - All actions and errors are logged as structured JSON, with actionable error responses
- Tool Discovery - The `list-tools` MCP action returns a machine-readable list of all available tools/actions
- Official MCP Protocol - Communication via stdio (not an HTTP server; JSON is exchanged over stdio via the official Go MCP SDK)
- Business Intelligence - Discover KPIs, trends, anomalies, and distribution patterns via `discover-insights`
- Data Lineage - Analyze upstream/downstream dependencies via `analyze-data-lineage`
- Schema Evolution Tracking - Track schema snapshots, detect drift, and generate migration scripts via `track-schema-changes`
- Advanced Data Profiling - Optional statistical/pattern profiling for `analyze-schema` via `profiling: true`
- Multi-Database Federation - Execute federated subqueries with cross-profile joins via `federated-query`
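To make the credential-encryption feature concrete, the sketch below shows how AES-256-GCM encryption at rest can be done with Go's standard library. This is a minimal illustration under stated assumptions, not the server's actual implementation: the function names `encryptPassword`/`decryptPassword` and the hard-coded demo key are hypothetical.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"encoding/base64"
	"fmt"
	"io"
)

// encryptPassword seals plaintext with AES-256-GCM under a 32-byte key and
// returns nonce||ciphertext, base64-encoded for storage in a config file.
func encryptPassword(key []byte, plaintext string) (string, error) {
	block, err := aes.NewCipher(key) // 32-byte key selects AES-256
	if err != nil {
		return "", err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", err
	}
	// A fresh random nonce per encryption is required for GCM safety.
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return "", err
	}
	sealed := gcm.Seal(nonce, nonce, []byte(plaintext), nil)
	return base64.StdEncoding.EncodeToString(sealed), nil
}

// decryptPassword reverses encryptPassword, verifying the GCM auth tag.
func decryptPassword(key []byte, encoded string) (string, error) {
	raw, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		return "", err
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		return "", err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", err
	}
	if len(raw) < gcm.NonceSize() {
		return "", fmt.Errorf("ciphertext too short")
	}
	nonce, ct := raw[:gcm.NonceSize()], raw[gcm.NonceSize():]
	pt, err := gcm.Open(nil, nonce, ct, nil)
	if err != nil {
		return "", err
	}
	return string(pt), nil
}

func main() {
	// Demo key only; a real deployment derives or loads the key securely.
	key := []byte("0123456789abcdef0123456789abcdef")
	ct, _ := encryptPassword(key, "s3cret")
	pt, _ := decryptPassword(key, ct)
	fmt.Println(pt == "s3cret")
}
```

Because GCM authenticates as well as encrypts, any tampering with the stored ciphertext causes `Open` to return an error rather than garbage plaintext.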
| Tool | Description |
|---|---|
| `configure-profile` | Create, update, delete, or clone database connection profiles |
| `list-profiles` | List all configured database profiles |
| `execute-sql` | Execute arbitrary SQL queries with read-only enforcement |
| `list-tables` | List tables with schema information in the selected database |
| `describe-table` | Describe comprehensive table schema with metadata (supports `schema` parameter) |
| `list-databases` | List accessible databases for a profile |
| `list-schemas` | List all accessible database schemas with the default schema |
| `get-search-path` | Get the current `search_path` and effective schema (read-only diagnostic) |
| `smart-query-builder` | Generate SQL from high-level intent |
| `optimize-query` | Run EXPLAIN and return the plan, findings, and a performance estimate |
| `validate-query` | Validate SQL syntax and flag risky patterns before execution |
| `analyze-data-lineage` | Trace FK-based upstream/downstream table dependencies |
| `discover-joins` | Discover foreign key relationships and suggest JOINs |
| `sample-data` | Fetch sample rows to infer data formats |
| `discover-insights` | Discover KPIs, trends, anomalies, and distribution patterns in database tables |
| `track-schema-changes` | Track schema snapshots/history, generate migrations, and detect schema drift |
| `federated-query` | Execute read-only cross-profile subqueries with optional JOINs, aggregation, and partial-failure metadata |
| `list-tools` | List all available MCP tools and descriptions |
| `get-tool-help` | Return an on-demand summary, examples, and common errors for a tool |
| `analyze-schema` | Comprehensive schema analysis with AI query suggestions and optional advanced profiling (`profiling`) |
| `mcp-info` | Show provider version and author |
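For orientation, a minimal `execute-sql` request might look like the sketch below. The `profile_name` and `database_name` fields are modeled on the request examples shown later in this document; the `sql` field name is an assumption and may differ from the tool's actual input schema (use `get-tool-help` for the authoritative shape).

```json
{
  "profile_name": "mydb",
  "database_name": "mydb",
  "sql": "SELECT id, name FROM users LIMIT 10"
}
```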
- The default schema mode is `compact`, for tool-first and strict declaration-budget clients.
- An optional `standard` mode keeps verbose tool descriptions for human-readable metadata.
- All 21 MCP tools are always registered.
- Gemini Compatibility: Schemas are automatically sanitized to comply with Google Gemini's OpenAPI 3.0 subset requirements (single `type` values, no `additionalProperties: false`, proper `items` schemas).
- Use `get-tool-help` for per-tool examples and troubleshooting without inflating startup metadata.

`config.yaml`:

```yaml
schema_mode: compact # compact|standard
```

Recommended startup configuration for strict tool-loading clients:

```yaml
schema_mode: compact
```

Helper tool request example:

```json
{
  "tool_name": "execute-sql",
  "topic": "all"
}
```

- API Documentation - Detailed API specifications and examples
- Implementation Status - Current implementation tracking
- Technical Specifications - Architecture and design details
- PRD Analysis - Product requirements analysis with AI perspective
- Schema Introspection Queries - Database-specific queries
- Test Enhanced Schema - Test schema documentation
- Enhancement Roadmap - Strategic development planning
The project includes a comprehensive memory bank system for AI assistants, located in `.kilocode/rules/memory-bank/`:
- Architecture - System architecture and component relationships
- Brief - Project overview and requirements
- Context - Current state and recent changes
- Product - Problem statement and solution overview
- Tech - Technology stack and development setup
- Roadmap - Consolidated enhancement plan
- Vertical Slices (History) - Phase-by-phase development breakdown
- Architecture Validation (History) - Technical compatibility analysis
- MCP Tool Detection Fix (History) - Critical bug fix documentation
- CHANGELOG - Detailed release notes and version history
We welcome contributions! Please see our Contributing Guidelines for development setup and workflow.
This project is licensed under the MIT License.
For security policies and vulnerability reporting, please see our Security Policy.
Please read our Code of Conduct for community guidelines.
```shell
# Run all tests
go test ./...

# Run tests with coverage
go test -cover ./...

# Run specific test suites
go test ./internal/mcp -run "TestDescribe" -v     # MySQL/PostgreSQL descriptor tests
go test ./internal/mcp -run "TestLoadLineage" -v  # Lineage edge tests
```

- MySQL/PostgreSQL: Uses go-sqlmock for database-specific query testing without requiring real database connections
- SQLite: Uses real in-memory databases for actual SQLite functionality testing
- Live Integration: Optional live database tests via `DB_MCP_IT_*` environment variables
- Version: v1.4.0
- Built with: Various vibe coding tools
- Status: Production Ready
- All 21 MCP tools are fully implemented and OpenAPI-aligned.
- Enhanced schema introspection and sample data features.
- Optional advanced profiling in `analyze-schema` for column-level statistics, pattern detection, and quality scoring.
- AES-GCM encryption, connection pooling, and structured error handling are enforced.
- Comprehensive unit and integration tests included.
- Ready for production use.
- Business Intelligence Discovery: Added the `discover-insights` tool for automatic KPI, trend, anomaly, and distribution analysis
Current Development Status: The Database MCP Server is production-ready with a comprehensive enhancement roadmap in progress.
Implementation Phases:
- Phase 1 (Completed): Query optimization, validation, and enhanced NLP
- Phase 2 (Completed): Data lineage and business intelligence
- Phase 3 (Completed): Schema evolution, advanced profiling, and multi-database federation
- Phase 4 (Planned): Cross-database data migration with async jobs, schema translation, and resume capability
Current Progress:
- `track-schema-changes` is implemented with snapshot tracking, history, migration generation, and drift detection.
- Advanced profiling for `analyze-schema` is implemented with an optional `profiling` parameter and a backward-compatible response shape.
- `federated-query` is implemented with parser/planner/join/executor/handler modules and dedicated test coverage.
- `configure-profile` was enhanced with `delete` and `clone` actions (v1.3.0).
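As an illustration of what a cross-profile call could look like, a `federated-query` request is sketched below. Every field name here is hypothetical: the actual input schema is defined by the tool itself, so consult `get-tool-help` for the authoritative shape before relying on this.

```json
{
  "queries": [
    { "profile_name": "sales_db", "sql": "SELECT customer_id, total FROM orders" },
    { "profile_name": "crm_db", "sql": "SELECT customer_id, region FROM customers" }
  ],
  "join_on": "customer_id"
}
```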
Planning Documents:
- Enhancement Roadmap - Strategic overview
- Data Migration Design - Phase 4 implementation plan
- Project Plan (History) - Comprehensive implementation strategy
- Vertical Slices (History) - Detailed phase breakdowns
This server provides comprehensive multi-schema support for PostgreSQL databases:
| Tool | Schema Support |
|---|---|
| `list-tables` | Returns schema information for all tables |
| `describe-table` | Optional `schema` parameter with auto-detection |
| `sample-data` | Optional `schema` parameter |
| `analyze-schema` | Optional `schema` parameter |
| `list-schemas` | Discover all accessible schemas |
| `get-search-path` | Diagnostic tool for the current schema context |
This server uses connection pooling per database profile. Due to this architecture:
- `set-search-path` is intentionally NOT implemented - session-level `search_path` changes would contaminate other clients using the same pooled connection
- Use explicit schema qualification via the `schema` parameter on relevant tools
- Auto-detection falls back gracefully: `current_schema()` → first accessible schema → `'public'`
Describe a table in a specific schema:

```json
{
  "profile_name": "mydb",
  "database_name": "mydb",
  "table_name": "users",
  "schema": "bitnami_redmine"
}
```

List all schemas:

```json
{
  "profile_name": "mydb",
  "database_name": "mydb"
}
```

Ready for immediate enhancement development while maintaining production stability.