A modern web-based monitoring and management interface for the NixOS router, built with FastAPI (backend) and React + Flowbite (frontend).
- FastAPI - Modern async Python web framework
- PostgreSQL - Time-series data storage for metrics and configuration (DNS, DHCP, Apprise, notifications)
- WebSockets - Real-time metrics broadcasting
- JWT Authentication - System user authentication via PAM
- SQLAlchemy - Async ORM for database operations
- Celery - Background task queue (workers + beat scheduler)
- Redis - Message broker and buffer/cache for Celery; also used for API response caching
- Background tasks (aggregation, notifications, port scanner, history cleanup) run in Celery processes, not in the FastAPI process. Production deploys use separate `router-webui-celery-worker` and `router-webui-celery-beat` services.
- React 18 - Modern UI library
- TypeScript - Type-safe development
- Flowbite React - Tailwind CSS component library
- Recharts - Beautiful, responsive charts
- Vite - Fast build tooling
- psutil - System metrics (CPU, memory, load, uptime)
- dnsmasq DHCP - Parse lease files for client information
- dnsmasq DNS - DNS statistics collection (where supported)
- Systemd - Service status monitoring
- Network interfaces - Real-time bandwidth monitoring
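The bandwidth figures are derived from deltas of interface byte counters sampled at the collection interval (psutil exposes these via `net_io_counters(pernic=True)`). A minimal sketch of the conversion, with hypothetical sample values:

```python
def rate_mbps(bytes_prev: int, bytes_now: int, interval_s: float) -> float:
    """Convert a byte-counter delta over an interval to megabits per second."""
    if interval_s <= 0:
        raise ValueError("interval must be positive")
    delta = max(bytes_now - bytes_prev, 0)  # guard against counter resets
    return (delta * 8) / (interval_s * 1_000_000)

# Hypothetical rx counter samples taken 2 seconds apart (the default
# collectionInterval in the NixOS module):
prev_rx, now_rx = 1_000_000, 4_800_000
print(f"{rate_mbps(prev_rx, now_rx, 2.0):.1f} Mbps")  # -> 15.2 Mbps
```

The same arithmetic applies per direction (rx/tx) and per interface.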
Real-time monitoring
- Dashboard with system metrics (CPU, memory, load average, uptime)
- Network interface statistics with live bandwidth graphs
- Per-device bandwidth and usage tracking
- Service status monitoring (DNS/DHCP, PPPoE, etc.)
- Speedtest with historical results
Configuration management
- DHCP - Configure DHCP networks and static reservations per network (homelab/lan)
- DNS - Configure DNS zones and records per network
- CAKE - View and configure CAKE traffic shaping
- Apprise - Manage notification services (email, Discord, Telegram, etc.)
- Dynamic DNS - Configure DynDns providers and updates
- Port Forwarding - Manage port forwarding rules
- Blocklists and Whitelist - Manage blocklists and whitelist per network
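The DHCP client data backing these pages comes from dnsmasq lease files, whose lines have the form `<expiry-epoch> <mac> <ip> <hostname> <client-id>` (a `*` hostname means none was reported). A sketch of parsing one, independent of the project's actual collector code:

```python
from datetime import datetime, timezone

def parse_leases(text: str) -> list[dict]:
    """Parse dnsmasq dhcp.leases lines: '<expiry> <mac> <ip> <hostname> <client-id>'."""
    clients = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) < 5:
            continue  # skip blank or malformed lines
        expiry, mac, ip, hostname, client_id = fields[:5]
        clients.append({
            "expires": datetime.fromtimestamp(int(expiry), tz=timezone.utc),
            "mac": mac,
            "ip": ip,
            "hostname": None if hostname == "*" else hostname,
            "client_id": client_id,
        })
    return clients

sample = "1704067200 aa:bb:cc:dd:ee:ff 192.168.1.50 laptop 01:aa:bb:cc:dd:ee:ff\n"
print(parse_leases(sample)[0]["hostname"])  # -> laptop
```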
Other
- Notifications - Automated alert rules based on system metrics; send test notifications via Apprise
- Service control - Start, stop, restart, reload DNS and DHCP services from the WebUI
- Worker Status - View Celery worker and task status
- Logs - System and application logs
- Documentation - In-app link to the project documentation site
Authentication and UX
- System user login (PAM), JWT token-based sessions
- Responsive design, dark mode, mobile-friendly
- In-app Documentation link
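The session tokens are standard HS256 JWTs. The signing step can be illustrated with the stdlib alone (the real backend presumably uses a JWT library; the payload fields here are illustrative):

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(payload: dict, secret: str) -> str:
    """Build a JWT as HS256 defines it: b64url(header).b64url(payload).signature"""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

token = sign_hs256({"sub": "admin", "exp": 1704067200}, "your-secret-key")
print(token.count("."))  # a JWT has three dot-separated segments -> 2
```

This is also why `JWT_SECRET_KEY` must be kept private: anyone holding it can mint valid tokens.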
cd webui/backend
# Install dependencies
pip install -r requirements.txt
# Set environment variables
export DATABASE_URL="postgresql+asyncpg://router_webui:password@localhost/router_webui"
export JWT_SECRET_KEY="your-secret-key-here"
export DEBUG=true
# Run development server
python -m uvicorn main:app --reload --host 0.0.0.0 --port 8080
# Access API docs
open http://localhost:8080/docs

For full configuration management and background tasks, Redis (and optionally Celery worker/beat) must be running; see Configuration below. Production on NixOS runs Celery as separate systemd services.
cd webui/frontend
# Install dependencies
npm install
# Run development server (with proxy to backend)
npm run dev
# Access frontend
open http://localhost:3000

# Create PostgreSQL database
createdb router_webui
# Run schema and migrations (see backend/migrations/)
psql router_webui < webui/backend/schema.sql
# Then apply migrations in backend/migrations/ as needed

{
imports = [
# ... other modules
./modules/webui.nix
];
services.router-webui = {
enable = true;
port = 8080;
collectionInterval = 2; # seconds
};
# Open firewall for WebUI
networking.firewall.allowedTCPPorts = [ 8080 ];
}

cd webui/frontend
npm install
npm run build

sudo nixos-rebuild switch

Open http://router-ip:8080 in your browser
Default credentials: Your system user account
- `POST /api/auth/login` - Login with system credentials
- `GET /api/auth/me` - Get current user info
- `POST /api/auth/logout` - Logout
- `WS /ws?token={jwt}` - WebSocket for real-time metrics
- `GET /api/history/system` - System metrics history
- `GET /api/history/interface/{name}` - Interface statistics history
- `GET /api/history/bandwidth/{network}?period={1h|24h|7d|30d}` - Bandwidth history
- `GET /api/history/services` - Service status history
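The `period` values accepted by the bandwidth history endpoint map naturally onto time windows; a hypothetical helper showing the mapping (not necessarily how the backend implements it):

```python
from datetime import timedelta

# The four period values the bandwidth history endpoint accepts.
PERIODS = {
    "1h": timedelta(hours=1),
    "24h": timedelta(hours=24),
    "7d": timedelta(days=7),
    "30d": timedelta(days=30),
}

def parse_period(period: str) -> timedelta:
    """Translate a period query parameter into the window to query over."""
    try:
        return PERIODS[period]
    except KeyError:
        raise ValueError(f"period must be one of {sorted(PERIODS)}") from None

print(parse_period("7d").days)  # -> 7
```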
- `GET /api/bandwidth/*` - Bandwidth and connection history
- `GET /api/devices/*` - Devices and client data
- `GET /api/system/*` - System metrics and info
- `GET /api/speedtest/*` - Speedtest results and history
- `GET /api/cake/*` - CAKE status and configuration (read/write)
- `GET/POST /api/dns/*` - DNS zones and records
- `GET/POST /api/dhcp/*` - DHCP networks and reservations
- `GET/POST /api/apprise/*` - Apprise services and send test
- `GET/POST /api/dyndns/*` - Dynamic DNS configuration
- `GET/POST /api/port-forwarding/*` - Port forwarding rules
- `GET /api/blocklists/*` - Blocklists configuration
- `GET /api/whitelist/*` - Whitelist configuration
- `GET/POST /api/notifications/*` - Notification rules
- `GET /api/worker-status/*` - Celery worker and task status
- `GET /api/logs/*` - System/application logs
- `GET /api/health` - Service health status
{
"type": "metrics",
"data": {
"timestamp": "2024-01-01T00:00:00",
"system": {
"cpu_percent": 25.5,
"memory_percent": 45.2,
"load_avg_1m": 0.85,
...
},
"interfaces": [
{
"interface": "ppp0",
"rx_rate_mbps": 15.2,
"tx_rate_mbps": 3.8,
...
}
],
"services": [...],
"dhcp_clients": [...],
"dns_stats": [...]
}
}

The WebUI uses PostgreSQL with automatic migrations (see backend/migrations/). Tables include:
- Metrics and history:
  `system_metrics`, `interface_stats`, `dhcp_leases`, `service_status`, and related time-series tables for bandwidth and history
- Configuration (stored in DB after migration): Apprise services, DNS zones/records, DHCP networks/reservations, notification rules
- Device overrides: Hostnames and overrides for devices
- Indexes: Optimized for time-series and config queries
On first startup, the backend can migrate Apprise, DNS, and DHCP configuration from router-config.nix / config files into the database; after that, those settings are managed via the WebUI.
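The actual migration mechanism lives in backend/migrations/; the core idea (apply `*.sql` files in order, remembering which have run) can be sketched in a few lines. This illustration uses SQLite so it is self-contained; the real backend targets PostgreSQL, and the `schema_migrations` table name here is an assumption:

```python
import sqlite3
from pathlib import Path
from tempfile import TemporaryDirectory

def apply_migrations(conn, migrations_dir: Path) -> list[str]:
    """Apply *.sql files in sorted filename order, tracking them in schema_migrations."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)")
    done = {row[0] for row in conn.execute("SELECT name FROM schema_migrations")}
    applied = []
    for path in sorted(migrations_dir.glob("*.sql")):
        if path.name in done:
            continue  # already applied on a previous startup
        conn.executescript(path.read_text())
        conn.execute("INSERT INTO schema_migrations (name) VALUES (?)", (path.name,))
        applied.append(path.name)
    return applied

with TemporaryDirectory() as tmp:
    d = Path(tmp)
    (d / "001_init.sql").write_text("CREATE TABLE system_metrics (ts TEXT, cpu REAL);")
    conn = sqlite3.connect(":memory:")
    print(apply_migrations(conn, d))  # -> ['001_init.sql']
    print(apply_migrations(conn, d))  # second run is a no-op -> []
```

Sorting by filename is why migration files carry numeric prefixes.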
# Core
DATABASE_URL=postgresql+asyncpg://user:pass@host/db
JWT_SECRET_KEY=your-secret-key
JWT_ALGORITHM=HS256
JWT_EXPIRATION_MINUTES=1440
DEBUG=false
# Collection and paths
COLLECTION_INTERVAL=5
DNSMASQ_LEASE_FILES=/var/lib/dnsmasq/homelab/dhcp.leases /var/lib/dnsmasq/lan/dhcp.leases
ROUTER_CONFIG_FILE=/etc/nixos/router-config.nix
# Redis (required for Celery and caching)
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
REDIS_DB=0
REDIS_PASSWORD= # optional
REDIS_WRITE_BUFFER_ENABLED=true
REDIS_BUFFER_FLUSH_INTERVAL=5
REDIS_BUFFER_MAX_SIZE=100
# Celery (worker/beat use these; backend uses Redis for cache)
# BROKER_URL or CELERY_BROKER_URL typically points to Redis, e.g. redis://localhost:6379/0

VITE_API_URL=http://localhost:8080
VITE_WS_URL=ws://localhost:8080/ws

# Check service status
sudo systemctl status router-webui-backend
# View logs
sudo journalctl -u router-webui-backend -f
# Test database connection
psql -h localhost -U router_webui -d router_webui -c "SELECT COUNT(*) FROM system_metrics;"

sudo systemctl status router-webui-celery-worker
sudo systemctl status router-webui-celery-beat
journalctl -u router-webui-celery-worker -f

# Check if backend is accessible
curl http://localhost:8080/api/health
# Rebuild frontend
cd webui/frontend
rm -rf node_modules dist
npm install
npm run build

- Verify JWT token is valid
- Check firewall allows port 8080
- Ensure backend service is running
- Check browser console for errors
- ~50-80MB RAM usage (backend only; workers additional)
- Configurable collection interval (default 5 seconds)
- Async I/O for non-blocking operations
- Database connection pooling
- Redis caching for API responses
- Minimal bundle size with Vite
- Lazy-loaded routes
- Efficient WebSocket reconnection
- Buffered metrics for sparklines (last 60 points)
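The "last 60 points" sparkline buffer is the classic bounded-deque pattern: appending beyond capacity silently drops the oldest sample. A sketch (class name is illustrative, not the frontend's actual code, which lives in TypeScript):

```python
from collections import deque

class SparklineBuffer:
    """Keep only the most recent N samples for a lightweight sparkline."""
    def __init__(self, size: int = 60):
        self.points = deque(maxlen=size)

    def push(self, value: float) -> None:
        self.points.append(value)  # oldest value is dropped automatically at capacity

buf = SparklineBuffer(size=60)
for i in range(100):
    buf.push(float(i))
print(len(buf.points), buf.points[0])  # -> 60 40.0
```

Memory stays constant no matter how long the dashboard is open.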
- PAM-based authentication
- JWT tokens with configurable expiration (secret from `JWT_SECRET` or `JWT_SECRET_FILE` in production)
- Login rate limiting (5 failed attempts per IP, 15-minute window, Redis-backed)
- `DATABASE_URL` required in production (env or config.env)
- HTTPS support (via reverse proxy)
- Systemd security hardening (NoNewPrivileges, ProtectSystem, etc.)
- CORS configuration for development
- SQL injection prevention (SQLAlchemy ORM)
- XSS protection (React escaping)
The app currently stores the JWT in localStorage, so any XSS could theoretically steal it. To harden further:
- Backend: On `POST /api/auth/login`, after validating credentials, set an httpOnly, Secure, SameSite cookie (e.g. `access_token=<jwt>`) instead of (or in addition to) returning the token in the JSON body. Use a short-lived cookie or the same expiry as the JWT.
- Backend: For API and WebSocket auth, accept the token from the `Cookie` header (e.g. `access_token=...`) as well as from `Authorization: Bearer <token>`. Validate the cookie on each request.
- Frontend: Stop reading/writing `access_token` from/to `localStorage`. Rely on the cookie being sent automatically with same-origin requests (`credentials: 'include'` if using fetch; axios with `withCredentials: true`). For WebSocket, either pass the token in a query param (still needed for WS) or use a cookie if your WS server can read it.
- Logout: Clear the cookie (e.g. `Set-Cookie: access_token=; Max-Age=0` on `POST /api/auth/logout`).
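The cookie described in the first bullet can be built with the stdlib to see exactly what the `Set-Cookie` header would contain; in FastAPI the equivalent one-liner is `response.set_cookie(...)`. This is a sketch, not the project's actual code:

```python
from http.cookies import SimpleCookie

def build_auth_cookie(jwt: str, max_age: int = 86400) -> str:
    """Render the Set-Cookie header value for an httpOnly JWT session cookie."""
    cookie = SimpleCookie()
    cookie["access_token"] = jwt
    c = cookie["access_token"]
    c["httponly"] = True   # invisible to document.cookie, so XSS cannot read it
    c["secure"] = True     # only sent over HTTPS
    c["samesite"] = "Lax"
    c["max-age"] = max_age
    c["path"] = "/"
    return cookie.output(header="").strip()

header = build_auth_cookie("eyJ.example.token")
print("HttpOnly" in header, "Secure" in header)  # -> True True
```

Logout is the same header with an empty value and `Max-Age=0`.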
This requires coordinated changes in `webui/backend/auth.py`, `webui/backend/api/auth.py`, `webui/frontend/src/api/client.ts`, and any frontend code that checks `localStorage.getItem('access_token')`.
This is part of the NixOS router project. Follow the main project's contribution guidelines.
Same as the parent project.