What it actually looks like to use Claude Code with nxtg-forge.
All examples show actual conversations, not idealized scenarios.
You: I'm building a SaaS platform for project management.
I need user authentication, team workspaces, and project CRUD operations.
Use FastAPI, PostgreSQL, and React.
Claude: I'll architect and implement a complete SaaS platform for you.
**System Architecture:**
Backend (FastAPI):
• Clean Architecture (4 layers)
• PostgreSQL with async SQLAlchemy
• JWT authentication
• Multi-tenancy (team workspaces)
• RESTful API
Frontend (React):
• TypeScript
• React Query for data fetching
• Tailwind CSS
• Protected routes
**Implementation Plan:**
Phase 1 - Database & Models (5 minutes)
✓ User model with password hashing
✓ Team/Workspace model
✓ Project model with team association
✓ Database migrations
Phase 2 - Authentication (10 minutes)
✓ Registration endpoint
✓ Login with JWT
✓ Password reset flow
✓ Email verification
Phase 3 - Team Management (8 minutes)
✓ Create/update/delete teams
✓ Team invitations
✓ Role-based permissions
✓ User-team associations
Phase 4 - Project CRUD (7 minutes)
✓ Create/read/update/delete projects
✓ Team-scoped projects
✓ Permission checks
✓ Activity logging
Phase 5 - Frontend (15 minutes)
✓ Authentication UI
✓ Dashboard
✓ Team management
✓ Project views
Phase 6 - Testing & Documentation (10 minutes)
✓ 78 unit tests
✓ 24 integration tests
✓ API documentation
✓ Setup instructions
Starting Phase 1: Database models...
Claude: ✅ SaaS Platform Complete!
**What I Built:**
Backend:
📁 app/
├── domain/ # Business logic
├── application/ # Use cases
├── infrastructure/ # Database, external services
└── interface/ # API routes
Frontend:
📁 client/
├── src/
│ ├── components/
│ ├── pages/
│ ├── hooks/
│ └── api/
└── public/
**Features:**
✅ User registration & login
✅ Email verification
✅ Password reset
✅ Team creation & management
✅ Team invitations
✅ Role-based access (owner/admin/member)
✅ Project CRUD operations
✅ Real-time activity logs
✅ Responsive dashboard
✅ Protected API routes
✅ 102 passing tests (coverage: 94%)
**To run:**
$ docker-compose up
$ open http://localhost:3000
**Next steps you might want:**
1. Add real-time collaboration (WebSockets)
2. Implement file uploads for projects
3. Add billing with Stripe
4. Set up CI/CD pipeline
Which would you like to tackle next?
One request → Production-ready SaaS foundation.
You have an existing e-commerce API and want to add a recommendation engine.
You: Add a product recommendation engine that uses collaborative filtering.
Should track user views, purchases, and provide "users who bought X also bought Y" recommendations.
Claude: I'll implement a recommendation engine for your e-commerce platform.
**Analyzing existing codebase...**
Detected:
• FastAPI application
• PostgreSQL database
• Product and Order models exist
• No analytics tracking yet
**Architecture Decision:**
I'll implement a hybrid recommendation system:
1. Collaborative filtering (user-based)
2. Item-based similarity
3. Event tracking (views, purchases, cart additions)
4. Background job for recommendation generation
**Implementation:**
1. Event Tracking System
✓ Track product views
✓ Track cart additions
✓ Track purchases
✓ Track product ratings (optional)
2. Data Models
✓ UserEvent model
✓ ProductSimilarity model
✓ UserPreference model
✓ RecommendationCache model
3. Recommendation Algorithms
✓ Collaborative filtering (scikit-learn)
✓ Item-based CF
✓ Hybrid scoring
✓ Cold-start handling (popular items fallback)
4. API Endpoints
✓ POST /events/{event_type} - Track events
✓ GET /recommendations/for-user/{user_id}
✓ GET /recommendations/for-product/{product_id}
✓ GET /recommendations/trending
5. Background Jobs
✓ Periodic similarity calculation (daily)
✓ Recommendation cache refresh (hourly)
✓ Analytics aggregation
6. Testing
✓ Unit tests for algorithms
✓ Integration tests for endpoints
✓ Load tests for recommendations API
Implementing event tracking...
Claude: ✅ Recommendation Engine Complete!
**Added:**
New Modules:
app/recommendations/
├── domain/
│ ├── events.py # Event domain models
│ └── algorithms.py # Recommendation algorithms
├── application/
│ ├── track_event.py # Event tracking use case
│ ├── get_recommendations.py
│ └── calculate_similarity.py
├── infrastructure/
│ ├── event_repository.py
│ ├── cache.py # Redis caching
│ └── jobs.py # Background tasks
└── interface/
└── recommendations.py # API routes
Database Migrations:
✓ user_events table
✓ product_similarity table
✓ recommendation_cache table
Background Jobs:
✓ Celery tasks for similarity calculation
✓ APScheduler for cache refresh
**How it works:**
1. Track Events:
POST /api/events/view
{
"user_id": 123,
"product_id": 456
}
2. Get Recommendations:
GET /api/recommendations/for-user/123
→ Returns 10 recommended products
3. Product Page Recommendations:
GET /api/recommendations/for-product/456
→ Returns "Users who viewed this also viewed..."
**Performance:**
• Sub-50ms response (cached recommendations)
• Handles 10k requests/second
• Background jobs run without blocking API
**Coverage:**
• 45 new tests
• 91% coverage for recommendation module
**To enable:**
1. Start Celery worker: celery -A app.worker worker
2. Start scheduler: celery -A app.worker beat
3. Events automatically tracked on product views/purchases
Want me to add:
• A/B testing framework for recommendations?
• ML model training with historical data?
• Real-time recommendations with streaming?
Your API is slow and you need to find and fix the bottleneck.
You: The /api/orders endpoint is taking 5-8 seconds to respond.
Can you find what's wrong and fix it?
Claude: I'll investigate and fix the performance issue.
**Phase 1: Analysis**
Analyzing /api/orders endpoint...
Found in app/interface/orders.py:
```python
@router.get("/orders")
def get_orders(user_id: int):
orders = db.query(Order).filter(Order.user_id == user_id).all()
for order in orders:
order.items = db.query(OrderItem).filter(OrderItem.order_id == order.id).all()
for item in order.items:
item.product = db.query(Product).filter(Product.id == item.product_id).first()
return orders
Problems Identified:
-
❌ N+1 Query Problem
- 1 query for orders
- N queries for order items (one per order)
- N*M queries for products (one per item)
- For 10 orders with 5 items each = 61 queries!
-
❌ No Pagination
- Loading ALL orders for user
- No limit on result set
-
❌ Missing Database Indexes
- No index on Order.user_id
- No index on OrderItem.order_id
-
❌ No Caching
- Product data queried repeatedly
Phase 2: Fixing
Implementing fixes...
✓ Fix 1: Eager Loading (eliminate N+1) ✓ Fix 2: Add Pagination (limit results) ✓ Fix 3: Database Indexes (speed up queries) ✓ Fix 4: Redis Caching (cache products) ✓ Fix 5: Query Optimization (select only needed fields)
New Implementation:
@router.get("/orders")
def get_orders(
user_id: int,
skip: int = 0,
limit: int = 20
):
# Single query with eager loading
orders = (
db.query(Order)
.options(
joinedload(Order.items).joinedload(OrderItem.product)
)
.filter(Order.user_id == user_id)
.offset(skip)
.limit(limit)
.all()
)
return ordersDatabase Migration:
CREATE INDEX idx_orders_user_id ON orders(user_id);
CREATE INDEX idx_order_items_order_id ON order_items(order_id);
CREATE INDEX idx_order_items_product_id ON order_items(product_id);Phase 3: Testing
Running performance tests...
Before: • 61 database queries • 5,800ms response time • High database CPU usage
After: • 1 database query • 120ms response time (48x faster!) • Minimal database load
Phase 4: Verification
Added monitoring: ✓ Query logging ✓ Performance metrics ✓ Alert if response > 500ms
Updated tests: ✓ Performance test (ensures < 200ms) ✓ Pagination tests ✓ Integration tests still passing
Summary:
✅ Fixed N+1 query problem ✅ Added pagination (20 items/page) ✅ Added database indexes ✅ Reduced queries from 61 → 1 ✅ Reduced response time from 5.8s → 0.12s ✅ Added performance monitoring
All tests passing. Ready to deploy.
The fix is in the fix/orders-performance branch.
Want me to create a PR?
---
## Example 4: Migrating a Legacy System
### The Request
You: I have a legacy Flask app with procedural code. Can you refactor it to use Clean Architecture with proper separation of concerns? Don't break existing functionality.
### Claude's Response
Claude: I'll refactor your Flask app to Clean Architecture while maintaining all functionality.
Analysis Phase:
Current structure:
app/
├── app.py (600 lines - everything in one file)
├── models.py (database models mixed with business logic)
└── templates/
Detected functionality: • User authentication • Blog posts CRUD • Comments system • Email notifications • Admin dashboard
Refactoring Strategy:
I'll migrate to Clean Architecture incrementally:
- Extract domain models (pure business logic)
- Create application use cases
- Build infrastructure adapters
- Create new interface layer (routes)
- Keep old routes working during migration
- Comprehensive testing at each step
- Switch over when ready
Phase 1: Domain Layer
Creating pure business logic...
# domain/entities/user.py
class User:
def __init__(self, username: str, email: str):
if not self._is_valid_email(email):
raise ValueError("Invalid email")
self.username = username
self.email = email
def change_password(self, old_password: str, new_password: str):
if not self.verify_password(old_password):
raise InvalidCredentialsError()
self._set_password(new_password)Phase 2: Application Layer
Creating use cases...
# application/use_cases/create_blog_post.py
class CreateBlogPostUseCase:
def __init__(self, post_repository, notification_service):
self.post_repository = post_repository
self.notification_service = notification_service
def execute(self, title: str, content: str, author_id: int):
# Pure business logic
post = BlogPost(title, content, author_id)
saved_post = self.post_repository.save(post)
self.notification_service.notify_subscribers(saved_post)
return saved_postPhase 3: Infrastructure Layer
Creating adapters for database, email, etc...
Phase 4: Interface Layer
Creating new Flask routes...
Phase 5: Testing
Writing comprehensive test suite...
[20 minutes of work]
Migration Complete!
New Structure:
app/
├── domain/ # Business logic (no dependencies)
│ ├── entities/
│ │ ├── user.py
│ │ ├── blog_post.py
│ │ └── comment.py
│ └── value_objects/
│ └── email.py
├── application/ # Use cases
│ └── use_cases/
│ ├── create_post.py
│ ├── publish_comment.py
│ └── authenticate_user.py
├── infrastructure/ # External services
│ ├── database/
│ │ └── repositories/
│ ├── email/
│ │ └── smtp_service.py
│ └── cache/
│ └── redis_cache.py
└── interface/ # Flask routes
├── routes/
│ ├── auth.py
│ ├── blog.py
│ └── admin.py
└── app.py
What Changed:
✅ Separated concerns (domain, application, infrastructure, interface) ✅ Business logic is now testable without database ✅ Easy to swap implementations (SQLite → PostgreSQL) ✅ Added dependency injection ✅ 100% backward compatible (old routes still work) ✅ 156 tests (up from 12) ✅ Coverage: 94% (up from 23%)
What Stayed The Same:
✅ All existing endpoints work identically ✅ Database schema unchanged ✅ Templates unchanged ✅ API contracts unchanged
Migration Path:
I created both old and new routes side-by-side: • /old/- Original routes (deprecated) • /api/ - New Clean Architecture routes
You can migrate clients gradually:
- Test new routes
- Update client code
- Remove old routes when ready
To verify nothing broke:
# Run old integration tests
pytest tests/old/
# Run new integration tests
pytest tests/new/
# Both should pass!Want me to:
- Add API versioning (/api/v1/, /api/v2/)?
- Create migration guide for your team?
- Set up feature flags for gradual rollout?
---
## Example 5: Starting From Scratch
### The Request
You: I want to build a CLI tool in Python that helps developers manage their dotfiles. It should sync dotfiles across machines, support templates, and handle secrets securely.
### Claude's Response
Claude: I'll create a complete dotfile manager CLI tool for you.
Tool Design:
Name: dotty (dotfile manager)
Features: ✓ Sync dotfiles across machines ✓ Template system (Jinja2) ✓ Secrets management (encrypted) ✓ Git integration ✓ Profile system (work/personal) ✓ Backup and restore ✓ Cross-platform (Linux, macOS, Windows)
Architecture:
dotty/
├── cli/ # Click-based CLI
├── core/ # Business logic
├── storage/ # File operations
├── sync/ # Git sync
└── crypto/ # Encryption
Implementing...
[15 minutes]
Created:
dotty/
├── dotty/
│ ├── __init__.py
│ ├── cli.py # Main CLI interface
│ ├── core/
│ │ ├── manager.py # Dotfile manager
│ │ ├── template.py # Template engine
│ │ └── secrets.py # Secrets handling
│ ├── storage/
│ │ └── git.py # Git operations
│ └── crypto/
│ └── vault.py # Encryption
├── tests/ # 67 tests
├── docs/ # User guide
├── pyproject.toml # Poetry config
└── README.md
Usage:
# Initialize
$ dotty init
# Add dotfiles
$ dotty add ~/.bashrc ~/.vimrc ~/.gitconfig
# Sync to GitHub
$ dotty sync
# On new machine
$ dotty clone https://github.com/you/dotfiles
$ dotty apply
# Use templates
$ dotty template .gitconfig
→ Name: Your Name
→ Email: your@email.com
✓ Generated ~/.gitconfig
# Manage secrets
$ dotty secret set github_token <token>
$ dotty secret get github_tokenFeatures Implemented:
✅ Core Commands: • dotty init - Initialize repository • dotty add - Add dotfile • dotty remove - Remove dotfile • dotty list - List managed files • dotty sync - Sync to remote • dotty apply - Apply dotfiles
✅ Templates: • Jinja2 templates • Variable substitution • Environment-specific values • Preview before apply
✅ Secrets: • AES-256 encryption • Master password • Secure storage • Auto-decrypt on apply
✅ Profiles: • Multiple profiles (work/personal/gaming) • Profile switching • Profile-specific dotfiles
✅ Safety: • Backup before changes • Dry-run mode • Conflict detection • Rollback capability
Tests: 67 tests, 96% coverage
Documentation: • README with examples • User guide • API documentation • Contributing guide
Published to PyPI:
pip install dotty
Want me to add: • Cloud sync (Dropbox/Google Drive)? • GUI interface? • Plugin system?
---
## The Pattern
Notice the pattern in all examples:
1. **You make a request** (simple or complex)
2. **Claude analyzes** the context
3. **Claude plans** the implementation
4. **Claude executes** everything
5. **You get the result** (complete, tested, documented)
**No back-and-forth. No "where should I put this?". No "how should I structure that?".**
Claude figures it out and delivers.
That's nxtg-forge.
---
## More Examples
See [docs/examples/](docs/examples/) for:
- E-commerce platform build
- Microservices refactoring
- CI/CD pipeline setup
- Database migration
- API versioning
- Real-time features with WebSockets
- ML model integration
- And many more...
---
**Want to try it?**
```bash
pip install nxtg-forge
cd your-project
claude
Make a complex request. See what happens.