A lightweight Retrieval-Augmented Generation (RAG) application for uploading documents and asking AI-powered questions about them. Built with FastAPI, LangChain, PostgreSQL, and Gemini.
- User uploads documents (PDF, DOCX, TXT)
- System extracts text and creates embeddings
- Embeddings stored in PostgreSQL with pgvector
- User asks questions via chat interface
- RAG pipeline retrieves relevant chunks and augments LLM response
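The retrieval step above can be sketched as a similarity search over stored chunk embeddings. This is a minimal pure-Python illustration: in the real pipeline the vectors are 768-dim Gemini embeddings and the nearest-neighbor search runs inside pgvector; the tiny 3-dim vectors and sample chunks here are stand-ins.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity = dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Stand-in "chunk -> embedding" store (pgvector plays this role in the app)
chunks = {
    "Invoices are due within 30 days.": [0.9, 0.1, 0.0],
    "The office is closed on Sundays.": [0.1, 0.8, 0.2],
}
query_embedding = [0.85, 0.15, 0.05]

# Retrieve the chunk most similar to the query embedding
best = max(chunks, key=lambda text: cosine_similarity(chunks[text], query_embedding))
print(best)
```

The retrieved chunk(s) are then prepended to the prompt so the LLM answers from the document rather than from memory alone.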
| Phase | Task | Status |
|---|---|---|
| 1 | Backend structure & DB schema | ✅ Complete |
| 2 | Embedding pipeline & document processing | ✅ Complete |
| 3 | RAG query engine & retrieval | ✅ Complete |
| 4 | Frontend UI (React/Vite) | ⏳ Next |
| 5 | Deployment & automation | 📅 Post-launch |
# Clone and setup
```bash
cd /home/fazrigading/Projects/ingatini
# Copy environment template
cp .env.example .env
# Start with Docker
docker compose up
# Or use the helper script
./dev start
```

- API documentation: http://localhost:8000/docs
- Health check: `curl http://localhost:8000/api/health`
See GETTING_STARTED.md for detailed setup instructions.
- Framework: FastAPI (Python)
- ORM: SQLAlchemy with PostgreSQL
- Vector Storage: pgvector (768-dim embeddings)
- RAG Pipeline: LangChain with Gemini
- API: RESTful with automatic docs
- Users — User accounts
- Documents — Document metadata
- Chunks — Text segments with embeddings
- QueryLogs — Query history for analytics
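Before rows land in the Chunks table, documents are split into overlapping text segments. A rough sketch of that splitting step follows; the chunk size and overlap values are illustrative assumptions (the actual pipeline uses LangChain's text splitters).

```python
def chunk_text(text, chunk_size=40, overlap=10):
    # Slide a window of `chunk_size` characters, stepping by
    # (chunk_size - overlap) so consecutive chunks share context.
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        piece = text[start:start + chunk_size]
        if piece:
            chunks.append(piece)
        if start + chunk_size >= len(text):
            break
    return chunks

doc = "RAG keeps answers grounded by retrieving source text before generation."
pieces = chunk_text(doc)
print(len(pieces), repr(pieces[0]))
```

Each resulting piece is embedded and stored alongside its parent document's ID, so retrieval can attribute answers back to a source.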
- Framework: React/Vite
- UI: Clean, minimal interface
- Features: Document upload, chat with source attribution
```
ingatini/
├── backend/              # FastAPI application
│   ├── app/
│   │   ├── core/         # Config & database
│   │   ├── api/          # Endpoints
│   │   ├── schemas/      # Pydantic models
│   │   ├── models/       # DB models
│   │   └── services/     # Business logic
│   ├── main.py
│   └── requirements.txt
├── frontend/             # React/Vite (TODO)
├── docker-compose.yml    # Dev environment
├── GETTING_STARTED.md    # Setup guide
├── IMPLEMENTATION.md     # Progress tracking
└── dev                   # Development helper
```
```bash
./dev start    # Start development
./dev logs     # View logs
./dev shell    # Access container
./dev test     # Run tests
./dev format   # Format code
```

```
POST /api/users/              # Create user
GET  /api/users/{id}          # Get user
POST /api/documents/upload    # Upload document
GET  /api/documents/{user_id} # List documents
POST /api/query/              # Query documents (RAG)
```
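A query can be issued with nothing but the standard library. This is a hypothetical client sketch: the request body's field names are assumptions, so check the generated docs at http://localhost:8000/docs for the real schema before relying on them.

```python
import json
import urllib.request

# Assumed request body for POST /api/query/ -- verify against /docs
payload = {"user_id": 1, "question": "What are the payment terms?"}

req = urllib.request.Request(
    "http://localhost:8000/api/query/",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send it once the backend is running.
print(req.get_method(), req.full_url)
```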
| Layer | Technology |
|---|---|
| API | FastAPI 0.109 |
| Database | PostgreSQL 15 + pgvector |
| ORM | SQLAlchemy 2.0 |
| RAG | LangChain + Gemini |
| Embedding | embedding-001 |
| LLM | gemini-pro |
| Frontend | React/Vite (TODO) |
| Containerization | Docker + Docker Compose |
Create `.env` from `.env.example`:

```env
DATABASE_URL=postgresql://user:password@localhost:5432/ingatini_db
GEMINI_API_KEY=your_key_here
GEMINI_EMBEDDING_MODEL=models/embedding-001
GEMINI_LLM_MODEL=gemini-pro
DEBUG=True
```

- GETTING_STARTED.md — Setup & usage guide
- IMPLEMENTATION.md — Progress & architecture
- backend/README.md — Backend development
- frontend/README.md — Frontend development
After MVP completion, integrate with n8n:
- Trigger: New file uploaded
- Action: Call embedding pipeline API
- Result: Auto-insert embeddings to database
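The "auto-insert embeddings" step the n8n workflow would trigger can be sketched as a parameterized INSERT. The table and column names below are assumptions based on the schema described earlier, and the 3-element vector stands in for a real 768-dim embedding; pgvector does accept vectors as `[x1,x2,...]` string literals cast to `vector`.

```python
# Assumed table/column names: chunks(document_id, content, embedding)
embedding = [0.12, -0.03, 0.57]  # in practice, 768 floats from embedding-001

# pgvector vector literal: '[x1,x2,...]'
vector_literal = "[" + ",".join(f"{x:.6f}" for x in embedding) + "]"

sql = (
    "INSERT INTO chunks (document_id, content, embedding) "
    "VALUES (%s, %s, %s::vector)"
)
params = (42, "chunk text here", vector_literal)
print(sql)
print(params)
```

With psycopg, this would be executed as `cursor.execute(sql, params)`; keeping the vector as a parameter avoids string-interpolating untrusted text into SQL.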
Licensed under the MIT License.
Next: See GETTING_STARTED.md to begin developing!