Sprint 1 – Full MVP Delivery for Titanic Survivor Prediction Application
Revision: v2-final · 08 May 2025
Sprint Duration: 2 Weeks
Team Members & Hours (Approx.):
- Fullstack Developers (40h total each):
  - Denisa-Iulia Vaidasigan @dv11079
  - Fares Elbermawy @fe18597
  - Huraira Ali @ha06705
  - Kazi Rahman @kr09619
- Backend & Model Specialists (40h total each):
  - Lev Malets @lm21363
  - Sameer Kumar @sk20179
Sprint Goal:
Deliver a complete, production-ready MVP of the Titanic Survivor Prediction Application that includes all major features and integrated services (frontend pages, backend prediction logic, ML model pipeline, admin console, marketing/advertisement integration, and Docker Compose orchestration), without incorporating user account or authentication flows (those will come in a later sprint). This sprint focuses on providing fully functional prediction capabilities, accessible immediately upon deployment.
Key Deliverables:
- Production-Ready Docker Infrastructure
  - A single `docker-compose.yml` that orchestrates all containers (frontend, backend, model, and Supabase for data storage).
- Complete Frontend (React + TypeScript)
  - A production-grade landing page with marketing content.
  - A fully functional Survival Calculator UI that collects passenger attributes and displays prediction results.
  - An Admin Console UI for model management, accessible through a dedicated interface (no authentication restrictions yet).
- Backend (FastAPI) for Predictions
  - A `POST /predict` endpoint that receives passenger data and returns survival predictions by calling the Model API.
  - Robust error handling and data validation for prediction requests.
  - Logging of prediction requests for auditing.
- ML Model Service
  - Endpoints for real-time inference (`/inference/`) with a stable, cached ML model.
  - An optional `/training/` endpoint for retraining the model and storing artifacts.
- Marketing & Advertisement Integration
  - Dedicated UI elements (banners, calls to action, etc.) on the landing page highlighting AI courses.
- Test Coverage & Final Documentation
  - Automated unit and integration tests for both frontend and backend.
  - Updated documentation (in each service’s `README.md` and the `docs/` folder) reflecting the final MVP configuration.
- Finalize Docker Compose & Deployment
  - Achieve zero local configuration: running `docker-compose up --build -d` should fully deploy the MVP.
  - Define all essential environment variables within the Compose file (no `.env` usage).
  - Verify reliable network calls between containers (frontend → backend → model).
- Complete Frontend Implementation
  - Landing Page: Deliver a production-quality design that explains the application’s purpose and highlights AI course ads.
  - Survival Calculator: Build a form-based component that collects the necessary passenger attributes (e.g., class, age, sex, fare, embarked) and displays the prediction result.
  - Admin Console UI: Implement a basic area to view available models and trigger training operations (no authentication barriers).
  - Ensure the UI is fully responsive on both desktop and mobile.
  - Include basic component and user-flow tests using React Testing Library or Jest.
- Robust Backend for Predictions
  - Prediction API: Provide a `POST /predict` endpoint that validates incoming data and returns a structured prediction.
  - Use appropriate HTTP status codes and descriptive error messages.
  - Log all prediction requests for auditing.
- ML Model Service
  - Load and cache a trained Titanic ML model (e.g., Random Forest or SVM) at startup.
  - Expose an `/inference/` endpoint for real-time predictions, returning numeric probabilities or classifications.
  - Provide a `/training/` endpoint to retrain the model, save artifacts, and (optionally) return metrics.
  - Optimize model loading and inference for production performance.
- Marketing & Advertisement
  - Integrate engaging marketing visuals, text blocks, and calls to action that promote AI courses.
  - Deliver a consistent, polished design across all devices.
- Automated Testing & Documentation
  - Implement robust unit and integration test coverage (a minimal backend test sketch follows this list).
  - Update all documentation (`README.md` files, `docs/` folder) so new developers can deploy the MVP with a single command.
  - Ensure automated CI pipelines verify the entire system.
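As a concrete target for the backend test coverage, the sketch below shows what a unit test of the prediction endpoint could look like with pytest and FastAPI's `TestClient`. It is a minimal, hedged example: the module path `app.main`, the field names, and the `survival_probability` key are assumptions and would need to match the actual codebase.

```python
# tests/test_predict.py — hedged sketch; module path and field names are assumptions.
from fastapi.testclient import TestClient

from app.main import app  # hypothetical location of the backend FastAPI instance

client = TestClient(app)


def test_predict_returns_probability():
    payload = {"pclass": 3, "sex": "female", "age": 29.0, "fare": 7.25, "embarked": "S"}
    response = client.post("/predict", json=payload)
    assert response.status_code == 200
    body = response.json()
    # The backend is expected to return a survival probability between 0 and 1.
    assert 0.0 <= body["survival_probability"] <= 1.0


def test_predict_rejects_missing_fields():
    # Missing required attributes should be rejected with a 4xx validation error.
    response = client.post("/predict", json={"age": 29.0})
    assert response.status_code in (400, 422)
```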
All tasks below must be completed for a production-ready MVP. Each task should map to a GitLab issue with clear acceptance criteria and an assigned owner.
- Task A1: `feat/docker-compose-setup` [x]
  - Description:
    - Finalize the multi-service Docker Compose configuration for frontend, backend, model, and Supabase (data storage only).
    - Eliminate `.env` usage; define required environment variables directly in the Compose file.
    - Expose and map essential ports (e.g., `3000` for frontend, `8000` for backend, `5000` for model, `5432` for Supabase).
  - Acceptance Criteria:
    - Running `docker-compose up --build -d` starts all services without errors.
    - Inter-container communication (e.g., backend ↔ model) is reliable.
    - The application is reachable at `http://localhost:3000/` without manual config steps.
  - Estimate: 6h
  - Owner(s): [Assign in GitLab] (Lev Malets @lm21363)
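To illustrate Task A1's "no `.env`" convention and the backend → model network call, the sketch below reads configuration straight from environment variables injected by the Compose file and addresses the model container by its service name. The variable name `MODEL_API_URL`, the service name `model`, and the `/inference/` contract are assumptions for illustration, not the project's confirmed setup.

```python
# backend/app/config.py — hedged sketch of reading Compose-injected settings.
import os

import httpx

# Defined directly in docker-compose.yml (no .env file); the default assumes the
# model service is named "model" and listens on port 5000 inside the Compose network.
MODEL_API_URL = os.environ.get("MODEL_API_URL", "http://model:5000")


def call_model_inference(features: dict) -> dict:
    """Forward passenger features to the model container and return its JSON reply."""
    response = httpx.post(f"{MODEL_API_URL}/inference/", json=features, timeout=10.0)
    response.raise_for_status()
    return response.json()
```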
- Task A2: `feat/ci-cd-prod-build` [ ] (Issue still open in GitLab)
  - Description:
    - Extend GitLab CI to auto-build Docker images for all services on each commit.
    - Validate images via container-level tests or health checks.
    - Ensure pipeline failures block merges if any container fails to build or start.
  - Acceptance Criteria:
    - Each push triggers Docker builds for all services.
    - Automated container health checks pass.
    - Merge requests are blocked if pipelines fail.
  - Estimate: 5h
  - Owner(s): [Assign in GitLab] (Lev Malets @lm21363)
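One possible shape for Task A2's "container-level tests or health checks" is a small script that a CI job runs after `docker-compose up --build -d`. This is only a sketch under assumptions: the host ports match the mapping in Task A1, and each service answers a simple GET with a non-error status — the exact URLs would need to match the real services.

```python
# ci/check_containers.py — hedged sketch of a post-deploy health check for the CI pipeline.
import sys
import time

import httpx

# Host-mapped ports from the Compose file; the probe paths are assumed, not confirmed.
SERVICES = {
    "frontend": "http://localhost:3000/",
    "backend": "http://localhost:8000/docs",
    "model": "http://localhost:5000/docs",
}


def wait_for(name: str, url: str, attempts: int = 30, delay: float = 2.0) -> bool:
    """Poll a service until it answers with a non-error status or attempts run out."""
    for _ in range(attempts):
        try:
            if httpx.get(url, timeout=3.0).status_code < 400:
                print(f"{name}: OK ({url})")
                return True
        except httpx.HTTPError:
            pass
        time.sleep(delay)
    print(f"{name}: FAILED ({url})")
    return False


if __name__ == "__main__":
    # Check every service (no short-circuit) so the CI log shows all failures at once.
    results = [wait_for(name, url) for name, url in SERVICES.items()]
    sys.exit(0 if all(results) else 1)
```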
- Task B1: `feat/frontend-landing-marketing` [x]
  - Description:
    - Deliver a production-grade landing page with integrated marketing content and visuals.
    - Include CTA buttons for the Survival Calculator and Admin Console.
    - Ensure the layout is fully responsive.
  - Acceptance Criteria:
    - The landing page meets marketing specifications (images, text blocks, CTAs).
    - The layout is responsive on desktop and mobile.
    - Users can easily navigate to the Survival Calculator and Admin Console.
  - Estimate: 8h
  - Owner(s): [Assign in GitLab] (Kazi Rahman @kr09619)
- Task B2: `feat/survival-calculator-ui` [x]
  - Description:
    - Implement a form-based component for collecting passenger attributes (e.g., class, sex, age, fare, embarked).
    - On submit (or in real time), call the backend to retrieve a prediction.
    - Provide graceful error handling for server downtime or invalid inputs.
  - Acceptance Criteria:
    - All required input fields are present and validated.
    - Prediction results are displayed clearly (e.g., success/failure alerts or probabilities).
    - Errors are handled gracefully with descriptive user feedback.
  - Estimate: 8h
  - Owner(s): [Assign in GitLab] (Denisa-Iulia Vaidasigan @dv11079, Fares Elbermawy @fe18597)
- Task B3: `feat/admin-console-frontend` [x]
  - Description:
    - Create an Admin Console UI showing a list of models (e.g., name, date trained, accuracy).
    - Allow “Train Model” or “Delete Model” actions (calling backend endpoints).
    - Provide a functional, responsive interface without user authentication.
  - Acceptance Criteria:
    - The admin console lists existing models and relevant metadata.
    - “Train Model” or “Delete Model” triggers backend endpoints successfully.
    - The interface is intuitive and responsive.
  - Estimate: 8h
  - Owner(s): [Assign in GitLab] (Huraira Ali @ha06705)
- Task C2: `feat/backend-prediction` [x]
  - Description:
    - Develop a `POST /predict` endpoint that accepts passenger data, forwards it to the Model API, and returns structured results.
    - Validate input data thoroughly.
    - Log prediction events for auditing.
  - Acceptance Criteria:
    - The endpoint returns a JSON response with survival probability or status.
    - Invalid or missing data triggers an HTTP 400 with a descriptive message.
    - All prediction requests are logged for future auditing.
  - Estimate: 6h
  - Owner(s): [Assign in GitLab] (Fares Elbermawy @fe18597, Denisa-Iulia Vaidasigan @dv11079)
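A minimal sketch of how Task C2 could look in FastAPI, assuming Pydantic v2 for validation and the hypothetical `call_model_inference` helper sketched under Task A1; the field names and response shape are illustrative, not the project's confirmed schema.

```python
# backend/app/routes/predict.py — hedged sketch; schema and helper names are assumptions.
import logging

from fastapi import APIRouter, HTTPException
from pydantic import BaseModel, Field

from app.config import call_model_inference  # hypothetical helper from the Task A1 sketch

logger = logging.getLogger("predictions")
router = APIRouter()


class PassengerInput(BaseModel):
    pclass: int = Field(ge=1, le=3)
    sex: str
    age: float = Field(ge=0, le=120)
    fare: float = Field(ge=0)
    embarked: str


class PredictionResponse(BaseModel):
    survived: bool
    survival_probability: float


@router.post("/predict", response_model=PredictionResponse)
def predict(passenger: PassengerInput) -> PredictionResponse:
    # Log every request for auditing before forwarding it to the model service.
    logger.info("Prediction request: %s", passenger.model_dump())
    try:
        result = call_model_inference(passenger.model_dump())
    except Exception as exc:  # model service unreachable or returned an error
        logger.error("Model service call failed: %s", exc)
        raise HTTPException(status_code=502, detail="Model service unavailable")
    probability = float(result["probability"])
    return PredictionResponse(survived=probability >= 0.5, survival_probability=probability)
```

Note that FastAPI returns 422 for validation failures by default; mapping that to the HTTP 400 named in the acceptance criteria would require a small custom exception handler.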
- Task C3: `feat/backend-admin-endpoints` [x]
  - Description:
    - Add endpoints to list models (`GET /models`), initiate training (`POST /models/train`), and delete models (`DELETE /models/{id}`).
    - No authentication is required for this MVP.
  - Acceptance Criteria:
    - All endpoints return standard JSON and update underlying data (model artifacts) as expected.
    - Responses are logged for auditing.
  - Estimate: 6h
  - Owner(s): [Assign in GitLab] (Huraira Ali @ha06705)
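The admin surface from Task C3 could be routed as below — a hedged sketch in which the backend tracks artifacts by filename on a shared volume and simply proxies training requests to the model service's `/training/` endpoint. The volume path, artifact naming, and proxying approach are assumptions; the real implementation may keep model metadata in the database instead.

```python
# backend/app/routes/models.py — hedged sketch; storage layout and proxying are assumptions.
from pathlib import Path

import httpx
from fastapi import APIRouter, HTTPException

from app.config import MODEL_API_URL  # hypothetical setting from the Task A1 sketch

router = APIRouter()
ARTIFACT_DIR = Path("/models")  # assumed shared volume between backend and model service


@router.get("/models")
def list_models() -> list[dict]:
    """List model artifacts found on the shared volume."""
    return [
        {"id": path.stem, "filename": path.name, "size_bytes": path.stat().st_size}
        for path in sorted(ARTIFACT_DIR.glob("*.pkl"))
    ]


@router.post("/models/train")
def train_model() -> dict:
    """Ask the model service to retrain and return whatever metrics it reports."""
    response = httpx.post(f"{MODEL_API_URL}/training/", timeout=300.0)
    response.raise_for_status()
    return response.json()


@router.delete("/models/{model_id}")
def delete_model(model_id: str) -> dict:
    artifact = ARTIFACT_DIR / f"{model_id}.pkl"
    if not artifact.exists():
        raise HTTPException(status_code=404, detail="Model not found")
    artifact.unlink()
    return {"deleted": model_id}
```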
- Task D1: `feat/model-service-inference` [ ] (Issue still open in GitLab)
  - Description:
    - Load the final Titanic ML model (e.g., Random Forest or SVM) at service startup, caching it for performance.
    - Provide an `/inference/` endpoint receiving input features and returning predictions.
    - Handle errors gracefully (e.g., if model loading fails).
  - Acceptance Criteria:
    - The ML model is loaded only once at startup.
    - The inference endpoint returns numeric probabilities or classification results.
    - Logs detail each inference request for traceability.
  - Estimate: 5h
  - Owner(s): [Assign in GitLab] (Sameer Kumar @sk20179, Lev Malets @lm21363)
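For Task D1's "load once, cache, serve" requirement, a hedged FastAPI sketch follows. The artifact path `/models/titanic_model.pkl`, the joblib format, Pydantic v2, and the assumption that the persisted object is a full preprocessing-plus-classifier pipeline are all illustrative choices that would have to match the training task.

```python
# model/app/main.py — hedged sketch of the inference service; paths and features are assumptions.
import logging
from pathlib import Path

import joblib
import pandas as pd
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

logger = logging.getLogger("model-service")
MODEL_PATH = Path("/models/titanic_model.pkl")  # assumed shared volume

app = FastAPI()
_model = None  # cached at startup so every request reuses the same object


@app.on_event("startup")
def load_model() -> None:
    global _model
    try:
        # Assumes the artifact is a scikit-learn pipeline that handles its own encoding.
        _model = joblib.load(MODEL_PATH)
        logger.info("Loaded model from %s", MODEL_PATH)
    except Exception as exc:
        logger.error("Model loading failed: %s", exc)


class Features(BaseModel):
    pclass: int
    sex: str
    age: float
    fare: float
    embarked: str


@app.post("/inference/")
def inference(features: Features) -> dict:
    if _model is None:
        raise HTTPException(status_code=503, detail="Model not loaded")
    # Log each inference request for traceability, as required by the acceptance criteria.
    logger.info("Inference request: %s", features.model_dump())
    frame = pd.DataFrame([features.model_dump()])
    probability = float(_model.predict_proba(frame)[0][1])
    return {"survived": probability >= 0.5, "probability": probability}
```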
- Task D2: `feat/model-service-training` [ ] (Issue still open in GitLab)
  - Description:
    - Implement a `/training/` endpoint to retrain the model with the Titanic dataset.
    - Store new model artifacts (`.pkl` files) on a shared volume.
    - Optionally return training metrics (e.g., accuracy, F1 score).
  - Acceptance Criteria:
    - Training completes without container crashes.
    - Newly trained model artifacts are saved correctly and can replace or supplement existing ones.
    - The endpoint returns a status message and (optionally) relevant metrics (e.g., “accuracy: 0.85”).
  - Estimate: 6h
  - Owner(s): [Assign in GitLab] (Sameer Kumar @sk20179, Lev Malets @lm21363)
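Task D2's retraining flow could be as simple as the sketch below: a scikit-learn pipeline trained on a Titanic CSV and dumped to the shared volume used by the inference service. The dataset location, lowercase column names, and pipeline choice (one-hot encoding plus a Random Forest) are assumptions for illustration.

```python
# model/app/training.py — hedged sketch; dataset location, columns, and pipeline are assumptions.
from pathlib import Path

import joblib
import pandas as pd
from fastapi import APIRouter
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

router = APIRouter()
DATA_PATH = Path("/data/titanic.csv")              # assumed training data location
ARTIFACT_PATH = Path("/models/titanic_model.pkl")  # shared volume read by /inference/

FEATURES = ["pclass", "sex", "age", "fare", "embarked"]
TARGET = "survived"


@router.post("/training/")
def train() -> dict:
    df = pd.read_csv(DATA_PATH).dropna(subset=FEATURES + [TARGET])
    X_train, X_test, y_train, y_test = train_test_split(
        df[FEATURES], df[TARGET], test_size=0.2, random_state=42
    )

    # One-hot encode categorical columns, pass numeric columns through unchanged.
    pipeline = Pipeline([
        ("encode", ColumnTransformer(
            [("cat", OneHotEncoder(handle_unknown="ignore"), ["sex", "embarked"])],
            remainder="passthrough",
        )),
        ("clf", RandomForestClassifier(n_estimators=200, random_state=42)),
    ])
    pipeline.fit(X_train, y_train)

    accuracy = accuracy_score(y_test, pipeline.predict(X_test))
    # The new artifact replaces the cached model the next time the service restarts.
    joblib.dump(pipeline, ARTIFACT_PATH)
    return {"status": "trained", "accuracy": round(float(accuracy), 3)}
```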
Note: For this sprint, user account management is deferred, but basic Supabase configuration for data storage is included.
- Task E1: `feat/supabase-setup-complete` [ ] (Issue still open in GitLab; the team has since moved away from Supabase)
  - Description:
    - Configure Supabase (Postgres + GoTrue) in production mode.
    - Migrate or seed any required database schemas (e.g., logs or model references).
    - Verify backend connectivity to Supabase.
  - Acceptance Criteria:
    - `docker-compose` successfully starts Supabase with correct credentials.
    - The backend can create/read/update relevant data and logs.
    - Any roles or privileges are set appropriately (if needed).
  - Estimate: 5h
  - Owner(s): [Assign in GitLab] (Lev Malets @lm21363)
- Task E2: `feat/production-db-handlers` [x]
  - Description:
    - Implement stable database operations for storing predictions, logs, or admin events.
    - Use transactions and handle potential DB failures gracefully.
  - Acceptance Criteria:
    - Data persists across app restarts and is queryable for metrics/debugging.
    - Database errors produce clear error messages and logs.
    - CI tests confirm successful migrations or queries.
  - Estimate: 5h
  - Owner(s): [Assign in GitLab] (Lev Malets @lm21363)
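A hedged sketch of Task E2's "transactions plus graceful failure" requirement, using psycopg against the Postgres instance behind Supabase. The `DATABASE_URL` variable, the `predictions` table, and its columns are assumptions — the real schema would come from the project's migrations.

```python
# backend/app/db.py — hedged sketch; DSN variable, table, and columns are assumptions.
import logging
import os

import psycopg
from psycopg.types.json import Json

logger = logging.getLogger("db")
DATABASE_URL = os.environ["DATABASE_URL"]  # injected via docker-compose.yml, not .env


def save_prediction(passenger: dict, probability: float) -> bool:
    """Persist one prediction inside a transaction; return False instead of raising on failure."""
    try:
        # The connection context manager commits on success and rolls back on error.
        with psycopg.connect(DATABASE_URL) as conn:
            with conn.cursor() as cur:
                cur.execute(
                    "INSERT INTO predictions (payload, probability) VALUES (%s, %s)",
                    (Json(passenger), probability),
                )
        return True
    except psycopg.Error as exc:
        logger.error("Failed to store prediction: %s", exc)
        return False
```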
- Standup Meetings (2x Weekly): Brief 15-minute calls to report progress, clear blockers, and plan the next steps.
- Task Management: All tasks appear as GitLab issues with acceptance criteria, time estimates, and labels (e.g., `frontend`, `backend`, `model`).
- Code Reviews:
  - Every merge request undergoes peer review for code quality, style adherence, and testing sufficiency.
  - Merges are only allowed if all CI checks pass.
- Continuous Integration (GitLab CI):
  - Each push triggers:
    - Linting and unit tests for backend, frontend, and model.
    - Docker builds for all services.
    - Container-level integration checks.
  - Merge requests are blocked on CI failures.
Sprint Review (End of Week 2):
- Live End-to-End Demo:
  - Demonstrate the complete workflow: using the Survival Calculator to obtain predictions and the Admin Console to manage models.
  - Verify that landing-page marketing content displays correctly.
- Production-Ready Check:
  - Deploy on a fresh environment via `docker-compose up --build -d`; confirm no manual config steps are required.
- Testing Verification:
  - All unit, integration, and (if implemented) E2E tests pass without regressions.
Retrospective:
- What Went Well?
  - Celebrate successes in delivering a production-ready MVP within one sprint.
- What Could Be Improved?
  - Discuss any time-management challenges or technical blockers.
- Action Items:
  - Document key takeaways for future sprints.
- Production-Ready Features
  - No placeholder code remains.
  - The entire system (prediction logic, model management, marketing) is fully operational.
- Separation of Concerns
  - User registration and authentication will be introduced in Sprint 2.
  - The current MVP focuses on essential predictions, model management, and marketing integration.
- Performance & Stability
  - Load the ML model once; cache it to ensure high-performance inference.
  - Confirm reliable Docker networking among services.
- Documentation Completeness
  - All relevant instructions are included so new team members can deploy and test the MVP in one step.
This plan ensures the project achieves a fully functional MVP for Titanic survival predictions—complete with Docker-based deployment, model management, marketing integration, thorough testing, and clear documentation—within the first two-week sprint.