This project presents a hybrid music recommendation system that leverages both hand gestures and facial emotions to personalize song suggestions. By combining gesture recognition and emotion analysis, the system enhances user interaction and provides a tailored auditory experience.
- Gesture recognition using MediaPipe and TensorFlow (landmark-extraction sketch below).
- Emotion detection using the Facial Expression Recognizer (FER) library (usage sketch below).
- Songs are recommended based on detected gestures (higher priority) and emotions (priority logic sketched below).
- Uses the concept of Navarasa (nine emotions) mapped to Melakarta ragas (illustrative mapping included in the same sketch).
- Built with Python, OpenCV, Tkinter (for the GUI), and Pygame (for music playback; see the playback sketch below).
- FER2013: for facial emotion classification into seven emotions.
- Hand Gesture Recognition Dataset (Kaggle): 24,000 images across 20 gestures.
- Hand gesture classification using DenseNet201 (transfer-learning sketch below).
- Emotion detection via a CNN trained on FER2013 (architecture sketch below).
- Combined accuracy of 88%.
- Music recommendation tailored to real-time emotional and gestural inputs.
- Interface designed for ease of use, especially helpful in hands-free scenarios.
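
The repository's gesture pipeline is not reproduced in this README; the sketch below shows one common way to extract hand landmarks from webcam frames with MediaPipe's hand-tracking solution and flatten them into a feature vector for a TensorFlow classifier. The `gesture_model` handoff is a hypothetical placeholder, not the project's actual code.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            # 21 landmarks x (x, y, z) -> 63-dim feature vector.
            lms = results.multi_hand_landmarks[0].landmark
            features = [c for lm in lms for c in (lm.x, lm.y, lm.z)]
            # features -> gesture_model.predict(...)  (hypothetical classifier)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
```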
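With the FER library, emotion detection reduces to a couple of calls. A minimal usage sketch, assuming a single test image (`face.jpg` is a placeholder path):

```python
import cv2
from fer import FER

detector = FER(mtcnn=True)  # MTCNN backend for more reliable face detection

frame = cv2.imread("face.jpg")                # or a frame from cv2.VideoCapture
emotion, score = detector.top_emotion(frame)  # (None, None) if no face is found
print(emotion, score)                         # e.g. "happy" 0.92
```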
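The recommendation rule (gesture first, emotion as fallback) and the Navarasa mapping can be sketched as plain lookup tables. The emotion-to-rasa pairs below follow a natural Navarasa reading of the seven FER2013 classes (angry, disgust, fear, happy, sad, surprise, neutral); the gesture labels and the specific Melakarta raga choices are illustrative assumptions, not the project's actual tables.

```python
from typing import Optional

# FER2013 emotion -> Navarasa (rasa).
EMOTION_TO_RASA = {
    "angry": "Raudra", "disgust": "Bibhatsa", "fear": "Bhayanaka",
    "happy": "Hasya", "sad": "Karuna", "surprise": "Adbhuta",
    "neutral": "Shanta",
}

# Rasa -> Melakarta raga. These pairings are illustrative only.
RASA_TO_RAGA = {
    "Raudra": "Hanumatodi", "Bibhatsa": "Natabhairavi",
    "Bhayanaka": "Shubhapantuvarali", "Hasya": "Shankarabharanam",
    "Karuna": "Kharaharapriya", "Adbhuta": "Kalyani",
    "Shanta": "Mayamalavagowla",
}

# Hypothetical gesture -> playlist mapping (gesture names assumed).
GESTURE_TO_PLAYLIST = {"thumbs_up": "upbeat", "fist": "intense", "open_palm": "calm"}

def recommend(gesture: Optional[str], emotion: Optional[str]) -> str:
    """Gestures take priority; otherwise fall back to the emotion's raga."""
    if gesture in GESTURE_TO_PLAYLIST:
        return GESTURE_TO_PLAYLIST[gesture]
    if emotion in EMOTION_TO_RASA:
        return RASA_TO_RAGA[EMOTION_TO_RASA[emotion]]
    return "default"

print(recommend("fist", "happy"))  # -> "intense" (gesture wins)
print(recommend(None, "happy"))    # -> "Shankarabharanam"
```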
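Playback through Pygame needs only its mixer module. A minimal sketch, assuming an MP3 at a placeholder path; the Tkinter GUI would poll `get_busy()` from its event loop rather than blocking:

```python
import pygame

pygame.mixer.init()
pygame.mixer.music.load("songs/example.mp3")  # placeholder path
pygame.mixer.music.play()

# Block until the track finishes (a GUI would poll instead).
clock = pygame.time.Clock()
while pygame.mixer.music.get_busy():
    clock.tick(10)
```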
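A typical transfer-learning setup for the 20-class gesture classifier freezes ImageNet-pretrained DenseNet201 features and trains a small classification head on top. The input size, dropout rate, and optimizer below are assumptions, not the project's reported configuration:

```python
import tensorflow as tf

base = tf.keras.applications.DenseNet201(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
base.trainable = False  # freeze pretrained features initially

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(20, activation="softmax"),  # 20 gesture classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```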
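The emotion CNN's exact architecture is not given here; the sketch below is a conventional small CNN for FER2013's 48×48 grayscale inputs and seven output classes, offered only as a plausible shape:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(48, 48, 1)),  # FER2013: 48x48 grayscale
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(7, activation="softmax"),  # seven emotion classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```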
This project is developed for academic and research purposes only. Any implementation in real-world scenarios would require thorough testing, ethical review, and compliance with relevant legal frameworks. For any external or commercial use, please contact the authors for permission.






