
Department of Computer Applications (BCA)

MOOD BASED MUSIC RECOMMENDER

MINI PROJECT REPORT

Course Code: BCA EC506

Course Title: Mini Project

Submitted To: Mr. Inderjeet

Submitted By:

Student Name Roll Number

AMAN SHARMA 41222120

AVNEESH 41222126

Year & Semester : IIIrd Year (Semester - V)

Date of Submission : 09.12.2024


Candidate Declaration
We, the undersigned, hereby declare that the project report titled 'Mood
Based Music Recommender' represents our own original work
conducted under the guidance of Mr. Inderjeet from the Department of
Computer Science & Applications.

Furthermore, we affirm that the contents of this project report have not
been previously submitted, in part or in full, for the fulfillment of any other
academic qualification by either of us. We assert that the research
presented in this report is authentic, and all sources referenced have
been duly acknowledged.

This declaration serves to affirm our commitment to academic integrity
and honesty in scholarly pursuits. We have diligently conducted the
research, adhered to ethical standards, and ensured the accuracy and
reliability of the information presented herein.

We acknowledge the invaluable guidance and support provided by our
mentor, Mr. Inderjeet, whose expertise and encouragement have been
instrumental in the successful completion of this project.

We hereby affix our signatures to this declaration as a testament to our
accountability and responsibility for this project.

Avneesh (41222126)
Aman Sharma (41222120)
Acknowledgement

We extend our heartfelt appreciation to the esteemed individuals and
entities who have contributed significantly to the successful completion
of this Mini Project. Our deepest gratitude goes to Mr. Inderjeet from the
Department of Computer Science, whose unwavering guidance,
expertise, and encouragement have been invaluable pillars of support
throughout every phase of this endeavor.

Furthermore, we express our sincere thanks to the Delhi Skill &
Entrepreneurship University for generously providing the necessary
resources and opportunities that enabled us to undertake and
accomplish this project effectively.

Special acknowledgment is also extended to the group members,
Aman Sharma and Avneesh, whose dedication and collaboration were
instrumental in driving the project forward and achieving its objectives.
We are profoundly grateful for the collective support and collaboration of
all involved, which have been paramount in the realization of this
project's goals and objectives.
Abstract of the Project

Music plays a vital role in human life, often reflecting or influencing
emotional states. Traditional music recommendation systems primarily
rely on user preferences, listening history, or genre selection, which
limits their ability to adapt to a user's immediate emotional needs.

This project presents the development of a music recommendation
system based on emotion detection using machine learning techniques.
The system aims to recommend music tracks according to the user's
current emotional state, detected from their text input.

The primary goal of the system is to improve the user experience by
providing personalized music recommendations that align with their
emotional needs. The system uses natural language processing (NLP)
and sentiment analysis to identify emotions from text, followed by the
integration of these emotions with a music recommendation model.

The implementation is achieved through machine learning algorithms for
emotion classification and music track suggestion algorithms. Results
demonstrate the system's effectiveness in delivering mood-based music
recommendations with a high accuracy rate in detecting emotions and
matching appropriate tracks.

The project showcases a web-based interface where users can input
textual descriptions of their feelings and receive emotion-aligned music
recommendations. Evaluation of the system demonstrated high accuracy
in emotion detection and significant user satisfaction with the
recommended playlists. This emotion-aware approach bridges the gap
between user sentiment and music preferences, offering an engaging
and dynamic music-listening experience.
Introduction
Music holds a unique power to resonate with human emotions, acting as
a companion during times of joy, sorrow, or relaxation. Over the years,
personalized recommendation systems have transformed how users
discover digital content, including music. However, existing music
recommendation systems often focus on historical preferences, genres,
or popularity, overlooking the dynamic nature of a user's emotions.

This project aims to bridge that gap by integrating emotion detection into
the music recommendation process. By analyzing text input provided by
the user, the system identifies their emotional state using natural
language processing (NLP) techniques. Based on the detected emotion,
a tailored list of music tracks is suggested, aligning with the user’s mood
and enhancing their listening experience.

With advancements in machine learning, particularly in sentiment
analysis and emotion classification, it is now feasible to personalize
music recommendations in real time. This project explores these
possibilities, providing a deeper connection between user sentiment and
music preferences. The goal is to create an intuitive and emotionally
aware music recommendation system that offers an enriched and
meaningful user experience.

This report delves into the system's methodology, implementation
details, results, and future prospects, shedding light on how
emotion-based music recommendations can redefine user interactions in
the digital music landscape.
Key Objectives
● Develop an intelligent system that recommends music tracks
based on the user's emotional state described using text.

● Utilize machine learning and natural language processing (NLP)
techniques to detect emotions from user-provided text input.

● Classify emotions into predefined categories such as happiness,
sadness, anger, or calmness for personalized music
recommendations.

● Create and maintain a curated database of music tracks tagged
with corresponding emotional labels.

● Implement a content-based filtering algorithm to match detected
emotions with relevant music tracks.

● Design a user-friendly web-based interface using Streamlit for
seamless input and output interactions.

● Evaluate the system's accuracy in emotion detection and user
satisfaction with music recommendations.

● Explore the potential of emotion-based systems in enhancing user
engagement with digital music platforms.

● Explore the different machine learning techniques used in
recommendation systems.

● Integrate emotion detection with music suggestion algorithms to
improve the relevance of recommended tracks.
Requirements

1. Software Requirements

● Programming Language: Python
● IDE/Code Editor: Visual Studio Code
● APIs: Spotify Web API
● Libraries and Frameworks:
○ Sentiment Analysis: NLTK, TextBlob, and Hugging Face
○ Web Development: Streamlit for creating the web interface
○ Data Manipulation: Pandas, NumPy
● Other Technologies Used: Transformers and Pipelines

2. Hardware Requirements

● Processor: Intel i5 or equivalent (minimum)
● RAM: 8 GB (minimum)
● Storage: 256 GB SSD (to store music datasets and models)
● Internet Connection: For downloading libraries, datasets, and
testing APIs.

3. Functional Requirements

● Emotion Detection Module:
○ Process user text input to classify emotions.
○ Use NLP techniques to detect emotional states with high
accuracy.
● Music Recommendation Module:
○ Maintain a database of songs tagged with emotional
categories.
○ Generate a playlist or list of recommended songs based on
detected emotions.
● User Interface:
○ Accept user input (text describing emotions).
○ Display recommended music tracks in a visually appealing
format.
Methodology
The methodology of this project revolves around two core modules:
emotion detection and music recommendation. These modules work
cohesively to analyze user input, interpret their emotional state, and
suggest music tracks that resonate with their mood. The approach
incorporates machine learning techniques, natural language processing
(NLP), and content-based recommendation algorithms to achieve the
desired functionality.

1. Emotion Detection

Emotion detection is the foundation of the system, as it identifies the
user's emotional state from their text input. This is achieved through
sentiment analysis, where NLP techniques process the textual data and
classify it into predefined emotional categories such as happy, sad,
angry, calm, or excited. For this, the project utilizes TextBlob, a powerful
NLP library that simplifies sentiment analysis and emotion classification.
TextBlob processes user input by analyzing the polarity and subjectivity
of the text, effectively identifying the emotion expressed.
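The polarity-to-emotion step can be illustrated with a minimal, self-contained sketch. Note that this stands in for TextBlob rather than using it: the hand-rolled word lexicon, the score thresholds, and the emotion labels are all illustrative assumptions, not the project's actual lexicon or model.

```python
# Minimal stand-in for TextBlob-style sentiment scoring: average the
# polarity of known words, then map the score onto an emotion label.
# The lexicon and thresholds below are illustrative assumptions.
POLARITY = {
    "happy": 0.8, "great": 0.8, "love": 0.7, "calm": 0.3,
    "sad": -0.7, "lonely": -0.6, "angry": -0.8, "terrible": -0.9,
}

def detect_emotion(text: str) -> str:
    words = [w.strip(".,!?").lower() for w in text.split()]
    scores = [POLARITY[w] for w in words if w in POLARITY]
    polarity = sum(scores) / len(scores) if scores else 0.0
    if polarity > 0.5:
        return "happy"
    if polarity > 0.0:
        return "calm"
    if polarity < -0.5:
        return "angry" if "angry" in words else "sad"
    if polarity < 0.0:
        return "sad"
    return "neutral"
```

With TextBlob itself, the polarity value would come from `TextBlob(text).sentiment.polarity`; the threshold mapping onto emotion categories would remain the same shape.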

2. Music Recommendation

Once the user's emotion is detected, the system's recommendation
engine matches it with an extensive music database. The database is
pre-tagged with emotional labels for each track, ensuring that the
recommendations are emotionally relevant. For example, a "happy"
emotion might correspond to upbeat or cheerful tracks, while a "sad"
emotion might suggest slow and soulful music.
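A minimal sketch of this content-based matching step follows. The track titles and tags are illustrative placeholders, not the project's actual database:

```python
# Content-based matching: each track carries an emotion tag, and the
# recommender filters the catalogue on the detected emotion.
# Titles below are placeholders, not the project's real data.
TRACKS = [
    {"title": "Sunrise Groove", "emotion": "happy"},
    {"title": "Midnight Rain",  "emotion": "sad"},
    {"title": "Still Waters",   "emotion": "calm"},
    {"title": "Storm Front",    "emotion": "angry"},
    {"title": "Golden Hour",    "emotion": "happy"},
]

def recommend(emotion: str, limit: int = 5) -> list[str]:
    """Return up to `limit` track titles tagged with the given emotion."""
    matches = [t["title"] for t in TRACKS if t["emotion"] == emotion]
    return matches[:limit]
```

In the full system the catalogue would be far larger and could be ranked (e.g. by popularity or audio features from the Spotify Web API) rather than returned in storage order.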

3. Integration of Modules

The emotion detection and music recommendation modules are
integrated into a unified web application with an intuitive interface built
using Streamlit. Streamlit simplifies the development of interactive web
applications and provides a visually appealing front end. Users input
their text through the interface, which is processed in real time.
4. Testing and Evaluation

The system is tested using multiple textual inputs to evaluate the
accuracy of emotion detection and the relevance of the recommended
music. Metrics like precision, recall, and F1-score are used to measure
the performance of the emotion detection model. User feedback and
surveys are employed to assess the satisfaction levels of music
recommendations, ensuring the system delivers an engaging and
personalized experience.
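The per-class metrics mentioned above can be computed as in the following sketch; the sample labels in the usage note are illustrative, not evaluation results from the project:

```python
# Per-class precision, recall, and F1 for an emotion classifier,
# computed from paired lists of true and predicted labels.
def prf1(y_true, y_pred, label):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1
```

For example, with `y_true = ["happy", "sad", "happy", "calm"]` and `y_pred = ["happy", "sad", "sad", "calm"]`, the "happy" class has precision 1.0 and recall 0.5. Libraries such as scikit-learn provide the same metrics ready-made.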

This methodology ensures a systematic approach to achieving the
project's objectives, leveraging advanced techniques in NLP, machine
learning, and recommendation systems to provide an emotionally
intelligent solution. Streamlit's ease of use allows the system to present
playlists in a clean and accessible format, ensuring users can interact
with the app effortlessly. The integration of the TextBlob-based emotion
detection module and the recommendation engine with Streamlit
enhances the system's usability and appeal.
Source Code
main.py
app.py (Streamlit code for the UI)
Output
[Code listings and output screenshots appear at this point in the original report.]
Future Scope

1. Enhanced Emotion Detection

● Incorporate advanced deep learning models like BERT, GPT, or
RoBERTa to improve the accuracy of emotion detection.
● Extend emotion categories to include complex and subtle emotions
such as nostalgia, anxiety, or excitement.
● Add multi-modal emotion detection by integrating facial expression
and voice tone analysis along with text input.

2. Broader Applications

● Utilize the system for mental health applications, offering
therapeutic music based on detected stress or anxiety levels.

3. Expanded Music Database

● Use crowd-sourced emotional tagging to continuously enrich the
music database with diverse tracks.
● Include multiple genres and cultural music options to cater to a
broader audience.

4. Cross-Platform Integration

● Provide seamless integration with smart devices such as Alexa,
Google Home, and other IoT devices.
Conclusion
The Music Recommendation System Using Emotion Detection combines
the power of machine learning, natural language processing, and
user-centered design to create an innovative solution for personalized
music recommendations. By analyzing text inputs with the help of
TextBlob, the system accurately detects user emotions and provides
relevant music suggestions tailored to their mood.

This project successfully demonstrates the potential of emotion-based
systems in enhancing digital interactions, specifically in the realm of
music discovery. It offers a personalized and meaningful listening
experience.

While the current implementation achieves its primary objectives, the
project also opens up avenues for future enhancements, such as
incorporating advanced emotion detection techniques, expanding the
music database, and integrating cross-platform capabilities. These
improvements can further elevate the system, making it more versatile
and appealing to a diverse audience.

In conclusion, this project serves as a stepping stone toward creating
emotionally intelligent systems that enhance user engagement and
satisfaction in the digital era. It underscores the importance of integrating
human emotions into technological solutions, paving the way for more
intuitive and empathetic applications.
Bibliography
1. https://huggingface.co

2. https://www.youtube.com/watch?v=uDzLxos0lNU&ab_channel=ProgrammingHut

3. https://textblob.readthedocs.io/

4. https://developer.spotify.com/documentation/web-api/

5. https://www.kaggle.com/

6. https://www.python.org/
