Abstract:
Human emotions such as happiness, sadness, love, violence, and energy can be expressed through music. Listening to music is accessible to everyone, everywhere, at any time, and the number of songs grows by the day. Categorising songs by emotion is therefore crucial for music recommendation systems, as users may otherwise experience media overload. With advances in signal processing and machine learning algorithms, audio features can be extracted and emotions predicted. In this work, different multi-class machine learning algorithms are compared, together with dimensional emotion models such as Thayer's and Russell's, to find the best among them. A data set is built from audio features of music extracted by signal processing methods. Candidate audio features are analysed and tested in MATLAB, and those most useful for this work are selected for implementation in Python. Mood tags are first assigned to audio tracks manually, through a human listener survey; the audio features are then used to train a machine learning model that generates mood tags automatically. The classified tracks can then serve a recommendation system application developed for the Android platform using appropriate frameworks and tools.
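The abstract outlines a two-stage pipeline: extract per-track audio features with signal processing, then compare multi-class classifiers over human-supplied mood tags. The following is a minimal Python sketch of that pipeline, assuming librosa for feature extraction and scikit-learn for the model comparison; the library choices, the mood tag set, and the specific features (MFCCs, spectral centroid and rolloff, tempo) are illustrative assumptions, not the paper's confirmed method.

    # Hypothetical sketch of the feature-extraction + multi-class comparison
    # pipeline described in the abstract. librosa and scikit-learn are
    # assumptions; the paper does not name its Python libraries.
    import numpy as np
    import librosa
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    MOODS = ["happy", "sad", "energetic", "calm"]  # placeholder tag set

    def extract_features(path):
        """Summarise one audio track as a fixed-length feature vector."""
        y, sr = librosa.load(path, sr=22050, mono=True)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # timbre
        centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # brightness
        rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)    # energy spread
        tempo, _ = librosa.beat.beat_track(y=y, sr=sr)            # pace
        # Mean-pool frame-level features into one vector per track.
        return np.hstack([mfcc.mean(axis=1), centroid.mean(),
                          rolloff.mean(), tempo])

    def compare_models(X, y):
        """5-fold cross-validated accuracy for several multi-class models."""
        models = {
            "random_forest": RandomForestClassifier(n_estimators=200,
                                                    random_state=0),
            "svm_rbf": SVC(kernel="rbf", C=1.0),
            "knn": KNeighborsClassifier(n_neighbors=5),
        }
        return {name: cross_val_score(m, X, y, cv=5).mean()
                for name, m in models.items()}

    # Usage (hypothetical paths and survey-derived labels):
    # X = np.vstack([extract_features(p) for p in track_paths])
    # print(compare_models(X, mood_labels))

Mean-pooling frame-level features into one vector per track is one simple design choice; the paper's MATLAB analysis stage would determine which features survive into the Python implementation.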
Published in: 2022 4th International Conference on Inventive Research in Computing Applications (ICIRCA)
Date of Conference: 21-23 September 2022
Date Added to IEEE Xplore: 29 December 2022