
Ubiquitous Music Retrieval by Context-Brain Awareness Techniques


Abstract:

Many people listen to music because it can effectively relieve the stress of daily life. Hence, how to retrieve preferred music from a large amount of music data has been an attractive research topic for many years. Traditionally, music retrieval falls into two main types, namely text-based music retrieval and content-based music retrieval. However, these traditional retrieval types ignore a human sense: emotion. That is, the preferred music may differ across emotions. In fact, emotion is highly related to the environment, and it can be reflected in brain activity. Therefore, in this paper, we propose a creative approach that performs ubiquitous music search by content comparisons of brains and music. The major intent of this paper is to provide affective music retrieval in different contexts. Without any explicit query, the context-brain state triggers the music search, and context-related music is retrieved by computing brain similarities and music similarities. The proposed approach was implemented and evaluated by a number of volunteers. The evaluation results reveal that the proposed affective music retrieval achieves high satisfaction among the invited test users.
Date of Conference: 11-14 October 2020
Date Added to IEEE Xplore: 14 December 2020
Conference Location: Toronto, ON, Canada

I. Introduction

Music is an important form of multimedia for modern people because it can alleviate a tense mood, so people habitually listen to it. In the past few years, it has become much easier to access music due to advances in information technology. For example, users listen to music through online music websites without downloading physical copies. This raises the problem of how to retrieve preferred music from a massive amount of music data. To deal with this problem, a number of previous studies on music retrieval were proposed. In fact, music retrieval can be decomposed into two sub-issues, namely system control and the search algorithm.
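The retrieval algorithm itself is not detailed in this excerpt, but the abstract describes ranking music by computing similarities between brain-derived and music feature vectors. A minimal sketch of such similarity-based retrieval is shown below; the cosine measure, the function names, and the toy feature vectors are illustrative assumptions, not the authors' actual method.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)

def retrieve_top_k(query_features, music_library, k=3):
    """Rank tracks by feature similarity to the query vector (e.g. one
    derived from a context-brain state) and return the k best track ids."""
    scored = [(cosine_similarity(query_features, feats), track)
              for track, feats in music_library.items()]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [track for _, track in scored[:k]]

# Hypothetical 3-dimensional feature vectors for three tracks.
library = {
    "calm":   [0.9, 0.1, 0.0],
    "upbeat": [0.1, 0.9, 0.2],
    "tense":  [0.0, 0.2, 0.9],
}
print(retrieve_top_k([1.0, 0.0, 0.0], library, k=2))
```

Any vector similarity (Euclidean, learned embeddings) could stand in for cosine here; the point is only that retrieval is triggered by a feature vector rather than a textual query.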
