

Multiclass Music Classification Approach Based on Genre and Emotion

Abstract


Reliable, fine-grained musical metadata are required for the efficient search of a rapidly growing number of music files. In particular, since the primary motives for listening to music are its emotional effect, diversion, and the memories it awakens, emotion classification alongside genre classification is crucial. In this paper, as an initial step towards a ground-truth dataset for music emotion and genre classification, we carefully constructed a music corpus through labeling by a large number of ordinary listeners. To verify the suitability of the dataset through classification results, we extracted features according to the MPEG-7 audio standard and applied both statistical machine learning models and deep neural networks to classify the dataset automatically. Using standard hyperparameter settings, we reached an accuracy of 93% for genre classification and 80% for emotion classification, and we believe that our dataset can serve as a meaningful comparative dataset in this research field.
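The pipeline the abstract describes (extract per-track audio descriptors, then train a classifier with default hyperparameters) can be sketched as below. The exact MPEG-7 descriptors and models used in the paper are not specified in this record, so the sketch substitutes synthetic feature vectors and a random-forest baseline purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical setup: each track is a fixed-length feature vector
# (a stand-in for MPEG-7 low-level audio descriptors such as spectral
# centroid, spectral flatness, and harmonicity) with a genre label.
rng = np.random.default_rng(0)
n_tracks, n_features, n_genres = 300, 20, 5

# Synthetic features: each genre cluster gets a different mean,
# mimicking descriptors that separate the classes.
labels = rng.integers(0, n_genres, size=n_tracks)
features = rng.normal(loc=labels[:, None].astype(float),
                      scale=0.5, size=(n_tracks, n_features))

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0, stratify=labels
)

# One statistics-based model family, hyperparameters left at defaults,
# matching the "standard hyperparameter setting" described in the abstract.
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"genre accuracy on synthetic data: {acc:.2f}")
```

With real data, the synthetic feature matrix would be replaced by descriptors extracted from audio files, and the same train/evaluate loop would be repeated for the emotion labels.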

Table of Contents

Abstract
1. Introduction
2. Related Works
2.1 Music genre classification
2.2 Music emotion classification
3. Datasets
4. Methods
4.1 Feature Extraction
4.2 Classification
5. Results
6. Conclusion
References

Author Information

  • Jonghwa Kim Professor, Department of Artificial Intelligence, Cheju Halla University, Korea

