
A Study on a DNN-Based Specific Sound Source Detection and Display System Using the UrbanSound8K Dataset

Abstract

We propose a specific sound source detection and display system that distinguishes types of sound sources and indicates their directional information using a Deep Neural Network (DNN)-based Artificial Intelligence (AI) model, targeting urban environments where sounds arrive from various directions. The proposed system acquires sound source information through seven microphones and uses a commercially available module that outputs results via radial LEDs (Light Emitting Diodes). The AI model, derived through the DNN training process, is mounted on the interface and expansion slot of the base module, enabling the system to classify the characteristics of different sound sources and display them with the LED elements. In our experiments, we trained the DNN model over 1,000 iterations, achieving a recognition accuracy of 93.90% and a test accuracy of 89.41%. We attempted to intelligently classify various sound source types and their input directions in urban environments, and we expect this work to serve as a foundational study for extracting and displaying AI-based sound source characteristic data.
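The classification step described above can be sketched as a small fully connected network that maps an audio feature vector to one of the 10 UrbanSound8K classes, whose index then selects the LED to activate. This is a minimal illustrative sketch, not the paper's implementation: the layer sizes, the 40-coefficient MFCC feature length, and the (untrained) random weights are all assumptions.

```python
import numpy as np

# The 10 class labels defined by the UrbanSound8K dataset.
URBANSOUND8K_CLASSES = [
    "air_conditioner", "car_horn", "children_playing", "dog_bark",
    "drilling", "engine_idling", "gun_shot", "jackhammer",
    "siren", "street_music",
]

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical weights for a 40 -> 64 -> 10 network; in the actual
# system these would come from the DNN training process.
W1 = rng.standard_normal((40, 64)) * 0.1
b1 = np.zeros(64)
W2 = rng.standard_normal((64, 10)) * 0.1
b2 = np.zeros(10)

def classify(feature_vector):
    """Forward pass: return the predicted class label and its LED index."""
    hidden = relu(feature_vector @ W1 + b1)
    probs = softmax(hidden @ W2 + b2)
    led_index = int(probs.argmax())  # LED element to light for this class
    return URBANSOUND8K_CLASSES[led_index], led_index, probs

features = rng.standard_normal(40)  # stand-in for extracted audio features
label, led, probs = classify(features)
print(label, led)
```

In the full system, one such prediction per microphone channel could be combined with inter-microphone direction estimation to drive the radial LED display; only the per-frame classification is sketched here.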

Table of Contents

Abstract
1. Introduction
2. Design of a DNN-Based Specific Sound Source Detection and Display System
3. Structure of the DNN-Based Sound Source Learning Model
4. Implementation of a DNN-Based Specific Sound Source Detection and Display System
4.1 Experimental Results Configured to Activate LEDs According to Each Sound Label
5. Conclusion
Acknowledgement
References

Author Information

  • Dae-Kyeon Shin Ph.D. Candidate, Dept. of Information Tech. & Media Eng., SNUST, Korea
  • Seong-Kweon Kim Professor, Dept. of Information Tech. & Media Eng., SNUST, Korea

