Article Information
Abstract (English)
We propose a specific sound source detection and display system that distinguishes sound-source types and indicates their directional information using a Deep Neural Network (DNN)-based Artificial Intelligence (AI) model, targeting urban environments where sounds arrive from various directions. The proposed system acquires sound-source information through seven microphones and uses a commercially available module that outputs results via radial LEDs (Light Emitting Diodes). The AI model, derived through the DNN training process, is mounted on the interface and expansion slot of the base module, enabling the system to classify the characteristics of different sound sources and display them with the LED elements. In our experiments, we trained the DNN model for 1,000 iterations, achieving a recognition accuracy of 93.90% and a test accuracy of 89.41%. Our goal was to intelligently classify various sound-source types and their arrival directions in urban environments, and we expect this work to serve as a foundational study for extracting and displaying AI-based sound-source characteristic data.
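The pipeline the abstract describes (seven-microphone input, DNN classification, radial-LED direction display) can be sketched as follows. This is a minimal illustration only: the feature size, layer widths, label set, LED count, and randomly initialized weights are all assumptions standing in for the paper's trained model and hardware configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_MICS = 7        # seven microphones, as in the paper
N_FEATURES = 40   # assumed per-channel feature size (e.g., mel bins)
N_LABELS = 4      # assumed number of sound-source classes
N_LEDS = 12       # assumed number of radial LEDs on the display module

# Randomly initialized weights stand in for the trained DNN parameters.
W1 = rng.normal(0.0, 0.1, (N_MICS * N_FEATURES, 64))
b1 = np.zeros(64)
W2 = rng.normal(0.0, 0.1, (64, N_LABELS))
b2 = np.zeros(N_LABELS)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def classify(frame):
    """Forward pass: a 7-channel feature frame -> class probabilities."""
    h = relu(frame.reshape(-1) @ W1 + b1)
    return softmax(h @ W2 + b2)

def led_index(direction_deg, n_leds=N_LEDS):
    """Map an estimated arrival direction (degrees) to a radial-LED index."""
    return int(round((direction_deg % 360) / (360 / n_leds))) % n_leds

# Example: classify one synthetic frame and pick the LED for its direction.
probs = classify(rng.normal(size=(N_MICS, N_FEATURES)))
label = int(np.argmax(probs))
led = led_index(90)  # a source arriving from 90 degrees lights LED 3 of 12
```

In the actual system the direction estimate would come from inter-microphone cues (e.g., time differences across the seven channels) rather than being passed in directly; the mapping above only shows how a direction, once estimated, selects one of the radial LEDs.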
Table of Contents
1. Introduction
2. Design of a DNN-Based Specific Sound Source Detection and Display System
3. Structure of the DNN-Based Sound Source Learning Model
4. Implementation of a DNN-based Specific Sound Source Detection and Display System
4.1 Experimental Results Configured to Activate LEDs According to Each Sound Label
5. Conclusion
Acknowledgement
References
