Article Information
Abstract
English
At a time when securing driving safety is paramount to the development and commercialization of autonomous vehicles, AI and big-data-based algorithms are being studied to enhance and optimize recognition and detection performance for various static and dynamic vehicles. Although many studies exploit the complementary advantages of radar and cameras to recognize detections as the same vehicle, they either do not employ deep-learning image processing or, owing to radar performance limitations, associate targets only at short range. Radar can detect vehicles reliably in conditions such as night and fog, but determining an object's type from its RCS value alone is inaccurate, so precise classification requires imagery from a sensor such as a camera. We therefore propose a fusion-based vehicle recognition method that builds data sets collectable by a radar device and a camera device, computes the error between the data sets, and recognizes matched detections as the same target.
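The association step described in the abstract, pairing a radar detection with a camera detection when their positional error is small enough to treat them as the same target, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the greedy nearest-neighbour matching, the 2 m error threshold, and the assumption that camera bounding boxes have already been projected to ground-plane coordinates are all hypothetical.

```python
import math

def radar_to_xy(rng, azimuth_deg):
    """Convert a radar (range, azimuth) measurement to Cartesian x/y metres."""
    a = math.radians(azimuth_deg)
    return rng * math.sin(a), rng * math.cos(a)

def match_detections(radar_dets, camera_dets, max_error_m=2.0):
    """Greedy nearest-neighbour association (illustrative only).

    Each camera detection (x, y, class label) is paired with the closest
    unused radar detection (range, azimuth); if the positional error is
    below max_error_m, the pair is treated as the same target.
    Returns (radar_index, camera_index, label, error_m) tuples.
    """
    matches = []
    used = set()
    for ci, (cx, cy, label) in enumerate(camera_dets):
        best, best_err = None, max_error_m
        for ri, (rng, az) in enumerate(radar_dets):
            if ri in used:
                continue
            rx, ry = radar_to_xy(rng, az)
            err = math.hypot(rx - cx, ry - cy)
            if err < best_err:
                best, best_err = ri, err
        if best is not None:
            used.add(best)
            matches.append((best, ci, label, round(best_err, 2)))
    return matches

# Toy data: two radar returns and two camera detections projected to metres.
radar = [(10.0, 0.0), (25.0, 15.0)]                  # (range m, azimuth deg)
camera = [(0.1, 9.8, "car"), (6.3, 24.3, "truck")]   # (x m, y m, class)
print(match_detections(radar, camera))
```

In practice the per-pair error would come from calibrated coordinate-system matching between the two sensors (as in the related work in Section 2.3), and a one-to-one assignment solver would replace the greedy loop, but the same-target decision rule, an error threshold over fused positions, is the idea the abstract describes.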
Table of Contents
1. Introduction
2. Related Work
2.1 Development of a Target Vehicle Selection Technique using the Fusion of Radar and Camera Information
2.2 Front Vehicle Collision Warning System using Radar/Camera Sensor Fusion
2.3 Data Fusion Technique based on Coordinate System Matching Between Camera and Radar
2.4 Target Detection using Stereo Vision Sensor and Radar Sensor Fusion
3. Proposed Method
3.1 Radar Data for Fusion-based Vehicle Recognition
3.2 Camera Data for Fusion-based Vehicle Recognition
3.3 Vehicle Recognition Method based on Radar and Camera Fusion
4. Conclusion
Acknowledgement
References