Abstract (English)
This work proposes a novel, biologically motivated visual selective attention model for efficient visual search, implemented by integrating three attention mechanisms: bottom-up attention, top-down attention, and spatial attention. Bottom-up attention generates salient locations by reflecting top-down biases as well as three primitive visual features: intensity, edge, and color. Prototype-based object perception is proposed for top-down attention, in which a 3-D color histogram is used to generate a prototype of the target object. Experience-based spatial attention accelerates target-object localization and is modeled using memorized spatial location information that is updated with each object-localization experience. To verify the performance of the proposed visual selective attention model, we apply it to a real application, pedestrian traffic signal detection, intended for use as part of a blind guide system. The proposed selective attention model shows plausible performance in terms of accuracy and computation time while efficiently localizing pedestrian traffic signals.
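To make the top-down component concrete, the following is a minimal sketch of the 3-D color histogram prototype idea mentioned in the abstract. The paper does not specify the bin count, color space, or similarity measure; the 8-bin RGB histogram and histogram-intersection score below are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def color_histogram_3d(rgb_patch, bins=8):
    """Normalized 3-D (R, G, B) histogram of an image patch.

    rgb_patch: H x W x 3 uint8 array. Bin count is an assumed parameter.
    """
    pixels = rgb_patch.reshape(-1, 3).astype(np.float64)
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist / max(hist.sum(), 1.0)

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return np.minimum(h1, h2).sum()

# Illustrative usage: build a prototype from a labeled example of the
# target (e.g. a pedestrian traffic signal), then score a candidate
# salient region produced by the bottom-up stage.
prototype_patch = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
candidate_patch = np.random.randint(0, 256, (48, 48, 3), dtype=np.uint8)
prototype = color_histogram_3d(prototype_patch)
candidate = color_histogram_3d(candidate_patch)
print(f"prototype match score: {histogram_intersection(prototype, candidate):.3f}")
```

In such a scheme, candidate regions whose histograms intersect strongly with the stored prototype would be favored during top-down selection; the actual scoring rule used in the paper is described in Section 2.2.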
Table of Contents
1. Introduction
2. Proposed Visual Selective Attention Model
2.1. Bottom-Up Attention with Top-Down Bias
2.2. Prototype-Based Object Perception for Top-Down Attention
2.3. Spatial Attention
2.4. Integrated Selective Attention for Efficient Target Object Localization
3. Experimental Results
4. Conclusion and Future Works
Acknowledgments
References