Article Information
Abstract (English)
An image holistic scene understanding approach based on global contextual features and a Bayesian topic model is proposed. The model integrates three basic subtasks: scene classification, image annotation, and semantic segmentation. It takes full advantage of global feature information in two respects. On the one hand, the performance of image scene classification and image annotation is boosted by incorporating global contextual image features; on the other hand, the performance of image semantic segmentation is boosted by a new superpixel region segmentation method and a new feature representation for superpixel regions and patches. 1) For image scene classification and image annotation: (1) we improve the feature engineering by using the PHOW descriptors proposed by Vedaldi [1]; (2) furthermore, global contextual features are learned from semantic features. 2) For semantic segmentation: (1) we improve the superpixel segmentation by using the UCM method of [2]; (2) we propose a new feature representation for superpixel regions and patches that combines DSIFT, texton filter bank responses, RGB color, HOG, LBP, and location features. The experiments show that performance improves on all three subtasks.
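As a rough illustration of the kind of per-patch feature concatenation described above, the following sketch (a minimal example, assuming Python with NumPy and scikit-image; the function name patch_descriptor and all parameter choices are ours, not the paper's) combines RGB color statistics, HOG, an LBP histogram, and normalized location into one vector. DSIFT and texton filter bank responses, which the paper also uses, would be computed with separate tooling and concatenated in the same way.

import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog, local_binary_pattern

def patch_descriptor(patch_rgb, center_yx, image_shape):
    """Concatenate appearance and location cues for one patch.

    patch_rgb   : (H, W, 3) float array in [0, 1]
    center_yx   : (row, col) of the patch center in the full image
    image_shape : (rows, cols) of the full image
    """
    gray = rgb2gray(patch_rgb)

    # Color: per-channel mean and standard deviation over the patch.
    color = np.concatenate([patch_rgb.mean(axis=(0, 1)),
                            patch_rgb.std(axis=(0, 1))])

    # HOG: one cell covering the whole patch keeps the descriptor fixed-length.
    hog_feat = hog(gray, orientations=9,
                   pixels_per_cell=gray.shape, cells_per_block=(1, 1))

    # LBP: histogram of uniform patterns (radius 1, 8 neighbors -> values 0..9).
    lbp = local_binary_pattern((gray * 255).astype(np.uint8), P=8, R=1,
                               method='uniform')
    lbp_hist, _ = np.histogram(lbp, bins=np.arange(11), density=True)

    # Location: patch center normalized by the image size.
    loc = np.array([center_yx[0] / image_shape[0],
                    center_yx[1] / image_shape[1]])

    # DSIFT and texton filter bank responses would be appended here as well.
    return np.concatenate([color, hog_feat, lbp_hist, loc])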
Table of Contents
1. Introduction
2. Related Work
3. Framework of Image Holistic Scene Understanding Based on Global Contextual Features and Bayesian Topic Model
3.1. The Generative Model
3.2. Generating the Visual Component
3.3. Generating the Tag Component
4. Composite Model
5. Feature Engineering of Global Contextual Model
5.1. Semantic Feature Extraction
5.2. Global Contextual Feature
5.3. New Feature Representation for Superpixel Regions and Patches
6. Model Learning
7. Model Inference
7.1. Image Classification
7.2. Image Annotation
7.3. Image Segmentation
8. Experimental Design
8.1. Data Sets
8.2. Experimental Setup
9. Experimental Results and Analysis
9.1. Image Scene Classification
9.2. Image Annotation
9.3. Image Segmentation
9.4. Experimental Discussion
10. Conclusions and Future Work
References