earticle


Visual Tracking with Fragments-Based PCA Sparse Representation

Abstract

In this paper, we propose a robust tracking method with a novel appearance model based on fragments-based PCA sparse representation. The method samples non-overlapping local image patches within the templates in the PCA subspace. The candidate local image patches are then sparsely represented by the local template patches in the PCA subspace. Finally, tracking proceeds in a particle filter framework, which propagates the sample distributions over time. In addition, the templates are updated online via incremental subspace learning. Using fragments-based PCA templates rather than raw image templates enables the tracker to handle significant illumination and pose changes as well as occlusion. Experimental results on challenging videos show that our method tracks accurately and robustly, outperforming many state-of-the-art trackers.
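The pipeline the abstract describes (project template fragments into a PCA subspace, then sparsely code each candidate fragment over the template fragments) can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the function names, the ISTA solver, and all parameter values are assumptions.

```python
import numpy as np

def pca_basis(templates, k):
    # templates: (n_samples, d) rows of vectorized template patches.
    # Returns the mean patch and the top-k principal directions.
    mean = templates.mean(axis=0)
    centered = templates - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k].T  # basis of shape (d, k)

def ista_sparse_code(D, y, lam=0.01, n_iter=500):
    # Solve min_x 0.5*||D x - y||^2 + lam*||x||_1 by iterative
    # soft-thresholding (ISTA). D holds PCA coefficients of the
    # template fragments as columns; y is a candidate fragment's
    # PCA coefficients.
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x
```

In a tracker of this kind, the reconstruction error `||D x - y||` of each candidate fragment would feed the particle filter's observation likelihood, so candidates whose fragments are well explained by the template fragments receive higher weights.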

Table of Contents

Abstract
 1. Introduction
 2. Related Works
  2.1. Subspace Representation
  2.2. Sparse Representation
  2.3. Motivation of our Method
 3. Visual Tracking with Fragments-Based PCA Sparse Representation
  3.1. Fragments-Based PCA Sparse Representation for Target Appearance Model
  3.2. Particle Filter Framework for Visual Tracking
  3.3. Template Update
  3.4. Algorithm Summary
 4. Experimental Results
  4.1. Quantitative Evaluation
  4.2. Qualitative Evaluation
 5. Conclusion
 References

Author Information

  • Peishu Qu, College of Physics and Electronic Engineering, Dezhou University, Dezhou 253023, China
