earticle


Spatio-Temporal Consistency Enhancement for Disparity Sequence

Abstract (English)

Disparity estimation for still images has attracted much interest and produced many promising results. However, simply applying these methods frame by frame to generate a disparity sequence suffers from undesirable flickering artifacts. These errors not only visibly degrade the quality of the synthesized video but also significantly reduce the coding efficiency of the disparity sequence. In this paper, a novel temporal consistency enhancement algorithm based on the Guided Filter and Temporal Gradient (GFTG) is proposed. Flickering artifacts and noise are effectively removed while object edges are well preserved. Both quantitative and qualitative evaluations show that spatio-temporal consistency is substantially improved by our approach.
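The abstract does not spell out the GFTG algorithm itself, but its guided-filter component is a standard edge-preserving smoother. As a rough, hedged illustration (not the authors' implementation), the sketch below applies He et al.'s guided filter to a single disparity map, using the corresponding grayscale frame as the guide; the function name and the `radius`/`eps` defaults are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def guided_filter(guide, disparity, radius=8, eps=1e-3):
    """Edge-preserving refinement of a disparity map.

    `guide` is the grayscale video frame and `disparity` the raw disparity
    map, both float arrays of the same shape (values roughly in [0, 1]).
    This is the classic guided filter; the GFTG paper additionally uses
    temporal gradients, which are not modeled here.
    """
    size = 2 * radius + 1
    box = lambda x: uniform_filter(x, size=size, mode="reflect")  # mean filter

    mean_I = box(guide)
    mean_p = box(disparity)
    cov_Ip = box(guide * disparity) - mean_I * mean_p
    var_I = box(guide * guide) - mean_I ** 2

    # Local linear model q = a * I + b within each window.
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I

    # Averaging the coefficients yields a smooth output that still
    # follows intensity edges in the guide image.
    return box(a) * guide + box(b)
```

In a sequence setting, one would call this per frame (each frame guiding its own disparity map) before any cross-frame consistency step.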

Table of Contents

Abstract
 1. Introduction
 2. Related Work
 3. Technical Details
  3.1. Image-based Disparity Map Refinement
  3.2. Spatio-temporally Consistent Disparity Map Estimation
 4. Experimental Results
  4.1. Subjective Evaluation
  4.2. Objective Evaluation
 5. Conclusions and Future Work
 References

Author Information

  • Haixu Liu School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing, China
  • Chenyu Liu School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing, China
  • Yufang Tang School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing, China
  • Haohui Sun School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing, China
  • Xueming Li Beijing Key Laboratory of Network System and Network Culture, Beijing, China
