Implementation of AR Remote Rendering Techniques for Real-time Volumetric 3D Video

Abstract

Recently, with the growth of the mixed reality industrial infrastructure, related convergence research has been proposed. Real-time mixed reality services such as remote video conferencing require research on real-time acquisition, processing, and transfer methods. This paper implements an AR remote rendering method for volumetric 3D video data. We propose and implement two modules: a parsing module that loads volumetric 3D video into a game engine, and a server rendering module. In our experiment, volumetric 3D video sequence data of about 15 MB was compressed by 6-7%, and the remote module streamed at 27 fps at a resolution of 1200 by 1200. The results of this paper are expected to be applied to AR cloud services.
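The abstract does not describe how the server rendering module delivers frames to the AR client. As a rough, hypothetical illustration only, the Python sketch below shows one generic way a server could pace length-prefixed rendered frames over TCP at the reported 27 fps; the port, frame source, and framing scheme are assumptions made for this sketch and are not taken from the paper.

    # Illustrative sketch only: stream pre-rendered frames over TCP with a
    # 4-byte length prefix, paced to a target frame rate. The endpoint and
    # frame source are hypothetical, not the paper's implementation.
    import socket
    import struct
    import time

    TARGET_FPS = 27                  # frame rate reported in the paper's experiment
    FRAME_INTERVAL = 1.0 / TARGET_FPS
    HOST, PORT = "0.0.0.0", 9000     # hypothetical endpoint

    def load_frame(index: int) -> bytes:
        """Placeholder for one server-rendered 1200 x 1200 frame (e.g., an encoded image)."""
        return b"\x00" * 1024        # dummy payload; a real module would encode the render target

    def stream_frames(conn: socket.socket, frame_count: int) -> None:
        next_deadline = time.perf_counter()
        for i in range(frame_count):
            payload = load_frame(i)
            # Length prefix lets the AR client delimit frames on the byte stream.
            conn.sendall(struct.pack(">I", len(payload)) + payload)
            # Sleep until the next frame deadline to hold the target frame rate.
            next_deadline += FRAME_INTERVAL
            sleep_for = next_deadline - time.perf_counter()
            if sleep_for > 0:
                time.sleep(sleep_for)

    if __name__ == "__main__":
        with socket.create_server((HOST, PORT)) as server:
            conn, _addr = server.accept()
            with conn:
                stream_frames(conn, frame_count=270)   # about 10 seconds at 27 fps

In practice, load_frame would be replaced by the encoded output of the game engine's render target, and a production module would more likely use a hardware video encoder and a streaming protocol rather than raw per-frame payloads.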

Table of Contents

Abstract
1. Introduction
2. Background Theory
2.1 Volumetric 3D Video
2.2 AR Remote Rendering
3. Proposed Method
3.1 Parsing Module
3.2 Server Rendering Module
4. Experiment and Result
4.1 Experiment Environment
4.2 Experiment Result
5. Conclusion
Acknowledgement
References

Author Information

  • Daehyeon Lee Researcher, Graduate School of Smart Convergence, Kwangwoon University, Seoul, Korea
  • Munyong Lee Researcher, Graduate School of Smart Convergence, Kwangwoon University, Seoul, Korea
  • Sang-ha Lee Researcher, Dept. of Electronic Engineering, Kwangwoon University, Seoul, Korea
  • Jaehyun Lee Researcher, Dept. of Plasma Bio Display, Kwangwoon University, Seoul, Korea
  • Soonchul Kwon Researcher, Graduate School of Smart Convergence, Kwangwoon University, Seoul, Korea
