Convergence of Internet, Broadcasting and Communication

Application of Virtual Studio Technology and Digital Human Monocular Motion Capture Technology - Based on <Beast Town> as an Example -

Abstract

This article takes the talk show "Beast Town" as an example to introduce the overall technical solution, the technical difficulties, and the countermeasures involved in combining cartoon virtual characters with virtual studio technology, providing reference experience for the multi-scenario application of digital humans. Building on earlier mixed-reality live broadcasts, we further upgraded our virtual production and digital-human driving technology, adopting industry-leading real-time virtual production and monocular-camera motion capture to launch the virtual cartoon character talk show "Beast Town." The show achieves a seamless combination of the real and the virtual, further enhances program immersion and the audio-visual experience, and expands the boundaries of virtual production. In the talk show, motion capture is used for final picture synthesis. The virtual scene must present dynamic effects while the digital human is driven in sync with the push, pull, and pan of the overall picture. This places very high demands on multi-party data synchronization, real-time driving of the digital human, and rendering of the composited picture. We focus on issues such as virtual-real data alignment and monocular-camera motion capture quality, and we combine outside-in camera tracking, multi-scene picture perspective, multi-machine rendering, and other techniques to solve the picture-linkage and rendering-quality problems of a deeply immersive space, presenting users with the visual effect of digital humans interacting with live guests.
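As a rough illustration of one step in a monocular-capture pipeline like the one the abstract describes: single-camera pose estimators (e.g. the EasyMocap tool named in the table of contents) output per-frame 3D joint positions that jitter, so some temporal smoothing is commonly applied before the data drives a character rig. The sketch below is not the paper's production pipeline; the joint names, data shape, and the exponential-moving-average filter are assumptions chosen for illustration.

```python
# Illustrative sketch, not the production pipeline from the paper.
# Assumption: each frame is a dict mapping joint name -> (x, y, z) in meters.
# An exponential moving average per joint is one simple way to stabilize
# jittery monocular pose estimates before retargeting them onto a rig.

def smooth_pose_stream(frames, alpha=0.5):
    """Smooth a sequence of {joint: (x, y, z)} pose frames with an EMA.

    alpha close to 1.0 trusts the new frame (less smoothing, less lag);
    alpha close to 0.0 trusts the running state (more smoothing, more lag).
    """
    smoothed = []
    state = {}  # last smoothed position per joint
    for frame in frames:
        out = {}
        for joint, pos in frame.items():
            if joint in state:
                prev = state[joint]
                pos = tuple(alpha * p + (1 - alpha) * q
                            for p, q in zip(pos, prev))
            state[joint] = pos
            out[joint] = pos
        smoothed.append(out)
    return smoothed

# Example: a jittery "head" joint is pulled toward its running average.
raw = [{"head": (0.0, 1.70, 0.0)},
       {"head": (0.1, 1.68, 0.0)},
       {"head": (0.0, 1.70, 0.02)}]
stable = smooth_pose_stream(raw, alpha=0.5)
```

In a real setup this per-joint filter would sit between the pose estimator and the retargeting step, with `alpha` tuned against the lag the live broadcast can tolerate.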

Table of Contents

Abstract
1. Introduction
2. Motion Capture Technology
2.1 Traditional Optical Motion Capture
2.2 ROKOKO
2.3 EasyMocap
3. Production process
3.1 Model
3.2 Rigging
3.3 Role to UE
3.4 Motion capture data processing
3.5 Virtual character driver
3.6 Post-processing
Conclusion
References

Author Information

  • YuanZi Sang, Doctor, Department of Visual Contents, Dongseo University, China.
  • KiHong Kim, Professor, Department of Visual Contents, Dongseo University, Korea.
  • JuneSok Lee, Professor, Department of Software Contents, Dongseo University, Korea.
  • JiChu Tang, Master, Department of Visual Contents, Dongseo University, China.
  • GaoHe Zhang, Doctor, Department of Visual Contents, Dongseo University, China.
  • ZhengRan Liu, Master, Department of Visual Contents, Dongseo University, China.
  • QianRu Liu, Master, Department of Visual Contents, Dongseo University, China.
  • ShiJie Sun, Master, Department of Visual Contents, Dongseo University, China.
  • YuTing Wang, Doctor, Department of Visual Contents, Dongseo University, China.
  • KaiXing Wang, Master, Department of Visual Contents, Dongseo University, China.
