Reduction in Encoding Redundancy over Visual Sensor Networks

Abstract


Visual sensor networks (VSN) are wireless sensor networks in which each sensor has video capture and processing capability. Power consumption can be examined separately for the encoding, transmitting, and receiving subsystems, and research has been performed on minimizing these power levels in parallel. When multiple camera modules of a visual sensor node are aimed at the same objects with different fields of view (FOVs), the captured images may overlap. Such overlapped FOVs give rise to encoding redundancy over the VSN and also lead to increased power consumption among adjacent nodes. The power-rate-distortion (P-R-D) function is determined and used to construct an optimization problem for minimizing the power consumption of each node, hence maximizing node lifetime. The optimal solution simultaneously provides distributed power allocation and node scheduling over the VSN via simple information sharing, resulting in network lifetime maximization.
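The abstract does not specify the form of the P-R-D model; the sketch below only illustrates the kind of per-node power minimization it describes, assuming the commonly used analytic model D(R, P_enc) = sigma^2 * exp(-lambda * R * P_enc^(2/3)) and a transmission power proportional to the encoded rate. All names and parameter values here (sigma2, lmbda, eps_tx, D_max) are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the paper's implementation) of per-node power
# minimization under an assumed analytic P-R-D model:
#   D(R, P_enc) = sigma2 * exp(-lmbda * R * P_enc**(2/3)) <= D_max,
# with transmission power modeled as P_tx = eps_tx * R.
import numpy as np
from scipy.optimize import minimize_scalar

sigma2 = 1.0     # source variance (assumed)
lmbda  = 0.05    # encoder efficiency coefficient (assumed)
eps_tx = 0.02    # transmission power per unit rate (assumed)
D_max  = 0.05    # distortion target (assumed)

# The distortion constraint is tight at the optimum, which fixes the rate
# required for a given encoding power:
#   R(P_enc) = ln(sigma2 / D_max) / (lmbda * P_enc**(2/3))
C = np.log(sigma2 / D_max) / lmbda

def total_power(p_enc):
    """Encoding power plus the transmission power of the rate it requires."""
    rate = C / p_enc ** (2.0 / 3.0)
    return p_enc + eps_tx * rate

res = minimize_scalar(total_power, bounds=(1e-6, 10.0), method="bounded")
p_enc_opt = res.x
rate_opt = C / p_enc_opt ** (2.0 / 3.0)
print(f"encoding power ~ {p_enc_opt:.4f}, rate ~ {rate_opt:.2f}, "
      f"total power ~ {res.fun:.4f}")
```

Under these assumptions the distortion constraint binds at the optimum, so the problem reduces to a one-dimensional search over the encoding power; the distributed power allocation and node scheduling across overlapping nodes described in the abstract would build on this per-node step.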

Table of Contents

Abstract
 1. Introduction
 2. Proposed System Descriptions
  2.1. Power Consumption Model
  2.2. Power Minimization Problem for Visual Sensor Node
 3. Power Consumption Optimization for Multi-Visual Sensor Nodes
  3.1. Power Minimization among Visual Sensor Nodes
  3.2. Lifetime Maximization of Visual Sensor Nodes
 4. Simulation Results
 5. Conclusion
 Acknowledgements
 References

Author Information

  • Hyungkeuk Lee, Electronics and Telecommunications Research Institute
  • Hyunwoo Lee, Electronics and Telecommunications Research Institute
  • Won Ryu, Electronics and Telecommunications Research Institute
  • Kyounghee Lee, Pai Chai University
