Original Text Information
Abstract (English)
A lifelog is a set of continuously captured data records of our daily activities. Lifelog data usually consist of text, pictures, video, audio, gyroscope and acceleration readings, position data, annotations, etc., and are kept in large databases as records of an individual's life experiences, which can be retrieved when necessary and used as a reference to improve quality of life. The lifelog in this study includes several types of media data and information acquired from wearable multi-sensors, which capture video images, the individual's body motions, biological information, location information, and so on. We propose an integrated technique to process a lifelog composed of both captured video (called lifelog images) and other sensed data. Our proposed technique, called the Activity Situation Model, is based on two models: a space-oriented model and an action-oriented model. Using these two modeling techniques, we can analyze the lifelog images to find representative images in video scenes from both the pictorial visual features and the individual's context information, and represent the individual's life experiences in semantic, structured forms for future retrieval and exploitation of experience data. The resulting structured lifelog images were evaluated with both a previous approach and the proposed technique; our proposed integrated technique exhibited better results.
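The selection of representative images from both visual features and sensed context, as described above, can be sketched roughly as follows. This is a minimal illustration, not the paper's actual algorithm: the feature vectors, the weights, the threshold, and all function names are hypothetical, and it assumes visual features and sensor context are already available as numeric vectors per frame.

```python
from dataclasses import dataclass
from math import dist


@dataclass
class Frame:
    visual: tuple[float, ...]   # e.g. color-histogram features of the lifelog image (assumed)
    context: tuple[float, ...]  # e.g. location / body-motion sensor readings (assumed)


def select_representatives(frames, w_visual=0.6, w_context=0.4, threshold=1.0):
    """Pick one representative frame per detected scene change.

    A new scene is declared when the weighted change in visual features
    and sensed context between consecutive frames exceeds `threshold`
    (all weights and the threshold are illustrative assumptions).
    """
    if not frames:
        return []
    reps = [0]  # the first frame always opens a scene
    for i in range(1, len(frames)):
        change = (w_visual * dist(frames[i].visual, frames[i - 1].visual)
                  + w_context * dist(frames[i].context, frames[i - 1].context))
        if change > threshold:
            reps.append(i)  # large combined change -> new representative image
    return reps
```

For example, a sequence whose third frame jumps both visually and in sensed location would yield representatives at indices 0 and 2, mirroring how a scene boundary could be detected from combined visual and context change.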
Table of Contents
1. Introduction
2. Related work
3. Lifelog acquisition
4. Space-oriented and action-oriented models
5. Lifelog image analysis and processing
6. Application prototype: outdoor running workout assistance
6.1 System configuration
6.2 Structured lifelog image generation
7. Conclusion and future work
8. References