Abstract (English)
As an essential approach for extracting valuable summarized information from massive data sets, aggregate queries play an important role in data-intensive applications in cloud computing. MapReduce, a popular cloud computing platform, is a promising paradigm for processing massive data. However, executing aggregate queries over massive data sets is very time-consuming, and running them directly on the MapReduce platform is inefficient. To process aggregate queries efficiently, this work proposes a cache-based approach that improves their performance on the MapReduce platform by caching pre-processing results before the aggregate query is executed. The pre-processed results are partitioned into different parts, which are cached on different nodes in the cluster. Strategies are also presented for maintaining the cached tuples when the original data changes. The experimental results demonstrate that the proposed approach outperforms existing cache management approaches such as LRU and LFU.
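The idea in the abstract can be illustrated with a minimal sketch (not the paper's implementation): an aggregate query such as `SELECT key, SUM(value) ... GROUP BY key` expressed as map and reduce functions, where a plain dictionary of per-key partial sums stands in for the paper's cached pre-processing results. All names and the single-node setup are illustrative assumptions.

```python
from collections import defaultdict

def map_phase(records):
    # Emit (group_key, value) pairs, as a MapReduce mapper would
    # for a GROUP BY / SUM aggregate query.
    for key, value in records:
        yield key, value

def reduce_phase(pairs, cached_partials=None):
    # Sum values per key. If cached partial sums from earlier
    # pre-processing are supplied, start from them instead of zero,
    # mimicking the cache-based approach described in the abstract.
    totals = defaultdict(int)
    if cached_partials:
        totals.update(cached_partials)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

records = [("sales", 10), ("eng", 20), ("sales", 5)]
result = reduce_phase(map_phase(records))
# result == {"sales": 15, "eng": 20}
```

In a real deployment the reduce-side state would be partitioned and cached across cluster nodes, and invalidated or updated when the underlying data changes, as Section 5 of the paper discusses.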
Table of Contents
1. Introduction
2. Related Work
3. Overview of the Model
4. Implementation of Aggregate Query in MapReduce
4.1 Aggregation over Relational Data Set
4.2 Aggregate Query in MapReduce
5. Cache Management
5.1 Initializing the Cache
5.2 Algorithm for Updating the Cache
5.3 Maintenance of the Cache Coherency
6. Experimental Evaluation
6.1 File Access Latency
6.2 Comparison of Hit Rate
6.3 Comparison of Scalability
6.4 Comparison of Average Response Time
7. Conclusions
Acknowledgements
References