Article Information
Abstract (English)
Managing and processing large-scale data is a crucial part of experimental high energy physics, which uses gigantic instruments to exploit the world's largest particle accelerator at CERN. The WLCG has enabled distributed computing technology, known as the grid, and has proven its excellent performance through its data management and processing capability. In addition to grid computing, a compact but powerful computing facility introducing parallelism has been designed. In this paper, we introduce the KIAF cluster, which is designed to process, in parallel and based on PROOF, the large-scale data produced by the ALICE experiment at the CERN LHC. PROOF enables parallelism on a Linux cluster by exploiting a special characteristic of the data produced in high energy physics: individual events are mutually independent and can be processed separately. The event processing performance of the KIAF cluster, measured as the number of events processed per second and the amount of data processed per second, is shown for a practical high energy physics use case. The performance of KIAF increases pseudo-linearly with the number of workers involved in the processing, while it also shows evidence of an upper limit on the scalability of a PROOF cluster.
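The abstract's key premise is that high energy physics events are mutually independent, so an event loop can be split across workers and the partial results merged afterwards. The following is a minimal, language-agnostic sketch of that idea using Python's standard `multiprocessing` pool; it is an illustration of the parallelism PROOF exploits, not the ROOT/PROOF API itself, and the per-event function is a hypothetical placeholder.

```python
from multiprocessing import Pool

def process_event(event):
    # Placeholder per-event analysis: in a real PROOF task this would be the
    # user's selector code; here we just compute a quantity from the event.
    return sum(event)

def run_analysis(events, n_workers):
    # Each worker processes a disjoint subset of events in parallel;
    # the partial results are then merged, as in a PROOF session.
    with Pool(n_workers) as pool:
        partial = pool.map(process_event, events)
    return sum(partial)

if __name__ == "__main__":
    # Toy "dataset" of 1000 independent events.
    events = [[i, i + 1] for i in range(1000)]
    print(run_analysis(events, n_workers=4))
```

Because the events share no state, the result is identical for any worker count; only the wall-clock time changes, which is why throughput can scale pseudo-linearly with the number of workers until I/O or merging becomes the bottleneck.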
Table of Contents
1. Introduction
2. PROOF Concepts and Architecture
3. KIAF System
3.1. Specification
3.2. Authentication and Authorization
3.3. Analysis Procedure Explained
3.4. Monitoring
4. Performance Measurement
4.1. Metrics to Measure
4.2. Task Selection
4.3. Input Data
4.4. Result
5. Conclusion
References