Abstract
Cloud computing platforms now provide convenient infrastructures on which applications can perform data-intensive computing. As the number of users and the volume of data grow, distribution and concurrency have become the norm rather than the exception: data is replicated across geographically distinct datacentres, and large numbers of user requests are processed concurrently within a single replica. Consequently, many recent systems propose new consistency models that seek to provide stronger guarantees while preserving performance, by making certain assumptions about the operations executed or the data accessed. We argue that this approach is often flawed. First, recent years have seen a paradigm shift: distribution and concurrency are now pervasive, so it is unrealistic to expect that a unique system-wide view could, or even should, exist at any given time; yet the literature still equates divergent views with inconsistency. Second, current distributed systems often rely on a definition of consistency that is too narrow: consistency is an application-centric property and should not be reduced to freshness. Third, there exists a multiplicity of consistency definitions, each relying on subtly different assumptions, which makes it almost impossible to compare the guarantees provided by different systems.
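To illustrate the divergent-views phenomenon the abstract describes, the following is a minimal sketch of two geographically separated replicas accepting concurrent writes without coordination and later reconciling. All class and variable names are hypothetical illustrations; the last-writer-wins merge shown here is a common reconciliation strategy, not the paper's ACM protocol.

```python
class Replica:
    """Hypothetical key-value replica in one datacentre."""

    def __init__(self, name):
        self.name = name
        self.store = {}  # key -> (value, timestamp)

    def write(self, key, value, ts):
        # Accept the write locally, without cross-replica coordination.
        self.store[key] = (value, ts)

    def read(self, key):
        return self.store.get(key, (None, 0))[0]

    def merge(self, other):
        # Last-writer-wins reconciliation: for each key, keep the
        # entry carrying the larger timestamp.
        for key, (value, ts) in other.store.items():
            if key not in self.store or self.store[key][1] < ts:
                self.store[key] = (value, ts)


eu = Replica("eu-west")
us = Replica("us-east")

# Two datacentres concurrently accept writes to the same key.
eu.write("cart", ["book"], ts=1)
us.write("cart", ["pen"], ts=2)

# Until they synchronise, the replicas expose divergent views.
assert eu.read("cart") != us.read("cart")

# After pairwise merges, both replicas converge on one value.
eu.merge(us)
us.merge(eu)
assert eu.read("cart") == us.read("cart") == ["pen"]
```

The point of the sketch is that divergence between replicas is a transient, expected state rather than an error; whether the eventual merged value is "consistent" depends on what the application requires, which is the abstract's application-centric argument.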
Table of Contents
1. Introduction
2. The CAP Theorem
3. Consistency Models
4. ACM: Adaptive Consistency Model
4.1. ACM Design
4.2. Two-Layer ACM Infrastructure
4.3. ACM Protocol
5. Evaluation for ACM Guarantees
6. Conclusions
Acknowledgements
References