The expansion of the Internet to the point where it is a ubiquitous component of most daily activities has brought with it the need to transition its service model from centralized client-server architectures to distributed services. Distributed solutions are the natural choice for serving large amounts of data to large numbers of clients: they reduce server load, offer lower latency to clients, and, perhaps most importantly, provide better fault tolerance by eliminating single points of failure.
However, despite the acknowledged need for distribution, a closer examination of some critical applications reveals that much of the infrastructure, and significant parts of current solutions, still retain a considerable degree of centralization. Database replication is performed mostly through variations of master-slave techniques; cluster availability and load balancing often rely on a centralized dispatcher; and the Domain Name System (DNS) employs a master-secondary approach to zone management and uses a hierarchical structure that exposes a single logical point of failure at the root servers.
In this talk we recognize the importance of maintaining distributed consistent state efficiently, and we show that purely distributed solutions can be practical without compromising on consistency requirements. We substantiate our argument by tackling two complex problems: synchronous peer-to-peer database replication and the Domain Name System.