
Asynchronous Distributed-Memory Triangle Counting and LCC with RMA Caching

This paper was accepted at IPDPS 2022, the 36th IEEE International Parallel and Distributed Processing Symposium, held from 30 May to 3 June 2022 in Lyon, France. It was prepared by ETH Zürich.

Abstract

Triangle count and local clustering coefficient are two core metrics for graph analysis. They find broad application in analyses such as community detection and link recommendation. To cope with the computational and memory demands that stem from the size of today's graph datasets, distributed-memory algorithms have to be developed. Current state-of-the-art solutions suffer from synchronization overheads or from expensive pre-computations needed to distribute the graph, achieving limited scaling. We propose a fully asynchronous implementation of triangle counting and local clustering coefficient based on 1D partitioning, using remote memory accesses to transfer data while avoiding synchronization. Additionally, we show that these algorithms exhibit data reuse on remote memory accesses and that overall communication time can be improved by caching these accesses. Finally, we extend CLaMPI, a software-layer caching system for MPI RMA, with application-specific scores for cached entries that influence the eviction procedure and improve caching efficiency. Our results show improvements on shared memory, and we achieve a 14x speedup from 4 to 64 nodes for the LiveJournal graph on distributed memory. Moreover, we demonstrate that caching remote accesses reduces total running time by up to 73% with respect to a non-cached version. Finally, we compare our implementation to TriC, the 2020 Graph Challenge champion, and achieve up to 100x faster results for scale-free graphs.
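To illustrate the asynchronous access pattern the abstract refers to, the sketch below shows how a rank might fetch the adjacency list of a vertex owned by another rank through MPI RMA under passive-target synchronization, so that no matching call is required on the owner. This is a minimal, hypothetical example and not the authors' implementation: the toy adjacency layout, the 1D block ownership, and the fixed degree are assumptions made for brevity.

```c
/* Hypothetical sketch: fetching a remote adjacency list with MPI RMA.
 * The CSR-like layout, vertex ownership, and fixed degree are assumptions. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank owns a block of vertices and exposes their neighbor
     * lists through an RMA window; here every rank owns one toy vertex
     * with 4 neighbors. */
    const int deg = 4;
    int *neighbors;
    MPI_Win win;
    MPI_Win_allocate(deg * sizeof(int), sizeof(int),
                     MPI_INFO_NULL, MPI_COMM_WORLD, &neighbors, &win);
    for (int i = 0; i < deg; ++i)
        neighbors[i] = rank * deg + i;      /* dummy neighbor IDs */
    MPI_Barrier(MPI_COMM_WORLD);            /* window contents are ready */

    /* Passive-target epoch: the owner does not participate in the
     * transfer, which keeps the traversal fully asynchronous. */
    MPI_Win_lock_all(MPI_MODE_NOCHECK, win);

    int remote[4];
    int target = (rank + 1) % size;         /* owner of the remote vertex */
    MPI_Get(remote, deg, MPI_INT, target, 0, deg, MPI_INT, win);
    MPI_Win_flush(target, win);             /* complete this transfer only */

    /* An adjacency-list intersection for triangle counting would go here. */
    printf("rank %d fetched neighbors owned by rank %d: %d %d %d %d\n",
           rank, target, remote[0], remote[1], remote[2], remote[3]);

    MPI_Win_unlock_all(win);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

In the paper's setting, repeated `MPI_Get` calls to the same remote adjacency lists are the accesses that exhibit reuse, which is what makes caching them (e.g., via CLaMPI) profitable.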

Authors

András Strausz (ETH Zürich); Flavio Vella (University of Trento); Salvatore Di Girolamo (ETH Zürich); Maciej Besta (ETH Zürich); Torsten Hoefler (ETH Zürich)

DOI: 10.1109/IPDPS53621.2022.00036

Paper>>

Open-access pre-print>>