Topology-Based Thread Affinity Setting (Thread Pinning) in OpenMP

In applications such as graph processing, how threads are pinned to CPU cores matters: threads that share resources (such as memory and cache) can accelerate performance by processing consecutive blocks of the input dataset, especially when the dataset has a high level of locality.

In LaganLighter, we read the CPU topology to specify how OpenMP threads are pinned. In the omp.c file, the block starting with the comment "Reading sibling groups of each node" reads the /sys/devices/system/cpu/cpu*/topology/thread_siblings files to identify sibling threads, and three arrays (node_sibling_groups_start_ID, sibling_group_cpus_start_offsets, and sibling_groups_cpus) store the sibling CPUs.

Then, in the block starting with the comment "Setting affinity of threads", the sibling groups are read and, based on the total number of threads requested by the user, threads with consecutive IDs are pinned to sibling CPUs.
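As an illustration only (the actual implementation is in omp.c and derives the thread-to-CPU mapping from the sysfs sibling groups), the following sketch pins each OpenMP thread to one CPU with sched_setaffinity(); the identity mapping used here is a placeholder assumption:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <omp.h>

int main(void)
{
   #pragma omp parallel
   {
      unsigned int tid = omp_get_thread_num();
      cpu_set_t set;
      CPU_ZERO(&set);
      CPU_SET(tid, &set); // placeholder mapping: thread tid -> CPU tid
      // pid 0 applies the affinity mask to the calling thread
      if(sched_setaffinity(0, sizeof(set), &set) != 0)
         perror("sched_setaffinity");
   }
   return 0;
}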

For a machine with 24 cores (48 hyperthreads), when 48 threads are requested, we have:

[Figure: thread placement for 48 threads]

If 96 threads are created, we have:

[Figure: thread placement for 96 threads]

On Designing Structure-Aware High-Performance Graph Algorithms (PhD Thesis)

Mohsen Koohi Esfahani
Supervisors: Hans Vandierendonck and Peter Kilpatrick

Thesis in PDF format
Thesis on QUB Pure Portal

Graph algorithms have many uses in industry, science, the humanities, and technology. The fast-growing size of graph datasets, combined with the processing model of current hardware, results in bottlenecks such as poor memory locality, work-inefficiency, and load imbalance that degrade performance. To tackle these limitations, high-performance computing considers different aspects of the execution in order to design optimized algorithms that use hardware resources efficiently.

The main idea in this thesis is to analyze the structure of graphs to exploit special features that are key to introducing new graph algorithms with optimized performance.

First, we study the structure of real-world graph datasets with skewed degree distributions and the applicability of graph relabeling algorithms as the main restructuring tools to improve performance and memory locality. To that end, we introduce novel locality metrics, including the Cache Miss Rate Degree Distribution, Effective Cache Size, Push Locality, Pull Locality, and Degree Range Decomposition.

Based on this structural analysis, we introduce the Uniform Memory Demands strategy that (i) recognizes diverse memory demands and behaviours as a source of performance inefficiency, (ii) separates contrasting memory demands into groups with uniform behaviours across each group, and (iii) designs bespoke data structures and algorithms for each group in order to satisfy memory demands with the lowest overhead.

We apply the Uniform Memory Demands strategy to design three graph algorithms with optimized performance: (i) the SAPCo Sort algorithm as a parallel counting sort algorithm that is faster than comparison-based sorting algorithms in degree-ordering of power-law graphs, (ii) the iHTL algorithm that optimizes locality in Sparse Matrix-Vector (SpMV) Multiplication graph algorithms by extracting dense subgraphs containing incoming edges to in-hubs and processing them in the push direction, and (iii) the LOTUS algorithm that optimizes locality in Triangle Counting by separating different caching demands and deploying specific data structure and algorithm for each of them.

BibTeX

@phdthesis{ODSAGA-ethos.874822,
  title  = {On Designing Structure-Aware High-Performance Graph Algorithms},
  author = {Mohsen Koohi Esfahani},
  year   = 2022,
  url    = {https://blogs.qub.ac.uk/DIPSA/On-Designing-Structure-Aware-High-Performance-Graph-Algorithms-PhD-Thesis/},
  school = {Queen's University Belfast},
  EThOSID = {uk.bl.ethos.874822}
}


LaganLighter Source Code


Repository

https://github.com/DIPSA-QUB/LaganLighter

Documentation

https://github.com/DIPSA-QUB/LaganLighter/tree/main/docs

Algorithms in This Repo

Cloning

git clone https://github.com/MohsenKoohi/LaganLighter.git --recursive

Graph Types

LaganLighter supports the following graph formats:

  • CSR/CSC graph in text format, for testing. This format has 4 lines: (i) number of vertices (|V|), (ii) number of edges (|E|), (iii) |V| space-separated numbers showing the offsets of the vertices, and (iv) |E| space-separated numbers indicating the edges (see the example after this list).
  • CSR/CSC WebGraph format: supported by the Poplar Graph Loading Library (external git repository).
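As an illustration of the text format, the following hypothetical file describes a graph with 4 vertices and 5 edges, in which vertex 0 has edges to vertices 1 and 2, vertex 1 to vertex 2, vertex 2 to vertex 3, and vertex 3 to vertex 0 (each offset is the index of the vertex's first edge):

4
5
0 2 3 4
1 2 2 3 0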

Measurements

In addition to execution time, we use the PAPI library to measure hardware counters such as L3 cache misses, hardware instructions, DTLB misses, and load and store memory instructions (see the papi_(init/start/reset/stop) and (print/reset)_hw_events functions defined in omp.c).
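As a generic sketch of collecting such counters with PAPI (this is plain PAPI usage, not the wrappers defined in omp.c):

// Generic PAPI usage sketch; LaganLighter wraps this behind its own
// papi_* functions in omp.c.
#include <papi.h>
#include <stdio.h>

int main(void)
{
   int evset = PAPI_NULL;
   long long counters[2];

   if(PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT)
      return 1;
   PAPI_create_eventset(&evset);
   PAPI_add_event(evset, PAPI_L3_TCM);  // L3 total cache misses
   PAPI_add_event(evset, PAPI_TOT_INS); // completed instructions

   PAPI_start(evset);
   // ... code under measurement ...
   PAPI_stop(evset, counters);

   printf("L3 misses: %lld, instructions: %lld\n", counters[0], counters[1]);
   return 0;
}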

To measure load balance, we measure the total time of executing a loop and the time each thread spends in this loop (mt and ttimes in the following sample code). Using these values, the PTIP macro (defined in omp.c) calculates the percentage of average idle time (as an indicator of load imbalance) and prints it alongside the total time (mt).

mt = -get_nano_time();
#pragma omp parallel
{
   unsigned int tid = omp_get_thread_num();
   ttimes[tid] = -get_nano_time();

   #pragma omp for nowait
   for(unsigned int v = 0; v < g->vertices_count; v++)
   {
      // per-vertex work
   }

   ttimes[tid] += get_nano_time();
}
mt += get_nano_time();
PTIP("Step ... ");

As an example, the following execution of Thrifty shows that the “Zero Planting” step has been performed in 8.98 milliseconds with an 8.22% load imbalance, while processors have been idle for 72.22% of the execution time, on average, in the “Initial Push” step.

NUMA-Aware and Locality-Preserving Partitioning and Scheduling

In order to assign consecutive partitions (vertices and/or their edges) to each parallel processor, we initially divide the dataset into partitions and assign a number of consecutive partitions to each thread. Then, we specify the order of victim threads in the work-stealing process: during the initialization of the LaganLighter parallel processing environment (in the initialize_omp_par_env() function defined in file omp.c), we create, for each thread, a list of threads as its subsequent victims of stealing.

A thread first steals jobs (i.e., partitions) from subsequent threads in the same NUMA node and then from the threads in subsequent NUMA nodes. As an example, the following image shows the stealing order of a 24-core machine with 2 NUMA nodes: thread 1 steals from threads 2, 3, …, 11, and 0 running on the same NUMA socket, and then from threads 13, 14, …, 23, and 12 running on the next NUMA socket.
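The following is a hedged reconstruction of that ordering (the actual lists are built in initialize_omp_par_env() in omp.c): each thread first visits the remaining threads of its own node and then the threads of each subsequent node, starting at its own within-node offset:

#include <stdio.h>

// Hedged reconstruction of the stealing order described above.
void print_victim_order(unsigned int t, unsigned int nodes_count, unsigned int threads_per_node)
{
   unsigned int node = t / threads_per_node;
   unsigned int offset = t % threads_per_node;

   printf("thread %u steals from:", t);
   for(unsigned int n = 0; n < nodes_count; n++)
   {
      unsigned int base = ((node + n) % nodes_count) * threads_per_node;
      // In the thread's own node (n == 0), skip the thread itself.
      for(unsigned int i = (n == 0) ? 1 : 0; i < threads_per_node; i++)
         printf(" %u", base + (offset + i) % threads_per_node);
   }
   printf("\n");
}

For the 24-core example above, print_victim_order(1, 2, 12) prints: thread 1 steals from: 2 3 4 5 6 7 8 9 10 11 0 13 14 15 16 17 18 19 20 21 22 23 12.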

We use the dynamic_partitioning_...() functions (in file partitioning.c) to process partitions by threads in the specified order. A sample code follows:

struct dynamic_partitioning* dp = dynamic_partitioning_initialize(pe, partitions_count);

#pragma omp parallel
{
   unsigned int tid = omp_get_thread_num();
   unsigned int partition = -1U;

   while(1)
   {
      // Returns the next partition for this thread, or -1U when no work remains.
      partition = dynamic_partitioning_get_next_partition(dp, tid, partition);
      if(partition == -1U)
         break;

      for(unsigned int v = start_vertex[partition]; v < start_vertex[partition + 1]; v++)
      {
         // per-vertex work
      }
   }
}

dynamic_partitioning_reset(dp);

Bugs & Support

As “we write bugs that in particular cases have been tested to work correctly”, we try to evaluate and validate the algorithms and their implementations. If you observe wrong results or are suspicious about parts of the code, please contact us or submit an issue.

License

Licensed under the GNU General Public License v3, as published by the Free Software Foundation. You may not use this Software except in compliance with the terms of the License. Unless required by applicable law or agreed to in writing, this Software is distributed on an “as is” basis, without any warranty, either express or implied, including, without limitation, the implied warranties of merchantability or fitness for a particular purpose. For details, see the terms of the License.

Copyright 2022 The Queen’s University of Belfast, Northern Ireland, UK


LOTUS: Locality Optimizing Triangle Counting – PPOPP’22

27th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming (PPoPP 2022)
April 2-6, 2022
Acceptance Rate: 31%

DOI: 10.1145/3503221.3508402

Authors’ Copy (PDF Format)

Triangle Counting (TC) is a basic graph algorithm and is widely used in different fields of science, humanities, and technology. The large size of real-world graphs with skewed degree distributions is a prohibiting factor in efficient TC.

In this paper, we study the implications of the power-law degree distribution of graph datasets for memory utilization in TC, and we explain how a large percentage of triangles is formed around a small number of vertices with very high degrees (called hubs).
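For context, a generic baseline TC (not the LOTUS algorithm) counts triangles by intersecting the sorted neighbour lists of each edge's endpoints; all identifiers below are illustrative:

// Generic baseline triangle counting, assuming an undirected graph stored in
// CSR form with sorted neighbour lists and both directions of each edge.
unsigned long count_triangles(unsigned int n, const unsigned int* offsets, const unsigned int* edges)
{
   unsigned long triangles = 0;
   for(unsigned int u = 0; u < n; u++)
      for(unsigned int e = offsets[u]; e < offsets[u + 1]; e++)
      {
         unsigned int v = edges[e];
         if(v <= u)
            continue; // visit each undirected edge once

         // Count common neighbours of u and v by merge-style intersection.
         unsigned int i = offsets[u], j = offsets[v];
         while(i < offsets[u + 1] && j < offsets[v + 1])
         {
            if(edges[i] == edges[j]) { triangles++; i++; j++; }
            else if(edges[i] < edges[j]) i++;
            else j++;
         }
      }
   // Each triangle is discovered once per each of its three edges.
   return triangles / 3;
}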

Using these novel observations, we present the LOTUS algorithm as a structure-aware and locality-optimizing TC that separates counting of triangles formed by hub and non-hub edges. LOTUS introduces bespoke TC algorithms and data structures for different edges in order to (i) reduce the size of the data structures that are the target of random memory accesses, (ii) optimize cache capacity utilization, (iii) provide better load balance, and (iv) avoid fruitless searches.

To provide better CPU utilization, we introduce the Squared Edge Tiling algorithm that divides a large task into smaller ones when the size of work for each edge in the neighbour-list depends on the index of that edge.

We evaluate the LOTUS algorithm on 3 different processor architectures and on real-world graph datasets with up to 162 billion edges, showing that LOTUS provides a 2.2-5.5 times speedup in comparison to previous works.

[Slides: LOTUS: Locality Optimizing Triangle Counting – Forward Algorithm, Analysis of the Forward Algorithm, Lotus Graph Structure, HHH & HHN, HNN, NNN, Evaluation, Conclusion]

Code Availability
The source-code will be published soon.

BibTeX

@INPROCEEDINGS{10.1145/3503221.3508402,
  author = {Koohi Esfahani, Mohsen and Kilpatrick, Peter and Vandierendonck, Hans},
  booktitle = {27th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming (PPoPP 2022)},  
  title = {LOTUS: Locality Optimizing Triangle Counting}, 
  year = {2022},
  numpages = {15},
  pages = {219--233},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  doi = {10.1145/3503221.3508402}
}


Locality Analysis of Graph Reordering Algorithms – IISWC’21

2021 IEEE International Symposium on Workload Characterization (IISWC’21)
November 7-9, 2021
Acceptance Rate: 39.5%
DOI: 10.1109/IISWC53511.2021.00020

Authors’ Copy (PDF Format)

Graph reordering algorithms try to improve the locality of graph algorithms by assigning new IDs to the vertices, which ultimately changes the order of random memory accesses. While graph relabeling algorithms such as SlashBurn, GOrder, and Rabbit-Order provide better locality, it is not clear how they affect graph processing and different graph datasets, mainly for three reasons:
(1) the large size of datasets,
(2) the lack of suitable measurement tools, and
(3) the disparate characteristics of graphs.
This paucity of analysis has also inhibited the design of more efficient reordering algorithms.

This paper introduces a number of metrics and tools to investigate the functionality of graph reordering algorithms and their effects on different real-world graph datasets:
(1) We introduce the Cache Miss Rate Degree Distribution and the Degree Distribution of Neighbour to Neighbour Average Distance ID (N2N AID) to show how reordering algorithms affect different vertices,
(2) We introduce the Effective Cache Size as a metric to measure how much of the cache capacity is used by reordered graphs for satisfying random memory accesses,
(3) We introduce the Asymmetricity Degree Distribution and Neighbourhood Decomposition to characterize the composition of vertex neighbourhoods and to explain structural differences between web graphs and social networks, and
(4) We investigate the effects of the structure of real-world graphs on the locality and performance of traversing graphs in pull and push directions by introducing Push Locality and Pull Locality.

Finally, we present improvements to graph reordering algorithms and propose other suggestions based on the new insights and features of real-world graphs introduced by this paper.

BibTeX

@INPROCEEDINGS{10.1109/IISWC53511.2021.00020,
  author={Koohi Esfahani, Mohsen and Kilpatrick, Peter and Vandierendonck, Hans},
  booktitle={2021 IEEE International Symposium on Workload Characterization (IISWC'21)},  
  title={Locality Analysis of Graph Reordering Algorithms}, 
  year={2021},
  pages={101-112},
  publisher={IEEE Computer Society},
  doi={10.1109/IISWC53511.2021.00020}
}


Exploiting in-Hub Temporal Locality in SpMV-based Graph Processing – ICPP’21

50th International Conference on Parallel Processing (ICPP’21)
August 9-12, 2021

Acceptance Rate: 26.4%

DOI:10.1145/3472456.3472462
ACM Digital Library
PDF Version (Authors’ Copy)

This paper investigates the implications of the structure of real-world graphs with power-law degree distributions for the locality of SpMV graph analytics and, by considering the efficacy of locality-optimizing graph reordering algorithms (such as SlashBurn, GOrder, and Rabbit-Order), shows that irregular datasets require special traversals in order to improve locality for hub vertices, which take a large portion of the processing time.

We introduce in-Hub Temporal Locality (iHTL) as a structure-aware and cache-friendly graph traversal that optimizes locality in pull traversal. iHTL identifies different blocks in the adjacency matrix of a graph and applies a suitable traversal direction (push or pull) for each block based on its contents. In other words, iHTL optimizes the locality of one traversal of all edges of the graph by:

(1) applying the push direction for flipped blocks containing edges to in-hubs, and
(2) applying the pull direction for processing the sparse block containing edges to non-hubs.
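To make the two directions concrete, here is a minimal sketch of pull and push edge traversals in SpMV style (illustrative code, not iHTL's implementation; the array names are assumptions):

// Pull: each vertex gathers from its in-neighbours (random reads,
// sequential writes); CSC-style offsets.
void spmv_pull(unsigned int n, const unsigned int* in_offsets, const unsigned int* in_edges, const float* old_val, float* new_val)
{
   for(unsigned int v = 0; v < n; v++)
      for(unsigned int e = in_offsets[v]; e < in_offsets[v + 1]; e++)
         new_val[v] += old_val[in_edges[e]];
}

// Push: each vertex scatters to its out-neighbours (sequential reads,
// random writes that need atomics when parallelized); CSR-style offsets.
void spmv_push(unsigned int n, const unsigned int* out_offsets, const unsigned int* out_edges, const float* old_val, float* new_val)
{
   for(unsigned int u = 0; u < n; u++)
      for(unsigned int e = out_offsets[u]; e < out_offsets[u + 1]; e++)
         new_val[out_edges[e]] += old_val[u];
}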

Moreover, iHTL introduces a new algorithm to efficiently identify the number of flipped blocks by investigating the connections between hub vertices of the graph. This allows iHTL to create as many flipped blocks as the graph structure requires and makes iHTL suitable for a wide range of different real-world graph datasets like social networks and web graphs.

iHTL is 1.5× – 2.4× faster than pull and 4.8× – 9.5× faster than push in state-of-the-art graph processing frameworks. More importantly, iHTL is 1.3× – 1.5× faster than the pull traversal of state-of-the-art locality optimizing reordering algorithms such as SlashBurn, GOrder, and Rabbit-Order, while reducing the preprocessing time by 780×, on average.

[Slides: Exploiting in-Hub Temporal Locality in SpMV-based Graph Processing – Outline, Introduction, Pull vs Push, Is Pull A Suitable Direction, iHTL: in-Hub Temporal Locality, iHTL Graph Structure, SpMV in iHTL, Evaluation, Conclusion, Thanks, A Gift From QUB]
Code Availability
The source-code will be published soon.

BibTeX

@INPROCEEDINGS{10.1145/3472456.3472462,
  author = {Koohi Esfahani, Mohsen and Kilpatrick, Peter and Vandierendonck, Hans},
  title = {Exploiting In-Hub Temporal Locality In SpMV-Based Graph Processing},
  year = {2021},
  isbn = {9781450390682},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3472456.3472462},
  doi = {10.1145/3472456.3472462},
  booktitle = {50th International Conference on Parallel Processing},
  numpages = {10},
  location = {Lemont, IL, USA},
  series = {ICPP 2021}
}


How Do Graph Relabeling Algorithms Improve Memory Locality? ISPASS’21 (Poster)

IEEE Xplore (DOI: 10.1109/ISPASS51385.2021.00023)
ISPASS-2021
2021 IEEE International Symposium on Performance Analysis of Systems and Software
March 28-30, 2021

Authors’ Copy (PDF Format)

For a complete version of this article, please refer to Locality Analysis of Graph Reordering Algorithms and also Chapter 3 of the On Designing Structure-Aware High-Performance Graph Algorithms thesis.

Relabeling (reordering) algorithms aim to improve the poor memory locality of graph processing by changing the order of vertices. This paper analyses the functionality of three state-of-the-art relabeling algorithms (SlashBurn, GOrder, and Rabbit-Order) on real-world graphs.

We use a number of techniques to explain how locality is affected by relabeling algorithms and how the locality of different datasets (like social networks and web graphs) is enhanced by them.

We use last level cache simulation to study the miss rate degree distribution. We also use the degree distribution of the Giant Connected Component (GCC) in SlashBurn iterations to see if real-world graphs follow the assumption that “power-law graphs are created/destroyed recursively” [SlashBurn]. We present SlashBurn++ as an enhanced version of SlashBurn with lower preprocessing time and better locality.

Using cache simulation, we count the number of misses for accessing the vertex data of high-degree vertices. This helps to explain how GOrder provides better temporal locality by managing cache space. Average ID Distance (AID) is a spatial locality metric introduced in this paper to explain how clustering relabeling algorithms like Rabbit-Order provide better spatial locality.
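As a hedged illustration of what such a metric can look like, the sketch below computes the average absolute difference between endpoint IDs over all edges of a CSR graph; the paper's exact definition of AID may differ:

// Hedged illustration of an ID-distance metric; offsets has n+1 entries here,
// with offsets[n] equal to the number of edges.
double average_id_distance(unsigned int n, const unsigned int* offsets, const unsigned int* edges)
{
   double sum = 0.0;
   unsigned long edges_count = offsets[n];
   for(unsigned int v = 0; v < n; v++)
      for(unsigned int e = offsets[v]; e < offsets[v + 1]; e++)
      {
         unsigned int u = edges[e];
         sum += (u > v) ? (u - v) : (v - u);
      }
   return edges_count ? sum / edges_count : 0.0;
}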

This paper also investigates why push and pull traversals of different datasets show different performances by introducing Push Locality and Pull Locality.

Code Availability
The source-code of LaganLighter is available on LaganLighter Repository.

BibTeX

@INPROCEEDINGS{10.1109/ISPASS51385.2021.00023,
  author={Koohi Esfahani, Mohsen and Kilpatrick, Peter and Vandierendonck, Hans},
  booktitle={2021 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS)}, 
  title={How Do Graph Relabeling Algorithms Improve Memory Locality?}, 
  year={2021},
  pages={84-86},
  publisher={IEEE Computer Society},
  doi={10.1109/ISPASS51385.2021.00023}
}

ISPASS’21
ISPASS’21 Final Program

VEBO: a vertex- and edge-balanced ordering heuristic to load balance parallel graph processing (Poster)

https://doi.org/10.1145/3293883.3295703

This work proposes Vertex- and Edge-Balanced Ordering (VEBO): balance the number of edges and the number of unique destinations of those edges. VEBO balances edges and vertices for graphs with a power-law degree distribution, and ensures an equal degree distribution between partitions. Experimental evaluation on three shared-memory graph processing systems (Ligra, Polymer and GraphGrind) shows that VEBO achieves excellent load balance and improves performance by 1.09× over Ligra, 1.41× over Polymer and 1.65× over GraphGrind, compared to their respective partitioning algorithms, averaged across 8 algorithms and 7 graphs. VEBO improves GraphGrind performance with a speedup of 2.9× over Ligra on average.

VEBO: A Vertex- and Edge-Balanced Ordering Heuristic to Load Balance Parallel Graph Processing

https://arxiv.org/abs/1806.06576

Graph partitioning drives graph processing in distributed, disk-based and NUMA-aware systems. A commonly used partitioning goal is to balance the number of edges per partition in conjunction with minimizing the edge or vertex cut. While this type of partitioning is computationally expensive, we observe that such topology-driven partitioning nonetheless results in computational load imbalance. We propose Vertex- and Edge-Balanced Ordering (VEBO): balance the number of edges and the number of unique destinations of those edges. VEBO optimally balances edges and vertices for graphs with a power-law degree distribution. Experimental evaluation on three shared-memory graph processing systems (Ligra, Polymer and GraphGrind) shows that VEBO achieves excellent load balance and improves performance by 1.09x over Ligra, 1.41x over Polymer and 1.65x over GraphGrind, compared to their respective partitioning algorithms, averaged across 8 algorithms and 7 graphs.

The GraphGrind Framework: Fast Graph Analytics on Large Shared-Memory Systems (PhD Thesis)

Thesis on QUB Pure Portal
Thesis in PDF Format

Author: Jiawen Sun, https://www.linkedin.com/in/jiawen-sun-33b368103/

As shared memory systems support terabyte-sized main memory, they provide an opportunity to perform efficient graph analytics on a single machine. Graph analytics is characterised by frequent synchronisation, which is addressed in part by shared memory systems. However, performance is limited by load imbalance and poor memory locality, which originate in the irregular structure of small-world graphs.
This dissertation demonstrates how graph partitioning can be used to optimise (i) load balance, (ii) Non-Uniform Memory Access (NUMA) locality and (iii) temporal locality of graph partitioning in shared memory systems. The developed techniques are implemented in GraphGrind, a new shared memory graph analytics framework.

At first, this dissertation shows that heuristic edge-balanced partitioning results in an imbalance in the number of vertices per partition. Thus, load imbalance exists between partitions, either for loops iterating over vertices or for loops iterating over edges. To address this issue, this dissertation introduces a classification of algorithms to distinguish whether they algorithmically benefit from edge-balanced or vertex-balanced partitioning. This classification supports the adaptation of partitions to the characteristics of graph algorithms. Evaluation in GraphGrind shows that this outperforms state-of-the-art shared-memory graph analytics frameworks, including Ligra by 1.46x on average and Polymer by 1.16x on average, using a variety of graph algorithms and datasets.

Secondly, this dissertation demonstrates that increasing the number of graph partitions is effective in improving temporal locality due to smaller working sets. However, the increasing number of partitions results in vertex replication in some graph data structures. This dissertation therefore adopts a graph layout that is immune to vertex replication and designs an automatic graph traversal algorithm that extends previously established graph traversal heuristics to a 3-way graph layout choice. This new algorithm furthermore depends upon the classification of graph algorithms introduced in the first part of the work. These techniques achieve an average speedup of 1.79x over Ligra and 1.42x over Polymer.

Finally, this dissertation presents a graph ordering algorithm to challenge the widely accepted heuristic of balancing the number of edges per partition while minimising the edge or vertex cut. This algorithm balances the number of edges per partition as well as the number of unique destinations of those edges. It balances edges and vertices for graphs with a power-law degree distribution. Moreover, this dissertation shows that the performance of graph ordering depends upon the characteristics of graph analytics frameworks, such as NUMA-awareness. This graph ordering algorithm achieves an average speedup of 1.87x over Ligra and 1.51x over Polymer.