Three PhD Positions in Data Analytics – RELAX Doctoral Network

We have 3 recruitment opportunities in a Marie Curie Doctoral Training Network on data analytics. These are PhD opportunities with a research assistant contract:

(1) Application-Aware Relaxed Synchronisation for Distributed Graph Processing,
(2) Interactive and Intelligent Exploration of Big Complex Data, and
(3) Efficient and Responsible Analytics for Urban Mobility and Allied Applications.

Application Deadlines
7 May 2023

RELAX Doctoral Network
The RELAX Doctoral Network brings together five cross-disciplinary research groups working across data science, data management, distributed computing and computing systems. The network pursues a fundamentally new approach to scalable data analytics by leveraging the semantics, or correctness conditions, of applications, with the goal of enhancing scalability, response times, and availability. The Doctoral Network provides a bespoke technical and non-technical training programme and fosters cross-disciplinary and third-party collaborations.

Funding Information
This project is funded by the Engineering and Physical Sciences Research Council grant number EP/X029174/1.

To be eligible for consideration for a RELAX Doctoral Candidate position (covering tuition fees and a basic salary with pension of approx. £33,001 per annum), a candidate must satisfy all the eligibility criteria based on transnational mobility and academic qualifications. The Studentship is open to all nationalities.

Applicants MUST be doctoral candidates, i.e. not already in possession of a doctoral degree at the date of recruitment (understood as the recruitment call deadline), and must undertake transnational mobility (see the mobility rule below). Researchers who have successfully defended their doctoral thesis but who have not yet formally been awarded the doctoral degree will not be considered eligible.

Mobility Rule
Researchers must not have resided or carried out their main activity (work, studies, etc.) in the United Kingdom for more than 12 months in the 36 months immediately before their date of recruitment. Compulsory national service, short stays such as holidays, and time spent as part of a procedure for obtaining refugee status under the Geneva Convention are not taken into account.

Academic Requirements
The minimum academic requirement for admission is normally an Upper Second Class Honours degree from a UK or ROI Higher Education provider in a relevant discipline, or an equivalent qualification acceptable to the University.

More Information
Applicants may additionally consider applying to positions with the partner universities of the network: http://www.relax-dn.eu/

ROMA: Run-Time Object Detection To Maximize Real-Time Accuracy

This is follow-up work to TOD, which selects one of multiple deep neural networks (DNNs) to perform real-time video analytics (object detection) on low-end devices, e.g., on the camera itself. TOD uses the median object size to determine which of four YOLO DNNs best meets the real-time requirement given the size and speed of the objects in the scene. TOD requires device-specific knowledge to select appropriate thresholds on the median object size, and needs to be retuned for each computing device.

ROMA removes this limitation of TOD by performing a more detailed analysis of the image content. In particular, ROMA separately estimates the impact of the selected DNN on object size and on object speed. Its formulation is sufficiently flexible to adapt to changes in the computational power of the device, so it does not need to be retrained when migrating across hardware. Moreover, this allows ROMA to adapt to runtime changes in computational power, which may arise from power management features on the device or from other workloads that share the device. ROMA does, however, have hyper-parameters that depend on the YOLO DNNs.

ROMA will be presented at the 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV).

You can read the post-print on arXiv.

TOD: Transprecise Object Detection

Real-time video analytics on the edge is challenging as the computationally constrained resources typically cannot analyse video streams at full fidelity and frame rate, which results in loss of accuracy. We propose a Transprecise Object Detector (TOD) which maximises the real-time object detection accuracy on an edge device by selecting an appropriate Deep Neural Network (DNN) on the fly with negligible computational overhead.

TOD makes two key contributions over the state of the art: (1) TOD leverages characteristics of the video stream such as object size and speed of movement to identify networks with high prediction accuracy for the current frames; (2) it selects the best-performing network based on projected accuracy and computational demand using an effective and low-overhead decision mechanism.
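
To make the decision mechanism concrete, the sketch below shows the general shape of such a runtime model selector in C++. The thresholds, model names and scoring rule are hypothetical placeholders chosen for illustration; they are not the values or the exact decision rule used by TOD or ROMA.

```cpp
// Illustrative sketch only: the thresholds, model names and scoring rule below
// are hypothetical placeholders, not the actual decision mechanism of TOD.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Detection { float width, height, speed; };  // per-object statistics from recent frames

// Candidate DNNs, ordered from cheapest/least accurate to most expensive/most accurate.
static const std::vector<std::string> kModels = {
    "yolov4-tiny-288", "yolov4-tiny-416", "yolov4-288", "yolov4-416" };

// Pick a model index from simple per-frame statistics (median object size, mean speed).
int select_model(const std::vector<Detection>& objects, float latency_budget_ms) {
    if (objects.empty()) return 0;                  // nothing to detect: use the cheapest model

    std::vector<float> sizes;
    for (const auto& o : objects) sizes.push_back(o.width * o.height);
    std::nth_element(sizes.begin(), sizes.begin() + sizes.size() / 2, sizes.end());
    float median_size = sizes[sizes.size() / 2];

    float mean_speed = 0.f;
    for (const auto& o : objects) mean_speed += o.speed;
    mean_speed /= objects.size();

    // Hypothetical rule: small objects favour higher-resolution (more expensive) models,
    // while fast-moving objects and a tight latency budget favour cheaper, faster models.
    int wanted = median_size < 0.01f ? 3 : median_size < 0.05f ? 2 : 1;
    int affordable = latency_budget_ms > 60.f ? 3 : latency_budget_ms > 30.f ? 2 : 1;
    if (mean_speed > 0.5f) affordable = std::min(affordable, 1);
    return std::min(wanted, affordable);
}

int main() {
    std::vector<Detection> objs = { {0.02f, 0.03f, 1.0f}, {0.10f, 0.12f, 0.2f} };
    std::cout << "selected: " << kModels[select_model(objs, 33.f)] << "\n";
}
```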

Experimental evaluation on a Jetson Nano demonstrates that TOD improves object detection precision by 34.7% on average over the YOLOv4-tiny-288 model on the MOT17Det dataset. On the MOT17-05 test dataset, TOD utilises only 45.1% of the GPU resources and 62.7% of the GPU board power of the YOLOv4-416 model, without losing accuracy. We expect that TOD will broaden the applicability of edge devices to real-time object detection, as it maximises real-time object detection accuracy on a given edge device by adapting to dynamic input features without increasing inference latency in practice.

TOD was presented at the 5th International Conference on Fog and Edge Computing (ICFEC). Read the paper on arXiv.

Software-Defined Floating-Point Number Formats and Their Application to Graph Processing – ICS’22

DOI: 10.1145/3524059.3532360

This paper proposes software-defined floating-point number formats for graph processing workloads, which can improve performance in irregular workloads by reducing cache misses. Efficient arithmetic on software-defined number formats is challenging, even when based on conversion to wider, hardware-supported formats. We derive efficient conversion schemes that are tuned to the IA64 and AVX512 instruction sets.

We demonstrate that: (i) reduced-precision number formats can be applied to graph processing without loss of accuracy; (ii) conversion of floating-point values is possible with minimal instructions; (iii) conversions are most efficient when utilizing vectorized instruction sets, specifically on IA64 processors.
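
As a concrete illustration of point (ii), the sketch below converts a bfloat16-style value (the upper 16 bits of an IEEE-754 single) to a hardware float with a single shift and bit copy. This format is an illustrative stand-in, not one of the software-defined formats proposed in the paper; with AVX-512 the same shift can be applied to 16 values at once (e.g. with _mm512_slli_epi32), which is what makes vectorised conversion attractive.

```cpp
// Minimal illustration of converting a reduced-precision value to a hardware float
// with very few instructions. This uses a bfloat16-style format (the upper 16 bits
// of an IEEE-754 float); it is an illustrative stand-in, not one of the
// software-defined formats from the paper.
#include <cstdint>
#include <cstring>
#include <iostream>

using sdfloat16 = std::uint16_t;   // hypothetical 16-bit stored format

// Truncating encode: keep the sign, exponent and the top 7 mantissa bits.
sdfloat16 encode(float f) {
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);
    return static_cast<sdfloat16>(bits >> 16);
}

// Decode: a single shift places the stored bits back into a 32-bit float pattern.
float decode(sdfloat16 v) {
    std::uint32_t bits = static_cast<std::uint32_t>(v) << 16;
    float f;
    std::memcpy(&f, &bits, sizeof bits);
    return f;
}

int main() {
    float x = 3.14159f;
    // The round trip truncates the value to roughly 2-3 significant decimal digits.
    std::cout << x << " -> " << decode(encode(x)) << "\n";
}
```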

Experiments on twelve real-world graph data sets demonstrate that our techniques result in speedups up to 89% for PageRank and Accelerated PageRank, and up to 35% for Single-Source Shortest Paths. The same techniques help to accelerate the integer-based maximal independent set problem by up to 262%.

Graptor Sources Published

Finally got around to this: publishing the Graptor source code. With time passing, the code has changed quite a bit compared to the version used in the paper Graptor: Efficient Pull and Push Style Vectorized Graph Processing. The evolution of the code has advantages: it’s faster. There are also disadvantages: not all versions and variations of the code that were experimented with can still be compiled.

The source code can be found here: https://github.com/hvdieren/graptor

There will likely be issues (errors, lack of documentation, …) as this is experimental research code. Drop me a line if you need a hand: h {a dot} vandierendonck {an at} qub {another dot} ac {the last dot} uk .

Invited Talk – Enabling high-performance large-scale irregular computations by Maciej Besta

17 June 2021

Abstract:
Large graphs are behind many problems in today’s computing landscape. The growing sizes of such graphs, recently reaching 70 trillion edges, require unprecedented amounts of compute power, storage, and energy. In this talk, we illustrate how to effectively process such extreme-scale graphs.

We will first discuss Slim Graph, the first approach for practical lossy graph compression, which facilitates high-performance approximate graph processing, storage, and analytics with strong theoretical guarantees on accuracy for a broad set of graph problems and algorithms.

In the second part of the talk, we will focus on a class of complex graph mining problems such as clique listing. Here, we first show how to solve such problems on complex parallel architectures in a simple and high-performance way. For this, we propose a novel set-centric paradigm, where complex algorithms are broken down into simple set-algebra building blocks such as set intersection or union, which can be separately optimized, executed, and scheduled.

Finally, after discussing how to effectively and efficiently mine complex graph patterns, we will turn our attention to pattern prediction. Specifically, we establish the general problem of motif prediction, in which we generalize link prediction, one of the central problems in graph analytics, into predicting the arrival of arbitrary complex higher-order structures called motifs. To solve motif prediction, we harness recent graph neural network architectures.
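
As a small illustration of the set-centric paradigm mentioned in the abstract, the sketch below expresses triangle counting (a close relative of clique listing) purely in terms of a set-intersection building block. It is a toy example written for this post, not code from the speaker’s systems; in a set-centric framework the intersection primitive would be the unit that gets optimised, vectorised and scheduled.

```cpp
// Illustrative sketch of the set-centric idea: a graph-mining kernel expressed
// purely with set-algebra building blocks such as set intersection.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <iterator>
#include <vector>

using Adjacency = std::vector<std::vector<int>>;   // sorted neighbour lists

// The reusable building block: size of the intersection of two sorted sets.
std::size_t intersection_size(const std::vector<int>& a, const std::vector<int>& b) {
    std::vector<int> out;
    std::set_intersection(a.begin(), a.end(), b.begin(), b.end(), std::back_inserter(out));
    return out.size();
}

// Triangle counting: for every edge (u, v) with u < v, the triangles through that
// edge are exactly the common neighbours of u and v.
std::size_t count_triangles(const Adjacency& g) {
    std::size_t total = 0;
    for (int u = 0; u < static_cast<int>(g.size()); ++u)
        for (int v : g[u])
            if (u < v)
                total += intersection_size(g[u], g[v]);
    return total / 3;   // each triangle is counted once per each of its three edges
}

int main() {
    // A 4-node graph: a triangle 0-1-2 plus an extra edge 2-3.
    Adjacency g = { {1, 2}, {0, 2}, {0, 1, 3}, {2} };
    std::cout << count_triangles(g) << " triangle(s)\n";   // prints 1
}
```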

Bio
Maciej is a researcher in the Scalable Parallel Computing Lab at ETH Zurich. He works on large-scale irregular computations and high-performance networking. He received Best Paper and Best Student Paper awards at ACM/IEEE Supercomputing 2013, 2014, and 2019, at ACM HPDC 2015 and 2016, and an ACM Research Highlights selection in 2018, as well as several further best paper nominations (ACM HPDC 2014, ACM FPGA 2019, and ACM/IEEE Supercomputing 2019). He won, among others, the competition for the Best Student of Poland (2012), the first Google Fellowship in Parallel Computing (2013), and the ACM/IEEE-CS George Michael HPC Fellowship (2015). More detailed information is available on his personal site: https://people.inf.ethz.ch/bestam/

Invited Talk: Adaptiveness and Lock-free Synchronization in Parallel Stochastic Gradient Descent by Karl Bäckström

3 June 2021

Abstract
The emergence of big data in recent years, due to vast societal digitalization and large-scale sensor deployment, has entailed significant interest in machine learning methods to enable automatic data analytics. In a majority of the learning algorithms used in industrial as well as academic settings, the first-order iterative optimization procedure stochastic gradient descent (SGD) is the backbone. However, SGD is often time-consuming, as it typically requires several passes through the entire dataset to converge to a solution of sufficient quality. In order to cope with increasing data volumes, and to facilitate accelerated processing utilizing contemporary hardware, various parallel SGD variants have been proposed. In addition to traditional synchronous parallelization schemes, asynchronous ones have received particular interest in recent literature due to their improved ability to scale, thanks to less coordination and, consequently, less waiting time. However, asynchrony implies inherent challenges in understanding the execution of the algorithm and its convergence properties, due to the presence of both stale and inconsistent views of the shared state.

In this work, we aim to increase the understanding of the convergence properties of SGD for practical applications under asynchronous parallelism and to develop tools and frameworks that facilitate improved convergence properties as well as further research and development. First, we focus on understanding the impact of staleness, and introduce models for capturing the dynamics of parallel execution of SGD. This enables (i) quantifying the statistical penalty on convergence due to staleness and (ii) deriving an adaptation scheme, introducing a staleness-adaptive SGD variant, MindTheStep-AsyncSGD, which provably reduces this penalty. Second, we explore the impact of synchronization mechanisms, in particular consistency-preserving ones, and their overall effect on convergence properties. To this end, we propose Leashed-SGD, an extensible algorithmic framework supporting various synchronization mechanisms for different degrees of consistency, enabling in particular a lock-free and consistency-preserving implementation. In addition, the algorithmic construction of Leashed-SGD enables dynamic memory allocation, claiming memory only when necessary, which reduces the overall memory footprint.

We perform an extensive empirical study, benchmarking the proposed methods together with established baselines, focusing on the prominent application of deep learning for image classification on the benchmark datasets MNIST and CIFAR, showing significant improvements in convergence time for Leashed-SGD and MindTheStep-AsyncSGD.
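
For readers unfamiliar with asynchronous SGD, the sketch below shows a minimal Hogwild-style variant in C++ on a toy 1-D linear regression: worker threads read possibly stale parameter values and write back updates without locks. It is a generic illustration of the execution model the talk analyses, not an implementation of MindTheStep-AsyncSGD or Leashed-SGD.

```cpp
// A minimal lock-free, Hogwild-style asynchronous SGD sketch for a 1-D linear model
// y = w*x + b. Threads update shared parameters without locks, so they may act on
// stale or inconsistent views of the shared state.
#include <atomic>
#include <cstdio>
#include <random>
#include <thread>
#include <utility>
#include <vector>

std::atomic<double> w{0.0}, b{0.0};   // shared model parameters, updated without locks

void worker(const std::vector<std::pair<double, double>>& data,
            int steps, double lr, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<std::size_t> pick(0, data.size() - 1);
    for (int s = 0; s < steps; ++s) {
        auto [x, y] = data[pick(rng)];
        // Read a possibly stale snapshot of the parameters ...
        double wl = w.load(std::memory_order_relaxed);
        double bl = b.load(std::memory_order_relaxed);
        double err = wl * x + bl - y;                 // gradient of 0.5*(w*x+b-y)^2
        // ... and write back updates that may overwrite concurrent updates.
        w.store(wl - lr * err * x, std::memory_order_relaxed);
        b.store(bl - lr * err,     std::memory_order_relaxed);
    }
}

int main() {
    // Synthetic data drawn from y = 2x + 1 with a little noise.
    std::mt19937 rng(42);
    std::normal_distribution<double> noise(0.0, 0.01);
    std::vector<std::pair<double, double>> data;
    for (int i = 0; i < 1000; ++i) {
        double x = i / 1000.0;
        data.push_back({x, 2 * x + 1 + noise(rng)});
    }

    std::vector<std::thread> threads;
    for (unsigned t = 0; t < 4; ++t)
        threads.emplace_back([&data, t] { worker(data, 20000, 0.05, t); });
    for (auto& th : threads) th.join();
    std::printf("w = %.3f, b = %.3f (target: 2, 1)\n", w.load(), b.load());
}
```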


Bio:

Karl Bäckström is a Ph.D. student in the Distributed Computing and Systems group at Chalmers University of Technology in Sweden. Karl has an academic background in mathematics, computer science, and engineering physics, with an overarching interest in distributed and parallel computation, optimization, and machine learning. Karl’s research directions include adaptiveness, synchronization, and consistency in parallel algorithms for iterative optimization. At the 35th IEEE International Parallel and Distributed Processing Symposium, Karl and his co-authors were awarded a Best Paper Honorable Mention for the paper “Consistent Lock-free Parallel Stochastic Gradient Descent for Fast and Stable Convergence”. Karl lives in Gothenburg, a coastal city in western Sweden, together with his Swiss Shepherd Valdi, often enjoying their free time together in nature and wilderness, or at home playing the piano.

Invited Talk – Efficient Parallel Graph Processing on GPU using Approximate Computing By Somesh Singh

Abstract
Graph algorithms are widely used in several application domains. It has been established that parallelizing graph algorithms is challenging. The parallelization issues get exacerbated when a graphics processing unit (GPU) is used to execute graph algorithms. In particular, three important GPU-specific aspects affect performance: memory coalescing, memory latency, and thread divergence. We attempt to tame these challenges using approximate computing. We target graph applications on GPUs that can tolerate some degradation in the quality of the output in exchange for obtaining the result faster. We propose three techniques for boosting the performance of graph processing on the GPU by injecting approximations in a controlled manner. The first creates a graph isomorph that brings relevant nodes nearby in memory and adds controlled replicas of nodes to improve coalescing. The second reduces memory latency by facilitating the processing of subgraphs inside shared memory, adding edges among specific nodes and processing well-connected subgraphs iteratively inside shared memory. The third technique normalizes degrees across nodes assigned to a warp to reduce thread divergence. Each technique offers notable performance benefits and provides a knob to control the inaccuracy added to an execution. We demonstrate the effectiveness of the proposed techniques using a suite of large graphs with varied characteristics and five popular graph algorithms.


Bio
Somesh Singh is a Ph.D. candidate in the Department of CSE at the Indian Institute of Technology Madras, India. His research interests span high-performance computing, parallel computing, and graph analytics. His dissertation research focuses on designing techniques for improving the efficiency of parallel graph analytics on graphics processing units (GPUs) by trading off computational accuracy. His PhD work has been accepted for publication at ICPP 2020, PPoPP 2019 (poster) and TMSCS 2018. He was a research intern at Intel Labs, India and Microsoft Research, India in 2020. He was a Google Summer of Code participant with CERN-HSF in 2017 and 2018.

Invited Talk – Parallel Graph Learning and Computational Biology Through Sparse Matrices by Dr Aydın Buluç

29 April 2021

Solving systems of linear equations has traditionally driven research in sparse matrix computation for decades. Direct and iterative solvers, together with finite element computations, still account for the primary use case for sparse matrix data structures and algorithms. These sparse “solvers” often serve as the workhorse of many algorithms in spectral graph theory and traditional machine learning.

In this talk, I will be highlighting two of the emerging use cases of sparse matrices outside the domain of solvers: graph representation learning methods such as graph neural networks (GNNs) and graph kernels, and computational biology problems such as de novo genome assembly and protein family detection. A recurring theme in these novel use cases is the concept of a semiring on which the sparse matrix computations are carried out. By overloading scalar addition and multiplication operators of a semiring, we can attack a much richer set of computational problems using the same sparse data structures and algorithms. This approach has been formalized by the GraphBLAS effort. I will illustrate one example application from each problem domain, together with the most computationally demanding sparse matrix primitive required for its efficient execution. I will also cover novel parallel algorithms for these sparse matrix primitives and available software that implement them efficiently on various architectures.
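
The sketch below illustrates the semiring idea on a hand-rolled CSR sparse matrix-vector product: swapping the “add” and “multiply” operators turns the same kernel from ordinary SpMV into one relaxation step of single-source shortest paths. It is a toy example written for this post, not GraphBLAS code or code from the speaker’s libraries.

```cpp
// A CSR sparse matrix-vector multiplication parameterised by a semiring.
#include <algorithm>
#include <iostream>
#include <limits>
#include <vector>

struct CSR {                       // compressed sparse row matrix
    int n;
    std::vector<int> rowptr, colidx;
    std::vector<double> values;
};

// Generic y = A (*) x over a semiring given by (add, mul, identity-of-add).
template <class Add, class Mul>
std::vector<double> spmv(const CSR& A, const std::vector<double>& x,
                         Add add, Mul mul, double zero) {
    std::vector<double> y(A.n, zero);
    for (int i = 0; i < A.n; ++i)
        for (int k = A.rowptr[i]; k < A.rowptr[i + 1]; ++k)
            y[i] = add(y[i], mul(A.values[k], x[A.colidx[k]]));
    return y;
}

int main() {
    // 3-node weighted graph stored in CSR form (row i holds the in-edges of node i).
    CSR A{3, {0, 1, 2, 4}, {1, 2, 0, 1}, {1.0, 4.0, 2.0, 7.0}};
    const double inf = std::numeric_limits<double>::infinity();

    // (+, *) semiring: ordinary sparse matrix-vector multiplication.
    auto y1 = spmv(A, {1.0, 1.0, 1.0},
                   [](double a, double b) { return a + b; },
                   [](double a, double b) { return a * b; }, 0.0);

    // (min, +) semiring: one relaxation step of single-source shortest paths from node 2.
    // (A full SSSP iteration would also take the element-wise min with the input vector.)
    auto y2 = spmv(A, {inf, inf, 0.0},
                   [](double a, double b) { return std::min(a, b); },
                   [](double a, double b) { return a + b; }, inf);

    std::cout << "(+,*):   " << y1[0] << " " << y1[1] << " " << y1[2] << "\n";
    std::cout << "(min,+): " << y2[0] << " " << y2[1] << " " << y2[2] << "\n";
}
```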


Aydın Buluç is a Staff Scientist and Principal Investigator at the Lawrence Berkeley National Laboratory (LBNL) and an Adjunct Assistant Professor of EECS at UC Berkeley. His research interests include parallel computing, combinatorial scientific computing, high performance graph analysis and machine learning, sparse matrix computations, and computational biology. Previously, he was a Luis W. Alvarez postdoctoral fellow at LBNL and a visiting scientist at the Simons Institute for the Theory of Computing. He received his PhD in Computer Science from the University of California, Santa Barbara in 2010 and his BS in Computer Science and Engineering from Sabanci University, Turkey in 2005. Dr. Buluç is a recipient of the DOE Early Career Award in 2013 and the IEEE TCSC Award for Excellence for Early Career Researchers in 2015. He is also a founding associate editor of the ACM Transactions on Parallel Computing. As a graduate student, he spent a semester at the Mathematics Department of MIT, and a summer at the CSRI institute of Sandia National Labs in New Mexico. He is a member of SIAM and the ACM.

Scheduling Fine-Grain Loops in Graph Processing Workloads

Scheduling, or distributing the computational workload over multiple threads, is a critical and repeatedly performed activity in graph processing workloads. In a recent paper, “Reducing the burden of parallel loop schedulers for many-core processors”, published in Concurrency & Computation: Practice & Experience, we investigated the overhead introduced by scheduling. This overhead follows from two effects: (i) threads need to communicate and arrive at the same point in the program at the same time; (ii) inter-thread communication incurs significant cache misses and coherence messages sent between processors. We have likened the work distribution to barrier synchronisation and observed that state-of-the-art parallel schedulers, such as the Intel OpenMP runtime and Intel Cilkplus, incur the cost of a full-barrier synchronisation at the start of a parallel loop and another at the end of the loop. The figure below illustrates the synchronisation pattern:

A barrier is a synchronisation mechanism that waits for all threads to arrive at the barrier, then signals each thread that it may continue execution. Looking at a barrier in more detail, it consists of two phases: a join phase and a release phase:

However, placing a full barrier at both ends of the loop introduces redundant synchronisation. It suffices to place one half of the barrier at the start of the loop and the other half at the end of the loop. Schematically, this looks as follows:

Based on this observation, we designed an optimised scheduling technique that works particularly well for fine-grain loops, which are typically counted loops with very short loop bodies.
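
To make the join/release terminology concrete, the sketch below shows a textbook centralised sense-reversing barrier with the two phases spelled out; it is a generic construction, not the scheduler from the paper. In terms of this sketch, the optimisation described above executes only one of the two phases at each end of a fine-grain loop, rather than a full barrier at both ends.

```cpp
// A minimal centralised sense-reversing barrier, written as a separate join phase
// and release phase. Generic textbook construction for illustration only.
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

class Barrier {
    const int nthreads_;
    std::atomic<int> arrived_{0};
    std::atomic<bool> sense_{false};
public:
    explicit Barrier(int nthreads) : nthreads_(nthreads) {}

    void wait() {
        bool my_sense = !sense_.load(std::memory_order_relaxed);
        // Join phase: count arrivals.
        if (arrived_.fetch_add(1, std::memory_order_acq_rel) + 1 == nthreads_) {
            // The last thread to arrive resets the counter and releases everyone.
            arrived_.store(0, std::memory_order_relaxed);
            sense_.store(my_sense, std::memory_order_release);   // release phase
        } else {
            // Release phase (waiting side): spin until the sense flips.
            while (sense_.load(std::memory_order_acquire) != my_sense)
                std::this_thread::yield();
        }
    }
};

int main() {
    constexpr int kThreads = 4;
    Barrier barrier(kThreads);
    std::vector<std::thread> threads;
    for (int t = 0; t < kThreads; ++t)
        threads.emplace_back([&barrier, t] {
            std::printf("thread %d before barrier\n", t);
            barrier.wait();                  // no thread passes until all have joined
            std::printf("thread %d after barrier\n", t);
        });
    for (auto& th : threads) th.join();
}
```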

Using our optimised scheduler, fine-grain loops in graph processing applications can be sped up by 21.6% to 29.6%. The figure below shows a histogram of the performance obtained for the fine-grain loops in the betweenness centrality kernel (BC). This evaluation was performed on a four-socket 2.6 GHz Intel Xeon E7-4860 v2 machine with 12 physical cores per socket (plus hyperthreading) and 30 MB L3 cache per socket. The baseline uses the Intel Cilkplus scheduler, while “hybrid” shows the performance of a hybrid version of the Cilkplus scheduler that executes a mixture of coarse-grain loops (scheduled using the normal Cilkplus policy) and fine-grain loops (scheduled using our optimised scheduler).

As graph processing applications contain a mix of fine-grain and coarse-grain loops, the overall speedup of these applications is below 5%.

More details can be found in the paper, published under Open Access: https://onlinelibrary.wiley.com/doi/10.1002/cpe.6241