Research Fellow Position in Kelvin Living Lab for Sustainability

We have an opening for a Research Fellow position on the Kelvin Living Lab for Sustainability project.

Please visit the following link for the full description and to apply: https://hrwebapp.qub.ac.uk/tlive_webrecruitment/wrd/run/ETREC107GF.open?VACANCY_ID=098532NfiX&WVID=6273090Lgx&LANG=USA

Deadline: 15/07/2024

For more information about the Kelvin Living Lab project, please refer to: https://blogs.qub.ac.uk/dipsa/the-kelvin-living-lab/

Four Lecturer/Senior Lecturer Positions

Deadline: 26/02/2024

Lecturer/Senior Lecturer in Distributed Computing

https://hrwebapp.qub.ac.uk/tlive_webrecruitment/wrd/run/ETREC107GF.open?VACANCY_ID=844083M0Nn&WVID=6273090Lgx&LANG=USA

Lecturer/Senior Lecturer in Emerging Computing Technologies

https://hrwebapp.qub.ac.uk/tlive_webrecruitment/wrd/run/ETREC107GF.open?VACANCY_ID=566107M0HD&WVID=6273090Lgx&LANG=USA

Lecturer/Senior Lecturer in High Performance Computing

https://hrwebapp.qub.ac.uk/tlive_webrecruitment/wrd/run/ETREC107GF.open?VACANCY_ID=128409M0GW&WVID=6273090Lgx&LANG=USA

Lecturer/Senior Lecturer in Programming Languages & Compilers

https://hrwebapp.qub.ac.uk/tlive_webrecruitment/wrd/run/ETREC107GF.open?VACANCY_ID=125161M0Gm&WVID=6273090Lgx&LANG=USA

Post-Doctoral / Researcher Vacancy on GPU Computing

We are recruiting a post-doctoral researcher or research assistant with expertise in parallel computing and GPU programming to work on the RAPID project, which investigates real-time analytics in manufacturing.

Closing date is 6 December 2023.

Advert: https://hrwebapp.qub.ac.uk/tlive_webrecruitment/wrd/run/ETREC107GF.open?VACANCY_ID=697817LSP2&WVID=6273090Lgx&LANG=USA

Accelerating scientific discovery using domain adaptive language modelling (PhD Thesis)

Thesis on QUB Pure Portal
Thesis in PDF Format

Author: Dimitrios Christofidellis

Research has been conducted for centuries, but recent advances in technology have facilitated and accelerated the process while keeping research budgets and the required effort at manageable levels. Scientific and technical corpora, such as papers and patents, are rich written sources of existing research knowledge and information. The abundance of such documents, together with their exponential growth, makes them a unique source of knowledge and a great opportunity to push the boundaries of research even further. Yet the volume and growth rate of this information are so large that researchers cannot feasibly study all of it. Recognising the potential of incorporating this knowledge efficiently into the discovery process, and that recent advances in NLP provide a powerful methodological base, our work aims to establish methods that can speed up parts of the discovery process by relying on scientific and technical corpora. We focus on, but do not limit our work to, patent corpora, as methods to leverage such documents in discovery pipelines have so far been limited. Our contributions focus on three specific cases: the domain definition of a given corpus in the form of a metagraph; the domain definition of a given corpus in the form of keywords, focusing on the patent classification case; and the semi-automated reporting of a discovery artifact in the form of a patent. In all three cases, we rely on transformer-based language models and domain-adaptive techniques, providing methods that are efficient in terms of both performance and training/inference requirements. Concluding our work, we discuss the importance of our contributions and demonstrate how the proposed methods can be incorporated into discovery pipelines, combined with one another, and used to complement existing methods. We close with a discussion of promising future directions derived from our work.
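
As a concrete illustration of the domain-adaptive approach the abstract describes, the sketch below adapts a general-purpose transformer language model to a patent corpus through continued masked-language-model training. This is a minimal, hypothetical example assuming the Hugging Face transformers and datasets libraries; the base checkpoint, the patents.txt corpus file and the training settings are illustrative assumptions, not details taken from the thesis.

```python
# Minimal sketch of domain-adaptive pretraining on a patent corpus (hypothetical).
# Assumes the Hugging Face `transformers` and `datasets` libraries are installed
# and that patents.txt contains one patent abstract per line.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "bert-base-uncased"  # any masked-LM checkpoint could be used
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForMaskedLM.from_pretrained(base_model)

# Tokenise the raw domain corpus.
corpus = load_dataset("text", data_files={"train": "patents.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

# Randomly mask 15% of tokens so the model learns domain vocabulary and phrasing.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(output_dir="patent-adapted-lm",
                         num_train_epochs=1,
                         per_device_train_batch_size=16)

Trainer(model=model, args=args, train_dataset=tokenized,
        data_collator=collator).train()
```

The adapted checkpoint could then serve as a backbone for downstream tasks such as the patent classification case mentioned in the abstract.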

ASCCED: Asynchronous Scientific Continuous Computations Exploiting Disaggregation

UKRI EPSRC Grant

The design of efficient and scalable scientific simulation software is reaching a critical point whereby continued advances are increasingly harder, more labour-intensive, and thus more expensive to achieve. This challenge emanates from the constantly evolving design of large-scale high-performance computing systems. World-leading (pre-)exascale systems, as well as their successors, are characterised by multi-million-scale parallel computing activities and a highly heterogeneous mix of processor types, such as high-end many-core processors, Graphics Processing Units (GPUs), machine learning accelerators, and various accelerators for compression, encryption and in-network processing. To make efficient use of these systems, scientific simulation software must be decomposed into various independent components and make simultaneous use of the variety of heterogeneous compute units.

Developing efficient, scalable scientific simulation software for these systems becomes increasingly harder as the limits of the parallelism available in the simulation codes are approached. Moreover, the limit of parallelism cannot be reached in practice due to heterogeneity, system imbalances and synchronisation overheads. Scientific simulation software often persists over several decades. The software is optimised and re-optimised repeatedly because the design and scale of the target hardware evolve at a much faster pace, with impactful changes in the hardware occurring every few years. One may thus find that the guiding principles that underpin such software are outdated.

The ASCCED project will fundamentally change the status quo in the design of scientific simulation software by simplifying the design to reduce software development and maintenance effort, to facilitate performance optimisation, and to make software more robust to future evolution of computing hardware. The key distinguishing factor of our approach is to structure scientific simulation software as a collection of loosely coupled parallel activities. We will explore the opportunities and challenges of applying techniques previously developed for Parallel Discrete Event Simulation (PDES) to orchestrate these loosely coupled parallel activities. This radically novel approach will enable runtime system software to extract unprecedented scales of parallelism and to minimise performance inefficiencies due to synchronisation. Additionally, based on a speculative execution mechanism, it will uncover parallelism that has not been feasible to extract before.
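
To make the orchestration idea more concrete, the following sketch shows a minimal discrete-event scheduler in which loosely coupled activities interact only through timestamped events rather than global barriers. It is a simplified, hypothetical illustration of the PDES-style scheduling the project builds on, not code from ASCCED; the activity names and time steps are invented for the example.

```python
# Minimal sketch of event-driven orchestration of loosely coupled activities,
# in the spirit of discrete event simulation. Hypothetical illustration only.
import heapq
from typing import Callable

class Scheduler:
    """Executes timestamped events in order; each event may schedule new ones."""

    def __init__(self):
        self._queue = []  # min-heap ordered by virtual timestamp
        self._seq = 0     # tie-breaker so heap entries remain comparable

    def schedule(self, timestamp: float,
                 action: Callable[["Scheduler", float], None]):
        heapq.heappush(self._queue, (timestamp, self._seq, action))
        self._seq += 1

    def run(self, until: float):
        while self._queue and self._queue[0][0] <= until:
            timestamp, _, action = heapq.heappop(self._queue)
            action(self, timestamp)  # the activity advances and may emit new events


def activity(name: str, step: float):
    """A loosely coupled activity: does local work, then reschedules itself."""
    def run(sched: Scheduler, now: float):
        print(f"{name} executes at virtual time {now:.1f}")
        sched.schedule(now + step, run)  # no global barrier, only event ordering
    return run


if __name__ == "__main__":
    sched = Scheduler()
    sched.schedule(0.0, activity("fluid-solver", 1.0))
    sched.schedule(0.0, activity("particle-tracker", 1.5))
    sched.run(until=5.0)
```

In a full PDES runtime, each activity would run as a logical process on its own compute unit, with speculative execution and rollback handling events that arrive out of timestamp order.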

The computational model proposed by ASCCED will, if successful, initiate a new direction of research within programming models for high-performance computing that may not only dramatically improve the performance of scientific simulation software, but also reduce the engineering effort required to produce it. It will have a profound impact on the sciences that are highly dependent on leadership computing capabilities, such as climate modelling and cancer research.

The GraphGrind Framework: Fast Graph Analytics on Large Shared-Memory Systems (PhD Thesis)

Thesis on QUB Pure Portal
Thesis in PDF Format

Author: Jiawen Sun, https://www.linkedin.com/in/jiawen-sun-33b368103/

As shared memory systems support terabyte-sized main memory, they provide an opportunity to perform efficient graph analytics on a single machine. Graph analytics is characterised by frequent synchronisation, which is addressed in part by shared memory systems. However, performance is limited by load imbalance and poor memory locality, which originate in the irregular structure of small-world graphs.
This dissertation demonstrates how graph partitioning can be used to optimise (i) load balance, (ii) Non-Uniform Memory Access (NUMA) locality and (iii) temporal locality of graph analytics in shared memory systems. The developed techniques are implemented in GraphGrind, a new shared memory graph analytics framework.

First, this dissertation shows that heuristic edge-balanced partitioning results in an imbalance in the number of vertices per partition. Thus, load imbalance exists between partitions, either for loops iterating over vertices or for loops iterating over edges. To address this issue, this dissertation introduces a classification of algorithms to distinguish whether they algorithmically benefit from edge-balanced or vertex-balanced partitioning. This classification supports the adaptation of partitions to the characteristics of graph algorithms. Evaluation shows that GraphGrind outperforms state-of-the-art shared-memory graph analytics frameworks, including Ligra by 1.46x on average and Polymer by 1.16x on average, across a variety of graph algorithms and datasets.
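
The difference between the two balancing criteria can be seen in a small sketch: the function below splits a vertex range into contiguous partitions balanced either by vertex count or by edge count. It is a simplified, hypothetical illustration and not GraphGrind's actual partitioner; the degree sequence is made up for the example.

```python
# Sketch: contiguous vertex-range partitioning balanced either by vertex count
# or by edge count. Hypothetical illustration, not GraphGrind's implementation.
def partition(degrees, num_parts, balance="edges"):
    """degrees[v]: out-degree of vertex v. Returns contiguous (start, end) ranges."""
    weights = [1] * len(degrees) if balance == "vertices" else list(degrees)
    target = sum(weights) / num_parts

    ranges, start, acc = [], 0, 0
    for v, w in enumerate(weights):
        acc += w
        # close the current partition once it has reached its share of the weight,
        # leaving the remaining vertices for the remaining partitions
        if acc >= target and len(ranges) < num_parts - 1:
            ranges.append((start, v + 1))
            start, acc = v + 1, 0
    ranges.append((start, len(weights)))
    return ranges


# A skewed, power-law-like degree sequence: the two criteria disagree.
degrees = [100, 1, 1, 1, 1, 1, 1, 1]
print(partition(degrees, 2, balance="edges"))     # [(0, 1), (1, 8)]
print(partition(degrees, 2, balance="vertices"))  # [(0, 4), (4, 8)]
```

For the skewed degree sequence, edge balancing isolates the single high-degree vertex while vertex balancing splits the range evenly, which is why loops over vertices and loops over edges favour different partitionings.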

Secondly, this dissertation demonstrates that increasing the number of graph partitions is effective at improving temporal locality due to smaller working sets.
However, a larger number of partitions results in vertex replication in some graph data structures. This dissertation therefore adopts a graph layout that is immune to vertex replication and designs an automatic graph traversal algorithm that extends previously established graph traversal heuristics to a 3-way graph layout choice. This new algorithm furthermore depends upon the classification of graph algorithms introduced in the first part of the work. These techniques achieve an average speedup of 1.79x over Ligra and 1.42x over Polymer.

Finally, this dissertation presents a graph ordering algorithm that challenges the widely accepted heuristic of balancing the number of edges per partition while minimising edge or vertex cut. The proposed algorithm balances the number of edges per partition as well as the number of unique destinations of those edges, balancing both edges and vertices for graphs with a power-law degree distribution. Moreover, this dissertation shows that the performance of graph ordering depends upon the characteristics of graph analytics frameworks, such as NUMA-awareness. This graph ordering algorithm achieves an average speedup of 1.87x over Ligra and 1.51x over Polymer.
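
As a small illustration of the quantities this final contribution balances, the sketch below reports, for a fixed partitioning, both the number of edges and the number of unique edge destinations per partition. It is a hypothetical illustration of the balancing criterion only, not the thesis's ordering algorithm; the edge list is invented for the example.

```python
# Sketch: measure the two quantities the proposed ordering seeks to balance,
# edges per partition and unique destinations per partition. Hypothetical.
def partition_stats(edges, ranges):
    """edges: list of (src, dst); ranges: contiguous (start, end) vertex ranges by src."""
    stats = []
    for start, end in ranges:
        part = [(s, d) for s, d in edges if start <= s < end]
        stats.append({
            "edges": len(part),
            "unique_destinations": len({d for _, d in part}),
        })
    return stats


edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3), (3, 0)]
print(partition_stats(edges, [(0, 2), (2, 4)]))
# [{'edges': 4, 'unique_destinations': 3}, {'edges': 2, 'unique_destinations': 2}]
```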