Three PhD Positions in Data Analytics – RELAX Doctoral Network

We have 3 recruitment opportunities in a Marie Curie Doctoral Training Network on data analytics. These are PhD opportunities with a research assistant contract:

(1) Application-Aware Relaxed Synchronisation for Distributed Graph Processing,
(2) Interactive and Intelligent Exploration of Big Complex Data, and
(3) Efficient and Responsible Analytics for Urban Mobility and Allied Applications.

Application Deadline
7 May 2023

RELAX Doctoral Network
The RELAX Doctoral Network brings together 5 cross-disciplinary research groups working across data science, data management, distributed computing and computing systems. The network pursues a fundamentally new approach to scalable data analytics by leveraging the semantics or correctness conditions of applications, with the goal of enhancing scalability, response times, and availability. The Doctoral Network provides a bespoke technical and non-technical training programme and fosters cross-disciplinary and third-party collaborations.

Funding Information
This project is funded by the Engineering and Physical Sciences Research Council grant number EP/X029174/1.

To be eligible for consideration for a RELAX Doctoral Candidate position (covering tuition fees and a basic salary with pension of approx. £33,001 per annum), a candidate must satisfy all the eligibility criteria based on transnational mobility and academic qualifications. The Studentship is open to all nationalities.

Applicants MUST be doctoral candidates, i.e. not already in possession of a doctoral degree at the date of recruitment (understood as the recruitment call deadline), and must undertake transnational mobility (see mobility rule below). Researchers who have successfully defended their doctoral thesis but who have not yet formally been awarded the doctoral degree will not be considered eligible.

Mobility Rule
Researchers must not have resided or carried out their main activity (work, studies, etc.) in the United Kingdom for more than 12 months in the 36 months immediately before their date of recruitment. Compulsory national service, short stays such as holidays, and time spent as part of a procedure for obtaining refugee status under the Geneva Convention are not taken into account.

Academic Requirements
The minimum academic requirement for admission is normally an Upper Second Class Honours degree from a UK or ROI Higher Education provider in a relevant discipline, or an equivalent qualification acceptable to the University.

More Information
Applicants may additionally consider applying to positions with the partner universities of the network: http://www.relax-dn.eu/

Parallel Computing Training Session 2023

Date: 6th March 2023
Location: CSB Training Room 02.017

Parallel computing is a key technology supporting high-performance computing (HPC) and data analytics. The goal of this module is to provide an overview of parallel computing and introduce attendees to prevailing programming models. The expected outcome of this module is that participants will have an understanding of the different languages and approaches used in this domain and that they will be able to construct simple parallel programs using the discussed languages.

Theoretical lectures and practical training will be delivered by Hans Vandierendonck, Romain Garnier, Syed Ibtisam Tauhidi, Amir Sabbagh Molahosseini, and Zohreh Moradinia. A general overview of high-performance computing will be presented in the morning by Romain Garnier and Amir Sabbagh Molahosseini, followed by lab classes working through practical examples with OpenMP. In the afternoon, Syed Ibtisam Tauhidi and Zohreh Moradinia will present how to efficiently handle data analytics with Hadoop and Spark.
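As an illustrative sketch of the kind of exercise the afternoon data-analytics lab might involve, the snippet below counts word occurrences in a text file with PySpark. The input path and application name are placeholders, not the actual training material.

```python
# Illustrative PySpark word count; "input.txt" is a placeholder input file.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("training-wordcount").getOrCreate()
lines = spark.sparkContext.textFile("input.txt")

counts = (lines.flatMap(lambda line: line.split())   # split each line into words
               .map(lambda word: (word, 1))          # emit (word, 1) pairs
               .reduceByKey(lambda a, b: a + b))     # sum counts per word

for word, count in counts.take(10):
    print(word, count)

spark.stop()
```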

ASCCED: Asynchronous Scientific Continuous Computations Exploiting Disaggregation

UKRI EPSRC Grant

The design of efficient and scalable scientific simulation software is reaching a critical point whereby continued advances are increasingly harder, more labour-intensive, and thus more expensive to achieve. This challenge emanates from the constantly evolving design of large-scale high-performance computing systems. World-leading (pre-)exascale systems, as well as their successors, are characterised by multi-million-scale parallel computing activities and a highly heterogeneous mix of processor types such as high-end many-core processors, Graphics Processing Units (GPUs), machine learning accelerators, and various accelerators for compression, encryption and in-network processing. To make efficient use of these systems, scientific simulation software must be decomposed into various independent components and make simultaneous use of the variety of heterogeneous compute units.

Developing efficient, scalable scientific simulation software for these systems becomes increasingly harder as the limits of parallelism available in the simulation codes are approached. Moreover, the limits of parallelism cannot be reached in practice due to heterogeneity, system imbalances and synchronisation overheads. Scientific simulation software often persists over several decades. The software is optimised and re-optimised repeatedly as the design and scale of the target hardware evolve at a much faster pace, with impactful changes in the hardware occurring every few years. One may thus find that the guiding principles that underpin such software are outdated.

The ASCCED project will fundamentally change the status quo in the design of scientific simulation software by simplifying the design to reduce software development and maintenance effort, to facilitate performance optimisation, and to make software more robust to future evolution of computing hardware. The key distinguishing factor of our approach is to structure scientific simulation software as a collection of loosely coupled parallel activities. We will explore the opportunities and challenges of applying techniques previously developed for Parallel Discrete Event Simulation (PDES) to orchestrate these loosely coupled parallel activities. This radically novel approach will enable runtime system software to extract unprecedented scales of parallelism and to minimise performance inefficiencies due to synchronisation. Additionally, based on a speculative execution mechanism, it will uncover parallelism that has not been feasible to extract before.
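As a conceptual illustration only (not the ASCCED runtime or its programming model), the sketch below orchestrates loosely coupled activities as timestamped events in a discrete-event style, so each activity advances its own local time without a global barrier. All names and the toy activity are hypothetical.

```python
# Conceptual sketch: timestamped events drive loosely coupled activities.
import heapq
from typing import Callable

class EventQueue:
    def __init__(self):
        self._q = []
        self._counter = 0  # tie-breaker for events with equal timestamps

    def schedule(self, time: float, activity: Callable[["EventQueue", float], None]):
        heapq.heappush(self._q, (time, self._counter, activity))
        self._counter += 1

    def run(self, until: float):
        # Process events in timestamp order; no global barrier between activities.
        while self._q and self._q[0][0] <= until:
            time, _, activity = heapq.heappop(self._q)
            activity(self, time)  # an activity may schedule further events

def stencil_step(q: EventQueue, t: float):
    # Placeholder for one loosely coupled simulation activity.
    print(f"stencil update at local time t={t:.1f}")
    q.schedule(t + 1.0, stencil_step)

q = EventQueue()
q.schedule(0.0, stencil_step)
q.run(until=3.0)
```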

The computational model proposed by ASCCED will, if successful, initiate a new direction of research within programming models for high-performance computing that may not only dramatically improve the performance of scientific simulation software, but also reduce the engineering effort required to produce it. It will have a profound impact on the sciences that are highly dependent on leadership computing capabilities, such as climate modelling and cancer research.

ROMA: Run-Time Object Detection To Maximize Real-Time Accuracy

ROMA is follow-up work to TOD, which selects one of multiple deep neural networks (DNNs) to perform real-time video analytics (object detection) on low-end devices, e.g., on the camera itself. TOD uses the median object size to determine which of four YOLO DNNs will best meet the real-time requirement, taking into account the size and speed of the objects. TOD requires specific knowledge of the device to select appropriate thresholds on the median object sizes, and needs to be retuned for each computing device.

ROMA removes this limitation by performing a more detailed analysis of the image content. In particular, ROMA separately estimates the impact of object size and of object speed on the performance of the selected DNN. Its formulation is sufficiently flexible to adapt to changes in the computational power of the device, so it does not need to be retrained when migrating across hardware. Moreover, this allows ROMA to adapt to runtime changes in computational power, which may arise from power management features on the device or from other workloads that share the device. ROMA does, however, have hyper-parameters that depend on the YOLO DNNs.
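The following is a hypothetical sketch of this kind of run-time network selection. The candidate models, the scoring function and its parameters are invented for illustration; they are not ROMA's published formulation.

```python
# Hypothetical run-time DNN selection sketch in the spirit of ROMA.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    input_size: int   # network input resolution (pixels)
    rel_cost: float   # relative computational cost of one inference

def score(c: Candidate, median_size: float, median_speed: float,
          device_throughput: float) -> float:
    # Separate (assumed) estimates of how object size and object speed
    # affect this candidate's usefulness on the current device.
    size_term = min(1.0, median_size / 1000.0) * (c.input_size / 416.0)
    speed_penalty = median_speed * c.rel_cost / device_throughput
    return size_term - speed_penalty

def select(candidates, median_size, median_speed, device_throughput):
    # Device throughput is an input, so no device-specific retuning is needed
    # and the choice can track runtime changes in available compute power.
    return max(candidates, key=lambda c: score(c, median_size, median_speed,
                                               device_throughput))

nets = [Candidate("yolov4-tiny-288", 288, 1.0),
        Candidate("yolov4-tiny-416", 416, 2.0),
        Candidate("yolov4-416", 416, 8.0)]
print(select(nets, median_size=450.0, median_speed=3.0, device_throughput=10.0).name)
```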

ROMA will be presented at the 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV).

You can read the post-print on arXiv.

TOD: Transprecise Object Detection

Real-time video analytics on the edge is challenging as the computationally constrained resources typically cannot analyse video streams at full fidelity and frame rate, which results in loss of accuracy. We propose a Transprecise Object Detector (TOD) which maximises the real-time object detection accuracy on an edge device by selecting an appropriate Deep Neural Network (DNN) on the fly with negligible computational overhead.

TOD makes two key contributions over the state of the art: (1) TOD leverages characteristics of the video stream such as object size and speed of movement to identify networks with high prediction accuracy for the current frames; (2) it selects the best-performing network based on projected accuracy and computational demand using an effective and low-overhead decision mechanism.
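To illustrate contribution (1), the sketch below shows how the stream features TOD relies on, median object size and object speed, might be estimated from detections, and how a threshold-based choice between YOLO variants could look. The thresholds and selection logic are illustrative assumptions, not TOD's tuned values.

```python
# Illustrative feature extraction and threshold-based network choice.
import numpy as np

def median_object_size(boxes: np.ndarray) -> float:
    """boxes: (N, 4) array of [x1, y1, x2, y2] detections in one frame."""
    if len(boxes) == 0:
        return 0.0
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return float(np.median(areas))

def median_object_speed(prev_centres: np.ndarray, centres: np.ndarray) -> float:
    """Median displacement of matched object centres between consecutive frames."""
    n = min(len(prev_centres), len(centres))
    if n == 0:
        return 0.0
    return float(np.median(np.linalg.norm(centres[:n] - prev_centres[:n], axis=1)))

def pick_network(size: float, speed: float) -> str:
    # Hypothetical thresholds: large objects tolerate a small input resolution,
    # fast motion favours low-latency models.
    fast = speed > 5.0
    large = size > 3000.0
    if large and fast:
        return "yolov4-tiny-288"
    if large:
        return "yolov4-tiny-416"
    if fast:
        return "yolov4-320"
    return "yolov4-416"
```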

Experimental evaluation on a Jetson Nano demonstrates that TOD improves average object detection precision by 34.7% over the YOLOv4-tiny-288 model on the MOT17Det dataset. On the MOT17-05 test dataset, TOD utilises only 45.1% of the GPU resource and 62.7% of the GPU board power without losing accuracy, compared to the YOLOv4-416 model. We expect that TOD will broaden the use of edge devices for real-time object detection, since it maximises real-time detection accuracy on a given edge device by adapting to dynamic input features without increasing inference latency in practice.

TOD was presented at the 5th International Conference on Fog and Edge Computing (ICFEC). Read the paper on arXiv.

RAPID: ReAl-time Process ModellIng and Diagnostics: Powering Digital Factories

This project aims to develop algorithms for real-time processing and analytics in digital factories. A particular use case is the manufacturing of hardware circuits, where silicon can easily be damaged during production. The defects that arise negatively affect the yield and/or quality of the devices. Uncovering these defects, and how they may be mitigated through tunable process parameters, is a demanding process, especially considering the high production rate and the voluminous metrics that are collected.

The project considers the design of sketching algorithms, transprecise computing and their efficient implementation on modern high-throughput hardware such as graphics processing units.
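As one concrete example of a sketching algorithm (not RAPID's own implementation, which targets GPUs), the following Count-Min Sketch keeps approximate counts of items in a high-rate stream using sub-linear memory. The example metric names are hypothetical.

```python
# Minimal Count-Min Sketch: approximate frequency counts with bounded memory.
import random

class CountMinSketch:
    def __init__(self, width: int = 2048, depth: int = 4, seed: int = 42):
        rng = random.Random(seed)
        self.width, self.depth = width, depth
        self.seeds = [rng.getrandbits(32) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, item, row: int) -> int:
        return hash((self.seeds[row], item)) % self.width

    def add(self, item, count: int = 1):
        for row in range(self.depth):
            self.table[row][self._index(item, row)] += count

    def estimate(self, item) -> int:
        # The sketch only over-estimates, so take the minimum across rows.
        return min(self.table[row][self._index(item, row)]
                   for row in range(self.depth))

cms = CountMinSketch()
for metric in ["vdd_droop", "vdd_droop", "timing_slack"]:  # hypothetical metrics
    cms.add(metric)
print(cms.estimate("vdd_droop"))   # at least 2
```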

RAPID is sponsored by EPSRC.


Accelerating Scientific Discovery Using Domain Adaptive Language Modeling

Scientific corpora, such as papers and patents, are a great source of information. Incorporating this information into scientific discovery pipelines is a major challenge; doing so could reduce discovery costs and speed up the process. Motivated by this, and leveraging recent advances in Natural Language Processing (NLP), we develop domain-adaptive NLP methods that capture the scientific domain and its specific characteristics and facilitate the tasks needed in the discovery process.
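A minimal sketch of one such method, domain-adaptive pretraining, is shown below: a general-purpose masked language model is further trained on an in-domain corpus with the Hugging Face transformers library. The base model, corpus file and hyper-parameters are illustrative assumptions, not the project's actual setup.

```python
# Sketch of continued masked-language-model pretraining on a scientific corpus.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical corpus: one abstract or patent claim per line.
corpus = load_dataset("text", data_files={"train": "scientific_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-adapted-lm", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()
```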

Iterative Approximate Analysis Of Graph-Structured Data For Precision Medicine

This project aims to develop novel algorithms for the maximum weighted clique (MWC) problem, which appears in various data analysis pipelines in precision medicine. The MWC problem is NP-hard, which makes it particularly challenging given the exponentially increasing amount of data it is applied to.

Although several attempts have been made to solve the maximum weighted clique problem in large graphs, there is still much opportunity for lowering the execution time necessary to find a satisfactory solution. In this project, we are investigating approximate algorithms for the MWC problem in particular. We are working towards an algorithm that achieves a high-quality solution (i.e., finds a clique with weight very close to that of the MWC) in polynomial time.
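For illustration, the snippet below shows a simple greedy heuristic for the MWC problem; it runs in polynomial time but offers no quality guarantee and is not the approximation algorithm being developed in this project.

```python
# Greedy heuristic: repeatedly add the heaviest vertex adjacent to the whole clique.
def greedy_weighted_clique(adj: dict, weight: dict) -> set:
    """adj: vertex -> set of neighbours (undirected); weight: vertex -> positive weight."""
    clique = set()
    candidates = set(adj)
    while candidates:
        v = max(candidates, key=weight.get)           # heaviest remaining candidate
        clique.add(v)
        candidates = candidates & adj[v]              # keep only common neighbours
    return clique

# Tiny example graph: cliques {a, b, c} and {a, d} both have weight 9.
adj = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}
weight = {"a": 4, "b": 3, "c": 2, "d": 5}
print(greedy_weighted_clique(adj, weight))
```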

Project Members

IBM will provide industrially relevant context on knowledge extraction from graph-structured data. They have extensive experience in this area, having built scalable software systems for the analysis of massive-scale graph data. They will moreover provide access to relevant datasets.

Funding

This PhD project is funded by the European Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie Actions.

SoftNum: Software-Defined Number Formats

Computing devices implement computer arithmetic as basic functionality, and they implement the same, standardised number formats in order to support software portability. However, with Moore's Law ending, we question whether applying the same standardised number formats to all applications remains the best approach to achieve high performance and low energy consumption. We explore how to make number formats, generally considered to be hard-wired functionality, software-defined. Software-defined number formats promise high performance and low energy consumption while ensuring sufficient, but not excessive, precision.
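As a rough illustration of the idea (not SoftNum's actual design), the sketch below emulates a custom floating-point format in software by rounding the mantissa and clamping the exponent to chosen bit widths.

```python
# Emulate a custom float format with configurable mantissa and exponent widths.
import math

def quantize(x: float, mantissa_bits: int, exponent_bits: int) -> float:
    """Round x to a hypothetical software-defined float format."""
    if x == 0.0 or not math.isfinite(x):
        return x
    m, e = math.frexp(x)              # x = m * 2**e, with 0.5 <= |m| < 1
    scale = 1 << mantissa_bits        # round the mantissa to the given width
    m = round(m * scale) / scale
    e_max = 1 << (exponent_bits - 1)  # clamp the exponent (a real format would
    e = max(-e_max, min(e_max, e))    # saturate or overflow instead)
    return math.ldexp(m, e)

print(quantize(math.pi, mantissa_bits=7, exponent_bits=5))   # ~3.156
```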

SoftNum is Amir Sabbagh Molahosseini's individual Marie Sklodowska-Curie Fellowship, sponsored by the European Commission.

DiPET: Distributed Stream Processing on Fog and Edge via Transprecise Computing


The DiPET project investigates models and techniques that enable distributed stream processing applications to seamlessly span and redistribute across fog and edge computing systems.

The goal is to utilize devices dispersed through the network that are geographically closer to users to reduce network latency and to increase the available network bandwidth. However, the network that user devices are connected to is dynamic. For example, mobile devices connect to different base stations as they roam, and fog devices may be intermittently unavailable for computing.

In order to maximally leverage the heterogeneous compute and network resources present in these dynamic networks, the DiPET project pursues a bold approach based on transprecise computing.

Transprecise computing states that computation need not always be exact and proposes a disciplined trade-off of precision against accuracy, which impacts on computational effort, energy efficiency, memory usage and communication bandwidth and latency. Transprecise computing thus allows the precision of computation to be adapted dynamically depending on the context and available resources.
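A minimal sketch of what such dynamic precision adaptation could look like is given below; the precision ladder, the bandwidth thresholds and the placeholder analytics kernel are assumptions for illustration, not DiPET's middleware.

```python
# Pick a working precision from an (assumed) resource measurement, then run a
# placeholder stream-processing kernel at that precision.
import numpy as np

def choose_dtype(available_bandwidth_mbps: float) -> type:
    # Hypothetical thresholds: lower available bandwidth -> lower precision.
    if available_bandwidth_mbps > 100:
        return np.float64
    if available_bandwidth_mbps > 20:
        return np.float32
    return np.float16

def process_frame(frame: np.ndarray, bandwidth: float) -> np.ndarray:
    dtype = choose_dtype(bandwidth)
    x = frame.astype(dtype)                      # down-convert before processing
    return (x - x.mean()) / (x.std() + 1e-6)     # placeholder analytics kernel

out = process_frame(np.random.rand(720, 1280), bandwidth=15.0)
print(out.dtype)   # float16 under the assumed thresholds
```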

This adds new dimensions to the problem of scheduling distributed stream applications in fog and edge computing environments and will lead to schedules with superior performance, energy efficiency and user experience.

The DiPET project will demonstrate the feasibility of this unique approach by developing a transprecise stream processing application framework and transprecision-aware middleware. Use cases in video analytics and network intrusion detection will guide the research and underpin technology demonstrators.

Please refer to the website for the details of the project: https://dipet.eeecs.qub.ac.uk

This project is sponsored by CHIST-ERA and EPSRC.