Sustainable Wearable EdgE inTelligence (SWEET)

The SWEET project will investigate the efficient deployment and sustainability of wearable sensors, in particular for health analytics. Real-time remote monitoring of physiological indicators can support early detection of and intervention in heart disease, and save lives. These services, however, require wearable technologies with strong predictive abilities, as well as fast networks and servers to extract insights from the collected data. Unfortunately, these technology components are often inaccessible to hundreds of millions of people, specifically those who live in areas with limited broadband connectivity and limited means to invest in local computing and communication infrastructure.

The project will focus on three components: (i) energy-efficient wearable hardware accelerators using custom instruction set acceleration, (ii) distributed scheduling and machine learning model serving that account for the performance variability of systems and networks, and (iii) technologies for efficient and portable deployment of web services and approximate key caching.

The project will support one PhD student and one post-doctoral researcher in our group.

The project is a collaboration between Deepu John (University College Dublin), Dimitrios S. Nikolopoulos (Virginia Tech), Bo Ji (Virginia Tech) and ourselves in DIPSA (Queen’s University Belfast), and is funded through the tripartite US-Ireland funding scheme.

We gratefully acknowledge the support of the Department for the Economy, NI (contracts to be finalised).

ROMA: Run-Time Object Detection To Maximize Real-Time Accuracy

ROMA is follow-up work on TOD, which selects one of multiple deep neural networks (DNNs) to perform real-time video analytics (object detection) on low-end devices, e.g., on the camera itself. TOD uses the median object size to determine which of four YOLO DNNs will best meet the real-time requirement, with respect to the size and speed of the objects. However, TOD requires device-specific knowledge to select appropriate thresholds on the median object size, and it needs to be retuned for each computing device.

ROMA removes this limitation by performing a more detailed analysis of the image content. In particular, ROMA estimates the impact of the selected DNN on object size and on object speed separately. Its formulation is sufficiently flexible to adapt to changes in the computational power of the device, so it does not need to be retrained when migrating across hardware. Moreover, ROMA can adapt to runtime changes in computational power, which may arise from power management features on the device or from other workloads sharing the device. ROMA does, however, have hyper-parameters that depend on the YOLO DNNs.
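As an illustration of this kind of runtime adaptation (a minimal sketch under assumed model names, frame budget and smoothing factor, not ROMA's actual formulation), the snippet below tracks the observed inference latency of each candidate YOLO variant and restricts the choice to variants that still fit the real-time frame budget on whatever hardware the selector happens to run on.

```python
# Minimal sketch of hardware-agnostic runtime adaptation (not ROMA's actual
# formulation): per-model latency is measured online, so the selector needs no
# device-specific thresholds and follows changes in available compute power.

FRAME_BUDGET_S = 1.0 / 30.0   # hypothetical real-time budget (30 FPS)
ALPHA = 0.2                   # smoothing factor for the latency estimate

# Hypothetical candidate models, ordered from cheapest to most accurate.
MODELS = ["yolov4-tiny-288", "yolov4-tiny-416", "yolov4-288", "yolov4-416"]

latency_ewma = {m: None for m in MODELS}

def update_latency(model: str, observed_s: float) -> None:
    """Fold a new latency measurement into the running estimate."""
    prev = latency_ewma[model]
    latency_ewma[model] = observed_s if prev is None else (
        ALPHA * observed_s + (1.0 - ALPHA) * prev)

def feasible_models() -> list[str]:
    """Models currently expected to meet the frame budget on this device."""
    return [m for m in MODELS
            if latency_ewma[m] is None or latency_ewma[m] <= FRAME_BUDGET_S]

def select_model(projected_accuracy: dict[str, float]) -> str:
    """Pick the feasible model with the highest projected accuracy."""
    candidates = feasible_models() or [MODELS[0]]   # fall back to the cheapest
    return max(candidates, key=lambda m: projected_accuracy.get(m, 0.0))
```

Because the latency estimates are refreshed continuously, such a selector drifts towards cheaper models when the device is throttled or shared, without any per-device retuning.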

ROMA will be presented at the 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV).

You can read the post-print on arXiv.

TOD: Transprecise Object Detection

Real-time video analytics on the edge is challenging because computationally constrained devices typically cannot analyse video streams at full fidelity and frame rate, which results in a loss of accuracy. We propose the Transprecise Object Detector (TOD), which maximises real-time object detection accuracy on an edge device by selecting an appropriate Deep Neural Network (DNN) on the fly, with negligible computational overhead.

TOD makes two key contributions over the state of the art: (1) TOD leverages characteristics of the video stream such as object size and speed of movement to identify networks with high prediction accuracy for the current frames; (2) it selects the best-performing network based on projected accuracy and computational demand using an effective and low-overhead decision mechanism.
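The paper describes the actual decision mechanism; the sketch below merely illustrates the flavour of contribution (1) under assumed inputs. It derives the two stream characteristics TOD exploits, median object size and object speed, from the detections of consecutive frames; the detection layout and helper names are hypothetical, and detections are paired by index rather than tracked.

```python
import math
from statistics import median

# Hypothetical detection format: (x_center, y_center, width, height) in pixels.
Detection = tuple[float, float, float, float]

def median_object_size(detections: list[Detection]) -> float:
    """Median bounding-box area of the objects in one frame (pixels^2)."""
    if not detections:
        return 0.0
    return median(w * h for (_, _, w, h) in detections)

def average_object_speed(prev: list[Detection],
                         curr: list[Detection]) -> float:
    """Average centre displacement between two consecutive frames (pixels)."""
    dists = [math.hypot(cx2 - cx1, cy2 - cy1)
             for (cx1, cy1, _, _), (cx2, cy2, _, _) in zip(prev, curr)]
    return sum(dists) / len(dists) if dists else 0.0
```

Roughly speaking, small objects favour a larger, higher-resolution network, while fast-moving objects favour a faster one whose detections do not become stale; the decision mechanism balances these pressures against the real-time deadline.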

Experimental evaluation on a Jetson Nano demonstrates that TOD improves average object detection precision by 34.7 % over the YOLOv4-tiny-288 model on the MOT17Det dataset. On the MOT17-05 test dataset, TOD uses only 45.1 % of the GPU resources and 62.7 % of the GPU board power of the YOLOv4-416 model without losing accuracy. We expect TOD to broaden the use of edge devices for real-time object detection, since it maximises real-time detection accuracy on a given device by adapting to dynamic input features without increasing inference latency in practice.

TOD was presented at the 5th International Conference on Fog and Edge Computing (ICFEC). Read the paper on arXiv.

SoftNum: Software-Defined Number Formats

Computing devices implement computer arithmetic as basic functionality, and they all implement the same standardized number formats in order to support software portability. However, with Moore's Law ending, we question whether applying the same standardized number formats to all applications remains the best approach to achieving high performance and low energy consumption. We explore how to make number formats, generally considered to be hard-wired functionality, software-defined. Software-defined number formats promise high performance and low energy consumption while providing sufficient, but not excessive, precision.
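As a toy illustration of what "software-defined" means here (a sketch of the general idea, not SoftNum's actual design), the snippet below quantises values to a fixed-point format whose word length and number of fractional bits are chosen in software, turning the precision/range/storage trade-off into an explicit application-level parameter.

```python
def to_fixed(x: float, word_bits: int, frac_bits: int) -> int:
    """Quantise x to a signed fixed-point integer with the given layout."""
    scaled = round(x * (1 << frac_bits))
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    return max(lo, min(hi, scaled))          # saturate on overflow

def from_fixed(q: int, frac_bits: int) -> float:
    """Convert the fixed-point representation back to a float."""
    return q / (1 << frac_bits)

# The same value stored in two software-defined formats.
x = 3.14159
for word_bits, frac_bits in [(8, 4), (16, 12)]:
    q = to_fixed(x, word_bits, frac_bits)
    err = abs(x - from_fixed(q, frac_bits))
    print(f"Q{word_bits - frac_bits}.{frac_bits}: stored={q}, error={err:.6f}")
```

An application that only needs coarse values can shrink the word length and save memory, bandwidth and energy, while one that needs tight error bounds can widen the fraction, without being tied to the hardware's native formats.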

SoftNum is Amir's individual Marie Skłodowska-Curie Fellowship, funded by the European Commission.

DiPET: Distributed Stream Processing on Fog and Edge via Transprecise Computing


The DiPET project investigates models and techniques that enable distributed stream processing applications to seamlessly span and redistribute across fog and edge computing systems.

The goal is to utilise devices dispersed throughout the network that are geographically closer to users, in order to reduce network latency and increase the available network bandwidth. However, the network to which user devices are connected is dynamic. For example, mobile devices connect to different base stations as they roam, and fog devices may be intermittently unavailable for computing.

In order to maximally leverage the heterogeneous compute and network resources present in these dynamic networks, the DiPET project pursues a bold approach based on transprecise computing.

Transprecise computing holds that computation need not always be exact, and proposes a disciplined trade-off of precision against accuracy, which impacts computational effort, energy efficiency, memory usage, and communication bandwidth and latency. Transprecise computing allows the precision of computation to be adapted dynamically depending on the context and the available resources.
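As a minimal sketch of this idea (assumed numbers and a made-up operator, not DiPET's framework or middleware), the snippet below lets a stream operator lower its working precision when resources are scarce, trading a small approximation error for half the memory footprint and communication volume.

```python
import numpy as np

def summarise(samples: np.ndarray, low_resources: bool) -> tuple[float, int]:
    """Aggregate a window of stream samples at a precision chosen at runtime.

    Returns the mean of the window and the number of bytes that would be
    forwarded downstream if the (possibly down-converted) window were sent.
    """
    # Under resource pressure, fall back to half precision: half the memory
    # and bandwidth, at the cost of a bounded loss of precision.
    working = samples.astype(np.float16 if low_resources else np.float32)
    return float(working.mean()), working.nbytes

rng = np.random.default_rng(seed=0)
window = rng.normal(loc=10.0, scale=2.0, size=4096).astype(np.float32)

exact, full_bytes = summarise(window, low_resources=False)
approx, half_bytes = summarise(window, low_resources=True)
print(f"float32 mean={exact:.4f} ({full_bytes} B), "
      f"float16 mean={approx:.4f} ({half_bytes} B)")
```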

This ability to vary precision at run time adds new dimensions to the problem of scheduling distributed stream applications in fog and edge computing environments, and will lead to schedules with superior performance, energy efficiency and user experience.

The DiPET project will demonstrate the feasibility of this unique approach by developing a transprecise stream processing application framework and transprecision-aware middleware. Use cases in video analytics and network intrusion detection will guide the research and underpin technology demonstrators.

Please refer to the website for the details of the project: https://dipet.eeecs.qub.ac.uk

This project is sponsored by CHIST-ERA and EPSRC.