The DIPSA group at Queen’s, led by Hans Vandierendonck, studies and designs high-performance and parallel solutions for data-intensive applications. Our work spans computing systems (runtime systems and computer architecture) and the design and optimisation of algorithms for a variety of applications in scientific computing and machine learning. Some branches of our activity also consider applications of machine learning.
– High-Performance Graph Processing studies the design and implementation of efficient algorithms that operate on graph-structured data. As data set sizes continue to grow at exponential rates, so does the demand to analyse large graph-structured data sets efficiently. Graph data sets, however, have a number of characteristics that introduce major challenges for efficient processing: the irregular pattern of interconnections in the graph, the skewness of their degree distributions, and their sheer size. While some graph analyses exhibit such poor memory locality that they are ill-suited to modern computing architectures (a minimal sketch of this problem follows this list), other analyses are prone to combinatorial explosion. Our interest is to design efficient computing systems and algorithms for these problems.
– Transprecise and Approximate Computing explores the view that computing need not be, and cannot always be, exact. Many applications admit a degree of error that does not reduce the quality of experience. For instance, digital video playback may incur visual artefacts such as pixelation that is confined to a narrow area and appears only briefly; this often goes unnoticed. In other cases, such as scientific computing, computations aim to mimic or predict physical processes, yet numeric discretisation methods and computer arithmetic introduce inevitable inaccuracies. Absolute exactness in computing is therefore neither achievable nor required. The goal of transprecise and approximate computing is to design systems and algorithms that judiciously trade off precision in order to improve other aspects of the system, such as speed, power or energy consumption, or the ease of constructing applications (a minimal precision-versus-cost sketch follows this list).
– Machine Learning grew out of our work on transprecise computing and graph algorithms, and aims to provide solutions for novel machine learning tasks.
– Distributed algorithms and consensus
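To make the locality challenge in graph processing concrete, below is a minimal illustrative sketch (not DIPSA or LaganLighter code; all names are hypothetical) of one label-propagation step over a graph stored in the common Compressed Sparse Row (CSR) layout. The adjacency lists are read sequentially, but the per-vertex values they point to are scattered across memory, so on large, skewed graphs each such access behaves like a random cache miss.

```cpp
// Illustrative sketch: one step of a connected-components-style label
// propagation over a CSR graph. The neighbour array is read sequentially
// (good locality), but label[u] is a data-dependent, irregular read.
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

struct CSRGraph {
    std::vector<uint64_t> offsets;     // offsets[v]..offsets[v+1] index into neighbours
    std::vector<uint32_t> neighbours;  // concatenated adjacency lists
};

// Propagate the minimum label from each vertex's neighbours (one iteration).
void propagate_min_label(const CSRGraph& g, std::vector<uint32_t>& label) {
    const std::size_t n = g.offsets.size() - 1;
    for (std::size_t v = 0; v < n; ++v) {
        for (uint64_t e = g.offsets[v]; e < g.offsets[v + 1]; ++e) {
            uint32_t u = g.neighbours[e];  // sequential read of the edge array
            if (label[u] < label[v])       // irregular read of per-vertex data
                label[v] = label[u];
        }
    }
}

int main() {
    // Tiny example graph: edges 0-1, 0-2, 1-2, 3-4 (both directions stored).
    CSRGraph g{{0, 2, 4, 6, 7, 8}, {1, 2, 0, 2, 0, 1, 4, 3}};
    std::vector<uint32_t> label{0, 1, 2, 3, 4};
    propagate_min_label(g, label);
    for (uint32_t l : label) std::cout << l << ' ';  // prints: 0 0 0 3 3
    std::cout << '\n';
}
```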
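Similarly, the following illustrative sketch (again, not a DIPSA tool) shows the precision-versus-cost trade-off behind transprecise computing: the same series accumulated in single and double precision. The single-precision result carries visible rounding error, but single-precision data halves memory traffic and doubles the number of SIMD lanes per register, which is exactly the kind of trade an approximate-computing system may choose to accept.

```cpp
// Illustrative sketch: accumulate the harmonic series in float and double.
// The float accumulator loses accuracy as small terms are rounded away,
// while the double accumulator stays far more accurate.
#include <cstdio>

int main() {
    const int n = 10'000'000;
    float  sum_f = 0.0f;
    double sum_d = 0.0;
    for (int i = 1; i <= n; ++i) {
        sum_f += 1.0f / static_cast<float>(i);   // 24-bit significand: error accumulates
        sum_d += 1.0  / static_cast<double>(i);  // 53-bit significand: much more accurate
    }
    std::printf("float : %.8f\n", sum_f);
    std::printf("double: %.8f\n", sum_d);
    std::printf("error of float result: %.8f\n",
                static_cast<double>(sum_f) - sum_d);
}
```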
News
- Random Vertex Relabelling in LaganLighter
- Minimum Spanning Forest of MS-BioGraphs
- Topology-Based Thread Affinity Setting (Thread Pinning) in OpenMP
- An (Incomplete) List of Publicly Available Graph Datasets/Generators
- Research Fellow Position in Kelvin Living Lab for Sustainability
- QClique: Optimizing Performance and Accuracy in Maximum Weighted Clique – Euro-Par 2024
- Selective Parallel Loading of Large-Scale Compressed Graphs with ParaGrapher – arXiv Version
- An Evaluation of Bandwidth of Different Storage Types (HDD vs. SSD vs. LustreFS) for Different Block Sizes and Different Parallel Read Methods (mmap vs pread vs read)
- MS-BioGraphs on IEEE DataPort
- Brian Dandurand is offered a Marie Curie Individual Fellowship