21 February 2024 Abstract Numerical software is being reinvented to provide opportunities to dynamically tune the accuracy of computation to the requirements of the application, resulting in savings of memory, time, and energy. Floating-point computation in science and engineering has a history of “oversolving” relative to expectations for many models. So often are real datatypes defaulted to double […]
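The memory saving behind reduced-precision computing can be sketched with Python's standard `struct` module (an illustration of single versus double precision, not code from the talk; the array size is a made-up example):

```python
import struct

# A double-precision float occupies 8 bytes; single precision needs only 4.
# Halving the width of a large array therefore halves its memory footprint
# and the bandwidth needed to move it.
double_size = struct.calcsize("d")   # 8 bytes per IEEE 754 binary64 value
single_size = struct.calcsize("f")   # 4 bytes per IEEE 754 binary32 value

n = 1_000_000  # elements in a hypothetical solution vector
print(f"double: {n * double_size} bytes, single: {n * single_size} bytes")

# The cost: fewer significand bits, so lower precision.
x = 1.0 + 2**-30
roundtrip = struct.unpack("f", struct.pack("f", x))[0]
print(roundtrip == x)  # False: the 2**-30 term is rounded away in binary32
```

Whether that lost precision matters is exactly the application-dependent question the abstract raises.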
9 May 2025 Abstract As computing power demands continue to grow, achieving energy efficiency in high-performance systems has become a key challenge. One of the most promising software techniques for energy efficiency is Dynamic Voltage and Frequency Scaling (DVFS), which optimizes the energy-performance trade-off by changing hardware frequencies. This presentation […]
28 August 2025 Abstract Constraint programming is a declarative way of solving hard combinatorial, scheduling, resource allocation, and logistics problems. We specify a problem in a high-level language, give it to a solver, and the solver thinks for a while and then gives us the optimal answer. Unfortunately, even the […]
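The declarative workflow the abstract describes (state the problem, let a solver search) can be sketched in plain Python, with naive brute-force enumeration standing in for a real constraint solver; the jobs, machines, and costs below are made up for illustration:

```python
from itertools import permutations

# Declarative spirit of constraint programming: we state WHAT a valid
# assignment is and WHAT to minimise, and let a search procedure find the
# optimum. Real solvers prune the search space far more cleverly than this.

cost = {  # cost[job][machine]: hypothetical running costs
    "A": {"m1": 4, "m2": 2, "m3": 8},
    "B": {"m1": 4, "m2": 3, "m3": 7},
    "C": {"m1": 3, "m2": 1, "m3": 6},
}
jobs, machines = list(cost), ["m1", "m2", "m3"]

def total(assign):
    """Objective: total cost of an assignment {job: machine}."""
    return sum(cost[j][m] for j, m in assign.items())

# Constraint: each job runs on exactly one machine, each machine is used
# at most once -- i.e. the assignment is a permutation of the machines.
best = min(
    (dict(zip(jobs, perm)) for perm in permutations(machines)),
    key=total,
)
print(best, total(best))  # optimal total cost is 12
```

Brute force is fine for 3! candidates; the abstract's point is that for hard instances even sophisticated solvers can struggle.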
Date: 6th March 2023 Location: CSB Training Room 02.017 Parallel computing is a key technology supporting high-performance computing (HPC) and data analytics. The goal of this module is to provide an overview of parallel computing and introduce attendees to prevailing programming models. The expected outcome of this module is that participants […]
17 June 2021 Abstract: Large graphs are behind many problems in today’s computing landscape. The growing sizes of such graphs, reaching 70 trillion edges recently, require unprecedented amounts of compute power, storage, and energy. In this talk, we illustrate how to effectively process such extreme-scale graphs. We will first discuss Slim Graph, the first […]
3 June 2021 Abstract: The emergence of big data in recent years due to the vast societal digitalization and large-scale sensor deployment has entailed significant interest in machine learning methods to enable automatic data analytics. In a majority of the learning algorithms used in industrial as well as academic settings, the first-order iterative […]
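A minimal example of the first-order iterative methods the abstract refers to is plain gradient descent; the objective f(x) = (x − 3)² and the step size below are illustrative assumptions, not taken from the talk:

```python
# Gradient descent on f(x) = (x - 3)^2. Only the first derivative is used,
# which is what makes the method "first-order".

def grad(x):
    return 2.0 * (x - 3.0)  # f'(x)

x, step = 0.0, 0.1
for _ in range(100):
    x -= step * grad(x)     # move against the gradient, downhill

print(round(x, 6))  # converges toward the minimiser x = 3
```

The same update rule, with the gradient estimated from mini-batches of data, is the core of the stochastic variants used throughout machine learning.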
Abstract Graph algorithms are widely used in several application domains. It has been established that parallelizing graph algorithms is challenging. The parallelization issues get exacerbated when a graphics processing unit (GPU) is used to execute graph algorithms. In particular, three important GPU-specific aspects affect performance: memory coalescing, memory latency, and thread […]
Dr Aydın Buluç 29 April 2021 Solving systems of linear equations has traditionally driven the research in sparse matrix computation for decades. Direct and iterative solvers, together with finite element computations, still account for the primary use case for sparse matrix data structures and algorithms. These sparse “solvers” often serve as the workhorse of many algorithms in spectral graph theory […]
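The sparse data structures the abstract refers to can be pictured with a minimal Compressed Sparse Row (CSR) matrix-vector multiply; this is a generic textbook sketch, not code from the talk:

```python
# CSR stores only the nonzeros of a matrix, plus per-row offsets into the
# value array -- the standard layout behind many sparse solvers.

# The 3x3 matrix  [[2, 0, 1],
#                  [0, 3, 0],
#                  [4, 0, 5]]  in CSR form:
values  = [2.0, 1.0, 3.0, 4.0, 5.0]   # nonzero entries, row by row
col_idx = [0,   2,   1,   0,   2]     # column index of each nonzero
row_ptr = [0, 2, 3, 5]                # row i spans values[row_ptr[i]:row_ptr[i+1]]

def spmv(values, col_idx, row_ptr, x):
    """Sparse matrix-vector product y = A @ x."""
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

print(spmv(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

This same SpMV kernel is the inner loop of iterative solvers and of many spectral graph computations, which is why the two communities share data structures.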
Dr Jeremy Singer, University of Glasgow 25 February 2021 Abstract: Love ’em or hate ’em, interactive computational notebooks are here to stay as a mainstream code development medium. In particular, the Jupyter system is widely used by the data science community. This presentation explores some use cases for programmatic introspection of a Jupyter notebook […]
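Programmatic introspection is possible because a Jupyter notebook (`.ipynb` file) is just a JSON document; a minimal sketch, with a tiny made-up notebook inlined instead of read from disk:

```python
import json

# A Jupyter notebook is JSON: a list of cells, each tagged with a type.
nb_json = json.dumps({
    "nbformat": 4,
    "cells": [
        {"cell_type": "markdown", "source": ["# Analysis"]},
        {"cell_type": "code", "source": ["import pandas as pd"]},
        {"cell_type": "code", "source": ["pd.read_csv('data.csv').head()"]},
    ],
})

nb = json.loads(nb_json)
code_cells = [c for c in nb["cells"] if c["cell_type"] == "code"]
print(f"{len(code_cells)} code cells of {len(nb['cells'])} total")

# e.g. collect import statements across all code cells
imports = [s for c in code_cells for s in c["source"] if s.startswith("import")]
print(imports)  # ['import pandas as pd']
```

The same traversal generalises to dependency analysis, linting, or cell-ordering checks over real notebooks.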
Dr Giorgis Georgakoudis, Lawrence Livermore National Laboratory 18 February 2021 Abstract: This talk will present an overview of research on open problems in several areas of HPC. On fault tolerance, Giorgis will present Reinit, a solution for fault tolerance in large-scale MPI applications. Reinit improves the recovery time of checkpointed […]