{"id":3563,"date":"2025-09-15T16:24:39","date_gmt":"2025-09-15T15:24:39","guid":{"rendered":"https:\/\/blogs.qub.ac.uk\/dipsa\/?p=3563"},"modified":"2025-09-15T16:24:39","modified_gmt":"2025-09-15T15:24:39","slug":"invited-talk-efficient-computation-through-tuned-approximation-by-david-keyes","status":"publish","type":"post","link":"https:\/\/blogs.qub.ac.uk\/dipsa\/invited-talk-efficient-computation-through-tuned-approximation-by-david-keyes\/","title":{"rendered":"Invited Talk &#8211; Efficient Computation through Tuned Approximation by David Keyes"},"content":{"rendered":"\n<p>21 February 2024<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Abstract<\/h2>\n\n\n\n<p>Numerical software is being reinvented to provide opportunities to dynamically tune the accuracy of computation to the requirements of the application, resulting in savings of memory, time, and energy. Floating-point computation in science and engineering has a history of \u201coversolving\u201d relative to the accuracy expectations of many models. Real datatypes are so often defaulted to double precision that GPUs did not gain wide acceptance until they provided hardware support for operations not required in their original domain of graphics. Computational science is now reverting to lower-precision arithmetic where possible. Many matrix operations allow for lower precision at a blockwise level without loss of accuracy, adapting the precision to the magnitude of the norm of each block. 
Furthermore, many blocks can be approximated by low-rank near-equivalents to a prescribed accuracy, adapting the rank to the smoothness of the coefficients of the block. This leads to a smaller memory footprint, which implies higher residency in the memory hierarchy, leading in turn to less time and energy spent on data copying; these savings may even dwarf those from fewer and cheaper flops. We provide examples from several application domains, including Gordon Bell Prize-nominated research in environmental statistics (2022) and seismic processing (2023).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Bio<\/h2>\n\n\n\n<p>David Keyes directs the Extreme Computing Research Center at the King Abdullah University of Science and Technology (KAUST), where he was a founding Dean in 2009 and currently serves in the Office of the President. He is a professor in the programs of Applied Mathematics, Computer Science, and Mechanical Engineering. He is also an Adjunct Professor of Applied Mathematics and Applied Physics at Columbia University, where he formerly held the Fu Foundation Chair. He works at the interface of parallel computing with PDEs and statistics, with a focus on scalable algorithms that exploit data sparsity. Before joining KAUST, Keyes led multi-institutional scalable solver software projects in the SciDAC and ASCI programs of the US Department of Energy (DoE), ran university collaboration programs at US DoE and NASA institutes, and taught at Columbia, Old Dominion, and Yale Universities. He is a Fellow of SIAM, the AMS, and the AAAS. He has been awarded the Gordon Bell Prize from the ACM, the Sidney Fernbach Award from the IEEE Computer Society, and the SIAM Prize for Distinguished Service to the Profession. He earned a B.S.E. 
in Aerospace and Mechanical Sciences from Princeton in 1978 and a Ph.D. in Applied Mathematics from Harvard in 1984.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>21 February 2024 Abstract Numerical software is being reinvented to provide opportunities to dynamically tune the accuracy of computation to the requirements of the application, resulting in savings of memory, time, and energy. Floating-point computation in science and engineering has a history of \u201coversolving\u201d relative to the accuracy expectations of many models. Real datatypes are so often defaulted to double precision that GPUs did not gain [&hellip;]<\/p>\n","protected":false},"author":974,"featured_media":0,"comment_status":"closed","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[1],"tags":[33],"class_list":{"0":"post-3563","1":"post","2":"type-post","3":"status-publish","4":"format-standard","6":"category-uncategorised","7":"tag-seminars","8":"czr-hentry"},"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/blogs.qub.ac.uk\/dipsa\/wp-json\/wp\/v2\/posts\/3563","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogs.qub.ac.uk\/dipsa\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.qub.ac.uk\/dipsa\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.qub.ac.uk\/dipsa\/wp-json\/wp\/v2\/users\/974"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.qub.ac.uk\/dipsa\/wp-json\/wp\/v2\/comments?post=3563"}],"version-history":[{"count":1,"href":"https:\/\/blogs.qub.ac.uk\/dipsa\/wp-json\/wp\/v2\/posts\/3563\/revisions"}],"predecessor-version":[{"id":3564,"href":"https:\/\/blogs.qub.ac.uk\/dipsa\/wp-json\/wp\/v2\/posts\/3563\/revisions\/3564"}],"wp:attachment":[{"href":"https:\/\/blogs.qub.ac.uk\/dipsa\/wp-json\/wp\/v2\/media?parent=3563"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.qub.ac.uk\/dipsa\/wp-json\/wp\/v2\/categories?post=3563"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.qub.ac.uk\/dipsa\/wp-json\/wp\/v2\/tags?post=3563"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}