{"id":448,"date":"2025-08-29T12:10:45","date_gmt":"2025-08-29T11:10:45","guid":{"rendered":"https:\/\/blogs.qub.ac.uk\/relax-dn\/?p=448"},"modified":"2025-08-29T12:13:27","modified_gmt":"2025-08-29T11:13:27","slug":"efficient-estimation-of-uncertainty","status":"publish","type":"post","link":"https:\/\/blogs.qub.ac.uk\/relax-dn\/2025\/08\/29\/efficient-estimation-of-uncertainty\/","title":{"rendered":"Efficient Estimation of Uncertainty"},"content":{"rendered":"<h5 class=\"wp-block-heading\">By\u00a0<strong>Moule Lin<\/strong><\/h5>\n<p>When using AI, it is often helpful or necessary to have an estimation of the uncertainty of the results, as they are sometimes overconfident. As an example, consider that we ask a multimodal model how many fingers it sees in the image of a hand:<\/p>\n<p><img fetchpriority=\"high\" decoding=\"async\" class=\"aligncenter size-large wp-image-449\" src=\"https:\/\/blogs.qub.ac.uk\/relax-dn\/wp-content\/uploads\/sites\/17\/2025\/08\/pic1moule-1024x644.png\" alt=\"\" width=\"700\" height=\"440\" srcset=\"https:\/\/blogs.qub.ac.uk\/relax-dn\/wp-content\/uploads\/sites\/17\/2025\/08\/pic1moule-1024x644.png 1024w, https:\/\/blogs.qub.ac.uk\/relax-dn\/wp-content\/uploads\/sites\/17\/2025\/08\/pic1moule-300x189.png 300w, https:\/\/blogs.qub.ac.uk\/relax-dn\/wp-content\/uploads\/sites\/17\/2025\/08\/pic1moule-768x483.png 768w, https:\/\/blogs.qub.ac.uk\/relax-dn\/wp-content\/uploads\/sites\/17\/2025\/08\/pic1moule.png 1355w\" sizes=\"(max-width: 700px) 100vw, 700px\" \/><\/p>\n<p>However, due to the underlying mathematical models, the estimation of uncertainty with existing approaches is computationally expensive.<\/p>\n<p>Hence, our research project on Efficient Uncertainty Estimation (EUE) aims to devise techniques for efficiency-first uncertainty checks that estimate how unsure the model is, in an efficient fashion. 
One approach is to compress the \u201cheavy\u201d parts of the uncertainty computation, so that we keep the useful signal while avoiding time-intensive calculations.<\/p>\n<p><strong>In Summary:<\/strong><\/p>\n<ul>\n<li>Problem: AI can sound confident when it\u2019s actually unsure (as an example, consider hallucination in large language models).<\/li>\n<li>What we add: A tiny, built-in uncertainty signal that runs in a single pass, with no slow re-runs or costly extra inference.<\/li>\n<li>Why it matters: You get answers and a simple \u201cconfidence badge\u201d (high \/ medium \/ low) without hurting speed.<\/li>\n<\/ul>\n<h3><strong>1. What is Efficient Uncertainty Estimation?<\/strong><\/h3>\n<p>Efficient Uncertainty Estimation is a way to reduce the computation needed to determine \u201chow sure\u201d an AI is. Instead of doing lots of extra work when you ask a question, we design the model so it can provide a reliable indicator of the uncertainty while producing the answer.<\/p>\n<p>Imagine that next to any output the model produces, there is a little indicator that turns red, yellow, or green depending on the model&#8217;s certainty about that result.<\/p>\n<p>As another example, consider a model that receives images from a car\u2019s video camera and has to classify each pixel according to categories like <em>car<\/em>, <em>road<\/em>, <em>traffic sign<\/em>, and <em>pedestrian<\/em>; see the following screenshot (based on a dataset from [1]):<\/p>\n<p><img decoding=\"async\" class=\"aligncenter size-full wp-image-450\" src=\"https:\/\/blogs.qub.ac.uk\/relax-dn\/wp-content\/uploads\/sites\/17\/2025\/08\/pic2moule.png\" alt=\"\" width=\"602\" height=\"301\" srcset=\"https:\/\/blogs.qub.ac.uk\/relax-dn\/wp-content\/uploads\/sites\/17\/2025\/08\/pic2moule.png 602w, https:\/\/blogs.qub.ac.uk\/relax-dn\/wp-content\/uploads\/sites\/17\/2025\/08\/pic2moule-300x150.png 300w\" sizes=\"(max-width: 602px) 100vw, 602px\" \/><\/p>\n<p>At the boundaries where the image transitions 
between, for instance, a road and a car, the uncertainty will be higher since the model is less confident about pixel classifications. This is indicated by pixel color: white means higher uncertainty.<\/p>\n<h3><strong>2. How Does the Approach Achieve Its Efficiency?<\/strong><\/h3>\n<ul>\n<li><strong>Share what\u2019s redundant [2]<\/strong><br \/>\nLarge models contain many redundant weights. We merge similar pieces so the model carries less baggage while keeping the information it needs to sense when something \u201cseems off\u201d.<\/li>\n<li><strong>Project information on certainty\/uncertainty into a smaller space<\/strong><br \/>\nInstead of tracking uncertainty everywhere, we keep a compact internal indicator focused on the most telling signals. That\u2019s how we get a single-pass uncertainty score that indicates how sure the model is about its answer.<\/li>\n<\/ul>\n<h3>FAQ<\/h3>\n<p><strong>Does this make AI underconfident or overconfident?<\/strong><br \/>\nNo\u2014our goal is to produce a well-calibrated estimate: when the model says \u201c70% sure,\u201d it should be right about 70% of the time, not more, not less [3][4].<\/p>\n<p><strong>Will it slow responses?<\/strong><br \/>\nNo\u2014our approach is built for <strong>single-pass<\/strong> checks [5], so the uncertainty comes \u201calmost for free\u201d with the answer.<\/p>\n<p><strong>Do I need special hardware?<\/strong><br \/>\nNo\u2014if anything, it\u2019s <strong>more<\/strong> hardware-friendly because it reduces extra compute.<\/p>\n<h3>Summary<\/h3>\n<p>Efficient Uncertainty Estimation provides a trustworthy indicator of the uncertainty of a model\u2019s results, with reduced computational effort. This is particularly useful in domains where incorrect results can have severe negative consequences, such as self-driving cars or medical diagnosis.<\/p>\n<h5>References<\/h5>\n[1] Cordts M, Omran M, Ramos S, et al. 
The Cityscapes dataset for semantic urban scene understanding. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 3213-3223.<\/p>\n[2] Lin M, Guan S, Jing W, et al. Stochastic weight sharing for Bayesian neural networks. In: International Conference on Artificial Intelligence and Statistics, PMLR, 2025: 4519-4527.<\/p>\n[3] Vaicenavicius J, Widmann D, Andersson C, et al. Evaluating model calibration in classification. In: The 22nd International Conference on Artificial Intelligence and Statistics, PMLR, 2019: 3459-3467.<\/p>\n[4] Loquercio A, Segu M, Scaramuzza D. A general framework for uncertainty estimation in deep learning. IEEE Robotics and Automation Letters, 2020, 5(2): 3153-3160.<\/p>\n[5] Van Amersfoort J, Smith L, Teh Y W, et al. Uncertainty estimation using a single deep deterministic neural network. In: International Conference on Machine Learning, PMLR, 2020: 9690-9700.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>By\u00a0Moule Lin When using AI, it is often helpful or necessary to have an estimation of the uncertainty of the results, as they are sometimes overconfident. 
As an example, consider that we ask a multimodal model how many fingers it&hellip; <\/p>\n","protected":false},"author":1553,"featured_media":450,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[19],"tags":[],"class_list":["post-448","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blog"],"jetpack_featured_media_url":"https:\/\/blogs.qub.ac.uk\/relax-dn\/wp-content\/uploads\/sites\/17\/2025\/08\/pic2moule.png","jetpack_sharing_enabled":true,"amp_enabled":true,"_links":{"self":[{"href":"https:\/\/blogs.qub.ac.uk\/relax-dn\/wp-json\/wp\/v2\/posts\/448","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogs.qub.ac.uk\/relax-dn\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.qub.ac.uk\/relax-dn\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.qub.ac.uk\/relax-dn\/wp-json\/wp\/v2\/users\/1553"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.qub.ac.uk\/relax-dn\/wp-json\/wp\/v2\/comments?post=448"}],"version-history":[{"count":2,"href":"https:\/\/blogs.qub.ac.uk\/relax-dn\/wp-json\/wp\/v2\/posts\/448\/revisions"}],"predecessor-version":[{"id":453,"href":"https:\/\/blogs.qub.ac.uk\/relax-dn\/wp-json\/wp\/v2\/posts\/448\/revisions\/453"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogs.qub.ac.uk\/relax-dn\/wp-json\/wp\/v2\/media\/450"}],"wp:attachment":[{"href":"https:\/\/blogs.qub.ac.uk\/relax-dn\/wp-json\/wp\/v2\/media?parent=448"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.qub.ac.uk\/relax-dn\/wp-json\/wp\/v2\/categories?post=448"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.qub.ac.uk\/relax-dn\/wp-json\/wp\/v2\/tags?post=448"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}