Our group has published a new paper in the IEEE Transactions on Emerging Topics in Computing. In this study, we introduced Software-Defined Floating-Point (SDF) number formats designed to enhance the performance of the Belief Propagation (BP) algorithm, which is widely used in fields such as machine learning, communications, and robotics. Traditional floating-point formats, such as single precision (32-bit) and double precision (64-bit), consume substantial memory and bandwidth, making BP implementations slow, especially for large-scale graphs or on devices with limited resources. Our SDF formats, using more compact 16-bit (half-precision) and 8-bit (mini-precision) representations, significantly reduce memory usage and bandwidth requirements without sacrificing the accuracy needed for BP applications. It should be noted that the standard 8-bit floating-point formats (E5M2 and E4M3) are not applicable to BP, as they fail to provide convergence. This is why a software-defined floating-point format design is necessary.
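To give a feel for what "software-defined" means here, the sketch below shows a generic 8-bit minifloat whose exponent width, mantissa width, and bias are chosen as software parameters rather than fixed by a hardware standard. The specific parameter values (3 exponent bits, 4 mantissa bits, bias 3) are placeholders for illustration only and are not the SDF layouts from the paper; subnormals and special values are also omitted for brevity.

```cpp
// Illustrative sketch: a "software-defined" 8-bit float whose layout is a
// run-time parameter. The concrete values below are hypothetical examples.
#include <cmath>
#include <cstdint>
#include <cstdio>

struct SdfFormat {
    int exp_bits;   // number of exponent bits
    int man_bits;   // number of mantissa bits
    int bias;       // exponent bias, tunable per application
};

// Decode an 8-bit code into a double under the given format.
// Subnormals and special values are omitted to keep the sketch short.
double sdf8_to_double(uint8_t code, const SdfFormat& f) {
    int sign = (code >> (f.exp_bits + f.man_bits)) & 1;
    int exp  = (code >> f.man_bits) & ((1 << f.exp_bits) - 1);
    int man  = code & ((1 << f.man_bits) - 1);
    double frac = 1.0 + man / static_cast<double>(1 << f.man_bits); // implicit leading 1
    double val  = std::ldexp(frac, exp - f.bias);
    return sign ? -val : val;
}

int main() {
    SdfFormat fmt{3, 4, 3};  // placeholder layout: 1 sign, 3 exponent, 4 mantissa bits
    std::printf("%f\n", sdf8_to_double(0x40, fmt));  // decodes to 2.0 under this layout
}
```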
To ensure that our SDF formats run efficiently on standard CPUs, we developed highly efficient software implementations that convert SDF numbers to single precision for arithmetic with minimal overhead. In experiments on Ising grids ranging from 100×100 to 500×500, our 16-bit and 8-bit SDF formats achieved speedups of up to 3.40 times over the traditional double-precision floating-point format on an Intel Xeon processor. Importantly, these performance gains did not compromise the accuracy of the BP algorithm: the results remained equivalent to those obtained with double precision. Larger grid sizes benefited even more from the speed improvements, demonstrating the scalability and effectiveness of our approach.
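One common way to keep the narrow-to-wide conversion nearly free on a CPU is a 256-entry lookup table that expands every possible 8-bit code to float32 once, so the BP message updates themselves run in ordinary single precision while messages are stored in 8 bits. The sketch below illustrates this idea; the bit layout (decode8) reuses the placeholder format from the previous sketch, and the paper's actual conversion routines may be implemented differently.

```cpp
// Illustrative sketch: lookup-table decode from a hypothetical 8-bit format
// to float32, so arithmetic runs in single precision with minimal overhead.
#include <array>
#include <cmath>
#include <cstdint>
#include <cstdio>

// Hypothetical 8-bit layout (1 sign, 3 exponent, 4 mantissa bits, bias 3);
// a placeholder, not the SDF layout from the paper.
float decode8(uint8_t code) {
    int sign = (code >> 7) & 1;
    int exp  = (code >> 4) & 0x7;
    int man  = code & 0xF;
    float val = std::ldexp(1.0f + man / 16.0f, exp - 3);
    return sign ? -val : val;
}

// Expand all 256 codes once; afterwards, converting a stored message to
// float32 costs a single table load.
std::array<float, 256> build_table() {
    std::array<float, 256> t{};
    for (int c = 0; c < 256; ++c) t[c] = decode8(static_cast<uint8_t>(c));
    return t;
}

int main() {
    static const auto lut = build_table();
    uint8_t msgs[4] = {0x30, 0x42, 0x51, 0x28};  // messages stored in 8 bits
    float prod = 1.0f;
    for (uint8_t m : msgs) prod *= lut[m];       // arithmetic in float32
    std::printf("product of decoded messages: %f\n", prod);
}
```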
For more information, please visit https://ieeexplore.ieee.org/document/10847799