Monotonicity of Multi-term Floating-Point Adders

In the literature on algorithms for computing multi-term addition <inline-formula><tex-math notation="LaTeX">$s_{n}=\sum_{i=1}^{n}x_{i}$</tex-math><alternatives><mml:math><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:munderover><mml:mo>∑</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math><inline-graphic xlink:href="mikaitis-ieq1-3371783.gif"/></alternatives></inline-formula> in floating-point arithmetic it is often shown that a hardware unit that has single normalization and rounding improves precision, area, latency, and power consumption, compared with the use of standard add or fused multiply–add units. However, non-monotonicity can appear when computing sums with a subclass of multi-term addition units, which is currently not explored in the literature. We prove that computing multi-term floating-point addition with <inline-formula><tex-math notation="LaTeX">$n\geq 4$</tex-math><alternatives><mml:math><mml:mi>n</mml:mi><mml:mo>≥</mml:mo><mml:mn>4</mml:mn></mml:math><inline-graphic xlink:href="mikaitis-ieq2-3371783.gif"/></alternatives></inline-formula>, without normalization of intermediate quantities, can result in non-monotonicity—increasing one of the addends <inline-formula><tex-math notation="LaTeX">$x_{i}$</tex-math><alternatives><mml:math><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math><inline-graphic xlink:href="mikaitis-ieq3-3371783.gif"/></alternatives></inline-formula> decreases the sum <inline-formula><tex-math notation="LaTeX">$s_{n}$</tex-math><alternatives><mml:math><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:math><inline-graphic xlink:href="mikaitis-ieq4-3371783.gif"/></alternatives></inline-formula>. 
Summation is required in dot product and matrix multiplication operations, which increasingly appear in the hardware of high-performance computers, and knowing where monotonicity is preserved can be of interest to developers and users. Non-monotonicity of summation in existing hardware devices that implement a specific class of multi-term adders may have appeared unintentionally, as a consequence of design choices that reduce circuit area and other metrics. To demonstrate our findings we simulate non-monotonic multi-term adders in MATLAB using the <monospace>CPFloat</monospace> custom-precision floating-point simulator.


INTRODUCTION
A real function f is monotonically nondecreasing on an interval [a, b] if f(x) ≤ f(y) whenever a ≤ x ≤ y ≤ b. In other words, when the argument of a monotonic function increases, the value of the function does not decrease. Similarly for a monotonically nonincreasing function: when the input argument increases, the value of the function does not increase. For multivariate functions, a function is monotonic if it is monotonic in each of its arguments. For example, f(x_1, x_2) is monotonically nondecreasing if for any x_1 ≤ x_1^* and x_2 ≤ x_2^* we have f(x_1, x_2) ≤ f(x_1^*, x_2^*). Summation of a set of values is a multivariate function that is monotonically nondecreasing (just monotonic hereafter). Given a set of input values x_1, x_2, ..., x_n, summation is expressed as

f(x_1, x_2, ..., x_n) = \sum_{i=1}^{n} x_i.   (1)

This function is monotonically nondecreasing by the definition of the sum: take x_n = a followed by x_n = a + ε, with ε > 0. Then f(x_1, x_2, ..., a) = (\sum_{i=1}^{n-1} x_i) + a, whereas f(x_1, x_2, ..., a + ε) = (\sum_{i=1}^{n-1} x_i) + a + ε > (\sum_{i=1}^{n-1} x_i) + a. Because of the commutativity of the sum, this holds for every input argument.
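The defining property above can be checked mechanically. The following Python sketch (illustrative only; the inputs are made-up dyadic values so that the sums are exact) verifies that increasing any single argument of an exact sum never decreases it:

```python
def f(xs):
    """Exact summation f(x_1, ..., x_n) = sum_{i=1}^n x_i."""
    return sum(xs)

# Monotonicity check: increasing any single argument by eps > 0
# must not decrease the sum.  Dyadic values keep the sums exact.
xs = [0.5, -2.0, 3.25, 1.0]
eps = 0.125
for j in range(len(xs)):
    bumped = xs[:j] + [xs[j] + eps] + xs[j + 1:]
    assert f(bumped) >= f(xs)  # holds in every argument, by commutativity
```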
Summation of values is at the core of scientific computing: it is required, for example, for calculating vector-vector products, matrix-vector and matrix-matrix multiplications, as well as in evaluating polynomials. Most computer software works with floating-point numbers [1], rather than exact numbers, so the addition operation differs from exact addition, and the monotonicity of summation should be tested rather than assumed to hold because it holds in exact arithmetic. This is similar to the properties of associativity, commutativity and distributivity: commutativity is generally preserved, but associativity and distributivity are not. From here on we use the notation of the standard model [3, Sec. 2.2] of addition in precision-p arithmetic: fl(x + y) = (x + y)(1 + δ), |δ| ≤ u, where u = 2^{−p} is the unit roundoff, and fl(x) refers to normalizing (see Section 2.1) and rounding x to form a floating-point value as defined above.

School of Computing, University of Leeds, Leeds, LS2 9JT, United Kingdom (M.Mikaitis@leeds.ac.uk). Preprint version: 4th of December, 2023.
Most floating-point units on general-purpose hardware include 2-term adders that compute the sum of floating-point numbers z = fl(a + b), which is required by the IEEE 754 standard [1, Sec. 5.4.1]. This operation includes computing a + b as though in infinite precision, normalizing the resultant significand if required, and rounding it to obtain z [2, Sec. 7.3]. At software level, this operation is called repeatedly to compute sums of arbitrary length (Equation 1). A high-level algorithm [3, Sec. 4.2] is given as Algorithm 1.1. On line 3 the operation fl(a + b) includes rounding and normalization, which means that, when implemented this way, the n-term addition performs n − 1 such roundings and normalizations overall. It is safe to say that most software is implemented this way on hardware that includes circuitry only for adding two numbers as per IEEE 754.
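As an illustrative sketch (not the paper's MATLAB/CPFloat code), the recursive summation of Algorithm 1.1 can be mimicked in Python with NumPy's binary32 type, which rounds and normalizes after every 2-term addition; the function name is our own:

```python
import numpy as np

def recursive_sum_f32(xs):
    """Algorithm 1.1 sketch: s = fl(s + x_i), with a correctly rounded
    binary32 addition (and hence a normalization) at every step, so an
    n-term sum incurs n - 1 rounding errors."""
    s = np.float32(0.0)
    for x in xs:
        s = np.float32(s) + np.float32(x)   # fl(s + x_i)
    return np.float32(s)

# At 2^25 the binary32 ulp is 4, so each added 1.0 is rounded away
# ("absorbed") by the per-step rounding.
assert recursive_sum_f32([2.0**25] + [1.0] * 8) == 33554432.0
# Summing the small terms first changes the result: order matters.
assert recursive_sum_f32([1.0] * 8 + [2.0**25]) == 33554440.0
```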
Summation of three, four, or more floating-point values can be classified under reduction operations, which are recommended but not required by the IEEE 754 standard [1, Sec. 9.4]. The guideline provided by IEEE 754 for implementing them is not significantly constrained. Addition of more than two values can be achieved by an accumulator which takes as one of its inputs its output from the previous stage, or by using multiple 2-term adders in series or in parallel (Figure 1). However, in hardware, a custom design is often considered to gain in speed and circuit area compared with the straightforward approach of using the standard 2-term adders. Algorithm 1.2 shows the main steps taken by most of the multi-term adders available in the literature (see Section 2.1 for the details on floating point). The main point to note is that normalization and rounding are performed once, at the end, rather than after each intermediate addition operation as in Algorithm 1.1, which in the hardware literature appears to be beneficial for saving hardware resources and even increasing accuracy. All of this applies to dot products and matrix multiplies, by replacing the addends x_i with products a_i b_i. As a side note, unnormalized floating-point arithmetic appeared as early as 1958 in the work of Metropolis and Ashenhurst [4]; the performance improvement compared with normalized arithmetic was noted even then.
In this paper we identify four types of implementation of Algorithm 1.2 and demonstrate that one class of designs for multi-term addition is non-monotonic, including various commercial hardware designs widely present on the machines in the TOP500. We also suggest that this issue can be fixed by masking off the bottom bits when carries occur, or by using an adder from a different class, which can be added into future summation and dot product hardware designs as an option.
We mentioned that non-monotonic summation in floating-point arithmetic adds to the list of mathematical properties that are not preserved when switching from exact arithmetic. Another motivating point for studying this is the reproducibility of numerical computations. Bit-wise reproducibility is achievable on single-core CPUs that implement the IEEE 754 standard correctly, assuming special features such as 80-bit arithmetic are not enabled by compilers. If we run some code in IEEE 754 software, using the basic operations of a floating-point unit (FPU), we get one behaviour, but if we run that code on hardware that performs summation non-monotonically, we may get unexpected results that can be hard to explain. We now demonstrate this with an example.

An example on current hardware
For the following we use an NVIDIA A100 SXM 80GB GPU because we have access to it; however, we predict that most commercial hardware on the market would not pass the following and similar tests. A100 GPUs are equipped with matrix multiply hardware that can perform D = A × B + C, where A ∈ R^{8×8} and B ∈ R^{8×4} are binary16 matrices, and C, D ∈ R^{8×4} are binary32 matrices [5, p. 20]. We showed previously that the matrix multipliers in this GPU have one extra bit in the alignment of binary32 significands [6]. We will focus on two resulting matrix elements, d_11 and d_12, each computed as a 9-term dot product (8 products plus the addend from C). Computing A × B + C with the GPU matrix multipliers returns a matrix that has d_11 = 33554436 and d_12 = 33554432, demonstrating non-monotonic behaviour where an increase in one of the 9 addends decreases the sum. Note that we increased the addend by 2 and the sum decreased by 4, since the amount by which the sum decreases comes from the other 8 addends no longer contributing to the sum. Similar examples can be constructed around any power of two, and at the edges of the dynamic range the quantities by which the sum changes would be larger in absolute terms. In the past we performed tests indicating that the NVIDIA V100 and T4 GPUs have similar behaviour, although the V100 differs due to a narrower internal accumulator in the multi-term addition [6]. NVIDIA H100 and the AMD matrix multipliers are yet to be tested.
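The mechanism behind this example can be reproduced in a toy simulation. The sketch below is our own illustrative model (tiny precision p = 4, truncating alignment, not the A100's actual datapath): it aligns every addend to the largest exponent, discards the shifted-out fraction bits, keeps carry bits, and rounds once at the end, as in Algorithm 1.2. Increasing one addend across a power of two then decreases the computed sum:

```python
from fractions import Fraction
import math

P = 4  # toy significand precision in bits, small to make the effect visible

def class4_sum(xs):
    """Toy Class IV multi-term adder (our own model, not a vendor design):
    align every addend to the largest exponent, discard fraction bits
    shifted past the p-bit window (truncation), keep carry bits on the
    left, then normalize and round-to-nearest (ties-to-even) once."""
    xs = [Fraction(x) for x in xs]
    e_max = max(math.floor(math.log2(abs(x))) for x in xs if x != 0)
    lsb = Fraction(2) ** (e_max - P + 1)                 # weight of last kept bit
    total = sum(math.floor(x / lsb) for x in xs) * lsb   # truncating alignment
    if total == 0:
        return 0.0
    e = math.floor(math.log2(total))                     # positive inputs assumed
    ulp = Fraction(2) ** (e - P + 1)
    q, r = divmod(total, ulp)                            # round to nearest, ties to even
    if 2 * r > ulp or (2 * r == ulp and q % 2 == 1):
        q += 1
    return float(q * ulp)

# Increasing x_1 from 7.5 to 8.0 raises the maximum exponent, so the four
# 0.5 addends are truncated away and the computed sum DROPS: non-monotonic.
assert class4_sum([7.5, 0.5, 0.5, 0.5, 0.5]) == 10.0   # exact sum is 9.5
assert class4_sum([8.0, 0.5, 0.5, 0.5, 0.5]) == 8.0    # exact sum is 10.0
```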

Multi-term floating-point addition in literature
Hardware designs of multi-term adders (Algorithm 1.2), including those that are part of dot product and matrix multiply hardware, can be classified into four main categories. Some of the ways to build multi-term adders are demonstrated in the high-level diagrams of Figure 1.
We will use the term fused. From the user's perspective, in most cases it means that only one rounding error is incurred in the computation, except where stated otherwise.

Class I: Adders that use long accumulators
One approach is to retain all the bits in the summation of multiple values and round once at the end. This is advocated by Kulisch [7, Sec. 8]. See the design-space exploration by Uguen and de Dinechin [8] for a detailed analysis of the costs. An implementation by Koenig, Bachrach, and Asanović [9] used 4288 bits internally for multiplying and accumulating binary64 values exactly. These kinds of multi-term adders are fused because they contain only one rounding across the whole computation; however, keeping all of the bits can be expensive in circuit area and latency due to carry propagation.
While not directly a multi-term adder, the accumulator of binary16 products by Brunie [10] uses an exact 80-bit fixed-point internal format and therefore can be used to implement fused dot products or matrix multiplies with a single rounding error.

Fig. 1. Various methods for implementing 4-term addition in hardware. The left-most method, which is present in virtually all modern hardware, is to iterate over an IEEE 754 floating-point addition multiple times to sum a vector of numbers x; with this method, every iteration requires a new addition instruction to be fetched and executed. The second and third approaches demonstrate two ways to utilize multiple IEEE 754 floating-point adders: the first simply chains four adders, while the second creates an adder tree to reduce the latency. In these three methods, the order of addition can be changed by rearranging the inputs. The remaining method is a black box which contains a specialized hardware design for performing the summation, not necessarily using the operations outlined in IEEE 754 and usually enforcing a specific order of summation. The first three approaches correspond to Class III adders and the last one encapsulates Classes I, II, and IV (Section 1.2).

Brunie [11] also proposed an architectural extension to CPUs which adds basic linear algebra instructions that work on matrices packed in general-purpose vector registers. The papers suggest that whether the accumulation is exact or not depends on the precision of the input arguments and whether it is feasible to build the hardware required to accumulate exactly. Burgess, Goodyer, Hinds, and Lutz [12] propose High-Precision Anchored (HPA) accumulators for accurate floating-point summation and suggest extensions to ARM Scalable Vector Extension (SVE) units to efficiently support them. The HPA number format would be used for computation, while the input and output data would remain in an IEEE 754 format, such as binary64, which requires conversion to and from HPA. The main concept is to convert a floating-point value into the HPA format by placing different parts of the significand into different registers, based on the significance of the bits when taking into account the exponent. HPA numbers are therefore stored in a wider format, across multiple registers. Similar to fixed-point representation, scaling is applied to choose the balance between range and precision. HPA numbers can be configured to calculate correctly rounded sums of floating-point values, and we therefore place this software-hardware concept in Class I of multi-term adders.

Class II: Adders that achieve correct rounding without the use of long accumulators
Tenca [13] provides an optimized algorithm for finding the largest exponent and choosing the amounts by which to shift the significands. The general algorithm is not changed, however, with the main steps of Algorithm 1.2 still present. Tenca [13] in fact proposes a fused design for performing fl(a + b + c) with only one rounding error, which complicates the problem in that bits that are shifted out in the significand alignment step have to be tracked. This is not what is implemented in hardware; for example, in the A100, which we used for the demonstration above, n − 1 rounding or truncation errors are incurred when aligning and adding significands in limited precision.
Sohn and Swartzlander propose a series of fused operators, such as a two-term dot product [14], a three-term adder [15], and a four-term dot product [16]. A generalized n-term fused dot product architecture is explored by Tao et al. [17]. The goal is to implement fused operations, meaning that computation is not performed through standard hardware multipliers and adders joined together, but by a new optimized unit without the intermediate rounding and normalization steps. Since these operators are fused, we do not expect non-monotonicity to appear when computing with them.
Multi-term adders also appear in hardware designs of the fast Fourier transform (FFT) operation. Swartzlander and Saleh [18] utilize a two-term adder for implementing a fused two-term dot product, while Kaivani and Ko [19] discuss an implementation of the FFT for which a five-term floating-point adder was used. The authors mention avoiding intermediate normalization and rounding blocks by implementing a custom five-term adder, which in turn allowed the area of the FFT design to be reduced. This 5-term adder is fused, but it is not specified what size the accumulator is or how the sticky bits are computed to replicate exact accumulation.
A generalized algorithm by Boldo, Gallois-Wong, and Hillaire [20] computes a correctly rounded dot product of a series of fixed-point numbers with varied precisions. Instead of using a long accumulator that could cover all possible values, the algorithm uses some number of extra bits and the round-to-odd rounding mode.

Class III: Adders that replicate software behaviour
Kim and Kim [21] propose a 4-term dot product unit without the intermediate normalization of sums, but with intermediate rounding performed in the correct places (by taking into account where the most significant nonzero bit is) to assure bit-reproducible operation compared with an IEEE 754 software implementation. Even though monotonicity is not addressed in this work, the implementation should be monotonic, as it mimics a software implementation with correctly rounded elementary operations. The application space is 3D graphics, for which a 4-term dot product in single precision is particularly useful, and this is what the authors explored. As motivation for performing rounding in the intermediate calculations, it was noted that "the exact bit-level matching between hardware units and software models is more important in 3D graphics than the rounding errors to the real value" [21, p. 892].

Class IV: Adders that use limited precision accumulator
In machine learning, Kaul et al. [22] discuss a generalized n-term dot product hardware design. It is proposed to split the calculation of the maximum exponent and the differences to all the other exponents into two phases, to reduce the critical path of the design. For the purposes of our study, the main feature of the Kaul et al. [22] design is that it implements the behaviour of Algorithm 1.2: it aligns the products relative to the product with the maximum exponent, the alignment right-shifts are in limited precision, and the addition is performed with extra bits for carries, with a single normalization step of the sum.
Lopes and Constantinides [23] designed a configurable dot product unit which was tested on FPGAs for up to 150 terms and compared with a basic implementation that uses a tree of multipliers and adders. The main feature of the design is that internally it uses a configurable-precision fixed-point register in which to accumulate the products before normalizing and rounding to produce the floating-point answer. This design follows the general structure of Algorithm 1.2, with precision growth due to the lack of intermediate normalization (as the authors point out, precision growth allows overflows to be avoided), and therefore should be non-monotonic.
Hickmann et al. [24] present a 32 × 32 matrix multiply accelerator with 16-bit floating-point inputs and 32-bit outputs. The internal accumulation of products is limited to 37 bits. One interesting aspect of this work is that the term fused is used, but since only the products are exact, not their accumulation, it has a different meaning than in the Class I/II designs, which perform everything as though the computation were exact and rounded once. Another aspect worth noting is that this design explicitly adds the products a_i b_i of the 32 × 32 dot product operation before adding their sum to the accumulator c_i, in order to reduce the error accumulation when the value in the accumulator grows in magnitude.
Bertaccini et al. [25] developed a three-term addition unit for performing dot products or sums of floating-point values of various formats.The internal accumulator is expanded twice, after each addition, but it is not reported to be an exact accumulator that would cover all right-shift distances in the alignment of addends.
Lee et al. [26] implemented a four-core mixed-precision AI chip which includes a three-term floating-point adder as part of the FMMA (fused multiply-multiply-accumulate) instruction.The multiplicands are 8-bit floating-point values and the accumulator is 16-bit.The three-term addition is performed in 16-bit precision followed by normalization and rounding.

Adders that lie in multiple classes
Ledoux and Casas [27] proposed a hardware generator of general matrix multiply-accumulate (GEMM) accelerators. The generator is parameterized and provides a choice of numerical formats and options for setting the sizes of internal accumulators, including making them long accumulators that accumulate products exactly. Irrespective of the accumulator setup, rounding and normalization from the internal hardware numerical format to a chosen standard format is performed once at the end, after the whole dot product has been computed. The work does not mention the use of multi-term adders and implements GEMM accelerators through the accumulation of values by iterating through the hardware. Nevertheless, in terms of resultant numerical behaviour, this work can potentially generate hardware of Classes I and IV listed above.

Commercial hardware
Table 1 lists available hardware that contains multi-term floating-point addition as part of dot product and matrix operations.
Most companies do not provide information on low-level numerical hardware details, which makes it hard to classify these devices and say what numerical features, such as rounding and monotonicity, are present. An attempt can be made at deducing some of the features from the numerical results obtained when computing on these devices, as demonstrated with NVIDIA V100 GPUs by Hickmann and Bradford [34] and with the V100, T4, and A100 by Fasi et al. [6]. Fasi et al. [35] have subsequently demonstrated, through error analysis, that low-level features such as rounding can become significant when multiplying matrices with matrix arithmetic hardware.

Our contributions
In summary, the present manuscript's contributions to computer arithmetic and beyond are four-fold:
1) We identify conditions in which floating-point operations that involve multi-term addition can be non-monotonic. This allows us to explain surprising numerical results of some commercial hardware and to construct tests that can be used to look for non-monotonicity of summation within the vector and matrix operations of hardware devices. We show that Class IV operations are not monotonic, while Classes I-III are, and provide proofs in each case.
2) We demonstrate various applications that may be impacted.
3) We propose ways to modify architectures that contain units for adding multiple floating-point numbers so that the computed approximations of sums are monotonic.
4) The paper acts as a survey of hardware designs that are and are not monotonic, and fills a gap in the literature by addressing the monotonicity of multi-term addition.

Floating-point representation and arithmetic
We will be using the following IEEE-compliant floating-point systems and properties. A binary floating-point number x has the form (−1)^s × m × 2^{e−p+1}, where s is the sign bit, p is the precision, m ∈ [0, 2^p − 1] is the integer significand, and e is the exponent. The results of floating-point operations may not be normalized and must be normalized by shifting the significand left or right until it falls within [2^{p−1}, 2^p − 1] and adjusting the exponent accordingly. Those numbers that cannot be normalized in this way, because they would require exponents lower than the minimum exponent value, form the subnormal numbers.

TABLE 1. List of devices that contain vector or matrix arithmetic hardware, such as dot product and matrix multiply. In the last column we make a prediction on the class of the multi-term addition based on the available information. † This number is determined only from the H100 whitepaper, as no other information is available, to the best of our knowledge, on what inputs tensor cores take at the hardware layer; * Two 8-bit floating-point formats are available, one with a 4-bit exponent and a 3-bit significand, and one with a 5-bit exponent and a 2-bit significand.
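For concreteness, the representation above can be recovered from a binary64 value with a short Python helper (illustrative; `decompose` is our own name, not a standard API):

```python
import math

P = 53  # binary64 precision

def decompose(x):
    """Express binary64 x as (-1)^s * m * 2^(e - p + 1), with s the sign
    bit, p = 53 the precision, and m in [0, 2^p - 1] the integer
    significand.  Illustrative helper for normal, nonzero x."""
    s = 0 if math.copysign(1.0, x) > 0 else 1
    frac, exp = math.frexp(abs(x))      # abs(x) = frac * 2^exp, frac in [0.5, 1)
    m = int(frac * 2**P)                # integer significand
    e = exp - 1                         # exponent matching the form above
    return s, m, e

s, m, e = decompose(-6.5)
assert (-1)**s * m * 2.0**(e - P + 1) == -6.5
assert 2**(P - 1) <= m <= 2**P - 1      # normalized significand range
```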
The IEEE 754 standard for floating-point arithmetic provides a limited set of requirements for reduction operations such as multi-term addition [1, Sec. 9.4] or vector and matrix operations: a particular order of adding the partial sums is not required, and the use of an arbitrary-precision accumulator is allowed. The standard does not specify: 1) whether this internal format should be normalized after each addition, 2) which rounding mode should be used, and 3) when the rounding should happen. IEEE 754-2019 [1] specifies five rounding modes for various purposes. These requirements provide a lot of freedom in implementation choices and can potentially introduce a wide array of different numerical behaviours. We identified four main classes of algorithms that are present in the literature and have made their way into various devices (Section 1.2).

Monotonicity of IEEE 754 arithmetics
In this section we demonstrate a few results about the approximation of a sum computed using the 2-term correctly rounded addition operation [1].

Rounding
The default rounding mode of IEEE 754 arithmetics is round-to-nearest ties-to-even (RN), and it can be shown to be monotonic. Take x, y ∈ R with x ≤ y, and two neighbouring floating-point values a and b in some precision-p arithmetic over F. Assume that x and y lie between a and b, so that a ≤ x ≤ y ≤ b. Normalization of the floating-point significand [1] followed by rounding of x ∈ R to F is denoted by fl(x); in this case x and y can each be rounded to a or b.
The definition of round-to-nearest does not allow non-monotonic behaviour: since a ≤ x, y ≤ b and x ≤ y, we have fl(x) ≤ fl(y), because a = fl(y) < fl(x) = b would contradict the definition of round-to-nearest [1].
Other IEEE 754 [1] rounding modes can also be shown to preserve monotonicity: round-toward-zero (RZ), round-toward-negative (RD), and round-toward-positive (RU) are all monotonic because, when they are used, fl(x) and fl(y) either both map to the same endpoint, or fl(x) = a and fl(y) = b; in all cases fl(x) ≤ fl(y).
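The monotonicity of rounding can be sanity-checked empirically; the following illustrative Python snippet uses conversion from binary64 to binary32 as the round-to-nearest map fl and checks that it preserves ordering on a sorted random sample:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1e6, 1e6, 10_000))   # nondecreasing float64 inputs
fx = x.astype(np.float32)                     # fl(.): round-to-nearest to binary32
# Monotonicity of rounding: x_i <= x_j implies fl(x_i) <= fl(x_j),
# so the rounded sequence must still be nondecreasing.
assert np.all(np.diff(fx) >= 0)
```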

Addition of two operands
Now we look at f_add(x, y) = fl(x + y) where, as per IEEE 754, x + y is computed as though in infinite-precision arithmetic and then rounded to the nearest value in some F. Take a, b, and c > b, and set s_1 = a + b and s_2 = a + c; then s_1 < s_2. Using the same approach as in Section 2.2.1 we can show that fl(s_1) ≤ fl(s_2), and therefore that the addition of two operands in IEEE 754 arithmetics is also a monotonic function.
Theorem 2.1. Addition of two operands, fl(x + y) with x, y ∈ R, computed using the addition operation as defined in IEEE 754, is monotonic with round-to-nearest, round-toward-zero, round-toward-negative, and round-toward-positive.
Proof. Since addition in IEEE 754 arithmetics is first performed as though in infinite precision, the computed quantities are fl(s_1) and fl(s_2). Since s_1 < s_2, the reasoning is equivalent to that of Section 2.2.1 and fl(s_1) ≤ fl(s_2).

Addition of three or more operands
Multi-term addition in IEEE 754 floating-point arithmetics is also monotonic. Let x_1, ..., x_n ∈ R. Consider fl(fl(x_1 + x_2) + x_3). Take x_3 = a and set s_1 = fl(x_1 + x_2) + a. Then take x_3 = a + ε, with ε > ulp(a)/2, where ulp(a) = 2^{e_a−p+1} is the size of the gap between a and the following floating-point number, and set s_2 = fl(x_1 + x_2) + a + ε. Since a + ε > a, we get s_1 < s_2, and we can show that fl(fl(x_1 + x_2) + x_3) is monotonic by showing that fl(s_1) ≤ fl(s_2), using the reasoning in Section 2.2.2.
This can be repeated to show that fl(fl(· · · fl(x_1 + x_2) + · · ·) + x_n) is also monotonic, including when any of the addends is increased, not necessarily the last one.
Theorem 2.2. Summation \sum_{i=1}^{n} x_i, with x_i ∈ R and n ≥ 3, computed using the floating-point addition operation as defined in IEEE 754, is monotonic with round-to-nearest, round-toward-zero, round-toward-negative, and round-toward-positive, in any ordering, such as fl(fl(· · · fl(x_1 + x_2) + · · ·) + x_n).

Proof. First, consider increasing the last addend x_n. We can define the partial sum of the first n − 1 addends as s_p = fl(fl(· · · fl(x_1 + x_2) + · · ·) + x_{n−1}). Then consider the two cases s_1 = fl(s_p + x_n) and s_2 = fl(s_p + (x_n + ε)). Then s_1 ≤ s_2 by the result in Section 2.2.1.
Secondly, consider what happens when any of the addends x_i for 1 ≤ i ≤ n − 1 is increased before the sum is computed. Take j to be the index of the addend which we modify. Let x'_j = x_j + ε with ε > 0 and x'_i = x_i for i ≠ j, and define the partial sums as s_i = fl(s_{i−1} + x_i) and s'_i = fl(s'_{i−1} + x'_i), with s_1 = x_1 and s'_1 = x'_1. We need to prove that s'_n ≥ s_n. If i < j, then s'_i = s_i. Using Theorem 2.1 and the fact that x'_j ≥ x_j, we can conclude that s'_j = fl(s'_{j−1} + x'_j) = fl(s_{j−1} + x'_j) ≥ fl(s_{j−1} + x_j) = s_j. For j < i ≤ n, the result follows by induction: s'_i = fl(s'_{i−1} + x_i) ≥ fl(s_{i−1} + x_i) = s_i. The proof is analogous for other orderings of evaluation of the sum.
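Theorem 2.2 can be exercised numerically. The following illustrative Python test (names and parameters are our own) bumps a randomly chosen addend and checks that the recursively rounded binary32 sum never decreases:

```python
import numpy as np

def fl_sum(xs):
    """Recursive summation with a correctly rounded (RN) binary32
    addition at every step, as in Algorithm 1.1."""
    s = np.float32(0.0)
    for x in xs:
        s = np.float32(s) + np.float32(x)
    return np.float32(s)

rng = np.random.default_rng(1)
for _ in range(200):
    xs = rng.uniform(-1.0, 1.0, 16)
    j = rng.integers(16)
    bumped = xs.copy()
    bumped[j] += rng.uniform(0.0, 0.5)    # x_j' >= x_j
    # Theorem 2.2: the rounded recursive sum must not decrease.
    assert fl_sum(bumped) >= fl_sum(xs)
```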
An anonymous referee has pointed out that each ordering can be represented by a tree with vertices representing rounded operations. For each ordering we need two trees, one with and one without the ε update to one of the addends x_j. Similarly to the proof above, we can prove monotonicity for each operation and use induction to prove monotonicity at all levels of the tree as the expression is computed. This allows us to confirm that s'_n ≥ s_n, which lie at the roots of the trees.

Multiplication
Now we look at f_mul(x, y) = fl(x × y) where, as per IEEE 754, x × y is computed as though in infinite-precision arithmetic and then rounded to the nearest value in some F. Take a, b, and c > b, and set m_1 = a × b and m_2 = a × c. If a > 0, then m_1 < m_2 (multiplication is monotonically increasing). If a < 0 we have m_1 > m_2 (monotonically decreasing).
Theorem 2.3. Multiplication of two operands, fl(x × y) with x, y ∈ R, computed using the floating-point multiplication operation as defined in IEEE 754, is monotonic with round-to-nearest, round-toward-zero, round-toward-negative, and round-toward-positive.
Proof. Since multiplication in IEEE 754 arithmetics is first performed as though in infinite precision, the computed quantities are fl(m_1) and fl(m_2). Since m_1 < m_2, the reasoning is equivalent to that of Section 2.2.1 and fl(m_1) ≤ fl(m_2). The proof is analogous for a < 0, which gives fl(m_1) ≥ fl(m_2).
Theorem 2.4. The inner product of column vectors a, b ∈ R^n, a^T b, computed using the floating-point multiplication and addition operations as defined in IEEE 754 with round-to-nearest, round-toward-zero, round-toward-negative, and round-toward-positive, is monotonic for any ordering, such as fl(fl(· · · fl(fl(a_1 × b_1) + fl(a_2 × b_2)) + · · ·) + fl(a_n × b_n)).

Proof. The proof follows from the monotonicity of scalar multiplication (Theorem 2.3) and the monotonicity of the n-term sum (Theorem 2.2).
Since matrix multiplication is comprised of inner products, elementwise monotonicity follows from the monotonicity of the inner product operation. This proves the monotonicity properties of the units that lie in Class III (Section 1.2.3), which implement multi-term addition hardware to mimic the behaviour of IEEE 754, equivalent to normalizing and rounding after every operation.

Fused multi-term adders
Theorem 2.5. Summation using fused multi-term adders, which perform addition as though in infinite precision and then round once, is monotonic.
Proof. Since fused multi-term adders compute as though the overall sum is evaluated in infinite precision and rounded once, fl(x_1 + x_2 + · · · + x_n), they are monotonic due to the monotonicity of rounding, shown in Section 2.2.1. This proves the monotonicity properties of the Class I/II units (Sections 1.2.1 and 1.2.2).

RESULTS
In this section we prove a few results about the Class IV multi-term floating-point adders (Section 1.2.4).

Modified IEEE 754 arithmetics: addition without normalization
We need a modified floating-point addition model to describe Class IV multi-term addition units with precision growth. We take the normalized significand of a floating-point number to satisfy 2^{p−1} ≤ m < 2^p [1]. In the binary representation of m the binary point is defined to be between the first and second left-most bits of m [1]. We now consider a modified version of the IEEE 754 addition operation without this constraint, meaning that the normalization step in the addition is not performed. Specifically, we focus on the normalization that requires a right-shift of the significand by one step (Figure 2). Namely, instead of having one bit to the left of the binary point, we assume there are multiple bits for carries to propagate into when the result of the partial summation reaches or crosses a power of two. Equivalently, we can keep the normalization but add one bit of precision when the sum reaches the next power of two.
Take a, b ∈ R. If |a| < |b|, swap them, so that in general we assure |a| ≥ |b|. Define t = 2^{1+⌊log_2 |a|⌋}: this is the smallest power of two strictly greater than |a|, so that |a| < t. Then the adder with precision increase (which describes an adder without the right-shift normalization) can be defined as

flr(a + b) = fl_p(a + b) if |a + b| < t, and flr(a + b) = fl_{p+1}(a + b) if |a + b| ≥ t,   (2)

where fl_q denotes normalization and rounding to precision q. When we use this adder multiple times, for example to compute flr(flr(x_1 + x_2) + x_3), any precision increase in the first adder is propagated into the next adder, and n additions could grow the precision from p to p + ⌈log_2 n⌉; we call this precision growth after each addition in a multi-term summation gradual precision growth. In practice the final result may also be rounded to some desired target precision: fl(flr(x_1 + x_2)), fl(flr(flr(x_1 + x_2) + x_3)), and so on. As an aside, this double rounding can cause issues with the accuracy of the final result [36], [37], but it is not the cause of the non-monotonicity and is not addressed further in this paper. This model of addition does not capture the lack of left-shift normalization (Figure 2); that is not required for the purposes of this article.
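The modified adder can be prototyped exactly with rationals. The sketch below is our own illustrative model of Equation 2 (`rn` and `flr` are assumed helper names); it shows the gradual precision growth and the optional final rounding back to the starting precision:

```python
from fractions import Fraction
import math

def rn(x, p):
    """Round rational x to p significant bits, round-to-nearest ties-to-even."""
    if x == 0:
        return Fraction(0)
    e = math.floor(math.log2(abs(x)))
    ulp = Fraction(2) ** (e - p + 1)
    q, r = divmod(x, ulp)
    if 2 * r > ulp or (2 * r == ulp and q % 2 == 1):
        q += 1
    return q * ulp

def flr(a, b, p):
    """Adder without right-shift normalization (the model of Equation 2):
    if |a + b| reaches t, the power of two just above max(|a|, |b|), keep
    the carry bit by rounding at precision p + 1; otherwise round at p.
    Returns (sum, resulting precision)."""
    a, b = Fraction(a), Fraction(b)
    if abs(a) < abs(b):
        a, b = b, a
    t = Fraction(2) ** (1 + math.floor(math.log2(abs(a))))
    if abs(a + b) >= t:
        return rn(a + b, p + 1), p + 1
    return rn(a + b, p), p

# p = 4: the sum 7.5 + 0.875 = 8.375 crosses 8, so the unnormalized adder
# keeps 5 bits and returns 8.5; the optional final (double) rounding back
# to p = 4 gives 8.0.
s, p = flr(7.5, 0.875, 4)
assert (float(s), p) == (8.5, 5)
assert float(rn(s, 4)) == 8.0
```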
A similar device was used by Ashenhurst and Metropolis [38, p. 418] for error analysis of unnormalized floating-point arithmetic.

Monotonicity of the modified addition
It can be shown, using similar reasoning as in Section 2.2, that fl(flr(x + y)) and fl(flr(flr(x_1 + x_2) + x_3)) are monotonic.

Theorem 3.1. Addition of two operands, flr(x + y) with x ∈ R and y ∈ R, is monotonic with round-to-nearest, round-toward-zero, round-toward-negative, and round-toward-positive.
Proof. First, if the internal adder does not grow precision, the final rounding has no effect and the summations are monotonic, as shown in Section 2.2. Consider the monotonicity of fl(flr(x + y)) when the precision grows by one bit. Take s_1 = flr(a + b) and s_2 = flr(a + c) with c > b. Due to the monotonicity of rounding, s_1 ≤ s_2, and therefore fl(s_1) ≤ fl(s_2).

Theorem 3.2. Addition of three operands, flr(flr(x_1 + x_2) + x_3) with x_i ∈ R, is non-monotonic with round-to-nearest, round-toward-zero, and round-toward-negative (x_i > 0) or round-toward-positive (x_i < 0), except if the final rounding fl(flr(flr(x_1 + x_2) + x_3)) to the starting precision is performed.

However, as we now show, a sum that includes n > 3 terms computed with non-normalized additions modelled by Equation 2 can be non-monotonic in general.

Theorem 3.3. The multi-term sum flr(· · · flr(flr(x_1 + x_2) + x_3) · · · + x_n) with x_i ∈ R and n ≥ 4 is not monotonic with round-to-nearest, round-toward-zero, and round-toward-negative (x_i > 0) or round-toward-positive (x_i < 0), with and without the final rounding to the starting precision.
Corollary 3.3.1. The inner product of vectors a, b ∈ R^n with n ≥ 4, computed with the multi-term adder of Theorem 3.3 with the products fl(a_i × b_i) as the addends, is non-monotonic with round-to-nearest, round-toward-zero, and round-toward-negative (a_i × b_i > 0) or round-toward-positive (a_i × b_i < 0).
Proof. Set all elements of b to 1. Then the proof of Theorem 3.3 concludes this proof.

Corollary 3.3.2. Matrix–vector multiplication Ax and matrix–matrix multiplication AB, where A ∈ R^(m×n), x ∈ R^n, B ∈ R^(n×l), n ≥ 4, with round-to-nearest, round-toward-zero, and round-toward-negative (a_ik × b_kj > 0) or round-toward-positive (a_ik × b_kj < 0), are element-wise non-monotonic.
Proof. Each element of the output vector or matrix is computed by an inner product, so Corollary 3.3.1 applies.

Impact of the order of addition
Class IV multi-term adders align all input significands relative to the largest-magnitude addend. This is done so that all the significands can be right-shifted, with lower-order bits dropped or rounded. If the alignment were performed relative to an arbitrarily chosen addend, shifting both to the left and to the right would be required. When the shifts are to the left, the shifted bits cannot be dropped and would have to be preserved, which would introduce a high hardware cost similar to that of Class I/II adders: the number of bits to preserve from the left shifts equals the largest difference between the exponents of the addends.
After the significands are aligned relative to the exponent of the largest-magnitude addend, if there is no normalization of intermediate sums, the addition behaves like the addition of fixed-point values and is associative; the order in which the additions are performed does not impact the final result. Whether the addends are added in series or in parallel also has no impact on the final result.
When adding a series of positive floating-point values, doing so in increasing rather than decreasing order of absolute value reduces the worst-case error bound [3, Sec. 4.2]; this ordering may not reduce the actual error [39, Sec. 2]. When both positive and negative numbers are present, the decreasing order can yield better accuracy in the face of cancellation [39, Sec. 2]. However, the hardware starts at the largest-magnitude addend, as discussed above; it is therefore interesting to check the accuracy of the summation, which we do in the following section. Note that when no intermediate normalization is implemented, the addition is in effect growing precision, which may improve the accuracy.
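The ordering effect is easy to see in standard arithmetic. The following standalone example (ours, using ordinary binary64 recursive summation, not the paper's test suite) sums one large value and many small ones in both orders:

```python
# Ordering effect in plain binary64 recursive summation: when many small
# addends meet one large one, the increasing order lets the small values
# accumulate before they reach the large one.
xs = [1e16] + [1.0] * 1000

dec = 0.0
for x in sorted(xs, reverse=True):  # decreasing order of magnitude
    dec += x                        # each 1.0 is rounded away: ulp(1e16) = 2

inc = 0.0
for x in sorted(xs):                # increasing order of magnitude
    inc += x                        # the 1000 ones accumulate exactly first

print(dec == 1e16, inc == 1e16 + 1000)  # True True
```

In the decreasing order all thousand unit addends are lost to rounding; in the increasing order the final sum is exact.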

NUMERICAL EXPERIMENTS
We have simulated the Class IV multi-term adders with the gradual precision growth in MATLAB, using the custom-precision simulator CPFloat [40]. The code for reproducing the results is available.2

Order of addends and associativity
Figure 3 shows the results of summing vectors of pseudo-random binary16 (p = 11 and e_max = 15) values, with errors measured relative to the sum in binary64 arithmetic (s_n). The addends are pseudo-random numbers generated in MATLAB using the mrg32k3a random number generator with a seed of 500, in the range (0, 0.001). We vary n and perform summations with the recursive summation algorithm with addends in decreasing and increasing orders, as well as, in the original order, with 4-, 64-, 512-, and 1024-term multi-term adders. As expected, the decreasing ordering results in the largest errors. In most cases the multi-term adders are worse than, or very close to, recursive summation with increasing ordering of the addends. With the 1024-term adder, however, the sums become more accurate. This can be explained through precision growth: with the larger adders there is a possibility for more precision growth, which improves accuracy. In this experiment we observed precision to grow to a total of 19 bits, from the default 11.
In test_associativity.m we generated a vector of 64 pseudo-random binary16 values, randomly permuted it 10^4 times, and each time computed the sum in IEEE 754 arithmetic and in the model of a 64-term Class IV adder. Checking the range of the computed sums, we found that it is non-zero for the IEEE 754 arithmetic and zero for the 64-term Class IV adder, confirming non-associativity and associativity, respectively.
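A similar check can be sketched in Python with a simplified alignment model. This is our own sketch, not the MATLAB/CPFloat code: it assumes positive addends, truncation of the bits shifted out on alignment, and uses double-precision values as stand-ins for binary16 inputs; the function name `class4_sum` is ours:

```python
import math
import random

P = 11  # binary16-like significand precision

def class4_sum(xs, p=P):
    """Simplified Class IV multi-term adder for positive addends: align
    every significand to the largest exponent, truncate the bits shifted
    out on the right, and accumulate exactly (carries on the left kept)."""
    e_top = max(math.frexp(x)[1] for x in xs)
    total = 0
    for x in xs:
        f, e = math.frexp(x)
        m = int(f * (1 << 53))              # exact 53-bit significand of x
        total += m >> (53 - e + e_top - p)  # alignment shift, truncating
    return total * 2.0 ** (e_top - p)

random.seed(500)
xs = [random.uniform(1e-6, 1e-3) for _ in range(64)]
sums = {class4_sum(random.sample(xs, 64)) for _ in range(10_000)}
print(len(sums))  # 1: every permutation yields the same sum
```

Because each addend's contribution depends only on its own value and the largest exponent, the accumulated integer is order-independent, whereas ordinary recursive float summation of the same permutations typically produces several distinct sums.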

Monotonicity
For the purposes of demonstrating the non-monotonicity of the Class IV adders, we compute in three small floating-point systems: p = 3, e_max = 3; p = 4, e_max = 3; and p = 5, e_max = 4. We construct the sum so that severe non-monotonicity appears, as follows.
First, we set all x_i = 0.25 and then vary x_1 by changing it to the adjacent floating-point value towards +∞ until all representable values are covered. On each iteration we sum the values x_i with the two different addition models: the multi-term adder with precision growth (Class IV) and the IEEE 754 adder with normalization and rounding after each addition operation (Class III). We report the value of the sum as well as the relative error compared with the same sum performed in binary64 arithmetic. The results are plotted in Figure 4. Consider first the first column of diagrams in Figure 4. In the top diagram we see that from the beginning the sum saturates to some quantity and for a while stagnates with the IEEE 754 arithmetic, while the relative error starts increasing. When the sum reaches this point, all remaining addends (set to 0.25) are rounded down and do not contribute to the sum. With the multi-term adder this does not occur, because precision grows at the powers of two, allowing the sum to keep changing as x_1 is increased. At a certain point, when x_1 crosses the value at which the sum stagnates, the overall value of the sum becomes x_1 and the two arithmetics align. Around this point, non-monotonicity appears in the multi-term addition. As precision is increased (rows of the matrix of diagrams in Figure 4), the point at which the IEEE 754 sum stagnates, and the point at which the multi-term addition shows non-monotonicity, move to higher values of the sum.
The other columns in Figure 4 correspond to larger numbers of terms being added. The main observation is that with more terms the severity of the non-monotonicity increases, because a larger number of terms can grow the sum further in the range where the IEEE 754 arithmetic stagnates and the multi-term adder grows precision.

Where non-monotonicity can cause issues
In this subsection we discuss various algorithms that may be impacted by the non-monotonicity of floating-point arithmetic.
In a 1967 paper, Forsythe [41] notes that despite the lack of associativity and distributivity in floating-point addition and multiplication, one can do good analysis provided only that the arithmetic is monotonic. At that time monotonicity was not always preserved in arithmetics, as it later was after IEEE 754 was released: "Such properties seem elemental but they are extremely helpful. And they are surprisingly absent!" [41, Sec. 4].
Demmel, Dhillon, and Ren [42] explore the bisection algorithm for finding eigenvalues of real symmetric tridiagonal matrices; the algorithm relies on the monotonicity of a function count(x).

Computing the sums recursively left to right with IEEE 754 additions in order, we get √4, while computing them with the 8-term Class IV adder we get √−4 due to the non-monotonicity in the sum. The correct result, computed in binary64 arithmetic, is √2. Note that with the IEEE 754 binary32 arithmetic we can also change the order of the addends and impact the result: a different ordering gives √−4. However, as discussed, with the Class IV adder the order does not impact the final result, and this example results in √−4 irrespective of it; as a result, it cannot be fixed at the software layer by reordering.
In computing elementary functions, monotonicity is often one of the desired properties [43]. Silverstein et al. [44], in a report on the UNIX math library libm, note: "Monotonicity failures can cause problems, for example, in evaluating divided differences. The only function we have observed to violate monotonicity for C/SVR4 is lgamma. We would not be surprised to learn of others." This is of general interest, but we are not aware of a link between the non-monotonicity of multi-term adders and mathematical functions.
Interval arithmetic can be impacted by the non-monotonicity of floating-point arithmetic. Rump [45, Sec. 2.1] gives an example of computing lower and upper limits of the interval of a matrix multiplication of point matrices (lower and upper limits being equal) through the change of rounding modes in floating-point arithmetic. Similarly to Rump's example, in test_interval.m we compute the interval of a summation operation through round-toward-negative and round-toward-positive, for the lower and upper limits, respectively. First we sum a vector a = [16777216, 1, 1, 1, 1, 1, 1, 1], which yields the interval [16777216, 16777230]. We then decrease a_1 and do the same for b = [16777214, 1, 1, 1, 1, 1, 1, 1], which yields the interval [16777220, 16777222]. By decreasing one of the arguments we expect the interval to shift down the real axis, but the lower end shifts up and the interval becomes narrower, due to precision growth in the multi-term adder. As before, it is possible to cause a similar issue with the IEEE 754 arithmetic by changing the order of the addends.
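These interval end points can be reproduced with a short Python model of an 8-term Class IV adder. This is our own sketch, under stated assumptions: positive addends, binary32-like precision p = 24, and the directed rounding applied both on alignment and at the single final rounding:

```python
import math

P = 24  # binary32-like significand precision

def round_int(n, drop, mode):
    """Drop the low `drop` bits of non-negative integer n, rounding
    down ('RD') or up ('RU')."""
    if drop <= 0:
        return n << -drop
    q, r = divmod(n, 1 << drop)
    return q + 1 if (mode == 'RU' and r) else q

def class4_sum(xs, mode):
    """Class IV multi-term adder for positive addends: align every
    significand to the largest-magnitude addend, round the bits shifted
    out on the right, accumulate exactly (carries on the left are kept),
    then round the grown sum back to P bits once at the end."""
    e_top = max(math.frexp(x)[1] for x in xs)
    total = 0
    for x in xs:
        f, e = math.frexp(x)
        m = int(f * (1 << 53))  # exact 53-bit significand of x
        total += round_int(m, 53 - e + e_top - P, mode)
    extra = total.bit_length() - P  # bits of precision growth, if any
    if extra > 0:
        total = round_int(total, extra, mode) << extra
    return total * 2.0 ** (e_top - P)

a = [16777216.0] + [1.0] * 7
b = [16777214.0] + [1.0] * 7
print(class4_sum(a, 'RD'), class4_sum(a, 'RU'))  # 16777216.0 16777230.0
print(class4_sum(b, 'RD'), class4_sum(b, 'RU'))  # 16777220.0 16777222.0
```

Decreasing a_1 to b_1 moves the computed lower end up, from 16777216 to 16777220: for b the sum 16777221 needs 25 bits, and the final rounding of the grown sum produces the narrower, shifted-up interval.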

Possible solutions
Having gradual precision growth in the dot product can be beneficial for obtaining more accurate results. However, if monotonicity is needed, a straightforward solution would be to always normalize after each addition, stopping the gradual precision growth in the internal accumulator. This replicates the behaviour of a software implementation of summation based on standard IEEE 754 operations. The hardware cost would most likely increase substantially.
Another approach could be to detect where precision growth occurs, by monitoring the carry-out bits in the significands of the sums, and then to inform any further additions to cancel an appropriate number of bits from the bottom of the significand and not take them into account when adding. However, this is highly dependent on what particular implementations do, and it would probably impact the guard, round, and sticky bits, which may or may not be used in the intermediate additions inside the multi-term adder, depending on the rounding properties needed. The additional logic for this may make Class IV adders as expensive as Class I/II adders, but we leave this for a separate hardware-oriented study.
In summary: • Class I/II adders perform the summation as though only one rounding error is induced at the end; with these adders, monotonicity and associativity of summation are preserved.
• Class III adders perform a summation which numerically behaves as a software implementation with IEEE 754 addition operations, with normalization and rounding after each. With these adders, associativity is not preserved, but monotonicity is.
• With Class IV adders, monotonicity is not preserved, as we have demonstrated, but associativity is.
One can choose an appropriate implementation based on the numerical properties required in a particular architecture.

CONCLUSION
IEEE 754-2019 provides the following guidance for implementing multi-term summation or dot product operations (collectively called reduction operations) [1, Sec. 9.4]: "Language standards should define the following reduction operations for all supported arithmetic formats. Unlike the other operations in this standard, these operate on vectors of operands in one format and return a result in the same format. Implementations may associate in any order or evaluate in any wider format." In the present manuscript we analysed how various hardware designs in the literature, as well as available hardware, implement this. We demonstrated that there are four classes of implementation, each with different hardware complexity and numerical properties.
Focusing on Class IV multi-term addition, where the significand alignment of the summands is performed in limited precision and the sum is computed without normalization, all resulting in precision growth during the summation, we proved that non-monotonicity can occur. We also showed that Class I–III devices are not subject to this. Our results should assist in understanding the numerical differences between implementations and show what is needed in order to preserve the property of monotonicity in floating-point multi-term addition. This applies to dot product and matrix multiplication operations, which use multi-term addition. The results do not necessarily apply only to hardware, as the same low-level floating-point algorithms could be implemented in software, for example on devices that do not contain floating-point units.
Finally, our results indicate that monotonicity should be considered in the next iteration of the IEEE 754 standard and in the new standard of low-precision floating-point formats, P3109.3 Suitable recommendations for the reduction operations may need to be provided where the preservation of monotonicity is needed.

Algorithm 1: Given numbers S = {x_1, ..., x_n}, compute s_n = Σ_{i=1}^{n} x_i.
1: Repeat while S contains more than one element
2: From S, remove two numbers a and b
3: Put fl(a + b) to S
4: The remaining element in S is s_n

Associativity and distributivity of the basic arithmetic operations, when transitioning from exact to floating-point arithmetic, are not preserved [2, Sec. 2.6].

Proof.
Take a, b, and c, where b is a power of two, a is the floating-point number preceding b, and c is the floating-point number following b, such that a < b < c. We consider positive values, but the proof for negative values is analogous. Also, take ε = (c − b)/2. Then flr(b + ε) = b with RN, RD, and RZ, as is flr(flr(b + ε) + ε) = b. However, flr(a + ε) = b and precision grows by one bit. Due to the precision growth, flr(flr(a + ε) + ε) > b. Therefore monotonicity is not preserved. However, the final rounding gives fl(flr(flr(a + ε) + ε)) = b and overall the monotonicity is preserved. With RU, monotonicity is present even without the final rounding.

Since the sum evaluates to b when x_1 = b and to c when x_1 = a < b, we have shown that the 4-term sum in this modified arithmetic is non-monotonic. The final rounding would not change the result, because fl(flr(b + ε + ε)) = c, since 2ε is a value stored in the bits to the left of the rounding point.

Corollary 3.3.1. The inner product of vectors a, b ∈ R^n, fl(· · · flr(fl(a_1 × b_1) + fl(a_2 × b_2)) · · · + fl(a_n × b_n)), with n ≥ 4, is non-monotonic.
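The 4-term counterexample can be checked numerically. The sketch below is our own, using exact rationals, positive operands, and round-to-nearest ties-to-even; at p = 3 it takes b = 1, a = 7/8 (the predecessor of b), c = 5/4 (the successor of b), and ε = (c − b)/2 = 1/8:

```python
from fractions import Fraction

def rnd(x, p):
    """Round a positive rational x to p significant bits (ties-to-even)."""
    y, e = Fraction(x), 0
    while y >= 2:
        y /= 2
        e += 1
    while y < 1:
        y *= 2
        e -= 1
    ulp = Fraction(2) ** (e - p + 1)
    q, r = divmod(Fraction(x), ulp)
    if r > ulp / 2 or (r == ulp / 2 and q % 2):
        q += 1
    return q * ulp

def chain_sum(x1, addends, p):
    """Left-to-right sum with the no-right-shift adder: the working
    precision grows by one bit whenever a partial sum reaches or crosses
    the smallest power of two above the larger operand."""
    s = Fraction(x1)
    for x in addends:
        a = max(s, Fraction(x))
        t = Fraction(1)
        while t <= a:
            t *= 2
        while t / 2 > a:
            t /= 2
        if s + Fraction(x) >= t:
            p += 1  # gradual precision growth
        s = rnd(s + Fraction(x), p)
    return s

eps = Fraction(1, 8)
print(chain_sum(Fraction(1), [eps] * 3, 3))     # 1: the sum stays at b
print(chain_sum(Fraction(7, 8), [eps] * 3, 3))  # 5/4: smaller x1, larger sum
```

Starting from the smaller x_1 = 7/8 gives the larger sum c = 5/4; and since 5/4 is representable at p = 3, the final rounding leaves it unchanged, so the non-monotonicity of the 4-term sum survives the final rounding.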

Fig. 2. Example summation of three precision-p numbers (significands shown) in IEEE 754 arithmetic and in a Class IV multi-term adder without intermediate normalization and rounding. In the multi-term adder the carry bits on the left are kept, but the bits past precision p in the fraction are discarded; the normalization and rounding steps are performed at the end, after all the addition operations have been completed. We take p = 5. On the left is the case in which the significand grows and requires a right-shift to be normalized. On the right is a case with significant cancellation [2, p. 242], which requires multiple left-shifts to normalize. Notice that the former improves the accuracy of the second addition operation, while the latter makes it worse for the multi-term adder compared with the sum computed using IEEE 754 2-term addition operations. The IEEE 754 arithmetic uses round-to-nearest, ties-to-even, in this example. The monotonicity issue is caused by the lack of right-shift normalization, and Equation 2 models an adder that lacks only this normalization, in order to simplify the analysis.

Fig. 3. Summation of positive random binary16 vectors of increasing length. Three summation algorithms are used: recursive summation in increasing order of magnitude using the binary16 IEEE 754 arithmetic, recursive summation in decreasing order of magnitude using the binary16 IEEE 754 arithmetic, and recursive blocking summation using Class IV multi-term adders of various sizes. Relative errors are measured by comparing with the summation of the same values in binary64 arithmetic.

Fig. 4. Summation with various floating-point formats and two types of addition: multi-term addition with precision growth (Class IV) and IEEE 754 addition with normalization and rounding after each operation. The value of the sum (top of each subdiagram) and the relative error (bottom) compared with the sum performed in binary64 arithmetic. The x-axis corresponds to the quantity x_1, which we vary, with the rest of the x_i = 0.25.

Consider solving the quadratic equation ax^2 − 2bx + c = 0. Computing the expression √(b^2 − ac) is required, and if b^2 = ac but fl(b^2) < b^2, computing fl(fl(b^2) − ac) with an FMA can result in a negative number being passed into the square root function. While this does not require a 4-term or wider adder, which is what we explore in this paper, if an expression Σ_{i=1}^{n} a_i − Σ_{i=1}^{n} b_i, where a and b are vectors, which uses the multi-term adder twice and then subtracts the sums, appears in applications, a similar issue to that discussed by Higham can appear. We provide an example in test_sqrt.m. We use a binary32 arithmetic with p = 24 and e_max = 127, and n = 8. We then take a = [1, 1, 1, 1, 1, 1, 1, 16777216] and b = [1, 1, 1, 1, 1, 1, 1, 16777214].
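The binary32 example can be imitated with a small Python model. This is our own sketch, assuming positive addends, p = 24, and round-to-nearest ties-to-even applied on alignment and on the single final rounding:

```python
import math

P = 24  # binary32-like significand precision

def rn_int(n, drop):
    """Round non-negative integer n after dropping `drop` low bits,
    round-to-nearest, ties-to-even."""
    if drop <= 0:
        return n << -drop
    q, r = divmod(n, 1 << drop)
    half = 1 << (drop - 1)
    if r > half or (r == half and q & 1):
        q += 1
    return q

def class4_sum_rn(xs):
    """Class IV multi-term adder model: align to the largest addend,
    round the shifted-out bits, accumulate exactly (carries kept),
    normalize and round once at the end."""
    e_top = max(math.frexp(x)[1] for x in xs)
    total = 0
    for x in xs:
        f, e = math.frexp(x)
        total += rn_int(int(f * (1 << 53)), 53 - e + e_top - P)
    extra = total.bit_length() - P  # precision growth past P bits
    if extra > 0:
        total = rn_int(total, extra) << extra
    return total * 2.0 ** (e_top - P)

a = [1.0] * 7 + [16777216.0]
b = [1.0] * 7 + [16777214.0]
print(class4_sum_rn(a) - class4_sum_rn(b))  # -4.0
```

The difference of the two 8-term sums comes out as −4 rather than the exact 2, so the subsequent square root would receive a negative argument, matching the √−4 outcome described above.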
‡ Configurable Floating Point 8-bit data type, with programmable bias.
significand, and e ∈ [e_min, e_max], with e_min = 1 − e_max, is the integer exponent. In order for x to have a unique representation, the number system is normalized so that the most significant bit of m is set to 1 if |x| ≥ 2^(e_min). Therefore, all floating-point numbers with m ≥ 2^(p−1) are normalized. Numbers below the smallest normalized number 2^(e_min) in absolute value are called subnormal numbers, and are such that e = e_min and 0 < m < 2^(p−1). The set of floating-point numbers is denoted by F.
A non-monotonic implementation could yield (count(x_b) − count(x_a)) < 0 over a specific range x_a to x_b with x_b > x_a. We have shown that non-monotonicity can appear in the three-term adder if no final rounding is performed (Theorem 3.2), which would invalidate Assumption 1A [42, p. 120] and introduce a source of non-monotonicity in an implementation of count(x).