A single format for the representation of numbers in a computer is proposed to accommodate both exact and inexact quantities. A consistent set of rules is described for addition (subtraction), multiplication, and division of such quantities, both within their separate types and in combination. Error correlation aside, the propagation of inherent errors is monitored in operations involving at least one imprecise value. A definitive algorithm must, of course, take into account any correlations of inherent errors; these correlations must be recognized and incorporated into the algorithm by the numerical analyst, not by the logical designer of the computer.
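The scheme described above can be sketched in a few lines of code. The following is a minimal illustration, not the paper's actual algorithm: a `Quantity` type (a name chosen here for illustration) carries a value and an optional absolute error bound, where a missing bound marks an exact quantity. Each arithmetic rule propagates a first-order, worst-case error bound, assuming uncorrelated errors, which matches the caveat that correlations are left to the numerical analyst.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Quantity:
    """A number with an optional absolute error bound; error=None means exact."""
    value: float
    error: Optional[float] = None  # None marks an exact quantity

    def _err(self) -> float:
        # Treat an exact quantity as having zero inherent error.
        return self.error if self.error is not None else 0.0

    def _wrap(self, value: float, error: float) -> "Quantity":
        # The result is exact only when both operands were exact.
        return Quantity(value, None if error == 0.0 else error)

    def __add__(self, other: "Quantity") -> "Quantity":
        # Worst-case bound for uncorrelated errors: absolute errors add.
        return self._wrap(self.value + other.value, self._err() + other._err())

    def __sub__(self, other: "Quantity") -> "Quantity":
        # Subtraction propagates the same absolute bound as addition.
        return self._wrap(self.value - other.value, self._err() + other._err())

    def __mul__(self, other: "Quantity") -> "Quantity":
        # First-order bound: |a|*db + |b|*da.
        err = abs(self.value) * other._err() + abs(other.value) * self._err()
        return self._wrap(self.value * other.value, err)

    def __truediv__(self, other: "Quantity") -> "Quantity":
        # First-order bound for a/b: (da + |a/b|*db) / |b|.
        q = self.value / other.value
        err = (self._err() + abs(q) * other._err()) / abs(other.value)
        return self._wrap(q, err)

exact = Quantity(2.0)         # exact quantity
approx = Quantity(3.0, 0.1)   # inexact quantity, bound 0.1

print(exact + approx)   # error bound 0.1 carries through unchanged
print(exact * approx)   # error bound scales by the exact factor
print(exact + exact)    # exact op exact stays exact
```

Note how the two types mix under one set of rules: an exact operand contributes zero error, so exactness survives only when both operands are exact, while any operation involving an imprecise value yields an imprecise result whose bound the arithmetic itself maintains.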