Explore the mysteries of floating point arithmetic. This class readily reproduces textbook examples and provides immediate demonstrations of representation error, loss of precision (subtractive cancellation), and the failure of the distributive, commutative, and associative laws. Beyond textbook work, this class is also useful for exploring the numerical robustness of alternative algorithms.

```
# Note: this recipe targets Python 2 (it relies on __coerce__, __cmp__,
# __div__, xrange, and print statements, all of which changed in Python 3).

prec = 8            # number of decimal digits (must be under 15)

class F:
    def __init__(self, value, full=None):
        self.value = float('%.*e' % (prec - 1, value))
        if full is None:
            full = self.value
        self.full = full

    def __str__(self):
        return str(self.value)

    def __repr__(self):
        return "F(%s, %r)" % (self, self.full)

    def error(self):
        ulp = float('1' + ('%.4e' % self.value)[-5:]) * 10 ** (1 - prec)
        return int(abs(self.value - self.full) / ulp)

    def __coerce__(self, other):
        if not isinstance(other, F):
            return (self, F(other))
        return (self, other)

    def __add__(self, other):
        return F(self.value + other.value, self.full + other.full)

    def __sub__(self, other):
        return F(self.value - other.value, self.full - other.full)

    def __mul__(self, other):
        return F(self.value * other.value, self.full * other.full)

    def __div__(self, other):
        return F(self.value / other.value, self.full / other.full)

    def __neg__(self):
        return F(-self.value, -self.full)

    def __abs__(self):
        return F(abs(self.value), abs(self.full))

    def __pow__(self, other):
        return F(pow(self.value, other.value), pow(self.full, other.full))

    def __cmp__(self, other):
        return cmp(self.value, other.value)

# Example: Show failure of the associative law (Knuth Vol. 2 p.214)
u, v, w = F(11111113), F(-11111111), F(7.51111111)
assert (u+v)+w == 9.5111111
assert u+(v+w) == 10

# Example: Show failure of the commutative law for addition
assert u+v+w != v+w+u

# Example: Show failure of the distributive law (Knuth Vol. 2 p.215)
u, v, w = F(20000), F(-6), F(6.0000003)
assert u*v == -120000
assert u*w == 120000.01
assert v+w == .0000003
assert (u*v) + (u*w) == .01
assert u * (v+w) == .006

# Example: Compare numerical accuracy of three approaches to computing averages
def avgsum(data):           # Sum all of the elements
    return sum(data, F(0)) / len(data)

def avgrun(data):           # Make small adjustments to a running mean
    m = data[0]
    k = 1
    for x in data[1:]:
        k += 1
        m += (x-m)/k        # Recurrence formula for mean
    return m

def avgrun_kahan(data):     # Adjustment method with Kahan error correction term
    m = data[0]
    k = 1
    dm = 0
    for x in data[1:]:
        k += 1
        adjm = (x-m)/k - dm
        newm = m + adjm
        dm = (newm - m) - adjm
        m = newm
    return m

import random
prec = 5
data = [F(random.random()*10 - 5) for i in xrange(1000)]

print '%s\t%s\t%s' % ('Computed', 'ULP Error', 'Method')
print '%s\t%s\t%s' % ('--------', '---------', '------')
for f in avgsum, avgrun, avgrun_kahan:
    result = f(data)
    print '%s\t%6d\t\t%s' % (result, result.error(), f.__name__)
print '\n%r\tbaseline average using full precision' % result.full

# Sample output from the above example
#
# Computed      ULP Error   Method
# --------      ---------   ------
# -0.020086        15       avgsum
# -0.020061         9       avgrun
# -0.020072         1       avgrun_kahan
#
# -0.020070327734999997   baseline average using full precision
```

The constructor rounds the input value to the given precision. This rounding is called representation error because the result is the nearest representable value for the specified number of digits. For instance, at three digits of precision, 3.527104 is stored as 3.53, so the representation error is 0.002896.
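The rounding step can be checked on its own with plain Python 3 floats (a minimal sketch; `represent` is a hypothetical helper mirroring what the constructor does, not part of the recipe):

```python
# Sketch of the rounding inside F.__init__, using the same '%.*e' trick.
prec = 3  # three decimal digits of precision

def represent(value):
    """Round value to `prec` significant decimal digits."""
    return float('%.*e' % (prec - 1, value))

stored = represent(3.527104)
print(stored)                    # 3.53
print(abs(stored - 3.527104))    # representation error, about 0.002896
```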

Since the underlying representation is regular floating point, the demonstration only works up to 15 digits (the typical limit for double-precision floating point).
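That ceiling can be seen directly in Python 3 (a quick sketch using the same `'%.*e'` formatting trick, independent of the recipe's code):

```python
# Doubles carry roughly 15-17 significant decimal digits, so rounding 1/3
# to 15 digits still loses information, while 17 digits reproduce it exactly.
x = 1 / 3
print(float('%.14e' % x) == x)   # False: 15 significant digits still round
print(float('%.16e' % x) == x)   # True: 17 significant digits are exact
```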

The last example demonstrates how to extract a full precision floating point result from an instance of F using inst.full. This is helpful for running an algorithm to full precision as a baseline for seeing the effects of using less precision.

Note, the full precision result excludes the representation error in the original inputs. For example, with prec = 3 and d = F(3.8761) / F(2.7181), d.full gives 1.4264705882352939, which is the same result as a regular division starting from the nearest representable values, 3.88 / 2.72. This is helpful because it isolates accumulated floating point operation errors from artifacts of the original data entry. This is reasonable since real floating point systems have to start with representable constants; however, if the original representation error needs to be tracked, enter the number twice in the constructor: F(2.7181, 2.7181).
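The division example can be verified with plain Python 3 floats (a sketch; `rep` is a hypothetical stand-in for the constructor's rounding):

```python
prec = 3  # three decimal digits, as in the text

def rep(x):
    """Round x to `prec` significant decimal digits."""
    return float('%.*e' % (prec - 1, x))

# The baseline division starts from the nearest representable inputs ...
print(rep(3.8761), rep(2.7181))                    # 3.88 2.72
# ... so it matches 3.88 / 2.72, not the raw 3.8761 / 2.7181.
print(rep(3.8761) / rep(2.7181) == 3.88 / 2.72)    # True
```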

Calling d.error() returns 0, which is the accumulated difference between the low-precision result and the full-precision result, expressed in terms of units in the last place (ULP).

This is a nice, lightweight, and easy-to-follow class. As an exercise and demonstration for my numerical analysis class, I created SimFloat, a sophisticated simulator of the many styles of IEEE-compatible floating- and fixed-point numbers, with an API similar to that of the Decimal module. This may be of interest to advanced readers. More details are available in this announcement. Due to its relatively large size, the code is hosted here and in the math sub-package of econpy.