This class permits first-order error propagation for numeric values. It wraps a value and an associated error (standard deviation, measurement uncertainty, ...). Numeric operations are overloaded and work with other Uncertain objects, with exact values that carry no error, and, generally speaking, with most other numeric (even array) objects. The "traitification" can easily be reverted by dropping all references to traits.
Python, 90 lines
#!/usr/bin/python
# -*- coding: utf8 -*-
# Uncertain quantities.
# (c) Robert Jordens <email@example.com>
# Made available freely under the Python license

import numpy as np
from enthought.traits.api import HasStrictTraits
from enthought.traits.api import Str, Float, Bool, List, Dict, Enum

class Uncertain(HasStrictTraits):
    """
    Represents a numeric value with a known small uncertainty
    (error, standard deviation...).
    Numeric operators are overloaded to work with other Uncertain
    or numeric objects.
    The uncertainty (error) must be small. Otherwise the
    linearization employed here becomes wrong.
    The usage of traits can easily be dumped.
    """
    value = Float
    error = Float(0.)

    def __init__(self, value=0., error=0., *a, **t):
        self.value = value
        self.error = abs(error)
        super(Uncertain, self).__init__(*a, **t)

    def __str__(self):
        return "%g+-%g" % (self.value, self.error)

    def __repr__(self):
        return "Uncertain(%s, %s)" % (self.value, self.error)

    def __float__(self):
        return self.value

    def assign(self, other):
        if isinstance(other, Uncertain):
            self.value = other.value
            self.error = other.error
        else:
            self.value = other
            self.error = 0.

    def __abs__(self):
        return Uncertain(abs(self.value), self.error)

    def __add__(self, other):
        if isinstance(other, Uncertain):
            v = self.value + other.value
            e = (self.error**2 + other.error**2)**.5
            return Uncertain(v, e)
        else:
            return Uncertain(self.value + other, self.error)

    def __radd__(self, other):
        return self + other  # __add__

    def __sub__(self, other):
        return self + (-other)  # other.__neg__ and __add__

    def __rsub__(self, other):
        return -self + other  # __neg__ and __add__

    def __mul__(self, other):
        if isinstance(other, Uncertain):
            v = self.value * other.value
            e = ((self.error*other.value)**2 +
                 (other.error*self.value)**2)**.5
            return Uncertain(v, e)
        else:
            return Uncertain(self.value*other, self.error*other)

    def __rmul__(self, other):
        return self * other  # __mul__

    def __neg__(self):
        return self*-1  # __mul__

    def __pos__(self):
        return self

    def __div__(self, other):
        return self*(1./other)  # other.__div__ and __mul__

    def __rdiv__(self, other):
        return (self/other)**-1.  # __pow__ and __div__

    def __pow__(self, other):
        if isinstance(other, Uncertain):
            v = self.value**other.value
            e = ((self.error*other.value*self.value**(other.value - 1.))**2 +
                 (other.error*np.log(self.value)*self.value**other.value)**2)**.5
            return Uncertain(v, e)
        else:
            return Uncertain(self.value**other,
                             self.error*other*self.value**(other - 1))

    def __rpow__(self, other):
        # otherwise other.__pow__ would have been called
        assert not isinstance(other, Uncertain)
        return Uncertain(other**self.value,
                         self.error*np.log(other)*other**self.value)

    def exp(self):
        return np.e**self

    def log(self):
        return Uncertain(np.log(self.value), self.error/self.value)
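Independent of the traits machinery, the quadrature rules the class applies to uncorrelated, small errors can be checked in isolation. A minimal sketch (the helper names are mine, not part of the recipe):

```python
# Quadrature propagation rules for independent quantities, as used by
# Uncertain.__add__ and Uncertain.__mul__ (sketch without the class).
def add_err(ea, eb):
    # z = a + b  ->  e_z = sqrt(ea^2 + eb^2)
    return (ea**2 + eb**2) ** 0.5

def mul_err(a, ea, b, eb):
    # z = a * b  ->  e_z = sqrt((ea*b)^2 + (eb*a)^2)
    return ((ea * b)**2 + (eb * a)**2) ** 0.5

print(add_err(3.0, 4.0))            # 5.0
print(mul_err(2.0, 0.1, 3.0, 0.2))  # ~0.5
```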
- error propagation formulas are probably correct
- error values are in units of the value, not in percent
- many operations are delegated to other, simpler or known operations
- IMO this class is a significant improvement over the classes presented in:
- Python Cookbook, 2nd Edition, Recipe 18.14 and
I'd like to suggest my uncertainties module, which has many advantages over the above recipe:
It gives a correct result for x-x, when x has an uncertainty: the correct result is exactly zero, contrary to what the above recipe gives. More generally, all the correlations between the various variables of a calculation are taken into account.
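The x-x failure is easy to reproduce with the recipe's quadrature rule. A stripped-down stand-in (no traits, subtraction only, written here for illustration):

```python
# Minimal stand-in for the recipe's propagation, subtraction only.
class Naive:
    def __init__(self, value, error):
        self.value, self.error = value, abs(error)

    def __sub__(self, other):
        # quadrature treats the operands as independent -- wrong for x - x
        return Naive(self.value - other.value,
                     (self.error**2 + other.error**2) ** 0.5)

x = Naive(1.0, 0.1)
d = x - x
print(d.value, d.error)  # 0.0 with error ~0.141, instead of the exact 0 +- 0
```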
uncertainties.py allows many more mathematical functions to be used (almost all the functions from the standard math module), including matrix inversion with NumPy. Logical operations can still be used as well (x == y, etc.).
… and I'd like to add that the uncertainties module does not depend on any other module, which makes it easier to deploy!
Eric, you are absolutely right. Correlations (as in x-x) make the above module fail miserably. On the other hand, people tend to forget that correlations also frequently appear hidden in, e.g., the measuring process. Then all approaches that ignore the provenance of the quantities fail ;-)
I'll propose one other approach: symbolic calculation with sympy. Short and concise:
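The original snippet is not shown here; the following is my reconstruction of what such a SymPy-based calculation might look like. The stdev helper and its signature are assumptions, not the original code; it does plain first-order (linear) propagation via symbolic partial derivatives:

```python
# Sketch: first-order error propagation with SymPy partial derivatives.
# `stdev` and its signature are hypothetical, not the original snippet.
import sympy

def stdev(expr, data):
    """data maps each sympy Symbol to a (value, error) pair."""
    values = {s: v for s, (v, e) in data.items()}
    variance = sum((sympy.diff(expr, s).subs(values) * e) ** 2
                   for s, (v, e) in data.items())
    return sympy.sqrt(variance)

x, y = sympy.symbols("x y")
data = {x: (2.0, 0.1), y: (3.0, 0.2)}
print(float(stdev(x * y, data)))  # 0.5
print(float(stdev(x - x, data)))  # 0.0 -- correlations handled exactly
```

Because x - x simplifies symbolically to zero before any derivative is taken, correlations between identical variables are handled for free.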
That'll be as good as your module with regards to correlations. It can even do crazy stuff like integrals...
Then I'd also like to propose a __str__ method for all Uncertain implementations. It correctly truncates and rounds the representations of value and error, and also handles large and small quantities that need exponential notation.
A snippet from the testcases:
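The test cases themselves are not reproduced here. As an illustration of the rounding idea only, a sketch of such a formatter (names and signature are mine; the exponential-notation branch for very large or small quantities is omitted):

```python
import math

def format_uncertain(value, error, sig=2):
    """Hypothetical sketch: round `error` to `sig` significant digits
    and truncate `value` to the same decimal place."""
    if error == 0:
        return "%g" % value
    # decimal position of the error's leading digit
    exp = int(math.floor(math.log10(abs(error))))
    digits = sig - 1 - exp  # number of decimals to keep
    if digits > 0:
        return "%.*f+-%.*f" % (digits, value, digits, error)
    return "%g+-%g" % (round(value, digits), round(error, digits))

print(format_uncertain(1.23456, 0.0234))  # 1.235+-0.023
```

The key design point is that the error, not the value, fixes the number of digits shown: the value is then rounded to the error's last significant place.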
Very interesting comments, Robert!
Your Sympy approach is quite simple. One problem is that it forces the user to keep track of variables. That's something that the uncertainties package does completely transparently (the equivalent to stdev(t, "x", "y") would directly be t.std_dev(), for instance).
Your printing routine produces really nice results! It's making me think that including something similar in the uncertainties package would be a good idea. :)
PS: The printing routine above yields surprising results for 0.1±1e-50, and for 0.1±1e-200 the printing fails outright.
These are admittedly difficult cases, though… In any case, your printing routine contains many interesting ideas. :)