
broken_test_XXX(reason) is a decorator for "inverting" the sense of the following unit test. Such tests will succeed where they would have failed (or failed because of a raised exception), and fail if the decorated test succeeds.

Python, 82 lines
import unittest
import types

class BrokenTest(unittest.TestCase.failureException):
    def __repr__(self):
        name, reason = self.args
        return '%s: %s: %s works now' % (
            self.__class__.__name__, name, reason)

def broken_test_XXX(reason, *exceptions):
    '''Mark a failing (or erroneous) test case that should succeed.
    If the test fails with an exception, list the exception type(s) in args.'''
    def wrapper(test_method):
        def replacement(*args, **kwargs):
            try:
                test_method(*args, **kwargs)
            except exceptions or unittest.TestCase.failureException:
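                # A listed exception (or, if none were listed, a plain
                # assertion failure) means the test is still broken, as expected.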
                pass
            else:
                raise BrokenTest(test_method.__name__, reason)
        replacement.__doc__ = test_method.__doc__
        replacement.__name__ = 'XXX_' + test_method.__name__
        replacement.todo = reason
        return replacement
    return wrapper

def find_broken_tests(module):
    '''Generate class, methodname for test cases marked "broken".'''
    for class_name in dir(module):
        class_ = getattr(module, class_name)
        if (isinstance(class_, (type, types.ClassType)) and
                      issubclass(class_, unittest.TestCase)):
            for test_name in dir(class_):
                if test_name.startswith('test'):
                    test = getattr(class_, test_name)
                    if (hasattr(test, '__name__') and
                                test.__name__.startswith('XXX_')):
                        yield class_, test_name

#######################################
##### Typical use in a test suite:
import unittest

class SillyTestCase(unittest.TestCase):
    def test_one(self):
        self.assertEqual(2 + 2, 4)

    def test_two(self):
        self.assertEqual(2 * 2, 4)

    @broken_test_XXX('arithmetic might change')
    def test_three(self):
        self.assertEqual(2 - 2, 4)

    @broken_test_XXX('exception failure demo', TypeError)
    def test_four(self):
        real, imaginary = 2 - 2j
        self.assertEqual(real, imaginary)

    @broken_test_XXX('exception failure demo', TypeError,
                             unittest.TestCase.failureException)
    def test_five(self):
        value = 2 - 2j
        real, imaginary = value.real, value.imag
        self.assertEqual(real, imaginary)


if __name__ == '__main__':
    # Typical report generation for a large suite
    import sys
    import sometests  # which may import other tests
    import moretests  # and so on.

    for module_name, module in sys.modules.iteritems():
        for class_, message in find_broken_tests(module):
            if module_name:
                print '\nIn module', module_name
                module_name = last_class = None
            if class_ != last_class:
                print '\nclass %s:' % class_.__name__
                last_class = class_
            print '  ', message, '\t', getattr(class_, message).todo

When maintaining a large body of unit tests (as Python itself does), it can be useful to keep a test that should pass but does not yet. Such tests can be marked with the broken_test_XXX decorator to record why the test is broken. Any test so decorated will pass if (and only if) the underlying test fails or raises one of the exceptions given in the decorator's args, and it will fail once the underlying problem is fixed. At that point you can remove the decoration and have a good test in your suite.

The value of such a decoration is in large bodies of work where time-to-fix is not always available when a problem has been diagnosed. It is valuable to capture a good test of the intended functionality even if the repair associated with that test is not yet available. This decoration and test case can be seen as a more precise (and verifiable) version of a bug report.

By incorporating the standard 'XXX' indicator in both the name of the decorator and in the __name__ attribute of the wrapped test, searching for such marks is easy.

When specifying an exception you expect to be raised (as in test_four above), a plain failure of the underlying test will _not_ pass silently. If you want it to, include the failure exception alongside the others in the broken_test_XXX decoration, as shown in the decoration on test_five above.

The "if __name__ =='__main__':" section of the code above provides an example of using the find_broken_tests function to generate reports of "known broken" tests in multi-module suites.

3 comments

Scott David Daniels (author) 18 years, 3 months ago

Credits that belong above. Sorry, didn't mention this in the original: the idea for this code began in a discussion on comp.python.devel, and in particular ideas from Martin v. Löwis and James Y Knight.

Nick Coghlan 18 years, 3 months ago

Check for exception arguments at test definition time, not run time. While embedding the 'or' in the except clause header is cute, it's also a little subtle and obscure, and it causes the check to be executed every time the test is run, rather than when the test is defined.

This can be fixed with a straightforward test in the decorator function (outside the nested helper):

if not exceptions:
    exceptions = unittest.TestCase.failureException
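
Following that suggestion, the decorator might be written roughly as below (an illustrative sketch, not part of the original recipe or of this comment):

def broken_test_XXX(reason, *exceptions):
    '''Mark a failing (or erroneous) test case that should succeed.'''
    # Resolve the default once, when the test is defined.
    if not exceptions:
        exceptions = (unittest.TestCase.failureException,)
    def wrapper(test_method):
        def replacement(*args, **kwargs):
            try:
                test_method(*args, **kwargs)
            except exceptions:
                pass
            else:
                raise BrokenTest(test_method.__name__, reason)
        replacement.__doc__ = test_method.__doc__
        replacement.__name__ = 'XXX_' + test_method.__name__
        replacement.todo = reason
        return replacement
    return wrapper
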
Ori Peleg 18 years, 3 months ago

Cool! I like the concept, thanks!