Code which runs across your application and validates its behavior
#!/usr/bin/env python

def func():
    jfdkls  # NameError waiting to happen -- but only if func() is ever called

while True:
    print "> ",
    if raw_input() == 'x':
        func()
What should be tested?
The percentage of your code that gets exercised by your tests is known as coverage.
100% coverage is an ideal to strive for, but tests require maintenance and time, so weigh thoroughness against that cost
Unit testing tools
http://docs.python.org/2/library/unittest.html
import unittest

class TestTest(unittest.TestCase):

    def setUp(self):
        pass

    def test_add(self):
        self.assertEqual(2 + 2, 4)

    def test_len(self):
        self.assertEqual(len('foo'), 3)

Call unittest.main() right in your module:

if __name__ == "__main__":
    unittest.main()
TestCase contains a number of methods named assert* which can be used for validation; here are a few common ones:
assertEqual(first, second)
assertNotEqual(first, second)
assertTrue(expr)
assertFalse(expr)
assertIn(member, container)
assertRaises(exception, callable, *args, **kwargs)
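A quick sketch exercising several of these in one test (the class and method names here are illustrative, not from the course examples):

```python
import unittest

class TestAsserts(unittest.TestCase):
    """Illustrative TestCase touching several common assert* methods."""

    def test_asserts(self):
        self.assertEqual('foo'.upper(), 'FOO')
        self.assertNotEqual('foo', 'bar')
        self.assertTrue(1 < 2)
        self.assertFalse(1 > 2)
        self.assertIn('o', 'foo')
        # assertRaises verifies that the callable raises the given exception
        self.assertRaises(ZeroDivisionError, lambda: 1 / 0)
```

Run it with unittest.main() as shown earlier, or point a test runner at the file.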
Modify one test method of /examples/week-01/unittest_ex/example1.py:
Fixtures can be set up fresh before each test for reusable contexts
unittest provides fixture support via these methods:
setUp / tearDown (run around each test method)
setUpClass / tearDownClass (run once per TestCase class)
setUpModule / tearDownModule (run once per module)
Let's play with the code in /examples/week-01/unittest_ex/example3_flow.py
Why can't we just test if 3*.15 == .45 ?
In [19]: 3*.15 == .45
Out[19]: False
In [24]: 3*.15 * 10 / 10 == .45
Out[24]: True
Real numbers can require infinite precision
Floats are stored as finite approximations in computing hardware
Floating point numbers are stored in IEEE 754 64-bit double precision format, which allows 1 bit for the sign, 11 bits for the exponent, and the remaining 52 for the fraction
So we can count on 16 digits of precision in decimal:
In [39]: len(str(2**52))
Out[39]: 16
In [40]: .1+.2
Out[40]: 0.30000000000000004
In [41]: len('3000000000000000')
Out[41]: 16
# we can visualize the approximation
In [2]: "%.20f" % 0.1
Out[2]: '0.10000000000000000555'
# with repeated operations, errors build up
In [43]: sum([0.1 for i in range(10)])
Out[43]: 0.9999999999999999
assertAlmostEqual verifies that two floating point values are close enough
The places kwarg specifies the number of decimal places to round to before comparing (the default is 7)
import unittest

class TestAlmostEqual(unittest.TestCase):

    def setUp(self):
        pass

    def test_floating_point(self):
        self.assertEqual(3*.15, .45)  # fails!

    def test_almost_equal(self):
        self.assertAlmostEqual(3*.15, .45, places=7)  # passes
See /examples/week-01/unittest_ex/almostequal.py
Practice makes perfect. See /examples/week-01/unittest_ex/ascii.py
More practice makes perfect. See /examples/week-01/unittest_ex/ascii_counter.py
Test suites group test cases into a single testable unit
unittest.main() gets cumbersome after a while

import unittest
from calculator_test import TestCalculatorFunctions

suite = unittest.TestLoader().loadTestsFromTestCase(TestCalculatorFunctions)
unittest.TextTestRunner(verbosity=2).run(suite)

That's still verbose, though; building suites by hand doesn't scale
A test runner which autodiscovers test cases
Nose will find tests for you so you can focus on writing tests, not maintaining test suites
Any file matching the testMatch conditions* will be searched for tests. They can't be executable!
Running your tests is as easy as
$ nosetests
https://nose.readthedocs.org/en/latest/finding_tests.html
*defined as self.testMatch = re.compile(r'(?:^|[\b_\.%s-])[Tt]est' % os.sep)
Many plugins exist for nose, such as code coverage:
# requires full path to nosetests:
$ ~/virtualenvs/uwpce/bin/nosetests --with-coverage
or drop in to the debugger on failure
$ nosetests --pdb
or parallel process your tests. Remember, unit tests should be independent of each other:
$ nosetests --processes=5
We don't always want to run the entire test suite
In order to run a single test with nose:
$ nosetests calculator_test.py:TestCalculatorFunctions.test_add
To run coverage on your test suite:
$ coverage run my_program.py arg1 arg2
This generates a .coverage file. To analyze it on the console:
$ coverage report
Or generate an HTML report in the current directory:
$ coverage html
To find out coverage across the standard library, add -L:
-L, --pylib Measure coverage even inside the Python installed
library, which isn't done by default.
Consider the following code:

x = False               # 1
if x:                   # 2
    print "in branch"   # 3
print "out of branch"   # 4
We want to make sure the branch is being bypassed correctly in the False case
Track which branch destinations were not visited with the --branch option to coverage run:

$ coverage run --branch myprog.py
Doctests are tests placed in docstrings, demonstrating usage of a component to a human in a machine-testable way
def square(x):
    """Squares x.

    >>> square(2)
    4
    >>> square(-2)
    4
    """
    return x * x

if __name__ == '__main__':
    import doctest
    doctest.testmod()
As of Python 2.6, the __main__ check is unnecessary; doctest can be run directly from the command line:

$ python -m doctest -v example.py
Using the existing code in /examples/week-01/calculator/calculator_test.py:
Introduced in Python 2.5
If you've been opening files using "with" (and you probably should be), you have been using context managers:
with open("file.txt", "w") as f:
    f.write("foo")
A context manager is just a class with __enter__ and __exit__ methods defined to handle setting up and tearing down the context
Context managers provide generalizable execution contexts in which setup and teardown code runs no matter what happens in between
This allows us to do things like setup/teardown and separate out exception handling code
Define __enter__(self) and __exit__(self, type, value, traceback) on a class
If __exit__ returns a true value, a caught exception is not re-raised
Let's play with a simple example in /examples/week-01/contextmanagers/simple.py
See the code below in /examples/week-01/contextmanagers/context_manager.py
import os, random, shutil, time

class TemporaryDirectory(object):
    """A context manager for creating a temp directory that gets destroyed on exit"""

    def __init__(self, directory):
        self.base_directory = directory

    def __enter__(self):
        # set things up: create a uniquely named subdirectory
        self.directory = os.path.join(
            self.base_directory, str(random.random())
        )
        os.makedirs(self.directory)
        return self.directory

    def __exit__(self, type, value, traceback):
        # tear it down: remove the directory and everything in it
        shutil.rmtree(self.directory)

with TemporaryDirectory("/tmp/foo") as dir:
    # write some temp data into dir
    with open(os.path.join(dir, "foo.txt"), 'wb') as f:
        f.write("foo")
    time.sleep(5)
We can skip all the boilerplate when we need a context manager
See the example /examples/week-01/context_manager/simple_as_decorater.py
from contextlib import contextmanager

@contextmanager
def func(name):
    print "__enter__"
    yield
    print "__exit__"
Goal: Create a context manager which prints information on all exceptions which occur in the context and continues execution
with YourExceptionHandler():
    print "do some stuff here"
    1/0

print "should still reach this point"
Why might using a context manager be better than implementing this with try..except..finally ?
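One possible sketch of such a handler (an assumption about the exercise's solution, not the official one; note that a suppressed exception still ends the with block early, so the final print must sit after the block):

```python
class YourExceptionHandler(object):
    """Report any exception raised in the context, then keep going."""

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type is not None:
            print("handled %s: %s" % (exc_type.__name__, exc_value))
        # returning a true value tells Python not to re-raise
        return True

with YourExceptionHandler():
    print("do some stuff here")
    1 / 0  # raises, but __exit__ swallows it

print("execution continues after the with block")
```

Compared with try..except..finally, the handling policy lives in one reusable class instead of being repeated at every call site.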
Also see the contextlib module
Consider the application in the examples/week-01/wikidef directory
Give the command line utility a subject, and it will return a definition
./define.py Robot | html2text
How can we test our application code without abusing (and waiting for) Wikipedia?
Mock objects replace real objects in your code at runtime during test
This allows you to test code which calls these objects without having their actual code run
Useful for testing objects which depend on unimplemented code, resources which are expensive, or resources which are unavailable during test execution
MagicMock can do a lot of "magic" things that help in testing:
Let's look at /examples/week-01/magicmock/basic_example.py
import mock

mock_object = mock.MagicMock()
mock_object.foo.return_value = "foo return"

print mock_object.foo.call_count   # 0
print mock_object.foo()            # foo return
print mock_object.foo.call_count   # 1

# raise an exception by assigning to the side_effect attribute
mock_object.foo.side_effect = Exception
mock_object.foo()                  # raises Exception
Remember that hard-to-test ascii code in /examples/week-01/unittest_ex/ascii.py?
Python functions can accept functions as arguments and return them:
def x(function_z):
    def y():
        # execute the passed function
        function_z()
    return y
We also know that Python functions are just objects:
In [1]: def foo():pass
In [2]: isinstance( foo, object )
Out[2]: True
I want to add additional functionality to a class or function
I want to add this at runtime without changing the code of the class or function
If we do this type of thing a lot, we are following the Gang of Four's Decorator pattern
Find the example in /examples/week-01/decorators/loggly.py
def loggly(func):
    def logger(*args, **kwargs):
        if not kwargs.get('muffle', False):
            print "executing '{}'".format(func.__name__)
            print "\twith args: {}".format(args)
            print "\twith kwargs: {}".format(kwargs)
        return func(*args, **kwargs)
    return logger
That muffle arg passing is extremely ugly
@loggly
def test2(x, y, muffle=True):
    return x * y
Is there a better way? Yes and No:
def one(arg=False):
    def two_decorator(func):
        def three(*args, **kwargs):
            # important code happens here...
            return func(*args, **kwargs)
        return three
    return two_decorator
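Filling in that three-level skeleton with a concrete (made-up) flag shows how the outer call configures the real decorator:

```python
def noisy(enabled=False):
    def noisy_decorator(func):
        def wrapper(*args, **kwargs):
            if enabled:
                print("calling %s" % func.__name__)
            return func(*args, **kwargs)
        return wrapper
    return noisy_decorator

@noisy(enabled=True)   # note: noisy is *called*, returning the actual decorator
def add(a, b):
    return a + b

print(add(2, 3))  # logs the call, then prints 5
```

This keeps configuration out of the decorated function's own kwargs, unlike the muffle approach above.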
Using the loggly decorator as an example:
Gut checking use cases:
Developers always need to weigh design choices against drawbacks:
Let's look at the code in /examples/week-01/decorators/memoize.py
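That file isn't reproduced here, but a typical memoize decorator looks something like this sketch (not necessarily the version in the example):

```python
def memoize(func):
    """Cache results keyed on the positional arguments."""
    cache = {}
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)   # compute once per distinct args
        return cache[args]
    return wrapper

call_count = [0]  # a mutable cell so the nested function can update it

@memoize
def square(x):
    call_count[0] += 1
    return x * x

square(4)
square(4)  # the second call is served from the cache
```

Typical drawbacks to weigh: the cache grows without bound, and unhashable arguments raise TypeError.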
patch acts as a function decorator, class decorator, or a context manager
Inside the body of the function or with statement, the target is patched with a new object. When the function/with statement exits the patch is undone
See /examples/week-01/solutions/wikidef_test_mock_methods.py
# patch with a decorator
@patch.object(Wikipedia, 'article')
def test_article_success_decorator_mocked(self, mock_method):
    article = Definitions.article("Robot")
    mock_method.assert_called_once_with("Robot")

# patch with a context manager
def test_article_success_context_manager_mocked(self):
    with patch.object(Wikipedia, 'article') as mock_method:
        article = Definitions.article("Robot")
        mock_method.assert_called_once_with("Robot")
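Those tests depend on the wikidef example code, but the same pattern can be shown self-contained. A sketch, assuming Python 3's bundled unittest.mock (or the standalone mock package on Python 2); the Wikipedia stand-in class here is invented for illustration:

```python
try:
    from unittest import mock   # Python 3
except ImportError:
    import mock                 # Python 2: pip install mock

class Wikipedia(object):
    """Stand-in for the real API wrapper; normally this would hit the network."""
    @classmethod
    def article(cls, title):
        raise RuntimeError("would make a real HTTP request")

with mock.patch.object(Wikipedia, 'article') as mock_method:
    mock_method.return_value = "article text"
    result = Wikipedia.article("Robot")   # the mock answers; no network
    mock_method.assert_called_once_with("Robot")

assert result == "article text"  # the canned return value came back
```

When the with block exits, the original article method is restored automatically.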
When define.py is given the name of a non-existent article, an exception is thrown.
Add a new test that confirms this behavior