On Automated Testing
What is automated testing?
I've encountered many different opinions about automated software testing: what it is, how it should be done, and where it is or is not applicable. As with many subjects within software development, these opinions carry a lot of baggage with them, and I would like to encourage us to cut through it and get back to basics.
There are two obvious parts to automated testing: the automation, and the testing.
Testing is simply running your code to see if it does what you expected.
To automate this process means that you write code to help you run your code, and to help you check if it does what you expected.
Seen this way, the testing portion is not just a best practice to be argued about; it's an inevitable reality of all software development. Whether you are intentional about it or not, spinning up your web server, hacking on some code, and then reloading to see your changes is a testing activity.
And more than that: there is little doubt that these testing activities, viewed broadly, make up a substantial portion of every developer's active programming time. This is regardless of whether or not the programmer writes any automated tests.
When I worked on web applications in Python, the testing activity I saw most frequently involved running a development server with so-called "hot-reload":
flask --app myapp --debug run
gunicorn --reload myapp:factory
The developer then clicks on things to see if their code does what they wanted.
With these --debug or --reload options enabled, the program will automatically rerun your source code after every change and serve the new output.
This process is great, as you get a fast feedback cycle between:
- Making changes
- Seeing the results
This loop between making changes and testing changes is frequently referred to as the inner programming loop, to draw attention to its position as a performance bottleneck for programmer productivity. If we can speed up this tight inner loop, so the story goes, we can speed up our delivery of software.
While there is no doubt that you can devote a lot of time to writing tests with other goals in mind, even if your only priority were speed of development, the lesson is clear:
- The faster you can test, the faster you can develop features.
In my experience, automation of testing tasks is one of the easiest places to win back development time.
Arrange, Act, Assert
Software testing of all kinds (including manual testing) is most commonly broken down into three steps:
- Arrange: set up the entities that will be subject to test
- Act: carry out the behavior you want to test
- Assert: check to see if everything did what you wanted it to do
This falls nicely into a given, when, then format. An example of a test written in this style:
- Given an agency in the database with id = :id
- When I hit the delete/:id endpoint
- Then the agency with id = :id should no longer be in the database
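As a sketch of how that scenario might look as an automated test, here it is in Python. The `AgencyRepo` class and its methods are hypothetical stand-ins for your data layer; the comments map each line back to the given/when/then steps:

```python
class AgencyRepo:
    """Hypothetical in-memory stand-in for the agencies table."""

    def __init__(self):
        self._rows = {}

    def add(self, agency_id, name):
        self._rows[agency_id] = {"id": agency_id, "name": name}

    def delete(self, agency_id):
        self._rows.pop(agency_id, None)

    def exists(self, agency_id):
        return agency_id in self._rows


def test_delete_removes_agency():
    # Given an agency in the database with id = 7
    repo = AgencyRepo()
    repo.add(7, "Acme Travel")

    # When I hit the delete/:id endpoint (here, the method it calls)
    repo.delete(7)

    # Then the agency with id = 7 should no longer be in the database
    assert not repo.exists(7)
```

The comments are not decoration: keeping the three steps visibly separate makes it obvious when a test has started doing too much.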
Arrange 🏗
Arranging is the first step, and it's one of the hardest steps of the whole process.
I often hear stories like this:
"The bug only happens with specific orders that have this combination of three obscure properties"
The difficulty is essential: it persists even with automated testing. Indeed, a big part of any automated testing framework is its utilities for packaging up these precious canned entities so they can be reused across tests. These are called test fixtures.
Automated testing provides a win here because by writing test fixtures as code, you can easily reproduce them for many testing scenarios, saving you the time of painstakingly reconstructing them by hand.
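To illustrate the idea (the order shape and its "obscure properties" here are invented), a fixture can be as simple as a function that rebuilds the tricky entity on demand. pytest formalizes this with the `@pytest.fixture` decorator, but a plain helper captures the essence:

```python
def make_order_with_obscure_properties():
    """Hypothetical fixture: rebuilds the tricky order from the bug
    report on demand, so no test has to hunt for one by hand."""
    return {
        "id": 101,
        "status": "partially_refunded",  # obscure property 1
        "currency": "JPY",               # obscure property 2
        "line_items": [],                # obscure property 3: empty order
    }


def test_refund_total_is_zero_for_empty_order():
    order = make_order_with_obscure_properties()  # Arrange, via the fixture
    total = sum(item.get("refund", 0) for item in order["line_items"])  # Act
    assert total == 0  # Assert
```

Once the fixture exists, every future test involving this combination of properties is one function call away instead of a database scavenger hunt.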
Act 🎬
The acting portion of testing usually isn't as hard to automate:
- You probably know what method you wrote to facilitate the action you want to test
- You named it with a descriptive name that says what it does (right?), and that's what you're testing now!
When testing manually, non-reversible actions can make life hard:
"I found a few orders that I can use for testing this, but now that I've deleted them I have none left to test with"
Test as small as possible: start by testing the functions you are calling from that endpoint, rather than with the endpoint itself.
Writing code with automated tests in mind can help you write simpler, better-designed functions, by focusing on testing one thing at a time.
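For example, instead of exercising a checkout endpoint over HTTP, you might test the function it delegates to directly. The `calculate_discount` function below is hypothetical, a stand-in for whatever logic your endpoint calls:

```python
def calculate_discount(subtotal, is_member):
    """Hypothetical function extracted from a checkout endpoint so it
    can be tested without starting a server or clicking through a UI."""
    if subtotal < 0:
        raise ValueError("subtotal cannot be negative")
    return round(subtotal * 0.10, 2) if is_member else 0.0


def test_member_gets_ten_percent():
    assert calculate_discount(50.0, is_member=True) == 5.0


def test_non_member_gets_nothing():
    assert calculate_discount(50.0, is_member=False) == 0.0
```

Testing at this level is also non-destructive: no orders are deleted, so the "I used up my test data" problem disappears.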
Assert 🔬
For asserting, the first piece of advice I would suggest for web development is to move the act of observation back from the frontend as much as possible: check the API response or the database state rather than what is rendered in the browser.
Also, take care not to assert too much: you are writing a test for one thing, so don't assert everything you know about the entity.
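A small illustration of the point, with an invented `rename_agency` function: assert the one property the test is about, not the entire entity:

```python
def rename_agency(agency, new_name):
    """Hypothetical update function under test."""
    updated = dict(agency)
    updated["name"] = new_name
    return updated


def test_rename_changes_the_name():
    agency = {"id": 7, "name": "Old Name", "created_at": "2021-01-01"}
    renamed = rename_agency(agency, "New Name")
    # Assert the one thing this test is about...
    assert renamed["name"] == "New Name"
    # ...not the whole entity. Asserting renamed == {...every field...}
    # would break this test whenever an unrelated field changes.
```

If `created_at` formatting changes next month, this test rightly keeps passing; a whole-entity comparison would fail for reasons that have nothing to do with renaming.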
CAUTION: don't run your function, copy the output, and then paste it as the expected result of an assert. ✂️
- You will assert too much, leading to fragile tests that break when they don't need to.
- There are full-featured tools, such as Jest's snapshot testing features, which can capture the exact behavior of an old version of a program and alert you to any change of behavior in a new version. This strategy is sometimes called approval testing. It can be useful, but if you'd like to use this approach I highly recommend adopting a tool for it rather than attempting to manage hand-written snapshots in your ordinary unit tests.
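To make the mechanism concrete, here is a minimal hand-rolled sketch of the approval idea, not a substitute for a real tool: the first run records the output to a file, and later runs compare against it. `render_report` and the snapshot layout are assumptions for illustration:

```python
from pathlib import Path


def render_report(orders):
    """Hypothetical function whose exact output we want to freeze."""
    return "\n".join(f"{o['id']}: {o['total']}" for o in orders)


def approve(name, actual, snapshot_dir=Path("snapshots")):
    """Minimal approval check: the first run records the output as the
    approved snapshot; subsequent runs fail if the output drifts.
    Real tools add diff review and update workflows on top of this."""
    snapshot_dir.mkdir(exist_ok=True)
    snap = snapshot_dir / f"{name}.txt"
    if not snap.exists():
        snap.write_text(actual)  # first run: record the approved output
        return
    assert actual == snap.read_text(), f"{name} changed; review the diff"
```

The fragility warning above applies here too: the snapshot asserts everything, which is exactly why dedicated tooling for reviewing and updating snapshots matters.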
Automation isn't all-or-nothing
The main point I'd like to get across here is that you, as a developer, are empowered to use the tools of automation to make your own development easier.
Even if you are not fully on board with writing an entire automated test suite, you can start writing code on your own to help you test. Even if these tests are never checked into a test suite or run in a continuous-integration pipeline, simply putting your own programming skills to work automating your own workload will pay dividends.
While a fully-automated test will do all the arranging, acting, and checking of the results, you can start automating parts of the process today.
Consider the following short test script, mixing automation to drive the test with test data you found by hand:
def function_you_want_to_test(order_id=None):
    # your business logic functions up here
    return 1

if __name__ == '__main__':
    from my_app.config import config
    config.load()

    order_id = 27  # order_id you found in MySQL Workbench
    expected = 1
    assert function_you_want_to_test(order_id) == expected
    print("Everything passed!")
To run the file, just press the ▶️ button in your code editor or IDE; everything in the __name__ == '__main__' block will be executed.
I would never recommend checking a test like this into a test suite, but as a substitute for the manual process there is no comparison. One click is a lot faster than the steps you'd have to take to test manually: starting your server, opening up your browser, logging in, navigating to orders, finding the right order, selecting the option you want to test from a dropdown ...
General advice
Test code as you write it
"The earlier a problem is found, the better. If you think systematically about what you are writing as you write it, you can verify simple properties of the program as it is being constructed, with the result that your code will have gone through one round of testing before it is even compiled." -- Practice of Programming, P&K
Start with boundary conditions
What should happen in the boundary cases: the null case, blank input, negative values, and so on? Cover these first. They are usually the easiest tests to write and the easiest cases to handle if you do them first, but they can get hairy to shoehorn in if you leave them to the end.
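As an illustration with a hypothetical `average` function, the boundary decisions are made explicit up front and covered by the first tests written:

```python
def average(numbers):
    """Hypothetical function; the boundary behavior is decided up front:
    an empty list returns 0.0 instead of raising ZeroDivisionError."""
    if not numbers:
        return 0.0
    return sum(numbers) / len(numbers)


def test_empty_input():
    assert average([]) == 0.0          # null/blank case, covered first


def test_negative_values():
    assert average([-2, -4]) == -3.0   # negative case


def test_single_value():
    assert average([5]) == 5.0         # smallest non-empty input
```

Had the empty-list case been left to the end, the `if not numbers` guard might have had to be bolted on after callers already depended on an exception being raised.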
Test to ease refactoring
Adding tests gives you freedom to refactor boldly and with abandon. The smoothest and fastest path to developing a new feature is a scaffolding process:
- Write tests to make refactoring easy
- Refactor to make adding the feature easy
- Write simple tests for your feature
- Write code to get your tests to pass
- Refactor boldly until satisfied