Guideline for writing good automatic test cases
In the following sections we will use the word component in the broadest sense, meaning the class, module, or system we are testing.
The definition and implementation of automatic tests go through these phases:
- Specification: No code has been written, but a specification of the test exists, e.g. a description of the test has been made, or some of the test steps have been described.
- Test-first: The test implementation is underway, but the test fails because the corresponding application functionality hasn't been implemented.
- Regression test: The test and the corresponding application functionality have been implemented, and the test is now part of the regression test suite, i.e. the test can be used to validate that the code isn't broken.
The test state is declared with the TestNG test group annotation, where only one of the groups is used in a given test state.
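As a minimal sketch, the three states could be declared with TestNG group annotations like this (the group names and test methods below are illustrative assumptions, not the project's actual conventions):

```java
import org.testng.annotations.Test;

class ComponentStateExamples {

    // Specification phase: the test is described but not yet implemented.
    @Test(groups = "specification", enabled = false)
    public void parserHandlesEmptyInput() {
        // Step 1: call the parser with an empty string.
        // Step 2: verify an empty result is returned.
    }

    // Test-first phase: the test is implemented, the functionality is not.
    @Test(groups = "testfirst")
    public void parserHandlesUnicodeInput() {
        // Fails until the parser gains unicode support.
    }

    // Regression phase: both test and functionality are done.
    @Test(groups = "regression")
    public void parserHandlesAsciiInput() {
        // Passes, and guards against future breakage.
    }
}
```

A test moves between the groups by changing the annotation, so a group-filtered run (e.g. only `regression`) picks up exactly the tests in that state.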
Tests should be written to directly target a specific aspect of the component we are testing. This means a test should clearly reflect a component interface feature which can be understood by a component user. The name of the test should directly reflect the purpose of the test. Examples are:
- System test: System tests should be targeted at investigating the system's status with regard to the acceptance criteria. Examples could be:
An important test code quality, which leverages the previous point, is the test implementation's ability to communicate what is going on (this is, by the way, very similar to general code quality guidelines). Example:
- Bad code:
- Readable code:
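A contrasting sketch of the two styles, using a hypothetical price-calculator component (the class and values are invented for illustration):

```java
// Hypothetical component: applies a percentage discount to a price.
class PriceCalculator {
    static double discountedPrice(double price, double discountPercent) {
        return price - price * discountPercent / 100.0;
    }
}

class ReadabilityExample {
    // Bad: cryptic name, unexplained magic numbers.
    static void t1() {
        double r = PriceCalculator.discountedPrice(200.0, 25.0);
        if (r != 150.0) throw new AssertionError("t1 failed");
    }

    // Readable: named values and an intention-revealing test name.
    static void quarterDiscountReducesPriceByAQuarter() {
        double originalPrice = 200.0;
        double discountPercent = 25.0;
        double expectedPrice = 150.0;
        double actualPrice = PriceCalculator.discountedPrice(originalPrice, discountPercent);
        if (actualPrice != expectedPrice) {
            throw new AssertionError("Expected " + expectedPrice + " but got " + actualPrice);
        }
    }
}
```

Both tests check the same thing, but only the second tells a reader what behavior broke when it fails.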
Note: Readable design is much better than unreadable code + inline comments.
A good test should be focused on testing a single independent aspect of the component we are testing. This will make the test more readable and provide more detailed information on the test status when the tests are run.
More concretely, this can be achieved by striving to minimize the number of assertions in a single test; instead, try to separate the assertions into different focused tests.
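As a sketch of this separation, assuming a hypothetical stack-based component: instead of one test asserting size, top element, and emptiness at once, each aspect gets its own focused test.

```java
import java.util.ArrayDeque;
import java.util.Deque;

class FocusedTestsExample {
    // Shared fixture builder, so each focused test stays short.
    static Deque<Integer> stackWithOneElement() {
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(42);
        return stack;
    }

    // One focused test per aspect, rather than one test with three assertions.
    static void pushIncreasesSize() {
        assert stackWithOneElement().size() == 1 : "size should be 1 after one push";
    }

    static void pushedElementIsOnTop() {
        assert stackWithOneElement().peek() == 42 : "pushed element should be on top";
    }

    static void stackWithElementIsNotEmpty() {
        assert !stackWithOneElement().isEmpty() : "stack should not be empty";
    }
}
```

When a run fails, the failing test name alone now identifies which aspect regressed.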
One of the primary qualities of automatic tests is the ability to use them for efficient regression testing. The automatic tests can only be used for this purpose if they are robust towards all changes (code or environment) which don't directly modify the functionality aspect the test is targeted at. For example, this means the tests should in general be invariant towards:
- Functionality changes not directly related to the test purpose.
- When the tests are run (time of day, date, etc.).
- Race conditions.
- Environment changes: This means the tests should pass on all environment variations which might be relevant for the component. Variables here could be OS, language settings, etc.
- Permissions: Tests should not assume they can change anything outside of the directory they are run in, unless explicitly configured in the test.
- Sequence of tests: Tests should not depend on which tests have, or have not, run prior to them.
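One way to satisfy the last two points is for each test to build its own fixture rather than relying on state left behind by earlier tests. A minimal sketch, using a temporary directory under the current working directory (which also respects the permissions rule above; the file name and content are invented):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

class IndependentTestExample {
    // The test creates exactly the state it needs, so it passes
    // regardless of which tests ran (or didn't run) before it.
    static boolean configFileRoundTrips() {
        try {
            Path workDir = Files.createTempDirectory(Path.of("."), "test-");
            try {
                Path config = workDir.resolve("component.cfg");
                Files.writeString(config, "retries=3");
                return Files.readString(config).equals("retries=3");
            } finally {
                // Clean up, so later tests see no leftover state either.
                Files.deleteIfExists(workDir.resolve("component.cfg"));
                Files.deleteIfExists(workDir);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Because both setup and teardown live inside the test, it can be run alone, first, last, or in parallel with unrelated tests.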
One of the critical means of debugging the automatic tests is the output generated in the application's log file and console output. To maximize the value of these sources of runtime information, the output generated by the test should be kept separate from the application output. This includes:
- Do not commit test code with any System.out.println calls.
- Use the TestLogger class for generating test log output.
- Do not log stack traces for expected exceptions.
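For the last point: when an exception is the expected outcome, the test can assert it without ever printing a stack trace (in TestNG this can also be expressed declaratively with `@Test(expectedExceptions = ...)`). A plain-Java sketch against a hypothetical parsing method:

```java
class ExpectedExceptionExample {
    // Hypothetical component method that rejects out-of-range input.
    static int parsePort(String text) {
        int port = Integer.parseInt(text);
        if (port < 0 || port > 65535) {
            throw new IllegalArgumentException("port out of range: " + port);
        }
        return port;
    }

    // The exception is the expected result, so it is swallowed silently;
    // no e.printStackTrace() that would pollute the console or log output.
    static boolean rejectsOutOfRangePort() {
        try {
            parsePort("70000");
            return false; // no exception thrown: the test fails
        } catch (IllegalArgumentException expected) {
            return true;  // expected path: succeed without logging
        }
    }
}
```

This keeps the log and console free of stack traces that look like failures but aren't, which is exactly what makes those outputs useful for debugging real failures.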