In a growing number of testing projects, reports on test contents and test results must be generated regularly. Ensuring that the underlying data is formally correct and conforms to the process definition in use, so that the generated reports are consistent, is a recurring challenge.
Anyone who has ever gone through all of their tests to verify that every test executed with the result “defective” actually references a defect, or that no test was reviewed by its own creator, knows the problem.
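The two rules just mentioned can be sketched as simple formal checks. The following is an illustrative sketch only; the record fields and check names are assumptions, not the data model of any particular tool:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical, simplified test record; the field names are illustrative.
@dataclass
class TestRecord:
    name: str
    result: str                       # e.g. "passed" or "defective"
    referenced_defect: Optional[str]  # defect ID, if any
    creator: str
    reviewer: Optional[str]

def check_defect_reference(test: TestRecord) -> Optional[str]:
    """A test executed with result 'defective' must reference a defect."""
    if test.result == "defective" and not test.referenced_defect:
        return f"{test.name}: result 'defective' but no defect referenced"
    return None

def check_reviewer_not_creator(test: TestRecord) -> Optional[str]:
    """A test must not be reviewed by its own creator."""
    if test.reviewer is not None and test.reviewer == test.creator:
        return f"{test.name}: reviewer and creator are the same person"
    return None

tests = [
    TestRecord("login", "defective", None, "alice", "bob"),
    TestRecord("logout", "passed", None, "alice", "alice"),
]
for t in tests:
    for check in (check_defect_reference, check_reviewer_not_creator):
        problem = check(t)
        if problem:
            print(problem)
```

Instead of a reviewer clicking through every test by hand, such checks run over the whole data basis in one pass.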
In programming, static code analysis has long been used to detect defects in source code early by means of formal checks.
Analogously, static analysis of tests runs a series of checks on test specifications and test results. These checks can cover every element of a test: besides the test specification itself, for example, also its execution results.
This opens up several opportunities and benefits:
- Process requirements for the tests, and compliance with organizational rules, can be checked automatically and assured with little effort.
- Formal defects in the tests can be avoided from the start.
- Reports no longer need to be laboriously checked by hand for formal correctness after they have been generated.
As every test process varies in detail from organisation to organisation, the individual checks are packaged as modules. That way, every project can decide whether, and if so which, checks are conducted.
In addition, each check is assigned a severity level that determines the importance of the problems it finds. A distinction is drawn here between information, warnings and defects.
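A minimal sketch of such a modular setup might look as follows; the registry, the check names and the project configuration are assumptions for illustration, not a real product API:

```python
from enum import Enum

class Severity(Enum):
    INFORMATION = "information"
    WARNING = "warning"
    DEFECT = "defect"

# Hypothetical registry of available check modules, each with the
# severity level assigned to the problems it reports.
CHECK_REGISTRY = {
    "defect_reference_present": Severity.DEFECT,
    "reviewer_is_not_creator":  Severity.WARNING,
    "description_not_empty":    Severity.INFORMATION,
}

# A project configuration selects the subset of checks it wants to run.
enabled_checks = {"defect_reference_present", "reviewer_is_not_creator"}

for name, severity in CHECK_REGISTRY.items():
    if name in enabled_checks:
        print(f"run {name} -> severity: {severity.value}")
```

Keeping the severity next to the check, rather than hard-coding it into the check logic, lets each project reclassify a rule (e.g. downgrade it to a warning) without touching the check itself.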
To simplify handling of the check results, the affected elements (e.g. execution results) are marked, and all problems can be worked through centrally.
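The central worklist described above can be sketched by grouping findings by the element they mark. The finding tuples below are invented sample data, assuming each check reports the affected element, a severity and a message:

```python
from collections import defaultdict

# Hypothetical findings as (affected_element, severity, message) tuples,
# e.g. as produced by checks of the kind described above.
findings = [
    ("execution result #12", "defect",      "no defect referenced"),
    ("test spec 'login'",    "warning",     "reviewer equals creator"),
    ("execution result #12", "information", "comment field is empty"),
]

# Group all problems by the element they mark, so they can be
# worked off centrally instead of element by element.
worklist = defaultdict(list)
for element, severity, message in findings:
    worklist[element].append((severity, message))

for element, problems in worklist.items():
    print(f"{element}: {len(problems)} problem(s)")
```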