Ampelofilosofies


On test automation and management, part II

15 Apr 2007

The simplest way to define a test is as a sequence of steps, each of which can succeed or fail.

The test is successful when all of its steps succeed; it fails when even a single step fails.

In its simplest form a test consists of a single step: a script, or a call to an application that runs a(nother) script.

A test specification would contain a description of the test, its pre- and post-conditions, the steps required for execution and information about the requirements that led to this test.

Pre- and post-conditions are in practice satisfied through steps taken during test execution and thus become an integral part of the test.

A step can be any action taken while testing, but in the context of automated tests it can be defined as a single command whose success or failure can be easily determined (the easiest way being a meaningful exit code).
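To make this concrete, here is a minimal sketch in Python (the language choice and the script names are illustrative assumptions, not something prescribed here): a step is one command judged by its exit code, and a test fails as soon as a single step fails.

import subprocess

def run_step(command):
    # Run one step as a shell command; exit code 0 means the step succeeded.
    return subprocess.run(command, shell=True).returncode == 0

def run_test(steps):
    # Run the steps in order; the test fails on the first failing step.
    for command in steps:
        if not run_step(command):
            return False
    return True

# Hypothetical two-step test: flash the target, then run a GUI script.
passed = run_test(["./flash_target.sh", "guitester somescript.gui"])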

A specification can thus be expressed in two parts: human-readable text that describes the test and can be used in documents and reports, and a machine-readable scenario that defines the sequence of steps/commands to execute in order to run the test.

Putting the two together in a parseable format is the next logical step:

<specification id="id">
  <title>Super Test</title>
  <description>Why does this test exist? What is its purpose?</description>
  <scenario>
    <step/>
    <step/>
  </scenario>
</specification>

Parsing these specifications allows for:

  • Execution of the scenario elements.
  • Generation of overview documents.
  • Generation of reports for each execution.
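A sketch of such a parser (Python and its ElementTree module are assumptions here, as is the file name) would split the specification into the human-readable part that feeds documents and reports and the machine-readable scenario that feeds the execution engine:

import xml.etree.ElementTree as ET

def parse_specification(path):
    # Split a specification file into its documentation and execution parts.
    root = ET.parse(path).getroot()              # <specification id="...">
    return {
        "id": root.get("id"),
        "title": root.findtext("title"),
        "description": root.findtext("description"),
        # every child element of <scenario> is one executable step
        "steps": list(root.find("scenario")),
    }

spec = parse_specification("super_test.xml")     # hypothetical file name
# title and description go into overview documents and reports,
# steps go to the execution engine.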

To do all of the above, let's separate the three major blocks of functionality and add a manager to handle coordination:

[Diagram: the manager coordinating execution, document generation and report generation]
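In code the coordination could be sketched roughly as follows (all class and method names are invented for illustration; this is a sketch of the idea, not an existing tool):

class Manager:
    # Coordinates parsing, scenario execution, document and report generation.
    def __init__(self, parser, executor, doc_generator, report_generator):
        self.parser = parser
        self.executor = executor
        self.doc_generator = doc_generator
        self.report_generator = report_generator

    def run(self, specification_files):
        specs = [self.parser.parse(path) for path in specification_files]
        self.doc_generator.overview(specs)               # overview documents
        for spec in specs:
            result = self.executor.run(spec["steps"])    # execute the scenario
            self.report_generator.report(spec, result)   # report per execution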

The above XML will not work for every project. Things like units, test groups, attributes that define types, test conditions etc. are universally…different. Establishing a format that satisfies the requirements of all projects is an effort that will lead to an overly complex, all-knowing and nothing-achieving behemoth. In short, it would be futile.

What can be done is define a set of conventions and boundaries within which such specifications can be managed.

This leads unavoidably to a tool-set whose architecture allows it to be customized and adapted for specialized usage within those boundaries. As much as one would like to avoid it, the term framework immediately springs to mind.

The specification parser is the part of the system that needs to be adapted every time. The goal is to have a simple and intuitive way of defining the scenario (i.e. the sequence of execution steps) in a problem-specific way. In other words, to define a testing DSL for the project. Using the contrived example project from part I as a basis, a DSL-expressed scenario could be:

<scenario>
  <flash/>
  <start_sniffer log="testcase.log"/>
  <guitester script="somescript.gui"/>
  <stop_sniffer/>
  <analyze input="testcase.log" script="check_for_errors.usb"/>
</scenario> 
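One way to support such a DSL is to have the project-specific part of the parser translate each scenario element into a command that the generic step runner can execute. The sketch below illustrates the idea in Python; the command names, flags and helper scripts are invented to fit the example and are not real tools.

def translate_step(element):
    # Map a DSL element from the scenario to an executable command
    # (the mapping itself is purely illustrative).
    tag, attrs = element.tag, element.attrib
    if tag == "flash":
        return "./flash_target.sh"
    if tag == "start_sniffer":
        return "usb_sniffer --start --log {log}".format(**attrs)
    if tag == "guitester":
        return "guitester {script}".format(**attrs)
    if tag == "stop_sniffer":
        return "usb_sniffer --stop"
    if tag == "analyze":
        return "analyze_log --input {input} --script {script}".format(**attrs)
    raise ValueError("unknown scenario element: " + tag)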

Additionally, in many cases the specification itself will have to contain more information (e.g. requirement tracing). In a later installment I will present a solution that provides enough flexibility for adapting the parser without exponentially increasing its complexity.
