From TASTE
Latest revision as of 21:01, 4 August 2017

Regression checking suites

As the functionality of TASTE evolves, with new features added and bugs fixed, there is always a risk that new changes break existing functionality. To guard against this kind of problem, TASTE uses regression-checking suites.

Under ~/tool-src/testSuites/Regression_AADLv2 there is a large set of projects that is compiled automatically every night at the ESA premises; e-mails are sent to the developers if something breaks.

The regression check is executed via...

cd ~/tool-src/testSuites/Regression_AADLv2/
./regression.sh

...and will proceed to compile all the projects, testing a large part of TASTE functionality (model generation, code generation, vertical transformation, compilation and linking) across many combinations of standalone and distributed projects.

Execution output

Compiling and linking are important checks when dealing with code-generation technology, but they are obviously not enough on their own. The individual projects therefore also contain regression-checking Python scripts that verify that the execution output of the generated applications is as it should be. These scripts use pseudo-terminals (accessed via the pexpect Python module) to check for expected patterns in the logging output. They also make use of a small library we developed that makes verification easy:

import sys
sys.path.append("..")
import commonRegression

timeout = 5
binaries = [
    "binary.linux.ada/binaries/demo_obj106",
    "binary.linux.pohic/binaries/demo_obj106"]
expected = [
    r'\[B\] startup'
]
result = commonRegression.test(binaries, expected, timeout)
sys.exit(result)

In this simple example, we check that each program in the "binaries" list, once executed, first outputs "[B] startup".
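The internals of commonRegression are not shown here, but the core idea can be sketched. The following is a simplified, hypothetical stand-in that runs each command through subprocess pipes instead of pexpect's pseudo-terminals, and checks that the expected regex patterns appear in the output in order (the real library's behaviour may differ in details such as timeouts per pattern):

```python
import re
import subprocess


def check_output(commands, expected, timeout):
    """Run each command and verify that its output contains the
    expected regex patterns, in order. Returns 0 on success,
    1 on failure - mirroring the exit-code convention used by
    the regression scripts. (Simplified stand-in: the real
    commonRegression uses pexpect and pseudo-terminals.)"""
    for cmd in commands:
        try:
            out = subprocess.run(
                cmd, shell=True, capture_output=True,
                text=True, timeout=timeout).stdout
        except subprocess.TimeoutExpired:
            return 1
        pos = 0
        for pattern in expected:
            m = re.search(pattern, out[pos:])
            if m is None:
                return 1
            pos += m.end()  # later patterns must appear after earlier ones
    return 0
```

A script would then call sys.exit(check_output(binaries, expected, timeout)), so that a non-zero exit status flags the regression to the nightly build.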

The list of expected messages can be as long as necessary:

expected = [
    "Invoking TC",
    "destination:",
    "    0",
    "action:",
    "    display:",
    "        abc1",
    "Received Telemetry: gui_send_tm",
    "Parameter tm:",
    "abc1"
]

...and the "binaries" can also point to scripts:

binaries = [
    "python Test_TM_TC_with_Demo_Ada.py"
]

Finally, the list of messages can also provide alternatives:

expected = [
   [r"\[hello\] startup", r"\[world\] startup"],
   [r"\[hello\] startup", r"\[world\] startup"],
   r"\[hello\] cyclic operation",
   r"\[world\] Ping"
]

In this case, each of the first two messages can be any of the entries in its list, which allows the following 4 combinations for the first two output messages:

  • [hello] startup, [hello] startup
  • [hello] startup, [world] startup
  • [world] startup, [hello] startup
  • [world] startup, [world] startup
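Conceptually, alternative matching just means that a list entry is satisfied by any one of its patterns. A minimal illustration of that rule (a hypothetical helper, not part of commonRegression's actual API):

```python
import re


def entry_matches(line, entry):
    """Return True if an output line satisfies one expected entry.
    An entry is either a single regex, or a list of alternative
    regexes, any one of which may match (hypothetical helper that
    mirrors the alternatives feature described above)."""
    alternatives = entry if isinstance(entry, list) else [entry]
    return any(re.search(p, line) for p in alternatives)
```

With this rule, the first two expected entries above each accept either node's startup message, which is what yields the 4 listed combinations.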

Since this is Python, we can build arbitrarily complex checking scenarios - e.g. in a distributed system, first spawn the main node, then spawn a Python script that uses the TASTE Python API to communicate with it, sending TCs and expecting TMs to arrive back - while at the same time verifying the output messages via pexpect.

MSC-based testing

Finally, TASTE also provides MSC-based recording and playback of scenarios (see Chapter 9.6 in the main documentation). These allow easy creation of test scenarios, which are then transformed (via msc2py) into executable Python scripts that can be added to the regression checking suite.