MuBPEL

MuBPEL is a mutation testing tool for the Web Services Business Process Execution Language (WS-BPEL) 2.0. It can be used to evaluate the quality of a test suite by checking whether it can tell the mutants apart from the original program. Mutants are slightly modified (mutated) versions of the original program in which a single syntactic change has been made: for example, "2 < 3" may be changed to "2 > 3" or "2 < 4".

MuBPEL includes an embedded instance of ActiveBPEL 4.1 running in a Jetty application server. The modifications made to the engine are available in the UCASE fork of ActiveBPEL 4.1. It also embeds the BPELUnit unit testing framework. Therefore, using it is as simple as unpacking a tar.gz and running a command in a terminal.

Installation

UNIX-based systems (automatic)

The recommended way to install MuBPEL on UNIX systems is using our install.sh script:

  1. Download the script from here.
  2. Run it from a terminal by using bash install.sh mubpel.
  3. Once the script has completed its execution, log out of your current session and log in again.
  4. You may now run MuBPEL from any terminal by using the mubpel command.
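
Put together, the whole procedure looks like this (a sketch; the download of install.sh itself is omitted, use the link in step 1):

$ bash install.sh mubpel
# log out of the current session and log in again, then:
$ mubpel run -h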

UNIX-based systems (manual)

If the above shell script does not work for your system, MuBPEL can be installed manually by following these steps:

  1. Download the -uberjar.tar.gz file from the "mubpel/(version)" subdirectory of our release or snapshot repositories.
  2. Unpack the distribution to any directory.
    ~/bin$ tar xzf mubpel-VERSION-uberjar.tar.gz
    
  3. Add a symbolic link to the mubpel launcher script from a directory in your $PATH. For example, if ~/bin is in your $PATH:
    ~/bin$ ln -s ~/bin/mubpel-VERSION/mubpel ~/bin/mubpel
    
  4. You can now run MuBPEL with:
    mubpel (arguments)
    
  5. You can look up the available options for a certain subcommand using the -h or --help option:
    mubpel run -h
    

Windows

For the time being, no .bat launcher script is included in the distribution. Just download the same distribution and run the .jar manually. Assuming the java executable is in your %PATH%, you can run MuBPEL using:

java -jar path/to/mubpel.jar (arguments)

Tutorials

In the following sections, we will illustrate some common usage scenarios for MuBPEL, using the sample WS-BPEL composition in LoanApprovalRPC.zip. After installing MuBPEL as described above, please download and unpack the sample composition to a directory and open a shell inside the new LoanApprovalRPC subdirectory.
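
For instance, on a UNIX-like system (a sketch; we assume the archive unpacks into a LoanApprovalRPC subdirectory, as described above):

$ unzip LoanApprovalRPC.zip
$ cd LoanApprovalRPC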

You should have the following files:

ApprovalService.wsdl
AssessorService.wsdl
loanApprovalProcess.bpel
loanApprovalProcess.bpts
loanServicePT.wsdl
LoanService.wsdl

MuBPEL only requires the .bpts file with the BPELUnit test specification and the .bpel file with the WS-BPEL composition. However, the WS-BPEL composition requires the .wsdl descriptions of the Web Services it invokes, as well as of its own service interface. It may also need the .xsd files used by the .wsdl files, but none are required for this simple example.

We will use the following notation for the commands to be run and their output:

$ literal command which should be run in the terminal
output line 1
output line 2
...
output line N

Generate and compare all mutants

First, we will analyze the original WS-BPEL composition and generate every possible mutant with:

$ mubpel applyall loanApprovalProcess.bpel

Mutants are named following a mOO-LL-AA.bpel format, where OO is the operator index, LL is the location index and AA is the attribute index. Indices start from 1 and are zero-filled to a minimum length of 2 digits. For instance, m04-01-01.bpel refers to the fourth mutation operator (ERR) applied to the first location with the attribute set to 1.
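
This naming scheme makes it easy to inspect the generated mutants from the shell. For instance (a sketch; the exact counts depend on the composition):

$ ls m*.bpel | wc -l
$ ls m04-*.bpel

The first command counts all generated mutants, and the second one lists every mutant produced by the fourth operator (ERR).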

Next, we will run the original WS-BPEL composition against the unit tests in the .bpts file, saving the BPELUnit XML report in original-output.xml:

$ mubpel run loanApprovalProcess.bpts loanApprovalProcess.bpel > original-output.xml
ActiveBPEL is now RUNNING

This report contains the output produced by the original composition.

Note: the "ActiveBPEL is now RUNNING" messages indicate when ActiveBPEL has been successfully started up. It may take over 10 seconds to come up, as it is an intensive operation.

Finally, we only have to compare the original output with the outputs of the mutants, using the following command:

$ mubpel compare --keep-going loanApprovalProcess.bpts loanApprovalProcess.bpel original-output.xml m*.bpel
ActiveBPEL is now RUNNING
m04-01-01.bpel 0 0 0 0 0 T 578649500 364561079 354552274 329768384 332416762
m04-01-02.bpel 1 1 1 1 1 T 1254624906 1129502043 1084306097 1062853251 1098438638
m04-01-03.bpel 1 1 1 1 1 T 1242538343 1140063423 1090126535 1060262407 1083304561
m04-01-04.bpel 1 1 1 0 0 T 1265053758 1131398837 1096913075 328841133 335459026
... more lines ...

By default, mubpel compare jumps to the next mutant as soon as a test case produces a different result (i.e. the mutant is killed). By using --keep-going (or -k), we force mubpel compare to run every test case, regardless of whether the mutant has already been killed.

The output from mubpel compare follows the format:

FILENAME (space-separated list of 0s, 1s and 2s) T (space-separated list of test execution times in nanoseconds)

0 means "same output", 1 means "different output" and 2 means "invalid (stillborn) mutant". Stillborn mutants could not be run, as they were rejected by the static checks in the WS-BPEL engine. For instance, the line

m04-01-01.bpel 0 0 0 0 0 T 578649500 364561079 354552274 329768384 332416762

means that the mutant m04-01-01.bpel produced the same results as the original file (all 0s). The first test case took about 579 milliseconds. On the other hand, the line

m04-01-04.bpel 1 1 1 0 0 T 1265053758 1131398837 1096913075 328841133 335459026

shows that the mutant applying ERR on the first location with the attribute set to 4 was killed by the first 3 test cases (the first 3 values are set to 1), and that the first test case took about 1265 milliseconds.

Generate and compare a single mutant

Alternatively, we may want to generate and compare a single mutant. We will now analyze the program without generating any mutants:

$ mubpel analyze loanApprovalProcess.bpel | nl
     1  ISV 0 1
     2  EAA 0 4
     3  EEU 0 1
     4  ERR 2 5
... more lines ...
    30  CFA 9 1
    31  CDE 2 2
    32  CCO 2 2
    33  CDC 2 2

mubpel analyze prints a sequence of OP NL NA lines, where OP is the identifier of the mutation operator, NL is the number of locations where the operator can be applied, and NA is the maximum value that the attribute field can take (the minimum value is always 1). For illustration purposes, we have piped this output through nl to number each line with the index of the mutation operator, but this is not strictly necessary. You may run either of these two commands, as they are equivalent:

$ mubpel apply loanApprovalProcess.bpel cdc 1 1 > cdc-1-1.bpel
$ mubpel apply loanApprovalProcess.bpel 33 1 1 > cdc-1-1.bpel

We have applied the CDC operator on its first location with the attribute set to 1, and saved the mutant to cdc-1-1.bpel. Let us now compare it with the output from the original WS-BPEL composition produced in the previous tutorial:

$ mubpel compare loanApprovalProcess.bpts loanApprovalProcess.bpel original-output.xml cdc-1-1.bpel 
ActiveBPEL is now RUNNING
cdc-1-1.bpel 0 0 0 1 0 T 535131493 380391194 347667771 1088080278 0

Apparently, this mutant has been killed by the fourth test case. The fifth test case was not run, and so its execution time is zero.
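
Incidentally, generating every mutant of a single operator (rather than of all operators, as applyall does) can be scripted with a small shell loop. This is only a sketch, based on the "33 CDC 2 2" line from the analyze output above (2 locations, attribute values from 1 to 2):

$ for loc in 1 2; do for attr in 1 2; do mubpel apply loanApprovalProcess.bpel cdc $loc $attr > cdc-$loc-$attr.bpel; done; done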

Compare the code of the original composition and the mutant

If we wanted to see the actual change made in the cdc-1-1.bpel mutant produced in the previous tutorial, we would normally reach for diff:

$ diff loanApprovalProcess.bpel cdc-1-1.bpel

However, this would not produce a minimal set of changes, due to differences in whitespace, indentation and line breaks. To solve this problem, we need to "normalize" the original WS-BPEL process so it follows the same formatting style as the mutants, while preserving all of its logic. We can do this with:

$ mubpel normalize loanApprovalProcess.bpel > normalized.bpel

We can now perform a regular comparison and obtain a minimal set of differences:

$ diff normalized.bpel cdc-1-1.bpel 
64c64
<          <condition>           ( $request.amount &lt;= 10000 )           </condition>
---
>          <condition>true()</condition>

As we can see, the mutant has replaced the original <condition> with true().

Note: there are better tools than diff for comparing files. We tend to prefer tkdiff or kompare ourselves.

Advanced topics

Obtain execution logs

If the results from the .bpts file are not what we expected, we may need to inspect the ActiveBPEL execution logs. To save disk space and reduce execution times, these logs are not produced by default. We need to enable them with --bpel-loglevel full and place the ActiveBPEL instance started up by MuBPEL in a known location with --work-directory activebpel. The entire command would look as follows:

$ mubpel run --work-directory activebpel --bpel-loglevel full loanApprovalProcess.bpts loanApprovalProcess.bpel > output.xml
ActiveBPEL is now RUNNING
$ ls activebpel/process-logs/
1.log  2.log  3.log  4.log  5.log  6.log

ActiveBPEL has produced six logs: one .log file for each test in the .bpts. Each of these logs contains lines like these:

[1][2012-09-27 21:50:19.196] : Executing [/process/sequence/receive[@name='ReceiveRequest']]
[1][2012-09-27 21:50:19.201] : Completed normally [/process/sequence/receive[@name='ReceiveRequest']]

The first part of the line indicates the process ID and the timestamp of the line. The rest of the line describes the action performed and the location in the BPEL process, as an XPath query.
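
These logs can be filtered with standard text tools. For instance, to list every activity that started executing during the first test (a sketch, based on the line format shown above):

$ grep 'Executing' activebpel/process-logs/1.log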

Note: --work-directory and --bpel-loglevel are also available for compare and comparefull.

Count killed and invalid mutants and list surviving mutants automatically

Users with access to the grep and egrep tools (bundled with most GNU/Linux and Mac setups) can take advantage of several useful commands to process the output of mubpel compare.

  • Count how many valid mutants were killed:
    grep -wc 1 results.txt
  • Count how many mutants could not be deployed (invalid mutants):
    grep -wc 2 results.txt
  • List surviving valid mutants (drop the | awk... to get the entire line):
    egrep -vw '1|2' results.txt | awk '{print $1}'

Note: these commands assume that the output of mubpel compare was saved to results.txt, by using something like:

mubpel compare (options...) f.bpts f.bpel output.xml mutants... | tee results.txt
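
Going one step further, a rough mutation score (killed valid mutants over all valid mutants) can be computed with awk. This is only a sketch: it assumes the results.txt format described above, where a line whose first result is 2 marks a stillborn mutant and any 1 before the T marker means the mutant was killed:

$ awk '{ if ($2 == 2) stillborn++; else { valid++; for (i = 2; $i != "T"; i++) if ($i == 1) { killed++; break } } } END { printf "%d killed out of %d valid mutants (%d stillborn)\n", killed, valid, stillborn }' results.txt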

Use a different engine instead of the embedded ActiveBPEL server

Users wishing to use their own BPEL server instead of the embedded BPEL engine can do so, provided they have properly configured the deployer in the PUT section of their .bpts file. The embedded server can be disabled with the --engine-type none option for the compare and run subcommands, like this:

mubpel run --engine-type none f.bpts f.bpel > output.xml
mubpel compare --engine-type none f.bpts f.bpel output.xml mutants...

Call real processes instead of mockups

If you do not want MuBPEL to replace all references to external Web Services with references to BPELUnit mockups, you may want to use the --preserve-urls option with compare and run:

mubpel run --preserve-urls f.bpts f.bpel > output.xml
mubpel compare --preserve-urls f.bpts f.bpel output.xml mutants...

Evaluate path coverage using ActiveBPEL execution logs

As of today (2013/12/09), test coverage evaluation in BPELUnit is undergoing a heavy redesign and is not available to end users. However, an alternative tool is available when using the ActiveBPEL engine: the activebpel-path-coverage tool. Follow these steps to install it:

  1. Download the latest -uberjar.tar.gz distribution from the Nexus binary repository.
  2. Unpack the archive and make sure that the main activebpel-path-coverage script is in a directory listed in your PATH environment variable, so you can run it simply with activebpel-path-coverage from your shell.

After it has been installed and the ActiveBPEL process execution logs have been produced as described in the "Obtain execution logs" tutorial above, the coverage reports can be generated in plain text or XML format. For instance, the command below will produce three reports: not-run.xml with the paths that were not run, executed.xml with the paths that were run by each test case, and possible.xml with the possible paths through the composition.

activebpel-path-coverage \
 --possiblePaths possible.xml \
 --executedPaths executed.xml \
 f.bpel activebpel/process-logs/*.log 2> not-run.xml

Plain text reports can be obtained by adding the --plain flag, like this:

activebpel-path-coverage --plain \
 --possiblePaths possible.txt \
 --executedPaths executed.txt \
 f.bpel activebpel/process-logs/*.log 2> not-run.txt

Evaluate sentence and branch coverage using ActiveBPEL execution logs

While activebpel-path-coverage produces reports for path coverage, it does not provide information about sentence or branch coverage. This information can be extracted using the Perl scripts from the Takuan tool with the appropriate options.

The most reliable way to install Takuan across distributions and architectures is through its .par file:

  1. Install the PAR::Packer module using CPAN or the distribution's package manager. In Debian-based systems, this can be done with sudo apt-get install libpar-packer-perl.
  2. Download the 2.0.1 .par release from the Nexus repository.
  3. Make sure the Takuan Perl scripts can be run using parl path/to/takuan-perl-2.0.1.par. A help message with all the available options should be printed.

After Takuan has been installed and the process logs have been produced, the coverage report can be generated in XML format with:

parl path/to/takuan-perl-2.0.1.par \
  --coverage --no-java \
  f.bpel f.bpel activebpel/process-logs/*.log

The resulting coverage log will be placed in activebpel/process-logs/coverage.log. To generate the coverage report in plain text format, --coverage=plain should be used instead:

parl path/to/takuan-perl-2.0.1.par \
  --coverage=plain --no-java \
  f.bpel f.bpel activebpel/process-logs/*.log

Validation checklist

Before reporting issues, please make sure the .bpts and .bpel files meet the following constraints:

  • Partners in the .bpts file should have the same names as their corresponding partner links in the .bpel file.
  • When using the ActiveBPEL engine embedded in MuBPEL, the service URL in the WSDL interface of the composition should be of the form http://localhost:(port)/active-bpel/services/(service name), where (port) is irrelevant (MuBPEL will place the correct value during execution) and (service name) should be the name of the <service> element in the WSDL interface of the composition.

If these constraints are not met, you may run into 404 errors while running the test suite. If invoking a mockup service results in a 404 error even though the constraints are met, please check that the corresponding partner track has the expected activities: BPELUnit will not set up mockups for an empty partner track.
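
As a quick sanity check for the second constraint, the service addresses declared in the WSDL files can be listed with grep (a sketch; the pattern simply matches the URL form described above):

$ grep -o 'http://localhost:[0-9]*/active-bpel/services/[^"]*' *.wsdl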

How do I count equivalent mutants?

Manually :-). In theory, there is no way to fully automate the process, as deciding whether a mutant is equivalent is an undecidable problem. Some work on detecting part of the equivalent mutants has been done in the literature, but the only way to be 99.9% sure is to read the code yourself.

In order to find all the equivalent mutants, you may follow these rough steps:

  1. Find all surviving mutants (see the recipes above).
  2. Compare each mutant against the original program (a scripted shortcut is sketched after this list):
    • If you can think of a test that kills it, add it. Remember that the test needs to a) reach the mutated code, b) make the mutant do something different, and c) propagate the change to the output.
    • If you think it cannot possibly be killed because it behaves the same as the original process, mark it as equivalent.
    • If you're not sure, continue with the next mutant for now.
  3. Run the surviving mutants against the new test cases to see if you've really killed them. You may kill mutants that you didn't expect (even apparently equivalent ones): that's a good thing.
  4. Go back to 2 until you think that all the remaining mutants are equivalent.
  5. Count how many mutants you have tagged as equivalent.
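
Steps 1 and 2 can be partly scripted by combining the earlier recipes. This is only a sketch, assuming results.txt from mubpel compare and normalized.bpel from the "Compare the code" tutorial:

$ for m in $(egrep -vw '1|2' results.txt | awk '{print $1}'); do echo "=== $m ==="; diff normalized.bpel "$m"; done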

Additional example compositions

The test suite used in MuBPEL includes several compositions which can be used to test-drive it. You can check them out using Subversion:

svn co https://neptuno.uca.es/svn/sources-fm/trunk/src/mubpel/src/test/resources

Alternatively, you may want to browse them through a web interface. Please visit source:trunk/src/mubpel/src/test/resources.

In addition, we are working on a public repository of WS-BPEL compositions.

Other learning materials

We have prepared some slides in Spanish about MuBPEL for a PhD lecture, along with an example composition. Please refer to the attached mubpel-phd-lecture.zip file.

LoanApprovalRPC.zip - Example composition (8.12 KB) Antonio García Domínguez, 09/24/2012 01:34 AM

mubpel-phd-lecture.zip - PhD seminar slides on MuBPEL (in Spanish) (287 KB) Antonio García Domínguez, 04/16/2013 07:29 PM