Graph transformation benchmarks

Until now, no collection of benchmarks has existed for comparing tools in the graph transformation area. This webpage aims to bridge this gap by describing a basic set of benchmark examples together with the scenarios for which they can be used. Moreover, our initiative includes a quantitative performance comparison of graph transformation tools: for each test case we define parameter settings and optimization possibilities, and we ask tool providers to implement these test cases.

Introduction

Benchmarking plays a key role in decision making whenever a choice has to be made between several alternatives. To fulfil this role, system designers need an accurate view of the system, which means that its characteristics have to be measured under different circumstances (i.e., using several parameter combinations for the measurements).

Graph transformation provides pattern- and rule-based manipulation of graph models. Since graph-based models are used in many fields, graph transformation is a widely applicable approach, and a large variety of graph transformation tools exists (AGG, Fujaba, GReAT, Groove, PROGRES, Viatra).
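To make this notion concrete, the following minimal sketch (plain Java, with hypothetical class and rule names; it does not reflect the data model or API of any of the tools listed above) matches a fixed pattern in a small labelled graph and then rewrites the matched part:

    import java.util.*;

    // A minimal labelled graph: nodes carry a label, edges carry a label.
    // Purely illustrative; not the data model of any listed tool.
    class TinyGraph {
        record Edge(int src, String label, int trg) {}
        final Map<Integer, String> nodeLabel = new HashMap<>();
        final List<Edge> edges = new ArrayList<>();

        int addNode(String label) {
            int id = nodeLabel.size();
            nodeLabel.put(id, label);
            return id;
        }

        void addEdge(int src, String label, int trg) {
            edges.add(new Edge(src, label, trg));
        }
    }

    public class RuleDemo {
        // Pattern matching phase: find every occurrence of the rule's
        // left-hand side, i.e. an edge  A --r--> B.
        static List<TinyGraph.Edge> match(TinyGraph g) {
            List<TinyGraph.Edge> matches = new ArrayList<>();
            for (TinyGraph.Edge e : g.edges)
                if ("A".equals(g.nodeLabel.get(e.src()))
                        && "r".equals(e.label())
                        && "B".equals(g.nodeLabel.get(e.trg())))
                    matches.add(e);
            return matches;
        }

        // Updating phase: replace the matched edge by a new "C" node
        // connected to both pattern nodes (the rule's right-hand side).
        static void apply(TinyGraph g, TinyGraph.Edge m) {
            g.edges.remove(m);
            int c = g.addNode("C");
            g.addEdge(m.src(), "toC", c);
            g.addEdge(c, "toC", m.trg());
        }

        public static void main(String[] args) {
            TinyGraph g = new TinyGraph();
            int a = g.addNode("A"), b = g.addNode("B");
            g.addEdge(a, "r", b);
            for (TinyGraph.Edge m : match(g)) apply(g, m);
            System.out.println(g.nodeLabel + " " + g.edges);
        }
    }

The two methods separate the pattern matching phase from the actual model update; real graph transformation tools provide far more general pattern languages and matching strategies.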

However, up to this point no collection of benchmarks has existed for comparing such tools. This webpage aims to bridge this gap by describing a basic set of benchmark examples and the scenarios in which they can be used. For a quantitative comparison of tool performance, each test case additionally defines parameter settings and optimization possibilities, and tool providers are asked to implement these test cases.

In the case of graph transformation benchmarks, the measured quantity, which in turn forms the basis of comparison, is the execution time of the pattern matching and updating phases. (Note that the time needed to generate the initial models is not included in the measurements, so this topic is not discussed here.) Execution times are measured for several tools and on different test sets, while the underlying hardware remains the same for all benchmarks.
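As an illustration of this setup, the sketch below (plain Java; the Transformation interface and its methods are assumptions made for illustration, not the API of any listed tool) builds the initial model outside the timed region and measures the pattern matching and updating phases separately:

    // Hypothetical interface that a tool adapter would implement for a
    // benchmark run; none of the listed tools exposes exactly this API.
    interface Transformation {
        Object buildInitialModel(int n);               // model generation: not timed
        Object matchPattern(Object model);             // pattern matching phase
        void applyUpdate(Object model, Object match);  // updating phase
    }

    public class BenchmarkHarness {
        public static void run(Transformation t, int n) {
            Object model = t.buildInitialModel(n);     // excluded from the measurement

            long t0 = System.nanoTime();
            Object match = t.matchPattern(model);      // timed: pattern matching
            long t1 = System.nanoTime();
            t.applyUpdate(model, match);               // timed: update
            long t2 = System.nanoTime();

            System.out.printf("N=%d  matching: %.3f ms  update: %.3f ms%n",
                    n, (t1 - t0) / 1e6, (t2 - t1) / 1e6);
        }
    }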

Benchmark examples

Section 1 introduces a benchmark example that is typical of checking the specification of a system defined in a visual language with dynamic semantics. The benchmark of Sec. 2 is a model transformation example. Sec. 3 presents a special test set that exercises only the pattern matching phase, but with varying model and pattern sizes.

1. Distributed mutual exclusion algorithm

Specification: see Sec. 3 of the technical report
Available implementations:

1.1 STS test set

Test cases and runs for the STS test set
Multiplicity opt.  Param. passing  Parallel execution  N     AGG            PROGRES        FUJABA         DB
OFF                OFF             OFF                 10    STSmany10      STSmany10      STSmany10      STSmany10
OFF                OFF             OFF                 100   STSmany100     STSmany100     STSmany100     STSmany100
OFF                OFF             OFF                 1000  STSmany1000    STSmany1000    STSmany1000    STSmany1000
ON                 OFF             OFF                 10    STSone10       STSone10       STSone10
ON                 OFF             OFF                 100   STSone100      STSone100      STSone100
ON                 OFF             OFF                 1000  STSone1000     STSone1000     STSone1000
OFF                ON              OFF                 10    STSmanyPP10    STSmanyPP10    STSmanyPP10    STSmanyPP10
OFF                ON              OFF                 100   STSmanyPP100   STSmanyPP100   STSmanyPP100   STSmanyPP100
OFF                ON              OFF                 1000  STSmanyPP1000  STSmanyPP1000  STSmanyPP1000  STSmanyPP1000
ON                 ON              OFF                 10    STSonePP10     STSonePP10     STSonePP10
ON                 ON              OFF                 100   STSonePP100    STSonePP100    STSonePP100
ON                 ON              OFF                 1000  STSonePP1000   STSonePP1000   STSonePP1000
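The run names in the table above follow a simple scheme: "many" marks runs with multiplicity optimization OFF and "one" marks runs with it ON, the "PP" suffix marks parameter passing ON, and the trailing number is N (parallel execution is OFF in every STS run). Assuming this reading of the names, a small sketch enumerating the same combinations could look like:

    public class StsTestCases {
        public static void main(String[] args) {
            int[] sizes = {10, 100, 1000};
            for (boolean multiplicityOpt : new boolean[]{false, true})
                for (boolean paramPassing : new boolean[]{false, true})
                    for (int n : sizes) {
                        // "many" = multiplicity optimization OFF, "one" = ON,
                        // "PP" marks runs with parameter passing enabled.
                        String name = "STS" + (multiplicityOpt ? "one" : "many")
                                            + (paramPassing ? "PP" : "") + n;
                        System.out.println(name);
                    }
        }
    }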

1.2 LTS test set

Test cases and runs for the LTS test set
Multiplicity opt.  Param. passing  Parallel execution  N     R     AGG        PROGRES    FUJABA     DB
OFF                OFF             OFF                 4     100   LTS4,100   LTS4,100   LTS4,100   LTS4,100
OFF                OFF             OFF                 1000  1     LTS1000,1  LTS1000,1  LTS1000,1  LTS1000,1

1.3 ALAP test set

Test cases and runs for the ALAP test set
Multiplicity opt.  Param. passing  Parallel execution  N     AGG          PROGRES      FUJABA       DB
OFF                OFF             OFF                 10    ALAP10       ALAP10       ALAP10       ALAP10
OFF                OFF             OFF                 100   ALAP100      ALAP100      ALAP100      ALAP100
OFF                OFF             OFF                 1000  ALAP1000     ALAP1000     ALAP1000     ALAP1000
OFF                OFF             ON                  10    ALAPpar10    ALAPpar10    ALAPpar10
OFF                OFF             ON                  100   ALAPpar100   ALAPpar100   ALAPpar100
OFF                OFF             ON                  1000  ALAPpar1000  ALAPpar1000  ALAPpar1000

2. Object-relational mapping

Specification: see Sec. 4 of the technical report

3. Comb structure

Specification: see Sec. 5 of the technical report

Links to graph transformation tools

Related publications