tudasc/Artifact-FortranContractAnalysis

Artifact for 'Extending Contract Verification for Parallel Programming Models to Fortran'

To reproduce the results in the paper, the JUBE benchmarking system is required. Additionally, the module structure may need to be adapted to the target system.

The artifact is structured as follows:

  • artifact/: Contains all results used in the paper
  • jube_common.xml: Generic setup of CoVer, MUST, and the module system, shared by the correctness and performance benchmarks
  • jube_performance.xml: JUBE benchmark definitions for the performance analysis. See the reproduction steps on how to use it.
  • jube_correctness.xml: JUBE benchmark definitions for the classification quality analysis. See the reproduction steps on how to use it.
  • bench_data/: Supporting files required to execute the benchmarks. See here for details.
  • bench_scripts/: Supporting files used to parse the results of the benchmarks. See here for details.
  • Tool_Sources/: Source code for the MUST and CoVer versions used in this evaluation.
  • Correctness_Sources/: Source code for the classification quality test files from MPI-BugBench.
  • Performance_Sources/: Source code for the proxy applications used for the runtime overhead analysis.

Reproduction Steps

The benchmarks are designed to run on the Lichtenberg II cluster. To adapt to a different environment, modify the job submission script in bench_data/submit.job.in and the module setup in jube_common.xml as needed. Installation of the JUBE benchmarking framework is required. See here for details.
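JUBE is distributed via PyPI, so a typical installation looks like the following sketch; the package name JUBE is an assumption based on the upstream distribution, and a cluster may instead provide JUBE through its module system:

```shell
# Install the JUBE benchmarking environment into the current
# Python environment (assumed PyPI package name: JUBE):
python3 -m pip install JUBE

# Verify that the jube command is available:
jube --version
```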

The benchmark system makes use of the JUBE tag system to specify options. Some of them are shared across the performance and correctness benchmarks:

  • must: Run the benchmark using MUST / MUST with filtered instrumentation
  • cover: Run the benchmark using CoVer-Dynamic / CoVer-Dynamic with filtered instrumentation
  • base: Run the benchmark without any analysis tools
  • ompi: Use OpenMPI 4.1.6 for this benchmark; this disables the Fortran tests in the correctness checks. If omitted, a modified MPICH 4.3.2 is used instead (with the patch given in bench_data/7479.patch).

Tags can be combined: specifying e.g. both must and cover leads to both MUST and CoVer being tested in the same benchmark execution.
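Since the tags are additive, one invocation can exercise several configurations; a sketch using the file names from this repository and the tag behavior described above:

```shell
# Performance benchmarks with MUST, CoVer-Dynamic, and the tool-free
# baseline in a single JUBE execution, using the default patched
# MPICH 4.3.2 (no 'ompi' tag given):
jube run jube_performance.xml -t must cover base
```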

Reproduction Steps - Performance

To run the performance benchmarks, run jube run jube_performance.xml -t must cover base submit. These tags select the options used in the original execution that produced the paper's result data. Additional tags:

  • submit: Submit the benchmarks as jobs to SLURM. If omitted, the benchmarks will run locally (one-node execution, useful for testing)

Reproduction Steps - Correctness

To run the correctness benchmarks, run jube run jube_correctness.xml with the appropriate tags. For the MPICH Fortran + C run, add -t must cover. For the MUST run using OpenMPI on C, add -t must ompi.
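The two correctness configurations can be sketched as separate invocations; these mirror the commands above and assume JUBE is on the PATH:

```shell
# MPICH run covering the Fortran + C tests, with both MUST and CoVer:
jube run jube_correctness.xml -t must cover

# OpenMPI run with MUST (C tests only, since the 'ompi' tag
# disables the Fortran correctness tests):
jube run jube_correctness.xml -t must ompi
```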
