Software Engineering for Smart Data Analytics & Smart Data Analytics for Software Engineering

Prolog Benchmark Tool

With the Prolog Benchmark Tool (PBT) you can:

  • measure the runtime of specific analyses, written in Prolog
    • … on different Prolog versions (not only SWI)
    • … on different factbases (you just need the pl file, exported from JTransformer)
    • measure the influence of indexing on the runtime
  • count the number of results of a specific analysis
    • filter duplicates if needed



Before running the PBT you have to update two property files and adjust them to your system:


In the first file you have to define some directories and files:

  • base_dir: currently not important, just set some existing directory
  • factbase_folder: The folder where you keep your factbases (as pl files). Add an empty subfolder called “qlf” to this folder and the tool will automatically transform the pl files to qlf files.
  • pdt_load_file: “path to your PDT installation”\prolog.library\pl\
  • st_java_load_file: “path to your JTransformer installation”\\pl\
  • jt_prolog_load_file: “path to your JTransformer installation”\jtransformer.prolog.engine\pl\
  • output_file: The file where you want to save your results (this file will NOT be overwritten; new results are appended to the end of the file). This is just a default setting; you can change the file for a specific benchmark run in the GUI.
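Put together, such a file might look like this. This is only a sketch: the keys are the ones listed above, but the key=value syntax (standard Java .properties format, with backslashes escaped) and all paths are assumptions and must be adjusted to your system:

```properties
# Illustrative example — all paths are placeholders.
base_dir=C:\\temp
factbase_folder=C:\\pbt\\factbases
pdt_load_file=C:\\eclipse\\plugins\\prolog.library\\pl\\
jt_prolog_load_file=C:\\jtransformer\\jtransformer.prolog.engine\\pl\\
output_file=C:\\pbt\\results.txt
```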


In the second file you have to add all Prolog versions you want to run benchmarks on. Each line follows the same pattern:

"Name"|"Path to executable"

Example: To compare the runtime of your queries on SWI-Prolog 7.1.13, 6.2.0, and 6.0.2, specify

SWI 7.1.13|C:\prolog\7.1.13\bin\swipl-win.exe
SWI 6.2.0|C:\prolog\6.2.0\bin\swipl-win.exe
SWI 6.0.2|C:\prolog\6.0.2\bin\swipl-win.exe

Creating a configuration

To create a configuration, just start the program (main class: org.cs3.plbenchmarks.RunPBT) without any additional parameters. Depending on your property files, you will see something similar to this:

(Screenshot: GUI version of the Prolog Benchmark Tool)

In the “Configuration” section, you can:

  • select the factbase/Prolog version combinations on which your benchmark is going to be executed
    • you can use the context menu to select a whole row/column
  • give a name to the current configuration
  • select a load file with your Prolog analyses (can be left empty if you just want to check JTransformer predicates)
  • select whether the JTransformer Prolog code should be loaded or not
  • select an output file

Benchmark code

In the “Benchmark code” section, you can write the goals which you want to check. Every line is a unique goal 1). For every goal the benchmark tool will count the number of results and measure the time it took to get all these results.
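Each line is an ordinary Prolog goal. For example, with these two illustrative goals (they don't depend on any factbase):

```prolog
member(X, [a, b, c, a])
atom_length(benchmark, L)
```

the tool would report 4 results for the first goal (member/2 succeeds once per list element, duplicates included) and 1 result for the second.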

There are additional flags you can add to the goals.

Only count unique results

Sometimes your result set contains duplicates. If you want to filter these duplicates and only count the unique results, you can use the following flag:


Note that this means the goal will be executed twice, so you should only use it if the predicate is cheap or if you really need the unique count.
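The difference between the two counts can be illustrated in plain Prolog (this is not the PBT flag syntax, just the semantics behind it):

```prolog
% Counting all solutions, duplicates included:
?- findall(X, member(X, [a, b, a]), All), length(All, N).
% N = 3

% Counting only unique solutions — sort/2 removes duplicates:
?- findall(X, member(X, [a, b, a]), All), sort(All, Unique), length(Unique, M).
% M = 2
```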

Run a goal more than once

In many cases the first execution of a predicate is slower, because Prolog builds up index structures on demand. Subsequent executions can make use of these index structures and are therefore faster. If you want to measure the influence of index creation, you can use the following flag:


This means that the goal is executed five times. In the results you can distinguish the first call from the subsequent calls, to see how the runtime changed.

Run a configuration from the GUI

After you are done with your configuration, click the button “Add to batch” and your configuration will appear in the list on the right. You can add as many different configurations as you like. If you change something, you always have to click “Update current” to apply the changes; otherwise the changes will be ignored.

In the small toolbar on the bottom of the list you can:

  • run the complete list of configurations
  • delete the selected configuration
  • load the list of configurations from a file
  • save the list of configurations to a file (Use this before running the benchmarks. Otherwise, in case of crashes, your configurations might be lost.)

Run a configuration without the GUI

You don't need the GUI to run the benchmarks. Just use the GUI to create the configurations and save them to a file. Then start the program with one additional parameter (the absolute path to your configuration file) and it will run directly, without any GUI.
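An invocation could look like this. Only the main class and the config-file argument come from this page; the jar name and both paths are placeholders for whatever your setup uses:

```
java -cp pbt.jar org.cs3.plbenchmarks.RunPBT C:\pbt\my-configurations.pbt
```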

(Screenshot: Running the PBT without a GUI)

Common problems

QLF files are not used correctly

Make sure that your factbase folder contains only the pl files and not the qlf files. The qlf files belong in a subfolder of the factbase folder called “qlf”. If it is not there, create it manually and the PBT will do the rest.
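The expected layout looks like this (folder and file names are illustrative):

```
factbases\              <- factbase_folder from the properties file
    myproject.pl        <- exported from JTransformer
    otherproject.pl
    qlf\                <- create this empty subfolder manually
        myproject.qlf   <- generated automatically by the PBT
```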

"Exit Status is 1"-message

This means that there is a bug in your code (e.g. some predicate could not be found). Do you need to check the “JT” box in the PBT (because some of your predicates use JT-specific predicates)? Did you try to run the benchmark code manually in a Prolog process? Any exception in the code will lead to this message. Unfortunately, there is currently no precise handling of exceptions, so you have to check for bugs manually.

Prolog crashed, and the PBT doesn't respond

Before starting the Prolog process, the PBT creates a lockfile and waits for the Prolog process to delete it. If Prolog crashes, the file is not deleted and the PBT thinks that Prolog is still busy. Delete the lockfile manually (refresh the project in Eclipse and you should see it) and the PBT will continue.

Results remain unchanged, even if the factbase was changed

If you change a pl file, you also have to delete all qlf files (in the qlf subfolder) which belonged to it. The PBT doesn't check whether a qlf file and a pl file match; it will just load the old qlf file and ignore the new pl file.

PBT reports 0 results, but in manual testing there are results

This might be a case where your benchmark code uses derived facts (e.g. ast_node_type_value). These facts are not created automatically by the PBT (because creating them takes time and they are not always needed). If you need these facts, just add “create_derived_facts” as the first goal in your benchmark code.
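A benchmark code section that relies on derived facts would then start like this. Only the predicate names come from the text above; the arguments of the second goal are illustrative, not its actual signature:

```
create_derived_facts
ast_node_type_value(Id, Type, Value)
```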

1) You don't need to put a dot at the end.
research/pdt/benchmark_tool.txt · Last modified: 2018/05/09 01:59 (external edit)

SEWiki, © 2020