I did some simple Subplot stress testing. These are the results.
The test machine is a VM: single vCPU, 8 GiB RAM. The bare-metal host has an Intel(R) Core(TM) i7-4790K CPU @ 4.00 GHz. Almost nothing else was running on the host or in the guest.
I used a simple script to generate test documents, with N scenarios of M steps each, and corresponding bindings and functions files. There is a binding for each of the M steps, but only one function.
I varied N and M. The script reports the elapsed wall clock time for generating the test document (and related files), for docgen, for codegen, and for running the generated test program. The table below collects results from different runs.
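The actual script isn't included here, but the following is a rough sketch of what such a generator could look like. The Subplot document layout, the bindings file format, and the `subplot docgen`/`subplot codegen` command names are assumptions and may not match the Subplot version used for these runs:

```
#!/bin/bash
# stress N M -- hypothetical sketch of the generator script, not the one
# actually used. It writes a Subplot document with N scenarios of M steps
# each, a bindings file with one binding per step, and a functions file
# with a single no-op function, then reports rough wall-clock times.
# File formats and Subplot command names below are assumptions and may
# differ between Subplot versions.
set -eu

N="$1"
M="$2"

phase() {  # print whole seconds elapsed since the previous call
    echo "$1: $((SECONDS - start))s"
    start=$SECONDS
}
start=$SECONDS

# Test document: YAML front matter plus N scenarios of M steps each.
cat > stress.md <<EOF
---
title: Stress test
bindings: stress.yaml
functions: stress.py
---
EOF
for s in $(seq "$N"); do
    printf '\n# Scenario %d\n\n~~~scenario\n' "$s" >> stress.md
    for t in $(seq "$M"); do
        printf 'given step %d\n' "$t" >> stress.md
    done
    printf '~~~\n' >> stress.md
done

# Bindings: one entry per distinct step, all mapped to the same function.
: > stress.yaml
for t in $(seq "$M"); do
    printf -- '- given: step %d\n  function: nop\n' "$t" >> stress.yaml
done

# Functions: a single no-op implementation.
cat > stress.py <<'EOF'
def nop(ctx):
    pass
EOF
phase generate

subplot docgen stress.md -o stress.html   # assumed command name and flag
phase docgen

subplot codegen stress.md -o test.py      # assumed command name and flag
phase codegen

python3 test.py   # run the generated test program (assuming the Python template)
phase runtime
```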
I used the following command lines to run the tests on a freshly booted system:
```
for i in 1 10 100 1000; do ./stress $i 1; done
for i in 1 10 100 1000; do ./stress 1 $i; done
for i in 1 10 100 1000; do ./stress 1000 $i; done
```
N | M | generate | docgen | codegen | runtime |
---|---|---|---|---|---|
1 | 1 | 0 | 0 | 0 | 0 |
10 | 1 | 0 | 1 | 1 | 1 |
100 | 1 | 0 | 1 | 1 | 1 |
1000 | 1 | 0 | 3 | 3 | 3 |
1 | 1 | 0 | 1 | 1 | 1 |
1 | 10 | 0 | 1 | 1 | 1 |
1 | 100 | 0 | 1 | 1 | 1 |
1 | 1000 | 0 | 5 | 9 | 9 |
1000 | 1 | 0 | 3 | 3 | 3 |
1000 | 10 | 0 | 5 | 7 | 7 |
1000 | 100 | 0 | 49 | 82 | 84 |
The last round of 1000 scenarios of 1000 steps each failed due to a stack overflow panic.
I meant to also find the largest number of one-step scenarios that works, and the largest number of steps in a single scenario that works, but Subplot does not seem to get significantly slower as the number of scenarios increases. However, it slows down quickly as the number of steps and bindings grows. I didn’t profile to see where the slowness comes from.
Using a release build might also affect these results.