This page contains instructions on how to tune Clover's performance when running your builds and measuring code coverage.
Unique coverage relates to a line of code that was hit by only one test. Unique coverage tracking can be switched off to reduce CPU & memory usage when running Clover. You can configure unique coverage reporting in the following Clover components:
If you use Clover in your build purely for Test Optimization purposes and not for coverage reporting, you can reduce the granularity of Clover instrumentation from the statement level to the method level. Setting the 'instrumentationLevel' attribute to 'method' allows for speedier instrumentation, compilation and test execution.
This speeds up the build at the cost of some accuracy, and is the setting to use if you want to improve Clover's performance. When this attribute is set to 'statement' (the default), builds will take longer but the optimization intelligence will also be stronger.
You can configure instrumentation level in the following Clover-for-Ant tasks:
See the Ant Task Reference for more information.
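For example, a minimal Ant configuration might look like the following sketch. The initString path is illustrative; check the Ant Task Reference for the attributes your Clover version supports:

```xml
<!-- Illustrative sketch: method-level instrumentation for Test Optimization -->
<clover-setup initString=".clover/clover.db" instrumentationLevel="method"/>
```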
During your test runs, Clover tries to record total code coverage and per-test code coverage as efficiently as possible but defaults to settings best for applications which are not highly CPU intensive. If your application is highly CPU intensive and code coverage recording is causing slow running tests, the following options may assist:
Supply this option to the JVM running your tests:
-Dclover.pertest.coverage=diff
This changes the way per-test coverage is recorded at runtime to work faster for CPU intensive applications.
Supply this option to the JVM running your tests:
-Dclover.pertest.coverage=off
This tells Clover to not record any per-test coverage data at runtime. With this you gain a faster running time for CPU intensive applications, although you lose per-test coverage information.
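As an illustration, if you launch your tests directly with the java command, the property can be supplied on the command line. The class name and classpath below are placeholders, not real Clover artifacts:

```
java -Dclover.pertest.coverage=off -cp build/classes:clover.jar MyTestRunner
```

When tests run under Ant or Maven, the same property is typically passed through the build tool's mechanism for forwarding JVM system properties to the forked test JVM.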
This appendix contains a few sample performance results based on synthetic and real code. Results for your project may differ.
The following open source libraries were tested using statement- and method-level instrumentation.
Each library's standard suite of unit tests was executed and the total test execution time was measured.
(*) only 200 test classes were executed for Commons Math
The performance penalty of method- and statement-level instrumentation may vary significantly, especially for CPU-intensive applications.
Strategy | System property | Comment |
---|---|---|
disabled | -Dclover.pertest.coverage=off | Use this to disable per-test coverage recording. |
diffing | -Dclover.pertest.coverage=diff | In theory this works well for highly CPU-intensive code (many hit counts to be recorded) combined with a relatively small code base (smaller hit count arrays to be compared). |
single threaded | (nothing) | Default strategy. Designed for single-threaded applications. Per-test coverage might be inaccurate when used with a multi-threaded application. Very good performance. |
synchronized | -Dclover.pertest.coverage.threading=synchronized | Safe for multi-threaded applications. Carries a performance penalty, as the recording of every hit count is wrapped in a synchronized block. |
volatile | -Dclover.pertest.coverage.threading=volatile | Safe for multi-threaded applications, but requires at least JRE 1.5. |
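The threading trade-off in the table above can be illustrated with a small sketch. This is not Clover's actual recorder code; the class and method names are invented for illustration only:

```java
// Hypothetical sketch of a hit-count recorder, contrasting the
// "single threaded" and "synchronized" strategies from the table above.
public class HitRecorderSketch {
    private final int[] hits;

    public HitRecorderSketch(int slots) {
        hits = new int[slots];
    }

    // "single threaded" strategy: a plain, unsynchronized increment.
    // Fastest, but concurrent threads may lose updates, so per-test
    // coverage data can become inaccurate.
    public void incUnsafe(int index) {
        hits[index]++;
    }

    // "synchronized" strategy: every increment takes the monitor lock.
    // Safe for multi-threaded tests, but adds overhead on every hit.
    public synchronized void incSynchronized(int index) {
        hits[index]++;
    }

    public int getHits(int index) {
        return hits[index];
    }

    public static void main(String[] args) {
        HitRecorderSketch recorder = new HitRecorderSketch(2);
        for (int i = 0; i < 1000; i++) {
            recorder.incUnsafe(0);
            recorder.incSynchronized(1);
        }
        System.out.println(recorder.getHits(0) + " " + recorder.getHits(1));
    }
}
```

With a single thread, as here, both counters end up equal; under concurrent load the unsynchronized counter could under-count, which is why the unsynchronized strategy is fast but only reliable for single-threaded tests.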
The following class:
import junit.framework.TestCase;

public class PerformanceTest extends TestCase {
    static int hitsPerTest;
    static int numberOfTests;

    public void testPerformance() {
        for (int i = 0; i < hitsPerTest; i++); // empty loop, one R.inc(...) call per loop
    }

    public static void main(String[] args) {
        numberOfTests = Integer.valueOf(args[0]);
        hitsPerTest = Integer.valueOf(args[1]);
        PerformanceTest pt = new PerformanceTest();
        for (int i = 0; i < numberOfTests; i++) {
            pt.testPerformance();
        }
    }
}
was instrumented using the Fixed Coverage Recorder and executed with different per-test recording strategies. In order to have roughly 10,000,000 hits recorded by Clover, the application was executed with the following arguments:
PerformanceTest 10 1000000
PerformanceTest 20 500000
PerformanceTest 50 200000
PerformanceTest 100 100000
PerformanceTest 200 50000
PerformanceTest 500 20000
PerformanceTest 1000 10000
PerformanceTest 10000 1000
Test environment: JDK 1.5, Windows 7; Core i7 2670QM 2.2 GHz; 8GB RAM; HDD 750GB 7200RPM.
For a large number of tests, performance is mainly affected by the fact that the coverage recording file is created on the hard disk. If you have more than 1000 tests, there is practically no difference between the strategies.
For CPU-intensive applications, the "diffing" strategy is slightly faster than the "single threaded" or "volatile" strategies.