Add a guide to benchmark configuration options #168

Merged 1 commit · Feb 8, 2024

docs/configuration-options.md · 107 additions, 0 deletions

# Mastering kotlinx-benchmark Configuration

This guide walks through the configuration options kotlinx-benchmark provides and shows how to use them to fine-tune your benchmarking setup.

## The `configurations` Section

The `configurations` section of the `benchmark` block serves as the control center for setting the parameters of your benchmark profiles. The library provides a default configuration profile named `"main"`, which can be configured just like any other profile. Here's the basic structure:

```kotlin
// build.gradle.kts
benchmark {
    configurations {
        register("smoke") {
            // Configure this configuration profile here
        }
        // Here you can create additional profiles
    }
}
```
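
Because the `"main"` profile already exists, it is configured with `named` rather than `register`. A minimal sketch, with illustrative values:

```kotlin
// build.gradle.kts
benchmark {
    configurations {
        named("main") {
            warmups = 5      // number of warm-up iterations
            iterations = 10  // number of measurement iterations
        }
    }
}
```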

## Understanding Configuration Profiles

Configuration profiles dictate the execution pattern of benchmarks:

- Use the `include` and `exclude` options to select specific benchmarks for a profile; by default, every benchmark is included (see the example after this list).
- Each configuration profile translates to a task in the `kotlinx-benchmark` Gradle plugin. For instance, the task `smokeBenchmark` is tailored to run benchmarks based on the `"smoke"` configuration profile. For an overview of tasks, refer to [tasks-overview.md](tasks-overview.md).
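
For example, here is a sketch of a profile that selects benchmarks by name; the package and name patterns are illustrative:

```kotlin
// build.gradle.kts
benchmark {
    configurations {
        register("smoke") {
            // Run only benchmarks from the (hypothetical) com.example.text package...
            include("com\\.example\\.text\\..*")
            // ...but skip any benchmark with "Slow" in its name
            exclude(".*Slow.*")
        }
    }
}
```

Running the `smokeBenchmark` task then executes only the benchmarks selected by this profile.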

## Core Configuration Options

Note that values defined in the build script take precedence over those specified by annotations in the code.

| Option | Description | Possible Values | Corresponding Annotation |
|-------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------|-------------------------------------------------------|
| `iterations` | Sets the number of iterations for measurements. | Positive Integer | `@Measurement(iterations: Int, ...)` |
| `warmups` | Sets the number of iterations for system warming, ensuring accurate measurements. | Non-negative Integer | `@Warmup(iterations: Int)` |
| `iterationTime` | Sets the duration for each iteration, both measurement and warm-up. | Positive Integer | `@Measurement(..., time: Int, ...)` |
| `iterationTimeUnit` | Defines the unit for `iterationTime`. | Time unit, see below | `@Measurement(..., timeUnit: BenchmarkTimeUnit, ...)` |
| `outputTimeUnit` | Sets the unit for the results display. | Time unit, see below | `@OutputTimeUnit(value: BenchmarkTimeUnit)` |
| `mode` | Selects "thrpt" (Throughput) for measuring the number of function calls per unit time or "avgt" (AverageTime) for measuring the time per function call. | `"thrpt"`, `"Throughput"`, `"avgt"`, `"AverageTime"` | `@BenchmarkMode(value: Mode)` |
| `include("…")`                      | Includes only benchmarks whose fully qualified names contain a match of the given regular expression.                                                    | Regex pattern                                              | -                                                     |
| `exclude("…")`                      | Excludes benchmarks whose fully qualified names contain a match of the given regular expression.                                                         | Regex pattern                                              | -                                                     |
| `param("name", "value1", "value2")` | Assigns values to a public mutable property with the specified name, annotated with `@Param`. | String values that represent valid values for the property | `@Param` |
| `reportFormat` | Defines the benchmark report's format options. | `"json"`(default), `"csv"`, `"scsv"`, `"text"` | - |

The following values can be used to specify a time unit:
- "NANOSECONDS", "ns", "nanos"
- "MICROSECONDS", "us", "micros"
- "MILLISECONDS", "ms", "millis"
- "SECONDS", "s", "sec"
- "MINUTES", "m", "min"

## Platform-Specific Configuration Options

The options listed in the following sections allow you to tailor the benchmark execution behavior for specific platforms:

### Kotlin/Native
| Option | Description | Possible Values | Default Value |
|-----------------------------------------------|------------------------------------------------------------------------------------------------------------------------|------------------------------------|------------------|
| `advanced("nativeFork", "value")` | Executes iterations within the same process ("perBenchmark") or each iteration in a separate process ("perIteration"). | `"perBenchmark"`, `"perIteration"` | `"perBenchmark"` |
| `advanced("nativeGCAfterIteration", value)` | Whether to trigger garbage collection after each iteration. | `true`, `false` | `false` |

### Kotlin/JVM
| Option | Description | Possible Values | Default Value |
|---------------------------------------------|------------------------------------------------------------|----------------------------------------|----------------|
| `advanced("jvmForks", value)` | Specifies the number of times the harness should fork. | Non-negative Integer, `"definedByJmh"` | `1` |

**Notes on `jvmForks`:**
- **0** – no forking; benchmarks run in the same process, without spawning any subprocesses.
- A positive integer – the fork count used for all benchmarks in this configuration.
- **`"definedByJmh"`** – lets JMH determine the fork count, using the value in the [`@Fork` annotation](https://javadoc.io/static/org.openjdk.jmh/jmh-core/1.21/org/openjdk/jmh/annotations/Fork.html) on the benchmark function or its enclosing class. If `@Fork` is not specified, it defaults to [Defaults.MEASUREMENT_FORKS (`5`)](https://javadoc.io/static/org.openjdk.jmh/jmh-core/1.21/org/openjdk/jmh/runner/Defaults.html#MEASUREMENT_FORKS).

The library also lets you specify the version of the Java Microbenchmark Harness (JMH) used to run benchmarks on the JVM.
The default version is `1.21`, but you can customize it when registering a JVM target for benchmarking:

```kotlin
// build.gradle.kts
// Requires: import kotlinx.benchmark.gradle.JvmBenchmarkTarget
benchmark {
    targets {
        register("jvmBenchmarks") {
            // Cast to access JVM-specific target settings
            this as JvmBenchmarkTarget
            jmhVersion = "1.36"
        }
    }
}
```

Alternatively, you can set the project property `benchmarks_jmh_version` to achieve the same effect.
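For instance, in `gradle.properties` (standard Gradle project-property mechanics are assumed here):

```properties
# gradle.properties
benchmarks_jmh_version=1.36
```

or on the command line with `-Pbenchmarks_jmh_version=1.36`.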

### Kotlin/JS & Kotlin/Wasm
| Option | Description | Possible Values | Default Value |
|-----------------------------------------------|-------------------------------------------------------------------------------------------------------|-----------------|---------------|
| `advanced("jsUseBridge", value)` | Generate special benchmark bridges to stop inlining optimizations. | `true`, `false` | `true` |

**Note:** In the Kotlin/JS target, the "jsUseBridge" option only takes effect when the `BuiltIn` benchmark executor is selected.
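
A sketch of turning bridge generation off for a profile (the profile name is illustrative):

```kotlin
// build.gradle.kts
benchmark {
    configurations {
        register("jsInlined") {
            // Keep inlining optimizations by skipping bridge generation
            advanced("jsUseBridge", false)
        }
    }
}
```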

By default, kotlinx-benchmark employs the `benchmark.js` library for running benchmarks in Kotlin/JS.
However, you can select the library's built-in benchmarking implementation instead:

```kotlin
// build.gradle.kts
// Requires: import kotlinx.benchmark.gradle.JsBenchmarkTarget
//           import kotlinx.benchmark.gradle.JsBenchmarksExecutor
benchmark {
    targets {
        register("jsBenchmarks") {
            // Cast to access JS-specific target settings
            this as JsBenchmarkTarget
            jsBenchmarksExecutor = JsBenchmarksExecutor.BuiltIn
        }
    }
}
```