diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md new file mode 100644 index 00000000..019ffb40 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -0,0 +1,23 @@ +--- +name: Bug report +about: Create a report to help us improve +title: "" +labels: "" +assignees: "" +--- + +**Describe the bug** + +**To Reproduce** +Attach a code snippet or test data if possible. + +**Expected behavior** + +**Environment** + +- Kotlin version: [e.g. 1.3.30] +- Library version: [e.g. 0.11.0] +- Kotlin platforms: [e.g. JVM, JS, Native or their combinations] +- Gradle version: [e.g. 4.10] +- IDE version (if bug is related to the IDE) [e.g. IntellijIDEA 2019.1, Android Studio 3.4] +- Other relevant context [e.g. OS version, JRE version, ... ] diff --git a/CHANGELOG.md b/CHANGELOG.md index a6a4dae0..c5dc9961 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,27 @@ # CHANGELOG +## 0.4.8 + +- Drop legacy JS support +- Support building large JARs [#95](https://github.com/Kotlin/kotlinx-benchmark/issues/95) +- Support Kotlin 1.8.20 +- Fix JVM and Native configuration cache warnings + +## 0.4.7 + +- Support Kotlin 1.8.0 + +## 0.4.6 + +- Support Gradle 8.0 +- Sign kotlinx-benchmark-plugin artifacts with the Signing Plugin +- Upgrade Kotlin version to 1.7.20 +- Upgrade Gradle version to 7.4.2 + +## 0.4.5 + +- Remove redundant jmh-core dependency from plugin + ## 0.4.4 - Require the minimum Kotlin version of 1.7.0 diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md new file mode 100644 index 00000000..7b2c3c5e --- /dev/null +++ b/CONTRIBUTING.md @@ -0,0 +1,62 @@ +# Contributing Guidelines + +There are two main ways to contribute to the project — submitting issues and submitting +fixes/changes/improvements via pull requests. + +## Submitting issues + +Both bug reports and feature requests are welcome. +Submit issues [here](https://github.com/Kotlin/kotlinx-benchmark/issues). + +- Search for existing issues to avoid reporting duplicates. 
+- When submitting a bug report: + - Use a 'bug report' template when creating a new issue. + - Test it against the most recently released version. It might have been already fixed. + - By default, we assume that your problem reproduces in Kotlin/JVM. Please, mention if the problem is + specific to a platform. + - Include the code that reproduces the problem. Provide the complete reproducer code, yet minimize it as much as possible. + - However, don't put off reporting any unusual or rarely appearing issues just because you cannot consistently + reproduce them. + - If the bug is in behavior, then explain what behavior you've expected and what you've got. +- When submitting a feature request: + - Use a 'feature request' template when creating a new issue. + - Explain why you need the feature — what's your use-case, what's your domain. + - Explaining the problem you're facing is more important than suggesting a solution. + Report your problem even if you don't have any proposed solution. + - If there is an alternative way to do what you need, then show the code of the alternative. + +## Submitting PRs + +We love PRs. Submit PRs [here](https://github.com/Kotlin/kotlinx-benchmark/pulls). +However, please keep in mind that maintainers will have to support the resulting code of the project, +so do familiarize yourself with the following guidelines. + +- If you fix documentation: + - If you plan extensive rewrites/additions to the docs, then please [contact the maintainers](#contacting-maintainers) + to coordinate the work in advance. +- If you make any code changes: + - Follow the [Kotlin Coding Conventions](https://kotlinlang.org/docs/reference/coding-conventions.html). + - Use 4 spaces for indentation. + - Use imports with '\*'. + - Build the project to make sure it all works and passes the tests. +- If you fix a bug: + - Write the test that reproduces the bug. 
+ - Fixes without tests are accepted only in exceptional circumstances if it can be shown that writing the + corresponding test is too hard or otherwise impractical. + - Follow the style of writing tests that is used in this project: + name test functions as `testXxx`. Don't use backticks in test names. +- Comment on the existing issue if you want to work on it. Ensure that the issue not only describes a problem, + but also describes a solution that has received positive feedback. Propose a solution if there isn't any. + +## Building + +This library is built with Gradle. + +- Run `./gradlew build` to build. It also runs all the tests. +- Run `./gradlew :check` to test the module you're currently working on to speed things up during development. + +## Contacting maintainers + +- If something cannot be done, is not convenient, or does not work — submit an [issue](https://github.com/Kotlin/kotlinx-benchmark/issues). +- "How to do something" questions — [StackOverflow](https://stackoverflow.com). +- Discussions and general inquiries — use `#benchmarks` channel in [KotlinLang Slack](https://kotl.in/slack).
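To make the test-style conventions above concrete, here is a minimal sketch of how a bug-reproducing test might look. The class name and the behavior being asserted are hypothetical; the snippet uses the `kotlin.test` API so it works in common code:

```kotlin
import kotlin.test.Test
import kotlin.test.assertEquals

class ReportFormatTest {
    // Function name follows the `testXxx` convention; no backticked names.
    @Test
    fun testCsvRowUsesCommaSeparator() {
        val row = listOf("benchmarkMethod", "1.23").joinToString(",")
        assertEquals("benchmarkMethod,1.23", row)
    }
}
```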
diff --git a/README.md b/README.md index 03eb087b..40d84ae9 100644 --- a/README.md +++ b/README.md @@ -1,273 +1,441 @@ +# kotlinx-benchmark + [![Kotlin Alpha](https://kotl.in/badges/alpha.svg)](https://kotlinlang.org/docs/components-stability.html) [![JetBrains incubator project](https://jb.gg/badges/incubator.svg)](https://confluence.jetbrains.com/display/ALL/JetBrains+on+GitHub) [![GitHub license](https://img.shields.io/badge/license-Apache%20License%202.0-blue.svg?style=flat)](https://www.apache.org/licenses/LICENSE-2.0) -[![Build status](https://teamcity.jetbrains.com/guestAuth/app/rest/builds/buildType:(id:KotlinTools_KotlinxCollectionsImmutable_Build_All)/statusIcon.svg)](https://teamcity.jetbrains.com/viewType.html?buildTypeId=KotlinTools_KotlinxBenchmark_Build_All) +[![Build status](https://teamcity.jetbrains.com/guestAuth/app/rest/builds/buildType:(id:KotlinTools_KotlinxBenchmark_Build_All)/statusIcon.svg)](https://teamcity.jetbrains.com/viewType.html?buildTypeId=KotlinTools_KotlinxBenchmark_Build_All) [![Maven Central](https://img.shields.io/maven-central/v/org.jetbrains.kotlinx/kotlinx-benchmark-runtime.svg?label=Maven%20Central)](https://search.maven.org/search?q=g:%22org.jetbrains.kotlinx%22%20AND%20a:%22kotlinx-benchmark-runtime%22) [![Gradle Plugin Portal](https://img.shields.io/maven-metadata/v?label=Gradle%20Plugin&metadataUrl=https://plugins.gradle.org/m2/org/jetbrains/kotlinx/kotlinx-benchmark-plugin/maven-metadata.xml)](https://plugins.gradle.org/plugin/org.jetbrains.kotlinx.benchmark) [![IR](https://img.shields.io/badge/Kotlin%2FJS-IR%20supported-yellow)](https://kotl.in/jsirsupported) +kotlinx-benchmark is a toolkit for running benchmarks for multiplatform code written in Kotlin. -> **_NOTE:_**   Starting from version 0.3.0 of the library: -> * The library runtime is published to Maven Central and no longer published to Bintray. 
-> * The Gradle plugin is published to Gradle Plugin Portal -> * The Gradle plugin id has changed to `org.jetbrains.kotlinx.benchmark` -> * The library runtime artifact id has changed to `kotlinx-benchmark-runtime` +## Features +- Low noise and reliable results +- Statistical analysis +- Detailed performance reports -**kotlinx.benchmark** is a toolkit for running benchmarks for multiplatform code written in Kotlin -and running on the following supported targets: JVM, JavaScript and Native. +## Table of contents -Both Legacy and IR backends are supported for JS, however `kotlin.js.compiler=both` or `js(BOTH)` target declaration won't work. -You should declare each targeted backend separately. See build script of the [kotlin-multiplatform example project](https://github.com/Kotlin/kotlinx-benchmark/tree/master/examples/kotlin-multiplatform). + -On JVM [JMH](https://openjdk.java.net/projects/code-tools/jmh/) is used under the hoods to run benchmarks. -This library has a very similar way of defining benchmark methods. Thus, using this library you can run your JMH-based -Kotlin/JVM benchmarks on other platforms with minimum modifications, if any at all. 
+- [Using in Your Projects](#using-in-your-projects) + - [Project Setup](#project-setup) + - [Target-specific configurations](#target-specific-configurations) + - [Kotlin/JVM](#kotlinjvm) + - [Kotlin/JS](#kotlinjs) + - [Kotlin/Native](#kotlinnative) + - [Kotlin/WASM](#kotlinwasm) + - [Writing Benchmarks](#writing-benchmarks) + - [Running Benchmarks](#running-benchmarks) + - [Benchmark Configuration Profiles](#benchmark-configuration-profiles) + - [Separate source sets for benchmarks](#separate-source-sets-for-benchmarks) +- [Examples](#examples) +- [Contributing](#contributing) -# Requirements + -Gradle 7.0 or newer +- **Additional links** + - [Code Benchmarking: A Brief Overview](docs/benchmarking-overview.md) + - [Understanding Benchmark Runtime](docs/benchmark-runtime.md) + - [Configuring kotlinx-benchmark](docs/configuration-options.md) + - [Interpreting and Analyzing Results](docs/interpreting-results.md) + - [Creating Separate Source Sets](docs/seperate-source-sets.md) + - [Tasks Overview](docs/tasks-overview.md) + - [Compatibility Guide](docs/compatibility.md) + - [Submitting issues and PRs](CONTRIBUTING.md) -Kotlin 1.7.20 or newer +## Using in Your Projects -# Gradle plugin +The `kotlinx-benchmark` library is designed to work with Kotlin/JVM, Kotlin/JS, Kotlin/Native, and Kotlin/WASM (experimental) targets. +To get started, ensure you're using Kotlin 1.8.20 or newer and Gradle 8.0 or newer. -Use plugin in `build.gradle`: +### Project Setup -```groovy -plugins { - id 'org.jetbrains.kotlinx.benchmark' version '0.4.4' -} -``` +Follow the steps below to set up a Kotlin Multiplatform project for benchmarking. -For Kotlin/JS specify building `nodejs` flavour: +
+Kotlin DSL -```groovy -kotlin { - js { - nodejs() - … - } -} -``` +1. **Applying Benchmark Plugin**: Apply the benchmark plugin. -For Kotlin/JVM code, add `allopen` plugin to make JMH happy. Alternatively, make all benchmark classes and methods `open`. + ```kotlin + // build.gradle.kts + plugins { + id("org.jetbrains.kotlinx.benchmark") version "0.4.9" + } + ``` -For example, if you annotated each of your benchmark classes with `@State(Scope.Benchmark)`: -```kotlin -@State(Scope.Benchmark) -class Benchmark { - … -} -``` -and added the following code to your `build.gradle`: -```groovy -plugins { - id 'org.jetbrains.kotlin.plugin.allopen' -} +2. **Specifying Plugin Repository**: Ensure you have the Gradle Plugin Portal for plugin lookup in the list of repositories: -allOpen { - annotation("org.openjdk.jmh.annotations.State") -} -``` -then you don't have to make benchmark classes and methods `open`. + ```kotlin + // settings.gradle.kts + pluginManagement { + repositories { + gradlePluginPortal() + } + } + ``` + +3. **Adding Runtime Dependency**: Next, add the `kotlinx-benchmark-runtime` dependency to the common source set: + + ```kotlin + // build.gradle.kts + kotlin { + sourceSets { + commonMain { + dependencies { + implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.9") + } + } + } + } + ``` -# Runtime Library +4. **Specifying Runtime Repository**: Ensure you have `mavenCentral()` for dependencies lookup in the list of repositories: -You need a runtime library with annotations and code that will run benchmarks. + ```kotlin + // build.gradle.kts + repositories { + mavenCentral() + } + ``` -Enable Maven Central for dependencies lookup: -```groovy -repositories { - mavenCentral() -} -``` +
-Add the runtime to dependencies of the platform source set, e.g.: -``` -kotlin { - sourceSets { - commonMain { - dependencies { - implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.4") - } +
+Groovy DSL + +1. **Applying Benchmark Plugin**: Apply the benchmark plugin. + + ```groovy + // build.gradle + plugins { + id 'org.jetbrains.kotlinx.benchmark' version '0.4.9' + } + ``` + +2. **Specifying Plugin Repository**: Ensure you have the Gradle Plugin Portal for plugin lookup in the list of repositories: + + ```groovy + // settings.gradle + pluginManagement { + repositories { + gradlePluginPortal() } } -} -``` + ``` + +3. **Adding Runtime Dependency**: Next, add the `kotlinx-benchmark-runtime` dependency to the common source set: + + ```groovy + // build.gradle + kotlin { + sourceSets { + commonMain { + dependencies { + implementation 'org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.9' + } + } + } + } + ``` -# Configuration +4. **Specifying Runtime Repository**: Ensure you have `mavenCentral()` for dependencies lookup in the list of repositories: -In a `build.gradle` file create `benchmark` section, and inside it add a `targets` section. -In this section register all compilations you want to run benchmarks for. -`register` should either be called on the name of a target (e.g. `"jvm"`) which will register its `main` compilation -(meaning that `register("jvm")` and `register("jvmMain")` register the same compilation) -Or on the name of a source set (e.g. `"jvmTest"`, `"jsBenchmark"`) which will register the apt compilation -(e.g. `register("jsFoo")` uses the `foo` compilation defined for the `js` target) -Example for multiplatform project: + ```groovy + // build.gradle + repositories { + mavenCentral() + } + ``` -```groovy -benchmark { - targets { - register("jvm") - register("js") - register("native") - register("wasm") // Experimental +
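Putting the Project Setup steps above together, a complete minimal build script (shown in the Kotlin DSL) looks roughly like this. This is a sketch: it assumes the Kotlin Multiplatform plugin is applied alongside the benchmark plugin, and the version numbers mirror those used in this README, so they may need updating. Targets are added in the next section.

```kotlin
// build.gradle.kts — consolidated sketch of steps 1–4 above.
plugins {
    kotlin("multiplatform") version "1.8.21"
    id("org.jetbrains.kotlinx.benchmark") version "0.4.9"
}

// Repository for the runtime dependency (step 4).
repositories {
    mavenCentral()
}

// Runtime dependency in the common source set (step 3).
kotlin {
    sourceSets {
        commonMain {
            dependencies {
                implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.9")
            }
        }
    }
}
```

The plugin repository (step 2) still goes in `settings.gradle.kts`, as shown above.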
+ +### Target-specific configurations + +To run benchmarks on a platform ensure your Kotlin Multiplatform project targets that platform. +For different platforms, there may be distinct requirements and settings that need to be configured. +The guide below contains the steps needed to configure each supported platform for benchmarking. + +#### Kotlin/JVM + +To run benchmarks in Kotlin/JVM: +1. Create a JVM target: + + ```kotlin + // build.gradle.kts + kotlin { + jvm() } -} -``` + ``` -This package can also be used for Java and Kotlin/JVM projects. Register a Java sourceSet as a target: +2. Register `jvm` as a benchmark target: -```groovy -benchmark { - targets { - register("main") + ```kotlin + // build.gradle.kts + benchmark { + targets { + register("jvm") + } } -} -``` + ``` + +3. Apply [allopen plugin](https://kotlinlang.org/docs/all-open-plugin.html) to ensure your benchmark classes and methods are `open`. -To configure benchmarks and create multiple profiles, create a `configurations` section in the `benchmark` block, -and place options inside. Toolkit creates `main` configuration by default, and you can create as many additional -configurations, as you need. + ```kotlin + // build.gradle.kts + plugins { + kotlin("plugin.allopen") version "1.8.21" + } + allOpen { + annotation("org.openjdk.jmh.annotations.State") + } + ``` -```groovy -benchmark { - configurations { - main { - // configure default configuration +
+ Explanation + + Assume that you've annotated each of your benchmark classes with `@State(Scope.Benchmark)`: + + ```kotlin + // MyBenchmark.kt + @State(Scope.Benchmark) + class MyBenchmark { + // Benchmarking-related methods and variables + fun benchmarkMethod() { + // benchmarking logic } - smoke { - // create and configure "smoke" configuration, e.g. with several fast benchmarks to quickly check - // if code changes result in something very wrong, or very right. - } } -} -``` + ``` -Available configuration options: - -* `iterations` – number of measuring iterations -* `warmups` – number of warm up iterations -* `iterationTime` – time to run each iteration (measuring and warmup) -* `iterationTimeUnit` – time unit for `iterationTime` (default is seconds) -* `outputTimeUnit` – time unit for results output -* `mode` - - "thrpt" (default) – measures number of benchmark function invocations per time - - "avgt" – measures time per benchmark function invocation -* `include("…")` – regular expression to include benchmarks with fully qualified names matching it, as a substring -* `exclude("…")` – regular expression to exclude benchmarks with fully qualified names matching it, as a substring -* `param("name", "value1", "value2")` – specify a parameter for a public mutable property `name` annotated with `@Param` -* `reportFormat` – format of report, can be `json`(default), `csv`, `scsv` or `text` -* There are also some advanced platform-specific settings that can be configured using `advanced("…", …)` function, - where the first argument is the name of the configuration parameter, and the second is its value. 
Valid options: - * (Kotlin/Native) `nativeFork` - - "perBenchmark" (default) – executes all iterations of a benchmark in the same process (one binary execution) - - "perIteration" – executes each iteration of a benchmark in a separate process, measures in cold Kotlin/Native runtime environment - * (Kotlin/Native) `nativeGCAfterIteration` – when set to `true`, additionally collects garbage after each measuring iteration (default is `false`). - * (Kotlin/JVM) `jvmForks` – number of times harness should fork (default is `1`) - - a non-negative integer value – the amount to use for all benchmarks included in this configuration, zero means "no fork" - - "definedByJmh" – let the underlying JMH determine, which uses the amount specified in [`@Fork` annotation](https://javadoc.io/static/org.openjdk.jmh/jmh-core/1.21/org/openjdk/jmh/annotations/Fork.html) defined for the benchmark function or its enclosing class, - or [Defaults.MEASUREMENT_FORKS (`5`)](https://javadoc.io/static/org.openjdk.jmh/jmh-core/1.21/org/openjdk/jmh/runner/Defaults.html#MEASUREMENT_FORKS) if it is not specified by `@Fork`. - * (Kotlin/Js and Wasm) `jsUseBridge` – when `false` disables to generate special benchmark bridges to prevent inlining optimisations (only for `BuiltIn` benchmark executors). - -Time units can be NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, or their short variants such as "ms" or "ns". - -Example: - -```groovy -benchmark { - // Create configurations - configurations { - main { // main configuration is created automatically, but you can change its defaults - warmups = 20 // number of warmup iterations - iterations = 10 // number of iterations - iterationTime = 3 // time in seconds per iteration + In Kotlin, classes are `final` by default, which means they can't be overridden. + This is incompatible with the operation of the Java Microbenchmark Harness (JMH), which kotlinx-benchmark uses under the hood for running benchmarks on JVM. 
+ JMH requires benchmark classes and methods to be `open` to be able to generate subclasses and conduct the benchmark. + + This is where the `allopen` plugin comes into play. With the plugin applied, any class annotated with `@State` is treated as `open`, which allows JMH to work as intended: + + ```kotlin + // build.gradle.kts + plugins { + kotlin("plugin.allopen") version "1.8.21" + } + + allOpen { + annotation("org.openjdk.jmh.annotations.State") + } + ``` + + This configuration ensures that your `MyBenchmark` class and its `benchmarkMethod` function are treated as `open`. + +
+ + You can alternatively mark your benchmark classes and methods `open` manually, but using the `allopen` plugin enhances code maintainability. + +#### Kotlin/JS + +To run benchmarks in Kotlin/JS: +1. Create a JS target with Node.js execution environment: + + ```kotlin + // build.gradle.kts + kotlin { + js(IR) { + nodejs() + } + } + ``` + +2. Register `js` as a benchmark target: + + ```kotlin + // build.gradle.kts + benchmark { + targets { + register("js") + } + } + ``` + +For Kotlin/JS, only the [IR compiler backend](https://kotlinlang.org/docs/js-ir-compiler.html) is supported. + +#### Kotlin/Native + +To run benchmarks in Kotlin/Native: +1. Create a Native target: + + ```kotlin + // build.gradle.kts + kotlin { + linuxX64("native") + } + ``` + +2. Register `native` as a benchmark target: + + ```kotlin + // build.gradle.kts + benchmark { + targets { + register("native") } - smoke { - warmups = 5 // number of warmup iterations - iterations = 3 // number of iterations - iterationTime = 500 // time in seconds per iteration - iterationTimeUnit = "ms" // time unit for iterationTime, default is seconds - } } + ``` - // Setup targets - targets { - // This one matches compilation base name, e.g. 'jvm', 'jvmTest', etc - register("jvm") { - jmhVersion = "1.21" // available only for JVM compilations & Java source sets +This library supports all [targets supported by the Kotlin/Native compiler](https://kotlinlang.org/docs/native-target-support.html). + +#### Kotlin/WASM + +To run benchmarks in Kotlin/WASM: +1. Create a WASM target with D8 execution environment: + + ```kotlin + // build.gradle.kts + kotlin { + wasm { + d8() } - register("js") { - // Note, that benchmarks.js uses a different approach of minTime & maxTime and run benchmarks - // until results are stable. 
We estimate minTime as iterationTime and maxTime as iterationTime*iterations - // - // You can configure benchmark executor - benchmarkJs or buildIn (works only for JsIr backend) with the next line: - // jsBenchmarksExecutor = JsBenchmarksExecutor.BuiltIn + } + ``` + +2. Register `wasm` as a benchmark target: + + ```kotlin + // build.gradle.kts + benchmark { + targets { + register("wasm") } - register("native") - register("wasm") // Experimental } -} -``` - -# Separate source sets for benchmarks + ``` -Often you want to have benchmarks in the same project, but separated from main code, much like tests. Here is how: -For a Kotlin/JVM project: +Note: Kotlin/WASM is an experimental compilation target for Kotlin. It may be dropped or changed at any time. -Define source set: -```groovy -sourceSets { - benchmarks -} -``` +### Writing Benchmarks -Propagate dependencies and output from `main` sourceSet. +After setting up your project and configuring targets, you can start writing benchmarks. +As an example, let's write a simplified benchmark that tests how fast we can add up numbers in an ArrayList: -```groovy -dependencies { - benchmarksCompile sourceSets.main.output + sourceSets.main.runtimeClasspath -} -``` +1. **Create Benchmark Class**: Create a class in your source set where you'd like to add the benchmark. Annotate this class with `@State(Scope.Benchmark)`. -You can also add output and compileClasspath from `sourceSets.test` in the same way if you want -to reuse some of the test infrastructure. + ```kotlin + @State(Scope.Benchmark) + class MyBenchmark { + } + ``` -Register `benchmarks` source set: +2. **Set up Variables**: Define variables needed for the benchmark. -```groovy -benchmark { - targets { - register("benchmarks") + ```kotlin + private val size = 10 + private val list = ArrayList() + ``` + +3. **Initialize Resources**: Within the class, you can define any setup or teardown methods using `@Setup` and `@TearDown` annotations respectively. 
These methods will be executed before and after the entire benchmark run. + + ```kotlin + @Setup + fun prepare() { + for (i in 0 until size) { + list.add(i) + } + } + + @TearDown + fun cleanup() { + list.clear() + } + ``` + +4. **Define Benchmark Method**: Next, create methods that you would like to be benchmarked within this class and annotate them with `@Benchmark`. + + ```kotlin + @Benchmark + fun benchmarkMethod(): Int { + return list.sum() + } + ``` + +Your final benchmark class will look something like this: + +```kotlin +@State(Scope.Benchmark) +class MyBenchmark { + + private val size = 10 + private val list = ArrayList() + + @Setup + fun prepare() { + for (i in 0 until size) { + list.add(i) + } } -} -``` -For a Kotlin Multiplatform project: + @Benchmark + fun benchmarkMethod(): Int { + return list.sum() + } -Define a new compilation in whichever target you'd like (e.g. `jvm`, `js`, etc): -```groovy -kotlin { - jvm { - compilations.create('benchmark') { associateWith(compilations.main) } + @TearDown + fun cleanup() { + list.clear() } } ``` -Register it by its source set name (`jvmBenchmark` is the name for the `benchmark` compilation for `jvm` target): +Note: Benchmark classes located in the common source set will be run in all platforms, while those located in a platform-specific source set will be run only in the corresponding platform. + +See [writing benchmarks](docs/writing-benchmarks.md) for a complete guide for writing benchmarks. -```groovy +### Running Benchmarks + +To run your benchmarks in all registered platforms, run `benchmark` Gradle task in your project. +To run only on a specific platform, run `Benchmark`, e.g., `jvmBenchmark`. + +For more details about the tasks created by the kotlinx-benchmark plugin, refer to [this guide](docs/tasks-overview.md). + +### Benchmark Configuration Profiles + +The kotlinx-benchmark library provides the ability to create multiple configuration profiles. 
The `main` configuration is already created by the Toolkit. +Additional profiles can be created as needed in the `configurations` section of the `benchmark` block: + +```kotlin +// build.gradle.kts benchmark { - targets { - register("jvmBenchmark") + configurations { + named("main") { + warmups = 20 + iterations = 10 + iterationTime = 3 + iterationTimeUnit = "s" + } + register("smoke") { + include("") + warmups = 5 + iterations = 3 + iterationTime = 500 + iterationTimeUnit = "ms" + } } } -``` +``` + +Refer to our [comprehensive guide](docs/configuration-options.md) to learn about configuration options and how they affect benchmark execution. + +### Separate source sets for benchmarks + +Often you want to have benchmarks in the same project, but separated from main code, much like tests. +Refer to our [detailed documentation](docs/separate-source-sets.md) on configuring your project to add a separate source set for benchmarks. + +## Examples + +To help you better understand how to use the kotlinx-benchmark library, we've provided an [examples](examples) subproject. +These examples showcase various use cases and offer practical insights into the library's functionality. -# Examples +## Contributing -The project contains [examples](https://github.com/Kotlin/kotlinx-benchmark/tree/master/examples) subproject that demonstrates using the library. - +We welcome contributions to kotlinx-benchmark! If you want to contribute, please refer to our [Contribution Guidelines](CONTRIBUTING.md). \ No newline at end of file diff --git a/docs/benchmark-runtime.md b/docs/benchmark-runtime.md new file mode 100644 index 00000000..0edc0edf --- /dev/null +++ b/docs/benchmark-runtime.md @@ -0,0 +1,28 @@ +# Table of Contents +1. [Introduction](#Understanding-Benchmark-Runtime-Across-Targets) +2. [JVM: Harnessing JMH](#jvm-harnessing-jmh) +3. [JavaScript: Benchmark.js Integration and In-built Support](#javascript-benchmarkjs-integration-and-in-built-support) +4. 
[Native: Harnessing Native Capabilities](#native-harnessing-native-capabilities) +5. [WebAssembly (Wasm): Custom-Built Benchmarking](#webassembly-wasm-custom-built-benchmarking) + +# Understanding Benchmark Runtime Across Targets + +This guide sheds light on the underlying libraries that Kotlinx Benchmark utilizes to measure performance on each supported target, and elucidates the benchmark runtime process. + +## JVM: Harnessing JMH +In the JVM ecosystem, Kotlinx Benchmark builds on the Java microbenchmarking harness [JMH](https://openjdk.org/projects/code-tools/jmh/). Designed by OpenJDK, JMH is a well-respected tool for creating, executing, and scrutinizing nano/micro/milli/macro benchmarks composed in Java and other JVM-compatible languages. + +Kotlinx Benchmark complements JMH with an array of advanced features that fine-tune JVM-specific settings. An exemplary feature is the handling of 'forks', a mechanism that facilitates running multiple tests in distinct JVM processes. By doing so, it assures a pristine environment for each test, enhancing the reliability of the benchmark results. Moreover, its sophisticated error and exception handling system ensures that any issues arising during testing are logged and addressed. + +## JavaScript: Benchmark.js Integration and In-built Support +Targeting JavaScript, Kotlinx Benchmark utilizes the `benchmark.js` library to measure performance. Catering to both synchronous and asynchronous benchmarks, this library enables evaluation of a vast array of JavaScript operations. `benchmark.js` operates by setting up a suite of benchmarks, where each benchmark corresponds to a distinct JavaScript operation to be evaluated. + +It's noteworthy that, alongside `benchmark.js`, Kotlinx Benchmark also incorporates its own built-in yet somewhat limited benchmarking system for the Kotlin/JavaScript runtime.
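The executor used for Kotlin/JS is selected when registering the JS target in the build script. The `jsBenchmarksExecutor` option is mentioned in this repository's README; the Kotlin DSL sketch below is an assumption about how it is reached from a typed build script (the import paths and the cast to `JsBenchmarkTarget` reflect my reading of the plugin's types, and may differ between plugin versions):

```kotlin
// build.gradle.kts
import kotlinx.benchmark.gradle.JsBenchmarkTarget
import kotlinx.benchmark.gradle.JsBenchmarksExecutor

benchmark {
    targets {
        register("js") {
            // The registered target is typed as the base target type;
            // cast to the JS-specific type to reach the executor property.
            this as JsBenchmarkTarget
            jsBenchmarksExecutor = JsBenchmarksExecutor.BuiltIn
        }
    }
}
```

As the README notes, the built-in executor works only with the IR compiler backend; without this setting, `benchmark.js` is used.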
+ +## Native: Harnessing Native Capabilities +For Native platforms, Kotlinx Benchmark relies on its built-in benchmarking system, which is firmly rooted in platform-specific technologies. Benchmarks are defined in the form of suites, each representing a specific Kotlin/Native operation to be evaluated. + +## WebAssembly (Wasm): Custom-Built Benchmarking +For Kotlin code running on WebAssembly (Wasm), Kotlinx Benchmark uses built-in mechanisms to set up a testing environment and measure code performance. + +In this setup, a suite of benchmarks is similarly created, each pinpointing a different code segment. The execution time of each benchmark is gauged using high-resolution JavaScript functions, thereby providing accurate and precise performance measurements. \ No newline at end of file diff --git a/docs/benchmarking-overview.md b/docs/benchmarking-overview.md new file mode 100644 index 00000000..33ebc717 --- /dev/null +++ b/docs/benchmarking-overview.md @@ -0,0 +1,134 @@ +# Code Benchmarking: A Brief Overview + +This guide serves as your compass for mastering the art of benchmarking with kotlinx-benchmark. By harnessing the power of benchmarking, you can unlock performance insights in your code, uncover bottlenecks, compare different implementations, detect regressions, and make informed decisions for optimization. + +## Table of Contents + +1. [Understanding Benchmarking](#understanding-benchmarking) + - [Benchmarking Unveiled: A Beginner's Introduction](#benchmarking-unveiled-a-beginners-introduction) + - [Why Benchmarking Deserves Your Attention](#why-benchmarking-deserves-your-attention) + - [Benchmarking: A Developer's Torchlight](#benchmarking-a-developers-torchlight) +2. [Benchmarking Use Cases](#benchmarking-use-cases) +3. [Target Code for Benchmarking](#target-code-for-benchmarking) + - [What to Benchmark](#what-to-benchmark) + - [What Not to Benchmark](#what-not-to-benchmark) +4. 
[Maximizing Benchmarking](#maximizing-benchmarking) + - [Top Tips for Maximizing Benchmarking](#top-tips-for-maximizing-benchmarking) +5. [Community and Support](#community-and-support) +6. [Inquiring Minds: Your Benchmarking Questions Answered](#inquiring-minds-your-benchmarking-questions-answered) +7. [Further Reading and Resources](#further-reading-and-resources) + +## Understanding Benchmarking + +### Benchmarking Unveiled: A Beginner's Introduction + +Benchmarking is the magnifying glass for your code's performance. It helps you uncover performance bottlenecks, carry out comparative analyses, detect performance regressions, and evaluate different environments. By providing a standard and reliable method of performance measurement, benchmarking ensures code optimization and quality, and improves decision-making within the team and the wider development community. + +_kotlinx-benchmark_ is designed for microbenchmarking, providing a lightweight and accurate solution for measuring the performance of Kotlin code. + +### Why Benchmarking Deserves Your Attention + +The significance of benchmarking in software development is undeniable: + +- **Performance Analysis**: Benchmarks provide insights into performance characteristics, allowing you to identify bottlenecks and areas for improvement. +- **Algorithm Optimization**: By comparing different implementations, you can choose the most efficient solution. +- **Code Quality**: Benchmarking ensures that your code meets performance requirements and maintains high quality. +- **Scalability**: Understanding how your code performs at different scales helps you make optimization decisions and trade-offs. + +### Benchmarking: A Developer's Torchlight + +Benchmarking provides several benefits for software development projects: + +1. **Performance Optimization:** By benchmarking different parts of a system, developers can identify performance bottlenecks, areas for improvement, and potential optimizations. 
This helps in enhancing the overall efficiency and speed of the software. + +2. **Comparative Analysis:** Benchmarking allows developers to compare various implementations, libraries, or configurations to make informed decisions. It helps choose the best-performing option or measure the impact of changes made during development. + +3. **Regression Detection:** Regular benchmarking enables the detection of performance regressions, i.e., when a change causes a degradation in performance. This helps catch potential issues early in the development process and prevents performance degradation in production. + +4. **Hardware and Environment Variations:** Benchmarking helps evaluate the impact of different hardware configurations, system setups, or environments on performance. It enables developers to optimize their software for specific target platforms. + +## Benchmarking Use Cases + +Benchmarking serves as a critical tool across various scenarios in software development. Here are a few notable use cases: + +- **Performance Tuning:** Developers often employ benchmarking while optimizing algorithms, especially when subtle tweaks could lead to drastic performance changes. + +- **Library Selection:** When deciding between third-party libraries offering similar functionalities, benchmarking can help identify the most efficient option. + +- **Hardware Evaluation:** Benchmarking can help understand how a piece of software performs across different hardware configurations, aiding in better infrastructure decisions. + +- **Continuous Integration (CI) Systems:** Automated benchmarks as part of a CI pipeline help spot performance regressions in the early stages of development. + +## Target Code for Benchmarking + +### What to Benchmark + +Consider benchmarking these: + +- **Measurable Microcosms: Isolated Code Segments:** Benchmarking thrives on precision, making small, isolated code segments an excellent area of focus. 
These miniature microcosms of your codebase are more manageable and provide clearer, more focused insights into your application's performance characteristics. + +- **The Powerhouses: Performance-Critical Functions, Methods or Algorithms:** Your application's overall performance often hinges on a select few performance-critical sections of code. These powerhouses - whether they're specific functions, methods, or complex algorithms - have a significant influence on your application's overall performance and thus make for ideal benchmarking candidates. + +- **The Chameleons: Code Ripe for Optimization or Refactoring:** Change is the only constant in the world of software development. Parts of your code that are regularly refactored, updated, or optimized hold immense value from a benchmarking perspective. By tracking performance changes as this code evolves, you gain insights into the impact of your optimizations, ensuring that every tweak is a step forward in performance. + +### What Not to Benchmark + +It's best to avoid benchmarking: + +- **The Giants: Complex, Monolithic Code Segments:** Although it might be tempting to analyze large, intricate segments of your codebase, these can often lead to a benchmarking quagmire. Interdependencies within these sections can complicate your results, making it challenging to derive precise, actionable insights. Instead, concentrate your efforts on smaller, isolated parts of your code that can be analyzed in detail. + +- **The Bedrocks: Stagnant, Inflexible Code:** Code segments that are infrequently altered or have reached their final form may not provide much value from benchmarking. While it's important to understand their performance characteristics, it's the code that you actively optimize or refactor that can truly benefit from the continuous feedback loop that benchmarking provides. 
+
+- **The Simples: Trivial or Overly Simplistic Code Segments:** While every line of code contributes to the overall performance, directing your benchmarking efforts towards parts of your code that are overly simple or have a negligible impact may not yield much benefit. Concentrate on areas that have a more pronounced impact on your application's performance to ensure your efforts are well spent.
+
+- **The Wild Cards: Non-Reproducible or Unpredictable Behavior Code:** Consistency is key in benchmarking, so code that's influenced by external, unpredictable factors, such as I/O operations, network conditions, or random data generation, should generally be avoided. The resulting inconsistent benchmark results may obstruct your path to precise insights, hindering your optimization efforts.
+
+## Maximizing Benchmarking
+
+### Top Tips for Maximizing Benchmarking
+
+To obtain accurate and insightful benchmark results, keep in mind these essential tips:
+
+1. **Focus on Vital Code Segments**: Benchmark small, isolated code segments that are critical to performance or likely to be optimized.
+
+2. **Employ Robust Tools**: Use powerful benchmarking tools like kotlinx-benchmark that handle potential pitfalls and provide reliable measurement solutions.
+
+3. **Context is Crucial**: Supplement your benchmarking with performance evaluations on real applications to gain a holistic understanding of performance traits.
+
+4. **Control Your Environment**: Minimize external factors by running benchmarks in a controlled environment, reducing variations in results.
+
+5. **Warm-Up the Code**: Before benchmarking, execute your code multiple times. This allows the JVM to perform optimizations, leading to more accurate results.
+
+6. **Interpreting Results**: Understand that lower values are better for time-based modes, while higher values are better for throughput. Also, consider the statistical variance and look for meaningful differences, not just any difference. 
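+
+The warm-up and measurement tips above map directly onto annotations. As a rough sketch (the class and its workload are illustrative assumptions, not part of this guide), a kotlinx-benchmark benchmark with explicit warm-up and measurement settings might look like this:
+
+```kotlin
+import kotlinx.benchmark.*
+
+@State(Scope.Benchmark)
+@Warmup(iterations = 5)
+@Measurement(iterations = 10, time = 1, timeUnit = BenchmarkTimeUnit.SECONDS)
+@OutputTimeUnit(BenchmarkTimeUnit.MILLISECONDS)
+open class SortBenchmark {
+    // A fixed, deterministic workload keeps results reproducible
+    private val data = IntArray(10_000) { it * 31 % 7_919 }
+
+    @Benchmark
+    fun sortCopy(): IntArray = data.copyOf().also { it.sort() }
+}
+```
+
+Returning the result from the benchmark method helps prevent the compiler from optimizing the measured work away.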
+
+## Community and Support
+
+For further assistance and learning, consider engaging with these communities:
+
+- **Stack Overflow:** Use the `kotlinx-benchmark` tag to find or ask questions related to this tool.
+
+- **Kotlinlang Slack:** The `#benchmarks` channel is the perfect place to discuss topics related to benchmarking.
+
+- **GitHub Discussions:** The kotlinx-benchmark GitHub repository is another place to discuss and ask questions about this library.
+
+## Inquiring Minds: Your Benchmarking Questions Answered
+
+Benchmarking may raise a myriad of questions, especially when you're first getting started. To help you navigate through these complexities, we've compiled answers to some commonly asked questions.
+
+**1. The Warm-Up Riddle: Why is it Needed Before Benchmarking?**
+
+The Java Virtual Machine (JVM) features sophisticated optimization techniques, such as Just-In-Time (JIT) compilation, which becomes more effective as your code runs. Warming up allows these optimizations to take place, providing a more accurate representation of how your code performs under standard operating conditions.
+
+**2. Decoding Benchmark Results: How Should I Interpret Them?**
+
+In benchmarking, lower values represent better performance in time-based modes (and higher values in throughput mode). But don't get too fixated on minuscule differences. Remember to take into account statistical variances and concentrate on significant performance disparities. It's the impactful insights, not every minor fluctuation, that matter most.
+
+**3. Multi-threaded Conundrum: Can I Benchmark Multi-threaded Code with kotlinx-benchmark?**
+
+While kotlinx-benchmark is geared towards microbenchmarking — typically examining single-threaded performance — it's possible to benchmark multi-threaded code. However, keep in mind that such benchmarking can introduce additional complexities due to thread synchronization, contention, and other concurrency challenges. Always ensure you understand these intricacies before proceeding. 
+
+## Further Reading and Resources
+
+If you'd like to dig deeper into the world of benchmarking, here are some resources to help you on your journey:
+
+- [Mastering High Performance with Kotlin](https://www.amazon.com/Mastering-High-Performance-Kotlin-difficulties/dp/178899664X)
\ No newline at end of file
diff --git a/docs/compatibility.md b/docs/compatibility.md
new file mode 100644
index 00000000..babcc226
--- /dev/null
+++ b/docs/compatibility.md
@@ -0,0 +1,21 @@
+# Compatibility Guide
+
+This guide provides you with information on the compatibility of different versions of `kotlinx-benchmark` with both Kotlin and Gradle. To use `kotlinx-benchmark` effectively, ensure that you have the minimum required versions of Kotlin and Gradle installed.
+
+| `kotlinx-benchmark` Version | Minimum Required Kotlin Version | Minimum Required Gradle Version |
+| :-------------------------: | :-----------------------------: | :-----------------------------: |
+| 0.4.8 | 1.8.20 | 8.0 or newer |
+| 0.4.7 | 1.8.0 | 8.0 or newer |
+| 0.4.6 | 1.7.20 | 8.0 or newer |
+| 0.4.5 | 1.7.0 | 7.0 or newer |
+| 0.4.4 | 1.7.0 | 7.0 or newer |
+| 0.4.3 | 1.6.20 | 7.0 or newer |
+| 0.4.2 | 1.6.0 | 7.0 or newer |
+| 0.4.1 | 1.6.0 | 6.8 or newer |
+| 0.4.0 | 1.5.30 | 6.8 or newer |
+| 0.3.1 | 1.4.30 | 6.8 or newer |
+| 0.3.0 | 1.4.30 | 6.8 or newer |
+
+*Note: "Minimum Required" implies that any higher version than the one mentioned will also be compatible.*
+
+For more details about the changes, improvements, and updates in each `kotlinx-benchmark` version, please refer to the [RELEASE NOTES](https://github.com/Kotlin/kotlinx-benchmark/releases) and [CHANGELOG](../CHANGELOG.md). 
\ No newline at end of file diff --git a/docs/configuration-options.md b/docs/configuration-options.md new file mode 100644 index 00000000..fee253b4 --- /dev/null +++ b/docs/configuration-options.md @@ -0,0 +1,78 @@ +# Mastering kotlinx-benchmark Configuration + +This is a comprehensive guide to configuration options that help fine-tune your benchmarking setup to suit your specific needs. + +## The `configurations` Section + +The `configurations` section of the `benchmark` block serves as the control center for setting the parameters of your benchmark profiles. The library provides a default configuration profile named "main", which can be configured according to your needs just like any other profile. Here's a basic structure of how configurations can be set up: + +```kotlin +// build.gradle.kts +benchmark { + configurations { + register("smoke") { + // Configure this configuration profile here + } + // here you can create additional profiles + } +} +``` + +## Understanding Configuration Profiles + +Configuration profiles dictate the execution pattern of benchmarks: + +- Utilize `include` and `exclude` options to select specific benchmarks for a profile. By default, every benchmark is included. +- Each configuration profile translates to a task in the `kotlinx-benchmark` Gradle plugin. For instance, the task `smokeBenchmark` is tailored to run benchmarks based on the `"smoke"` configuration profile. For an overview of tasks, refer to [tasks-overview.md](tasks-overview.md). + +## Core Configuration Options + +Note that values defined in the build script take precedence over those specified by annotations in the code. 
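+
+For example, if a benchmark class were annotated with `@Measurement(iterations = 20)`, a profile could still override that value from the build script. A sketch, using the default `"main"` profile:
+
+```kotlin
+// build.gradle.kts
+benchmark {
+    configurations {
+        named("main") {
+            iterations = 5 // takes precedence over the @Measurement annotation
+            warmups = 3    // likewise overrides any @Warmup annotation
+        }
+    }
+}
+```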
+ +| Option | Description | Possible Values | Corresponding Annotation | +| ----------------------------------- |---------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------|-----------------------------------------------------| +| `iterations` | Sets the number of iterations for measurements. | Positive Integer | @Measurement(iterations: Int, ...) | +| `warmups` | Sets the number of iterations for system warming, ensuring accurate measurements. | Non-negative Integer | @Warmup(iterations: Int) | +| `iterationTime` | Sets the duration for each iteration, both measurement and warm-up. | Positive Integer | @Measurement(..., time: Int, ...) | +| `iterationTimeUnit` | Defines the unit for `iterationTime`. | Time unit, see below | @Measurement(..., timeUnit: BenchmarkTimeUnit, ...) | +| `outputTimeUnit` | Sets the unit for the results display. | Time unit, see below | @OutputTimeUnit(value: BenchmarkTimeUnit) | +| `mode` | Selects "thrpt" (Throughput) for measuring the number of function calls per unit time or "avgt" (AverageTime) for measuring the time per function call. | `thrpt`, `Throughput`, `avgt`, `AverageTime` | @BenchmarkMode(value: Mode) | +| `include("…")` | Applies a regular expression to include benchmarks that match the substring in their fully qualified names. | Regex pattern | - | +| `exclude("…")` | Applies a regular expression to exclude benchmarks that match the substring in their fully qualified names. | Regex pattern | - | +| `param("name", "value1", "value2")` | Assigns values to a public mutable property with the specified name, annotated with `@Param`. | String values that represent valid values for the property | @Param | +| `reportFormat` | Defines the benchmark report's format options. 
| `json`(default), `csv`, `scsv`, `text` | - | + +The following values can be used for specifying time unit: +- "NANOSECONDS", "ns", "nanos" +- "MICROSECONDS", "us", "micros" +- "MILLISECONDS", "ms", "millis" +- "SECONDS", "s", "sec" +- "MINUTES", "m", "min" + +## Platform-Specific Configuration Options + +The options listed in the following sections allow you to tailor the benchmark execution behavior for specific platforms: + +### Kotlin/Native +| Option | Description | Possible Values | Default Value | +|-----------------------------------------------|------------------------------------------------------------------------------------------------------------------------|--------------------------------|----------------| +| `advanced("nativeFork", "value")` | Executes iterations within the same process ("perBenchmark") or each iteration in a separate process ("perIteration"). | `perBenchmark`, `perIteration` | "perBenchmark" | +| `advanced("nativeGCAfterIteration", value)` | Whether to trigger garbage collection after each iteration. | `true`, `false` | `false` | + +### Kotlin/JVM +| Option | Description | Possible Values | Default Value | +|---------------------------------------------|------------------------------------------------------------|--------------------------------|----------------| +| `advanced("jvmForks", value)` | Specifies the number of times the harness should fork. | Integer, "definedByJmh" | `1` | +| `advanced("jvmProfiler", value)` | Sets the profiler to be used during benchmarking. 
| "[gc](https://github.com/openjdk/jmh/blob/master/jmh-samples/src/main/java/org/openjdk/jmh/samples/JMHSample_35_Profilers.java#L170-L212)", "[stack](https://github.com/openjdk/jmh/blob/master/jmh-samples/src/main/java/org/openjdk/jmh/samples/JMHSample_35_Profilers.java#L166-L168)", "[cl](https://github.com/openjdk/jmh/blob/master/jmh-samples/src/main/java/org/openjdk/jmh/samples/JMHSample_35_Profilers.java#L288-L304)", "[comp](https://github.com/openjdk/jmh/blob/master/jmh-samples/src/main/java/org/openjdk/jmh/samples/JMHSample_35_Profilers.java#L306-L318)" | No profiler | + +**Notes on "jvmForks":** +- **0** - "no fork", i.e., no subprocesses are forked to run benchmarks. +- A positive integer value – the amount used for all benchmarks in this configuration. +- **"definedByJmh"** – Let JMH determine the amount, using the value in the [`@Fork` annotation](https://javadoc.io/static/org.openjdk.jmh/jmh-core/1.21/org/openjdk/jmh/annotations/Fork.html) for the benchmark function or its enclosing class. If not specified by `@Fork`, it defaults to [Defaults.MEASUREMENT_FORKS (`5`)](https://javadoc.io/static/org.openjdk.jmh/jmh-core/1.21/org/openjdk/jmh/runner/Defaults.html#MEASUREMENT_FORKS). + +### Kotlin/JS & Kotlin/Wasm +| Option | Description | Possible Values | Default Value | +|-----------------------------------------------|-------------------------------------------------------------------------------------------------------|-----------------|---------------| +| `advanced("jsUseBridge", value)` | Generate special benchmark bridges to stop inlining optimizations. | `true`, `false` | `true` | + +**Note:** "jsUseBridge" works only when the `BuiltIn` benchmark executor is selected. 
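+
+Putting several of the options above together, a tuned profile in `build.gradle.kts` might look like the following sketch (the `"smoke"` profile name and the `com.example` pattern are illustrative):
+
+```kotlin
+benchmark {
+    configurations {
+        register("smoke") {
+            include("com.example")                   // only benchmarks whose names match
+            warmups = 5
+            iterations = 10
+            iterationTime = 500
+            iterationTimeUnit = "ms"
+            mode = "avgt"
+            outputTimeUnit = "us"
+            reportFormat = "json"
+            advanced("jvmForks", 2)                  // JVM only
+            advanced("nativeGCAfterIteration", true) // Native only
+        }
+    }
+}
+```
+
+Running the corresponding `smokeBenchmark` task then executes benchmarks with this profile.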
\ No newline at end of file
diff --git a/docs/interpreting-results.md b/docs/interpreting-results.md
new file mode 100644
index 00000000..951c42ee
--- /dev/null
+++ b/docs/interpreting-results.md
@@ -0,0 +1,51 @@
+# Interpreting and Analyzing Kotlinx-Benchmark Results
+
+When you use the kotlinx-benchmark library to profile your Kotlin code, it provides a detailed output that can help you identify bottlenecks, inefficiencies, and performance variations in your application. Here is a comprehensive guide on how to interpret and analyze these results.
+
+## Understanding the Output
+
+A typical kotlinx-benchmark result may look something like this:
+
+```
+Benchmark            (size)   Mode  Cnt      Score      Error  Units
+ListBenchmark.first      10  thrpt   20  74512.866 ± 3415.994  ops/s
+ListBenchmark.first    1000  thrpt   20   7685.378 ±  359.982  ops/s
+ListBenchmark.first  100000  thrpt   20    619.714 ±   31.470  ops/s
+```
+
+Let's break down what each column represents:
+
+1. **Benchmark:** This is the name of the benchmark test.
+2. **(size):** This is the value of the `size` parameter (declared with `@Param`) used for that run; one row is reported per parameter value.
+3. **Mode:** This is the benchmark mode. It may be "avgt" (average time), "ss" (single shot time), "thrpt" (throughput), or "sample" (sampling time).
+4. **Cnt:** This is the number of measurements taken for the benchmark. More measurements lead to more reliable results.
+5. **Score:** This is the primary result of the benchmark. For "avgt", "ss" and "sample" modes, lower scores are better, as they represent time taken per operation. For "thrpt", higher scores are better, as they represent operations per unit of time.
+6. **Error:** This is the margin of error for the Score. It helps you understand the statistical dispersion in the data. A small margin of error means the Score is more reliable.
+7. **Units:** These indicate the units for Score and Error, like operations per second (ops/s) or time per operation (us/op, ms/op, etc.)
+
+## Analyzing the Results
+
+Here are some general steps to analyze your benchmark results:
+
+1. **Compare Scores:** The primary factor to consider is the Score. 
Remember to interpret it in the context of the benchmark mode - for throughput, higher is better, and for time-based modes, lower is better. + +2. **Consider Error:** The Error rate gives you an idea of the reliability of your Score. If the Error is high, the benchmark might need to be run more times to get a reliable Score. + +3. **Review Parameters:** Consider the impact of different parameters (like 'size' in the example) on your benchmark. They can give you insights into how your code performs under different conditions. + +4. **Factor in Units:** Be aware of the units in which your results are measured. Time can be measured in nanoseconds, microseconds, milliseconds, or seconds, and throughput in operations per second. + +5. **Compare Benchmarks:** If you have run multiple benchmarks, compare the results. This can help identify which parts of your code are slower or less efficient than others. + +## Common Pitfalls + +While analyzing benchmark results, watch out for these common pitfalls: + +1. **Variance:** If you're seeing a high amount of variance (a high Error rate), consider running the benchmark more times. + +2. **JVM Warmup:** Java's HotSpot VM optimizes the code as it runs, which can cause the first few runs to be significantly slower. Make sure you allow for adequate JVM warmup time to get accurate benchmark results. + +3. **Micro-benchmarks:** Be cautious when drawing conclusions from micro-benchmarks (benchmarks of very small pieces of code). They can be useful for testing small, isolated pieces of code, but real-world performance often depends on a wide array of factors that aren't captured in micro-benchmarks. + +4. **Dead Code Elimination:** The JVM is very good at optimizing your code, and sometimes it can optimize your benchmark right out of existence! Make sure your benchmarks do real work and that their results are used somehow (often by returning them from the benchmark method), or else the JVM might optimize them away. + +5. 
**Measurement error:** Ensure that you are not running any heavy processes in the background that could distort your benchmark results.
\ No newline at end of file
diff --git a/docs/multiplatform-setup.md b/docs/multiplatform-setup.md
new file mode 100644
index 00000000..4fabc4da
--- /dev/null
+++ b/docs/multiplatform-setup.md
@@ -0,0 +1,396 @@
+# Step-by-Step Setup Guide for a Multiplatform Benchmarking Project Using kotlinx-benchmark
+
+This guide will walk you through the process of setting up a multiplatform benchmarking project in Kotlin using kotlinx-benchmark.
+
+# Table of Contents
+
+1. [Prerequisites](#prerequisites)
+2. [Kotlin/JS Project Setup](#kotlinjs-project-setup)
+3. [Kotlin/Native Project Setup](#kotlinnative-project-setup)
+4. [Kotlin/WASM Project Setup](#kotlinwasm-project-setup)
+5. [Kotlin Multiplatform Project Setup](#kotlin-multiplatform-project-setup)
+6. [Conclusion](#conclusion)
+
+## Prerequisites
+
+Ensure your development environment meets the following [requirements](compatibility.md):
+
+- **Kotlin**: Version 1.8.20 or newer.
+- **Gradle**: Version 8.0 or newer.
+
+## Kotlin/JS Project Setup
+
+### Step 1: Add the Benchmark Plugin
+
+Kotlin DSL + +In your `build.gradle.kts` file, add the benchmarking plugin: + +```kotlin +plugins { + kotlin("multiplatform") + id("org.jetbrains.kotlinx.benchmark") version "0.4.8" +} +``` +
+ +
+Groovy DSL + +In your `build.gradle` file, add the benchmarking plugin: + +```groovy +plugins { + id 'org.jetbrains.kotlin.multiplatform' + id 'org.jetbrains.kotlinx.benchmark' version '0.4.8' +} +``` +
+
+### Step 2: Configure the Benchmark Plugin
+
+The next step is to configure the benchmark plugin to know which targets to run the benchmarks against. In this case, we're specifying `js` as the target platform:
+
+```groovy
+benchmark {
+    targets {
+        register("js")
+    }
+}
+```
+
+### Step 3: Specify the Node.js Target and Optional Compiler
+
+In Kotlin/JS, set the Node.js runtime as your target:
+
+```kotlin
+kotlin {
+    js {
+        nodejs()
+    }
+}
+```
+
+Optionally, you can specify a compiler such as the [IR compiler](https://kotlinlang.org/docs/js-ir-compiler.html) and configure the benchmarking targets:
+
+```kotlin
+kotlin {
+    js("jsIr", IR) {
+        nodejs()
+    }
+    js("jsIrBuiltIn", IR) {
+        nodejs()
+    }
+}
+```
+
+In this configuration, `jsIr` and `jsIrBuiltIn` are both set up for Node.js and use the IR compiler. The `jsIr` target relies on an external benchmarking library (benchmark.js), whereas `jsIrBuiltIn` leverages the built-in Kotlin benchmarking plugin. Choosing one depends on your specific benchmarking requirements.
+
+### Step 4: Add the Runtime Library
+
+To run benchmarks, add the runtime library, `kotlinx-benchmark-runtime`, to the dependencies of your source set and enable Maven Central for dependencies lookup:
+
+```kotlin
+kotlin {
+    sourceSets {
+        commonMain {
+            dependencies {
+                implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8")
+            }
+        }
+    }
+}
+
+repositories {
+    mavenCentral()
+}
+```
+
+### Step 5: Write Benchmarks
+
+Create a new source file in your `src/jsMain/kotlin` directory and write your benchmarks. Here's an example:
+
+```kotlin
+package benchmark
+
+import kotlinx.benchmark.*
+
+@State(Scope.Benchmark)
+open class JSBenchmark {
+    private var data = 0.0
+
+    @Setup
+    fun setUp() {
+        data = 3.0
+    }
+
+    @Benchmark
+    fun sqrtBenchmark(): Double {
+        return kotlin.math.sqrt(data)
+    }
+}
+```
+
+### Step 6: Run Benchmarks
+
+In the terminal, navigate to your project's root directory and run `./gradlew benchmark`. 
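+
+Benchmarks can also be parameterized so that the same method is measured against several input sizes. A sketch (the class, property, and parameter values are illustrative):
+
+```kotlin
+package benchmark
+
+import kotlinx.benchmark.*
+
+@State(Scope.Benchmark)
+open class ListBenchmark {
+    @Param("10", "1000", "100000")
+    var size: Int = 0
+
+    private var list: List<Int> = emptyList()
+
+    @Setup
+    fun setUp() {
+        // Rebuilt for each parameter value before measurement
+        list = List(size) { it }
+    }
+
+    @Benchmark
+    fun first(): Int = list.first()
+}
+```
+
+The runner then reports one result row per `size` value, which makes scaling behavior easy to compare.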
+ +## Kotlin/Native Project Setup + +### Step 1: Add the Benchmark Plugin + +
+Kotlin DSL + +In your `build.gradle.kts` file, add the benchmarking plugin: + +```kotlin +plugins { + kotlin("multiplatform") + id("org.jetbrains.kotlinx.benchmark") version "0.4.8" +} +``` +
+ +
+Groovy DSL + +In your `build.gradle` file, add the benchmarking plugin: + +```groovy +plugins { + id 'org.jetbrains.kotlin.multiplatform' + id 'org.jetbrains.kotlinx.benchmark' version '0.4.8' +} +``` +
+
+### Step 2: Configure the Benchmark Plugin
+
+The next step is to configure the benchmark plugin to know which targets to run the benchmarks against. In this case, we're specifying `native` as the target platform:
+
+```groovy
+benchmark {
+    targets {
+        register("native")
+    }
+}
+```
+
+### Step 3: Add the Runtime Library
+
+To run benchmarks, add the runtime library, `kotlinx-benchmark-runtime`, to the dependencies of your source set and enable Maven Central for dependencies lookup:
+
+```kotlin
+kotlin {
+    sourceSets {
+        commonMain {
+            dependencies {
+                implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8")
+            }
+        }
+    }
+}
+
+repositories {
+    mavenCentral()
+}
+```
+
+### Step 4: Write Benchmarks
+
+Create a new source file in your `src/nativeMain/kotlin` directory and write your benchmarks. Here's an example:
+
+```kotlin
+package benchmark
+
+import kotlinx.benchmark.*
+
+@State(Scope.Benchmark)
+open class NativeBenchmark {
+    private var data = 0.0
+
+    @Setup
+    fun setUp() {
+        data = 3.0
+    }
+
+    @Benchmark
+    fun sqrtBenchmark(): Double {
+        return kotlin.math.sqrt(data)
+    }
+}
+```
+
+### Step 5: Run Benchmarks
+
+In the terminal, navigate to your project's root directory and run `./gradlew benchmark`.
+
+## Kotlin/WASM Project Setup
+
+### Step 1: Add the Benchmark Plugin
+
+Kotlin DSL + +In your `build.gradle.kts` file, add the following: + +```kotlin +plugins { + kotlin("multiplatform") + id("org.jetbrains.kotlinx.benchmark") version "0.4.8" +} +``` +
+ +
+Groovy DSL + +In your `build.gradle` file, add the following: + +```groovy +plugins { + id 'org.jetbrains.kotlin.multiplatform' + id 'org.jetbrains.kotlinx.benchmark' version '0.4.8' +} +``` +
+
+### Step 2: Configure the Benchmark Plugin
+
+The next step is to configure the benchmark plugin to know which targets to run the benchmarks against. In this case, we're specifying `wasm` as the target platform:
+
+```groovy
+benchmark {
+    targets {
+        register("wasm")
+    }
+}
+```
+
+### Step 3: Add the Runtime Library
+
+To run benchmarks, add the runtime library, `kotlinx-benchmark-runtime`, to the dependencies of your source set and enable Maven Central for dependencies lookup:

+```kotlin
+kotlin {
+    sourceSets {
+        commonMain {
+            dependencies {
+                implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8")
+            }
+        }
+    }
+}
+
+repositories {
+    mavenCentral()
+}
+```
+
+### Step 4: Write Benchmarks
+
+Create a new source file in your `src/wasmMain/kotlin` directory and write your benchmarks. Here's an example:
+
+```kotlin
+package benchmark
+
+import kotlinx.benchmark.*
+
+@State(Scope.Benchmark)
+open class WASMBenchmark {
+    private var data = 0.0
+
+    @Setup
+    fun setUp() {
+        data = 3.0
+    }
+
+    @Benchmark
+    fun sqrtBenchmark(): Double {
+        return kotlin.math.sqrt(data)
+    }
+}
+```
+
+### Step 5: Run Benchmarks
+
+In the terminal, navigate to your project's root directory and run `./gradlew benchmark`. For a practical example, please refer to [examples](../examples/multiplatform).
+
+## Kotlin Multiplatform Project Setup
+
+### Step 1: Add the Benchmark Plugin
+
+Kotlin DSL + +In your `build.gradle.kts` file, add the following: + +```kotlin +plugins { + kotlin("multiplatform") + id("org.jetbrains.kotlinx.benchmark") version "0.4.8" +} +``` +
+ +
+Groovy DSL + +In your `build.gradle` file, add the following: + +```groovy +plugins { + id 'org.jetbrains.kotlin.multiplatform' + id 'org.jetbrains.kotlinx.benchmark' version '0.4.8' +} +``` +
+
+### Step 2: Configure the Benchmark Plugin
+
+In your `build.gradle` or `build.gradle.kts` file, add the following:
+
+```groovy
+benchmark {
+    targets {
+        register("jvm")
+        register("js")
+        register("native")
+        register("wasm")
+    }
+}
+```
+
+### Step 3: Add the Runtime Library
+
+To run benchmarks, add the runtime library, `kotlinx-benchmark-runtime`, to the dependencies of your source set and enable Maven Central for dependencies lookup:
+
+```kotlin
+kotlin {
+    sourceSets {
+        commonMain {
+            dependencies {
+                implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8")
+            }
+        }
+    }
+}
+
+repositories {
+    mavenCentral()
+}
+```
+
+### Step 4: Write Benchmarks
+
+Create new source files in the source set directories of the corresponding targets (for example, `src/commonMain/kotlin` for benchmarks shared by all targets) and write your benchmarks.
+
+### Step 5: Run Benchmarks
+
+In the terminal, navigate to your project's root directory and run `./gradlew benchmark`.
+
+## Conclusion
+
+This guide has walked you through setting up a multiplatform benchmarking project using the kotlinx-benchmark library in Kotlin. It has covered the creation of new projects, the addition and configuration of the benchmark plugin, writing benchmark tests, and running these benchmarks. Remember, performance benchmarking is an essential part of optimizing your code and ensuring it runs as efficiently as possible. Happy benchmarking!
\ No newline at end of file
diff --git a/docs/separate-source-sets.md b/docs/separate-source-sets.md
new file mode 100644
index 00000000..5f75a1e8
--- /dev/null
+++ b/docs/separate-source-sets.md
@@ -0,0 +1,105 @@
+# Benchmarking with Gradle: Creating Separate Source Sets
+
+Elevate your project's performance potential with organized, efficient, and isolated benchmarks. This guide will walk you through the process of creating separate source sets for benchmarks in your Kotlin project with Gradle.
+
+## Table of Contents
+
+1. [What is a Source Set?](#what-is-a-source-set)
+2. 
[Why Have Separate Source Sets for Benchmarks?](#why-have-separate-source-sets-for-benchmarks)
+3. [Step-by-step Setup Guide](#step-by-step-setup-guide)
+   - [Kotlin Java & JVM Projects](#kotlin-java--jvm-projects)
+   - [Kotlin Multiplatform Project](#kotlin-multiplatform-project)
+4. [Additional Resources](#additional-resources)
+
+## What is a Source Set?
+
+Before we delve into the details, let's clarify what a source set is. In Gradle, a source set represents a group of source files that are compiled and executed together. By default, every Gradle project includes two source sets: `main` for your application code and `test` for your test code.
+
+A source set defines the location of your source code, the names of compiled classes, and their placement. It also handles additional assets such as resources and configuration files.
+
+## Why Have Separate Source Sets for Benchmarks?
+
+Creating separate source sets for benchmarks is especially beneficial when you are integrating benchmarks into an existing project. Here are several advantages of doing so:
+
+1. **Organization**: It helps maintain a clean and organized project structure. Segregating benchmarks from the main code makes it easier to navigate and locate specific code segments.
+
+2. **Isolation**: Separating benchmarks ensures that the benchmarking code does not interfere with your main code or test code. This isolation guarantees accurate measurements without unintentional side effects.
+
+3. **Flexibility**: Creating a separate source set allows you to manage your benchmarking code independently. You can compile, test, and run benchmarks without impacting your main source code.
+
+## Step-by-step Setup Guide
+
+Below are the step-by-step instructions to set up separate source sets for benchmarks in both Kotlin JVM and Multiplatform projects:
+
+### Kotlin Java & JVM Projects
+
+Transform your Kotlin JVM project with separate benchmark source sets by following these simple steps:
+
+1. 
**Define Source Set**:
+
+   Begin by defining a new source set in your `build.gradle` file. We'll use `benchmarks` as the name for the source set.
+
+   ```groovy
+   sourceSets {
+       benchmarks
+   }
+   ```
+
+2. **Propagate Dependencies**:
+
+   Next, propagate dependencies and output from the `main` source set to your `benchmarks` source set. This ensures the `benchmarks` source set has access to classes and resources from the `main` source set.
+
+   ```groovy
+   dependencies {
+       add("benchmarksImplementation", sourceSets.main.output + sourceSets.main.runtimeClasspath)
+   }
+   ```
+
+   You can also add output and `compileClasspath` from `sourceSets.test` in the same way if you wish to reuse some of the test infrastructure.
+
+3. **Register Benchmark Source Set**:
+
+   Finally, register your benchmark source set. This informs the kotlinx-benchmark tool that benchmarks reside within this source set and need to be executed accordingly.
+
+   ```groovy
+   benchmark {
+       targets {
+           register("benchmarks")
+       }
+   }
+   ```
+
+### Kotlin Multiplatform Project
+
+Set up your Kotlin Multiplatform project to accommodate separate benchmark source sets by following these steps:
+
+1. **Define New Compilation**:
+
+   Start by defining a new compilation in your target of choice (e.g., `jvm`, `js`, `native`, or `wasm`) in your `build.gradle.kts` file. In this example, we're associating the new compilation `benchmark` with the `main` compilation of the `jvm` target.
+
+   ```kotlin
+   kotlin {
+       jvm {
+           compilations.create("benchmark") { associateWith(compilations.getByName("main")) }
+       }
+   }
+   ```
+
+2. **Register Benchmark Compilation**:
+
+   Conclude by registering your new benchmark compilation using its source set name. In this instance, `jvmBenchmark` is the name for the benchmark compilation for the `jvm` target. 
+ + ```kotlin + benchmark { + targets { + register("jvmBenchmark") + } + } + ``` + + For more information on creating a custom compilation, you can refer to the [Kotlin documentation on creating a custom compilation](https://kotlinlang.org/docs/multiplatform-configure-compilations.html#create-a-custom-compilation). + +## Additional Resources + +**Q: Where can I ask additional questions?** +A: For any additional queries or issues, you can reach out via our Slack channel for real-time interactions, engage in comprehensive conversations on or GitHub Discussions page, or report specific problems on the kotlinx-benchmark GitHub page, with each platform maintained by a skilled, supportive community ready to assist you. \ No newline at end of file diff --git a/docs/singleplatform-setup.md b/docs/singleplatform-setup.md new file mode 100644 index 00000000..1f651e9c --- /dev/null +++ b/docs/singleplatform-setup.md @@ -0,0 +1,222 @@ +# Step-by-Step Setup Guide for Single-Platform Benchmarking Project Using kotlinx-benchmark + +This guide will walk you through the process of setting up a single-platform benchmarking project in both Kotlin and Java using the kotlinx-benchmark library. + +# Table of Contents + +1. [Prerequisites](#prerequisites) +2. [Kotlin Project Setup](#koltin-project-setup) + - [Step 1: Create a New Java Project](#step-1-create-a-new-java-project) + - [Step 2: Add the Benchmark and AllOpen Plugin](#step-2-add-the-benchmark-plugin-and-allopen-plugin) + - [Step 3: Configure the Benchmark Plugin](#step-3-configure-the-benchmark-plugin) + - [Step 4: Write Benchmarks](#step-4-write-benchmarks) + - [Step 5: Run Benchmarks](#step-5-run-benchmarks) +3. 
[Java Project Setup](#java-project-setup) + - [Step 1: Create a New Kotlin Project](#step-1-create-a-new-kotlin-project) + - [Step 2: Add the Benchmark Plugin](#step-2-add-the-benchmark-plugin-1) + - [Step 3: Configure the Benchmark Plugin](#step-3-configure-the-benchmark-plugin-1) + - [Step 4: Write Benchmarks](#step-4-write-benchmarks-1) + - [Step 5: Run Benchmarks](#step-5-run-benchmarks-1) +4. [Conclusion](#conclusion) + +## Prerequisites + +Ensure your development environment meets the following [requirements](compatibility.md): + +- **Kotlin**: Version 1.8.20 or newer. +- **Gradle**: Version 8.0 or newer. + +## Kotlin Project Setup + +### Step 1: Create a New Java Project + +#### IntelliJ IDEA + +Click `File` > `New` > `Project`, select `Java`, specify your `Project Name` and `Project Location`, ensure the `Project SDK` is 8 or higher, and click `Finish`. + +#### Gradle Command Line + +Open your terminal, navigate to the directory where you want to create your new project, and run `gradle init --type java-application`. + +### Step 2: Add the Benchmark and AllOpen Plugin + +When benchmarking Kotlin/JVM code with Java Microbenchmark Harness (JMH), it is necessary to use the [allopen plugin](https://kotlinlang.org/docs/all-open-plugin.html). This plugin ensures that your benchmark classes and methods are `open`, which is a requirement for JMH. + +
+Kotlin DSL + +In your `build.gradle.kts` file, add the following: + +```kotlin +plugins { + kotlin("jvm") version "1.8.21" + kotlin("plugin.allopen") version "1.8.21" + id("org.jetbrains.kotlinx.benchmark") version "0.4.8" +} + +allOpen { + annotation("org.openjdk.jmh.annotations.State") +} +``` +
+ +
+Groovy DSL + +In your `build.gradle` file, add the following: + +```groovy +plugins { + id 'org.jetbrains.kotlin.jvm' version '1.8.21' + id 'org.jetbrains.kotlin.plugin.allopen' version '1.8.21' + id 'org.jetbrains.kotlinx.benchmark' version '0.4.8' +} + +allOpen { + annotation 'org.openjdk.jmh.annotations.State' +} +``` +
+
+In Kotlin, classes and methods are `final` by default, which means they can't be overridden. However, JMH requires the ability to generate subclasses for benchmarking, which is why we need the allopen plugin. This configuration ensures that any class annotated with `@State` is treated as `open`, allowing JMH to work as expected.
+
+You can alternatively mark your benchmark classes and methods `open` manually, but using the `allopen` plugin improves code maintainability.
+
+### Step 3: Configure the Benchmark Plugin
+
+In your `build.gradle` or `build.gradle.kts` file, add the following:
+
+```groovy
+benchmark {
+    targets {
+        register("jvm")
+    }
+}
+```
+
+### Step 4: Write Benchmarks
+
+Create a new source file in your `src/main/kotlin` directory and write your benchmarks. Here's an example:
+
+```kotlin
+package test
+
+import org.openjdk.jmh.annotations.*
+import java.util.concurrent.*
+
+@State(Scope.Benchmark)
+@Fork(1)
+@Warmup(iterations = 0)
+@Measurement(iterations = 1, time = 1, timeUnit = TimeUnit.SECONDS)
+class KtsTestBenchmark {
+    private var data = 0.0
+
+    @Setup
+    fun setUp() {
+        data = 3.0
+    }
+
+    @Benchmark
+    fun sqrtBenchmark(): Double {
+        return Math.sqrt(data)
+    }
+
+    @Benchmark
+    fun cosBenchmark(): Double {
+        return Math.cos(data)
+    }
+}
+```
+
+### Step 5: Run Benchmarks
+
+In the terminal, navigate to your project's root directory and run `./gradlew benchmark`.
+
+## Java Project Setup
+
+### Step 1: Create a New Java Project
+
+#### IntelliJ IDEA
+
+Click `File` > `New` > `Project`, select `Java`, specify your `Project Name` and `Project Location`, ensure the `Project SDK` is 8 or higher, and click `Finish`.
+
+#### Gradle Command Line
+
+Open your terminal, navigate to the directory where you want to create your new project, and run `gradle init --type java-application`.
+
+### Step 2: Add the Benchmark Plugin
+
+Kotlin DSL
+
+In your `build.gradle.kts` file, add the following:
+
+```kotlin
+plugins {
+    java
+    id("org.jetbrains.kotlinx.benchmark") version "0.4.8"
+}
+```
+ +
+Groovy DSL + +In your `build.gradle` file, add the following: + +```groovy +plugins { + id 'java' + id 'org.jetbrains.kotlinx.benchmark' version '0.4.8' +} +``` +
+
+### Step 3: Configure the Benchmark Plugin
+
+In your `build.gradle` or `build.gradle.kts` file, add the following:
+
+```groovy
+benchmark {
+    targets {
+        register("main")
+    }
+}
+```
+
+### Step 4: Write Benchmarks
+
+Create a new source file in your `src/main/java` directory and write your benchmarks. Here's an example:
+
+```java
+package test;
+
+import org.openjdk.jmh.annotations.*;
+
+@State(Scope.Benchmark)
+@Fork(1)
+public class SampleJavaBenchmark {
+    @Param({"A", "B"})
+    String stringValue;
+
+    @Param({"1", "2"})
+    int intValue;
+
+    @Benchmark
+    public String stringBuilder() {
+        StringBuilder stringBuilder = new StringBuilder();
+        stringBuilder.append(10);
+        stringBuilder.append(stringValue);
+        stringBuilder.append(intValue);
+        return stringBuilder.toString();
+    }
+}
+```
+
+### Step 5: Run Benchmarks
+
+In the terminal, navigate to your project's root directory and run `./gradlew benchmark`.
+
+## Conclusion
+
+Congratulations! You've set up a single-platform benchmarking project using `kotlinx-benchmark`. Now you can write your own benchmarks to test the performance of your Java or Kotlin code. Happy benchmarking!
\ No newline at end of file
diff --git a/docs/tasks-overview.md b/docs/tasks-overview.md
new file mode 100644
index 00000000..a0c35bd1
--- /dev/null
+++ b/docs/tasks-overview.md
@@ -0,0 +1,64 @@
+## Overview of Tasks Provided by kotlinx-benchmark Gradle Plugin
+
+The kotlinx-benchmark plugin creates different Gradle tasks depending on how it is configured.
+For each pair of a configuration profile and a registered target, the plugin creates a task that runs that profile on that target.
+To learn more about configuration profiles, refer to [configuration-options.md](configuration-options.md).
+
+### Example Configuration
+
+To illustrate, consider the following `kotlinx-benchmark` configuration:
+
+```kotlin
+// build.gradle.kts
+benchmark {
+    configurations {
+        named("main") {
+            iterations = 20
+            warmups = 20
+            iterationTime = 1
+            iterationTimeUnit = "s"
+        }
+        register("smoke") {
+            include("Essential")
+            iterations = 10
+            warmups = 10
+            iterationTime = 200
+            iterationTimeUnit = "ms"
+        }
+    }
+
+    targets {
+        register("jvm")
+        register("js")
+    }
+}
+```
+
+## Tasks for the "main" Configuration Profile
+
+- **`benchmark`**:
+  - Runs benchmarks within the "main" profile for all registered targets.
+  - In our example, `benchmark` runs benchmarks within the "main" profile in both `jvm` and `js` targets.
+
+- **`<targetName>Benchmark`**:
+  - Runs benchmarks within the "main" profile for a particular target.
+  - In our example, `jvmBenchmark` runs benchmarks within the "main" profile in the `jvm` target, while `jsBenchmark` runs them in the `js` target.
+
+## Tasks for Custom Configuration Profiles
+
+- **`<configName>Benchmark`**:
+  - Runs benchmarks within the `<configName>` profile in all registered targets.
+  - In our example, `smokeBenchmark` runs benchmarks within the "smoke" profile.
+
+- **`<targetName><configName>Benchmark`**:
+  - Runs benchmarks within the `<configName>` profile in the `<targetName>` target.
+  - In our example, `jvmSmokeBenchmark` runs benchmarks within the "smoke" profile in the `jvm` target while `jsSmokeBenchmark` runs them in the `js` target.
+
+## Other useful tasks
+
+- **`<targetName>BenchmarkJar`**:
+  - Created only when a Kotlin/JVM target is registered for benchmarking.
+  - Produces a self-contained executable JAR in the `build/benchmarks/<targetName>/jars/` directory of your project that contains your benchmarks in the `<targetName>` target, and all essential JMH infrastructure code.
+  - The JAR file can be run using the `java -jar path-to-the.jar` command with relevant options. Run with `-h` to see the available options.
+  - The JAR file can also be used for running JMH profilers.
+  - In our example, `jvmBenchmarkJar` produces a JAR file in the `build/benchmarks/jvm/jars/` directory that contains benchmarks in the `jvm` target.
diff --git a/docs/writing-benchmarks.md b/docs/writing-benchmarks.md
new file mode 100644
index 00000000..dd25efef
--- /dev/null
+++ b/docs/writing-benchmarks.md
@@ -0,0 +1,163 @@
+## Writing Benchmarks
+
+To get started, let's look at a simple multiplatform example:
+
+```kotlin
+@BenchmarkMode(Mode.Throughput)
+@OutputTimeUnit(TimeUnit.MILLISECONDS)
+@Warmup(iterations = 20, time = 1, timeUnit = TimeUnit.SECONDS)
+@Measurement(iterations = 20, time = 1, timeUnit = TimeUnit.SECONDS)
+@State(Scope.Benchmark)
+class ExampleBenchmark {
+
+    @Param("4", "10")
+    var size: Int = 0
+
+    private val list = ArrayList<Int>()
+
+    @Setup
+    fun prepare() {
+        for (i in 0 until size) {
+            list.add(i)
+        }
+    }
+
+    @Benchmark
+    fun benchmarkMethod(): Int {
+        return list.sum()
+    }
+
+    @TearDown
+    fun cleanup() {
+        list.clear()
+    }
+}
+```
+
+**Example Description**:
+Our example tests the speed of summing numbers in an ArrayList. We try it with a list of 4 numbers and then with a list of 10 numbers.
+This helps us determine the efficiency of our summing method with different list sizes.
+
+### Explaining the Annotations
+
+#### @State
+
+The `@State` annotation is used to mark benchmark classes.
+In the Kotlin/JVM target, however, benchmark classes are not required to be annotated with `@State`.
+There, you can also specify to what extent the state object is shared among the worker threads, e.g., `@State(Scope.Group)`.
+Refer to [JMH documentation](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Scope.html)
+for details about available scopes. Multi-threaded execution of a benchmark method is not supported in other Kotlin targets,
+thus only `Scope.Benchmark` is available.
+In our snippet, the ExampleBenchmark class is marked with `@State(Scope.Benchmark)`,
+indicating that the performance of benchmark methods in this class should be measured.
+
+#### @Setup
+
+The `@Setup` annotation is used to mark a method that sets up the necessary preconditions for your benchmark test.
+It serves as a preparatory step where you set up the environment for the benchmark.
+In the Kotlin/JVM target, you can specify when the setup method should be executed, e.g., `@Setup(Level.Iteration)`.
+Refer to [JMH documentation](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Level.html)
+for details about available levels. In other targets it always operates at the `Trial` level, that is, the setup method is
+executed once before the entire set of benchmark method iterations. The key point to remember is that the `@Setup`
+method's execution time is not included in the final benchmark results - the timer starts only when the `@Benchmark`
+method begins. This makes `@Setup` an ideal place for initialization tasks that should not impact the timing results of your benchmark.
+In the provided example, the `@Setup` annotation is used to populate an ArrayList with integers from 0 up to a specified size.
+
+#### @TearDown
+
+The `@TearDown` annotation is used to denote a method that's executed after the benchmarking method(s).
+This method is typically responsible for cleaning up or deallocating any resources or conditions that were initialized in the `@Setup` method.
+In the Kotlin/JVM target, you can specify when the tear down method should be executed, e.g., `@TearDown(Level.Iteration)`.
+Refer to [JMH documentation](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Level.html)
+for details about available levels. In other targets it always operates at the `Trial` level, that is, the tear down method
+is executed once after the entire set of benchmark method iterations. The `@TearDown` annotation helps you avoid
+performance bias, release resources properly, and keep a clean environment for the next run.
+As with the `@Setup` method, the `@TearDown` method's execution time is not included in the final benchmark results.
+In our example, the `cleanup` function annotated with `@TearDown` is used to clear our ArrayList.
+
+#### @Benchmark
+
+The `@Benchmark` annotation is used to specify the methods that you want to measure the performance of.
+Basically, it's the actual test you're running. It's important to note that benchmark methods must always be public.
+The code you want to benchmark goes inside this method.
+In our example, the `benchmarkMethod` function is annotated with `@Benchmark`,
+which means the toolkit will measure the performance of the operation of summing all the integers in the list.
+
+#### @BenchmarkMode
+
+The `@BenchmarkMode` annotation sets the mode of operation for the benchmark.
+Applying the `@BenchmarkMode` annotation requires specifying a mode from the `Mode` enum, which includes several options.
+`Mode.Throughput` measures the raw throughput of your code in terms of the number of operations it can perform per unit
+of time, such as operations per second. `Mode.AverageTime` is used when you're more interested in the average time it
+takes to execute an operation. Without an explicit `@BenchmarkMode` annotation, the toolkit defaults to `Mode.Throughput`.
+In our example, `@BenchmarkMode(Mode.Throughput)` is used, meaning the benchmark focuses on the number of times the
+benchmark method can be executed per unit of time.
+
+#### @OutputTimeUnit
+
+The `@OutputTimeUnit` annotation specifies the time unit in which your results will be presented.
+This time unit can range from minutes to nanoseconds. If a piece of code executes within a few milliseconds,
+presenting the result in milliseconds or microseconds provides a more accurate and detailed measurement.
+Conversely, for operations with longer execution times, you might choose to display the output in milliseconds, seconds, or even minutes.
+Essentially, the `@OutputTimeUnit` annotation is about enhancing the readability and interpretability of benchmark results.
+If this annotation isn't specified, it defaults to using seconds as the time unit.
+In our example, the output time unit is set to milliseconds.
+
+#### @Warmup
+
+The `@Warmup` annotation is used to specify a preliminary phase before the actual benchmarking takes place.
+During this warmup phase, the code in your `@Benchmark` method is executed several times, but these runs aren't included
+in the final benchmark results. The primary purpose of the warmup phase is to let the system "warm up" and reach its
+optimal performance state so that the results of measurement iterations are more stable.
+In our example, the `@Warmup` annotation allows 20 iterations, each lasting one second,
+of executing the benchmark method before the actual measurement starts.
+
+#### @Measurement
+
+The `@Measurement` annotation is used to control the properties of the actual benchmarking phase.
+It sets how many iterations the benchmark method is run and how long each run should last.
+The results from these runs are recorded and reported as the final benchmark results.
+In our example, the `@Measurement` annotation specifies that the benchmark method will be run for 20 iterations,
+each lasting one second, for the final performance measurement.
+
+#### @Param
+
+The `@Param` annotation is used to pass different parameters to your benchmark method.
+It allows you to run the same benchmark method with different input values, so you can see how these variations affect
+performance. The values you provide for the `@Param` annotation are the different inputs you want to use in your
+benchmark test. The benchmark will run once for each provided value.
+The property marked with this annotation must be public and mutable (`var`).
+In our example, the `@Param` annotation is used with the values "4" and "10", which means `benchmarkMethod` will be executed
+twice, once with `size` set to 4 and then with `size` set to 10. This helps us understand how the input list's size affects the time taken to sum its integers.
+
+#### Other JMH annotations
+
+In a Kotlin/JVM target, you can use annotations provided by JMH to further tune your benchmarks' execution behavior.
+Refer to [JMH documentation](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/package-summary.html) for available annotations.
+
+## Blackhole
+
+Modern compilers often eliminate computations they find unnecessary, which can distort benchmark results.
+In essence, `Blackhole` maintains the integrity of benchmarks by preventing unwanted optimizations such as dead-code
+elimination by the compiler or the runtime virtual machine. A `Blackhole` is used when the benchmark produces several values.
+If the benchmark produces a single value, just return it. It will be implicitly consumed by a `Blackhole`.
+
+#### How to Use Blackhole
+
+Inject `Blackhole` into your benchmark method and use it to consume results of your computations:
+
+```kotlin
+@Benchmark
+fun iterateBenchmark(bh: Blackhole) {
+    for (e in myList) {
+        bh.consume(e)
+    }
+}
+```
+
+By consuming results, you signal to the compiler that these computations are significant and shouldn't be optimized away.
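+A benchmark that produces a single value does not need an explicit `Blackhole`: as noted above, a returned value is consumed implicitly. A minimal sketch, reusing the hypothetical `myList` from the snippet above:
+
+```kotlin
+// Returning the result is enough - the framework consumes the return
+// value in a Blackhole implicitly, so the computation is not eliminated.
+@Benchmark
+fun sumBenchmark(): Int {
+    return myList.sum()
+}
+```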
+
+For a deeper dive into `Blackhole` and its nuances in JVM, you can refer to:
+- [Official Javadocs](https://javadoc.io/static/org.openjdk.jmh/jmh-core/1.23/org/openjdk/jmh/infra/Blackhole.html)
+- [JMH](https://github.com/openjdk/jmh/blob/1.37/jmh-core/src/main/java/org/openjdk/jmh/infra/Blackhole.java#L157-L254)
\ No newline at end of file
diff --git a/examples/README.md b/examples/README.md
new file mode 100644
index 00000000..56e5e6a1
--- /dev/null
+++ b/examples/README.md
@@ -0,0 +1,35 @@
+# kotlinx-benchmark Examples Guide
+
+This guide is specifically designed for experienced Kotlin developers. It aims to help you smoothly navigate and run the benchmark examples included in this repository.
+
+## Getting Started
+
+To begin, you'll need to clone the `kotlinx-benchmark` repository to your local machine:
+
+```
+git clone https://github.com/Kotlin/kotlinx-benchmark.git
+```
+
+## Running the Examples
+
+Each example in this repository is a self-contained project. Refer to the [tasks overview](../docs/tasks-overview.md) for a detailed list and explanation of available tasks.
+
+To execute all benchmarks for a specific example, use the following command structure:
+
+```
+./gradlew :examples:[example-name]:benchmark
+```
+
+Here, `[example-name]` is the name of the example you wish to benchmark. For instance, to run benchmarks for the `kotlin-kts` example, the command would be:
+
+```
+./gradlew :examples:kotlin-kts:benchmark
+```
+
+This pattern applies to all examples in the repository.
+
+## Troubleshooting
+
+If you run into issues while setting up or running the benchmarks, first verify that you're executing commands from the repository root. For persistent issues, don't hesitate to open an [issue](https://github.com/Kotlin/kotlinx-benchmark/issues).
+
+Happy benchmarking!
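+
+You can also run a single task of one example instead of all of its benchmarks. As a concrete session (task names are taken from each example's own README), the JVM benchmarks of the multiplatform example would be:
+
+```
+./gradlew :examples:kotlin-multiplatform:jvmBenchmark
+```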
diff --git a/examples/java/README.md b/examples/java/README.md new file mode 100644 index 00000000..84fc8c53 --- /dev/null +++ b/examples/java/README.md @@ -0,0 +1,36 @@ +# Java Example + +[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/Kotlin/kotlinx-benchmark) + +## Project Structure + +Inside of this example, you'll see the following folders and files: + +``` +/ +├── build.gradle +└── src/ + └── main/ + └── java/ + └── test/ + └── SampleJavaBenchmark.java +``` + +## Tasks + +All tasks can be run from the root of the library: + +| Task Name | Action | +| --- | --- | +| `assembleBenchmarks` | Generate and build all benchmarks in the project | +| `benchmark` | Execute all benchmarks in the project | +| `mainBenchmark` | Execute benchmark for the 'main' source set | +| `mainBenchmarkCompile` | Compile JMH source files for the 'main' source set | +| `mainBenchmarkGenerate` | Generate JMH source files for the 'main' source set | +| `mainBenchmarkJar` | Build JAR for JMH compiled files for the 'main' source set | +| `mainSingleParamBenchmark` | Execute benchmark for the 'main' source set with the 'singleParam' configuration | +| `singleParamBenchmark` | Execute all benchmarks in the project with the 'singleParam' configuration | + +## Want to learn more? + +Feel free to engage in benchmarking discussions on the `#benchmarks` channel on [Kotlinlang Slack](https://kotlinlang.org/community/slack), explore the `kotlinx-benchmark` tagged questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/kotlinx-benchmark), or dive into the [kotlinx-benchmark Github Discussions](https://github.com/Kotlin/kotlinx-benchmark/discussions) for more insights and interactions. 
\ No newline at end of file diff --git a/examples/kotlin-kts/README.md b/examples/kotlin-kts/README.md new file mode 100644 index 00000000..8f9c3893 --- /dev/null +++ b/examples/kotlin-kts/README.md @@ -0,0 +1,32 @@ +# Kotlin-KTS Example + +[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/Kotlin/kotlinx-benchmark) + +## Project Structure + +Inside of this example, you'll see the following folders and files: + +``` +/ +├── build.gradle.kts +└── main/ + └── src/ + └── KtsTestBenchmark.kt +``` + +## Tasks + +All tasks can be run from the root of the project, from a terminal: + +| Task Name | Action | +| --- | --- | +| `assembleBenchmarks` | Generate and build all benchmarks in the project | +| `benchmark` | Execute all benchmarks in the project | +| `mainBenchmark` | Execute benchmark for 'main' | +| `mainBenchmarkCompile` | Compile JMH source files for 'main' | +| `mainBenchmarkGenerate` | Generate JMH source files for 'main' | +| `mainBenchmarkJar` | Build JAR for JMH compiled files for 'main' | + +## Want to learn more? + +Feel free to engage in benchmarking discussions on the `#benchmarks` channel on [Kotlinlang Slack](https://kotlinlang.org/community/slack), explore the `kotlinx-benchmark` tagged questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/kotlinx-benchmark), or dive into the [kotlinx-benchmark Github Discussions](https://github.com/Kotlin/kotlinx-benchmark/discussions) for more insights and interactions. 
diff --git a/examples/kotlin-multiplatform/README.md b/examples/kotlin-multiplatform/README.md new file mode 100644 index 00000000..8da2d9ae --- /dev/null +++ b/examples/kotlin-multiplatform/README.md @@ -0,0 +1,103 @@ +# Kotlin-Multiplatform Example + +[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/Kotlin/kotlinx-benchmark) + +## Project Structure + +Inside of this example, you'll see the following folders and files: + +``` +│ build.gradle ==> Build configuration file for Gradle +│ +└───src ==> Source code root + ├───commonMain ==> Shared code + │ └───kotlin + │ │ CommonBenchmark.kt ==> Common benchmarks + │ │ InheritedBenchmark.kt ==> Inherited benchmarks + │ │ ParamBenchmark.kt ==> Parameterized benchmarks + │ │ + │ └───nested ==> Nested benchmarks + │ CommonBenchmark.kt + │ + ├───jsMain ==> JavaScript-specific code + │ └───kotlin + │ JsAsyncBenchmarks.kt ==> JS async benchmarks + │ JsTestBenchmark.kt ==> JS benchmarks + │ + ├───jvmBenchmark ==> JVM-specific benchmarks + │ └───kotlin + │ JvmBenchmark.kt + │ + ├───jvmMain ==> JVM-specific code + │ └───kotlin + │ JvmTestBenchmark.kt ==> JVM benchmarks + │ + ├───nativeMain ==> Native-specific code + │ └───kotlin + │ NativeTestBenchmark.kt ==> Native benchmarks + │ + └───wasmMain ==> WebAssembly-specific code + └───kotlin + WasmTestBenchmark.kt ==> WebAssembly benchmarks +``` + +## Tasks + +All tasks can be run from the root of the library, from a terminal: + +| Task Name | Action | +| --- | --- | +| `assembleBenchmarks` | Generates and builds all benchmarks in the project. | +| `benchmark` | Executes all benchmarks in the project. | +| `compileJsIrBenchmarkKotlinJsIr` | Compiles the source files for 'jsIr' benchmark. | +| `compileJsIrBuiltInBenchmarkKotlinJsIrBuiltIn` | Compiles the source files for 'jsIrBuiltIn' benchmark. | +| `compileWasmBenchmarkKotlinWasm` | Compiles the source files for 'wasm' benchmark. 
| +| `csvBenchmark` | Executes all benchmarks in the project with the CSV configuration. | +| `fastBenchmark` | Executes all benchmarks in the project with the Fast configuration. | +| `forkBenchmark` | Executes all benchmarks in the project with the Fork configuration. | +| `jsIrBenchmark` | Executes benchmark for the 'jsIr' source set. | +| `jsIrBenchmarkGenerate` | Generates source files for the 'jsIr' source set. | +| `jsIrBuiltInBenchmark` | Executes benchmark for the 'jsIrBuiltIn' source set. | +| `jsIrBuiltInBenchmarkGenerate` | Generates source files for the 'jsIrBuiltIn' source set. | +| `jsIrBuiltInCsvBenchmark` | Executes benchmark for the 'jsIrBuiltIn' source set with the CSV configuration. | +| `jsIrBuiltInFastBenchmark` | Executes benchmark for the 'jsIrBuiltIn' source set with the Fast configuration. | +| `jsIrBuiltInForkBenchmark` | Executes benchmark for the 'jsIrBuiltIn' source set with the Fork configuration. | +| `jsIrBuiltInParamsBenchmark` | Executes benchmark for the 'jsIrBuiltIn' source set with the Params configuration. | +| `jsIrCsvBenchmark` | Executes benchmark for the 'jsIr' source set with the CSV configuration. | +| `jsIrFastBenchmark` | Executes benchmark for the 'jsIr' source set with the Fast configuration. | +| `jsIrForkBenchmark` | Executes benchmark for the 'jsIr' source set with the Fork configuration. | +| `jsIrParamsBenchmark` | Executes benchmark for the 'jsIr' source set with the Params configuration. | +| `jvmBenchmark` | Executes benchmark for the 'jvm' source set. | +| `jvmBenchmarkBenchmark` | Executes benchmark for the 'jvmBenchmark' source set. | +| `jvmBenchmarkBenchmarkCompile` | Compiles the source files for 'jvmBenchmark'. | +| `jvmBenchmarkBenchmarkGenerate` | Generates source files for the 'jvmBenchmark' source set. | +| `jvmBenchmarkBenchmarkJar` | Builds the JAR for 'jvmBenchmark' compiled files. | +| `jvmBenchmarkCompile` | Compiles the source files for the 'jvm' benchmark. 
|
+| `jvmBenchmarkCsvBenchmark` | Executes benchmark for the 'jvmBenchmark' source set with the CSV configuration. |
+| `jvmBenchmarkFastBenchmark` | Executes benchmark for the 'jvmBenchmark' source set with the Fast configuration. |
+| `jvmBenchmarkForkBenchmark` | Executes benchmark for the 'jvmBenchmark' source set with the Fork configuration. |
+| `jvmBenchmarkGenerate` | Generates source files for the 'jvm' source set. |
+| `jvmBenchmarkJar` | Builds the JAR for 'jvm' compiled files. |
+| `jvmBenchmarkParamsBenchmark` | Executes benchmark for the 'jvmBenchmark' source set with the Params configuration. |
+| `jvmCsvBenchmark` | Executes benchmark for the 'jvm' source set with the CSV configuration. |
+| `jvmFastBenchmark` | Executes benchmark for the 'jvm' source set with the Fast configuration. |
+| `jvmForkBenchmark` | Executes benchmark for the 'jvm' source set with the Fork configuration. |
+| `jvmParamsBenchmark` | Executes benchmark for the 'jvm' source set with the Params configuration. |
+| `linkNativeBenchmarkReleaseExecutableNative` | Compiles the source files for 'native' benchmark. |
+| `nativeBenchmark` | Executes benchmark for the 'native' source set. |
+| `nativeBenchmarkGenerate` | Generates source files for the 'native' source set. |
+| `nativeCsvBenchmark` | Executes benchmark for the 'native' source set with the CSV configuration. |
+| `nativeFastBenchmark` | Executes benchmark for the 'native' source set with the Fast configuration. |
+| `nativeForkBenchmark` | Executes benchmark for the 'native' source set with the Fork configuration. |
+| `nativeParamsBenchmark` | Executes benchmark for the 'native' source set with the Params configuration. |
+| `paramsBenchmark` | Executes all benchmarks in the project with the Params configuration. |
+| `wasmBenchmark` | Executes benchmark for the 'wasm' source set.
| +| `wasmBenchmarkGenerate` | Generates source files for the 'wasm' source set. | +| `wasmCsvBenchmark` | Executes benchmark for the 'wasm' source set with the CSV configuration. | +| `wasmFastBenchmark` | Executes benchmark for the 'wasm' source set with the Fast configuration. | +| `wasmForkBenchmark` | Executes benchmark for the 'wasm' source set with the Fork configuration. | +| `wasmParamsBenchmark` | Executes benchmark for the 'wasm' source set with the Params configuration. | + +## Want to learn more? + +Feel free to engage in benchmarking discussions on the `#benchmarks` channel on [Kotlinlang Slack](https://kotlinlang.org/community/slack), explore the `kotlinx-benchmark` tagged questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/kotlinx-benchmark), or dive into the [kotlinx-benchmark Github Discussions](https://github.com/Kotlin/kotlinx-benchmark/discussions) for more insights and interactions. \ No newline at end of file diff --git a/examples/kotlin/README.md b/examples/kotlin/README.md new file mode 100644 index 00000000..0f16cf3a --- /dev/null +++ b/examples/kotlin/README.md @@ -0,0 +1,35 @@ +# Kotlin Example + +[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/Kotlin/kotlinx-benchmark) + +## Project Structure + +Inside of this example, you'll see the following folders and files: + +``` +/ +├── build.gradle +├── benchmarks/ +│ └── src/ +│ └── TestBenchmark.kt +└── main/ + └── src/ + └── TestData.kt +``` + +## Tasks + +All tasks can be run from the root of the project, from a terminal: + +| Task Name | Action | +| --- | --- | +| `assembleBenchmarks` | Generate and build all benchmarks in the project | +| `benchmark` | Execute all benchmarks in the project | +| `benchmarksBenchmark` | Execute benchmark for 'benchmarks' | +| `benchmarksBenchmarkCompile` | Compile JMH source files for 'benchmarks' | +| `benchmarksBenchmarkGenerate` | Generate JMH source files for 'benchmarks' | +| 
`benchmarksBenchmarkJar` | Build JAR for JMH compiled files for 'benchmarks' | + +## Want to learn more? + +Feel free to engage in benchmarking discussions on the `#benchmarks` channel on [Kotlinlang Slack](https://kotlinlang.org/community/slack), explore the `kotlinx-benchmark` tagged questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/kotlinx-benchmark), or dive into the [kotlinx-benchmark Github Discussions](https://github.com/Kotlin/kotlinx-benchmark/discussions) for more insights and interactions.
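+
+Following the examples guide, the tasks above are invoked with the example's project path. For instance, to run the `benchmarks` source set's benchmarks from the repository root:
+
+```
+./gradlew :examples:kotlin:benchmarksBenchmark
+```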