What's the status of this project? #88
Are there actual issues that need to be addressed and aren't being addressed?
I wouldn't mind adding some of these things, but I'm not sure if this repository is being maintained. I think Swift could really use a better benchmarking library than … Perhaps it could even be merged with …
As far as outputs / visuals are concerned, it might be useful to be able to emit data in JMH-style JSON files; that way we could leverage a lot of existing tooling, like this: https://jmh.morethan.io/
@ktoso if google/benchmark supports that format, I think that would be a good reason to support it here as well. @karwa I think graphical output is something I would prefer not to be part of this project. We should instead generate the data in a format that can be consumed by other tools and used for plotting and analysis.
Not sure about google/benchmark, but it is the de facto standard format for benchmark results in the JVM ecosystem (and part of the JDK); the format is very boring, so I think we could easily support it :)
Hmm, happen to have a good reference for the format?
I checked in with the primary maintainer; it isn't formally specified, but it hasn't changed in years: https://twitter.com/shipilev/status/1427889432451944449 There are example JSONs on the page I linked if you just want a quick skim; it's a pretty simple format, see e.g. the "load single run example" there.
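For a rough idea of the shape, here is a minimal Swift sketch of the fields one such record carries; it is inferred from the fields used in the MVP snippet further down this thread, so the struct and its types are illustrative rather than an official JMH schema:

import Foundation

// Illustrative sketch only: field names mirror the JMH-style record built in
// the MVP snippet below; the types are guesses at what consuming tools expect.
struct JMHRecord: Codable {
    struct PrimaryMetric: Codable {
        var score: Double
        var scoreUnit: String
        var scorePercentiles: [String: Double]
        var rawData: [[Double]]
    }
    var benchmark: String            // "<suite>.<benchmark>"
    var mode: String                 // e.g. "avgt" (average time)
    var threads: Int
    var forks: Int
    var warmupIterations: Int
    var measurementIterations: Int
    var primaryMetric: PrimaryMetric
}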
Hmm, so, I looked into google/benchmark, and it does have JSON output support. I would rather have the same output style. If JMH is important to you, then I'd be open to the idea of a …
Why not allow for …?
Oh sorry I missed that then :)
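For illustration, here is a rough sketch of how such an output-format switch could look; the flag name, the enum, and the parsing are hypothetical and not existing swift-benchmark API:

import Foundation

// Hypothetical output-format selector; "jmh" would simply sit alongside the
// formats the library already offers. All names here are invented for the sketch.
enum OutputFormat: String {
    case console
    case json
    case jmh
}

// Parse e.g. `my-benchmark --output-format jmh` (the flag name is made up).
func parseOutputFormat(_ arguments: [String] = CommandLine.arguments) -> OutputFormat {
    guard let index = arguments.firstIndex(of: "--output-format"),
          index + 1 < arguments.count,
          let format = OutputFormat(rawValue: arguments[index + 1]) else {
        return .console
    }
    return format
}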
Not pretending it's clean, but it can be done by modifying your main.swift. As an MVP:

import Benchmark
import Foundation
var runner = BenchmarkRunner(
    suites: [
        movementBenchmarks,
    ],
    settings: parseArguments(),
    customDefaults: defaultSettings
)
try runner.run()
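
// Minimal statistics helpers for the raw measurement arrays.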
extension Array where Element == Double {
    var sum: Double {
        var total: Double = 0
        for x in self {
            total += x
        }
        return total
    }
    var mean: Double {
        if count == 0 {
            return 0
        } else {
            let invCount: Double = 1.0 / Double(count)
            return sum * invCount
        }
    }
    var median: Double {
        guard count >= 2 else { return mean }
        // If we have an odd number of elements, then
        // the center element is the median.
        let s = self.sorted()
        let center = count / 2
        if count % 2 == 1 {
            return s[center]
        }
        // If we have an even number of elements we need
        // to return the average of the two middle elements.
        let center2 = count / 2 - 1
        return (s[center] + s[center2]) / 2
    }
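    // Percentile by linear interpolation between the two nearest ranks;
    // v is expected to be in the range 0...100.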
    func percentile(_ v: Double) -> Double {
        if v < 0 {
            fatalError("Percentile can not be negative.")
        }
        if v > 100 {
            fatalError("Percentile can not be more than 100.")
        }
        if count == 0 {
            return 0
        }
        let sorted = self.sorted()
        let p = v / 100.0
        let index = (Double(count) - 1) * p
        var low = index
        low.round(.down)
        var high = index
        high.round(.up)
        if low == high {
            return sorted[Int(low)]
        } else {
            let lowValue = sorted[Int(low)] * (high - index)
            let highValue = sorted[Int(high)] * (index - low)
            return lowValue + highValue
        }
    }
}
extension Array {
    func chunked(into size: Int) -> [[Element]] {
        return stride(from: 0, to: count, by: size).map {
            Array(self[$0 ..< Swift.min($0 + size, count)])
        }
    }
}
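
// Build one JMH-style record per benchmark result. The measurements are
// chunked into groups of 20 to stand in for JMH's per-fork raw data.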
let jmhResult: [[String: Any]] = runner.results.map { result in
    let chunks = Array(result.measurements.prefix(20 * 5)).chunked(into: 20)
    return [
        "benchmark": "\(result.suiteName).\(result.benchmarkName)",
        "mode": "avgt",
        "threads": 1,
        "forks": chunks.count,
        "measurementIterations": result.measurements.count,
        "measurementTime": "\(result.measurements.sum) \(result.settings.timeUnit.description)",
        "measurementBatchSize": 1,
        "warmupIterations": result.warmupMeasurements.count,
        "warmupTime": "\(result.warmupMeasurements.sum) \(result.settings.timeUnit.description)",
        "warmupBatchSize": 1,
        "primaryMetric": [
            "score": "\(result.measurements.median)",
            "scoreUnit": result.settings.timeUnit.description.trimmingCharacters(in: .whitespaces),
            "scorePercentiles": [
                "0.0": result.measurements.percentile(0),
                "50.0": result.measurements.percentile(50),
                "90.0": result.measurements.percentile(90),
                "95.0": result.measurements.percentile(95),
                "99.0": result.measurements.percentile(99),
                "99.9": result.measurements.percentile(99.9),
                "99.99": result.measurements.percentile(99.99),
                "99.999": result.measurements.percentile(99.999),
                "99.9999": result.measurements.percentile(99.9999),
                "100.0": result.measurements.percentile(100),
            ],
            "rawData": chunks
        ],
        "secondaryMetrics": []
    ]
}
let jmh = try String(decoding: JSONSerialization.data(
    withJSONObject: jmhResult,
    options: [.prettyPrinted, .withoutEscapingSlashes]
), as: UTF8.self)
let path = FileManager.default.currentDirectoryPath + "/\(UUID().uuidString).json"
try jmh.write(toFile: path, atomically: true, encoding: .utf8)
print("\nWritten JMH results to \(path)") |
Given the recent news about S4TF being archived, I wonder if anybody will continue working on this.
Would it perhaps be a good idea to seek out a new home for it?