Add a command and benchmark flags to remove artifact data #1643
Conversation
Can this be used to re-run a benchmark that happened to fail in previous invocations, and just replace its results? Right now, when re-running collection, a failure from a previous run will of course be skipped. Or would …
Do you have a use-case for deleting just some of the benchmarks (locally)? In general (and for perf server usage), I consider it safer to re-benchmark everything, because something may (and probably did) change between the two collections.
An example where being able to overwrite data would be useful to me: sometimes I have to babysit Windows benchmarks, in case weird processes start up even though they are not allowed to and are deactivated everywhere (à la virus scanners), which nullifies the subsequent results. When this happens, only a few benchmarks are tainted and I'd want to re-run only those locally, not re-launch a full collection, since that would take at least 45 minutes. No artifacts have changed between the two sequential collections.
Well, no, but possibly some AV started to run in the middle of some benchmark, and then it's a question whether that benchmark execution was valid or not :) In the use-case that you have described, I assume that you just kill the benchmarking, then kill the AV, and then restart it? In that case it should then compute only the benchmarks that haven't been finished yet (if you don't use …
I'm catching the AV red-handed, so I know which benchmarks are invalid and have to be removed, and which benchmark failed when I killed the collection. All of these are ignored when re-running with the same artifact id, while I want to re-run them.
You reviewed a bunch of my newer PRs, so I'm not sure whether this one has been forgotten. CC @Mark-Simulacrum
Pinging @Mark-Simulacrum to requeue this PR to avoid bitrot :)
There are basically three things that we can do during benchmarking if there is already some data in the database for the benchmarked artifact: keep the old data and skip the already-measured benchmarks (the current behaviour), append the new results alongside the old ones, or remove the old data and benchmark from scratch.
I don't think that appending makes much sense. If nothing has changed between two benchmark invocations with the same artifact, you just want to fill in the missing data. If something has changed, then you probably don't want to mix and average the old and the new results. And we show the minimum of the measured values in the dashboard anyway, so you might as well just remove the old data.
Removing the old data is useful when rerunning benchmarks repeatedly on a local machine, so that you don't have to remove the database over and over again.
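To make the trade-off concrete, here is a purely illustrative Rust sketch of the three strategies as an enum; the type and variant names are invented for this example and are not taken from the rustc-perf code:

```rust
// Illustrative sketch only: the three possible strategies for handling
// pre-existing data for an artifact, expressed as an enum. Not actual
// rustc-perf code.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
enum ExistingDataStrategy {
    /// Keep the old results and skip benchmarks that already have data.
    Skip,
    /// Keep the old results and store the new measurements next to them.
    Append,
    /// Delete the old results for the artifact and benchmark from scratch.
    Remove,
}
```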
This PR adds a new --purge flag to bench_local and bench_runtime_local that removes either all old data for the given artifact, or just data for benchmarks that had an error for that artifact. The flag is idempotent and can be used even if there is no previous data.

The PR also adds a separate collector command, purge_artifact, which removes all data for a specified artifact. This can be handy e.g. for removing data that should be re-benchmarked on the perf server.

Best reviewed commit by commit.
Fixes: #804
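For intuition, here is a minimal sketch of what purging all data for one artifact could look like against a SQLite database, using the rusqlite crate. The table and column names (artifact, pstat, error, aid) are assumptions made for this example and may not match the real rustc-perf schema or the PR's implementation:

```rust
// Minimal sketch of purging all data for one artifact from a SQLite
// database via rusqlite. Table and column names are assumptions.
use rusqlite::{params, Connection};

fn purge_artifact(conn: &Connection, artifact_name: &str) -> rusqlite::Result<()> {
    let tx = conn.unchecked_transaction()?;
    // Resolve the artifact's numeric id from its name (e.g. a commit SHA or tag).
    let aid: i64 = tx.query_row(
        "SELECT id FROM artifact WHERE name = ?1",
        params![artifact_name],
        |row| row.get(0),
    )?;
    // Delete measurements and recorded errors tied to that artifact,
    // then the artifact row itself, so the next run starts from a clean slate.
    tx.execute("DELETE FROM pstat WHERE aid = ?1", params![aid])?;
    tx.execute("DELETE FROM error WHERE aid = ?1", params![aid])?;
    tx.execute("DELETE FROM artifact WHERE id = ?1", params![aid])?;
    tx.commit()
}
```

The --purge flag on bench_local would conceptually perform a similar cleanup right before a run, either for everything stored for the artifact or only for benchmarks whose previous run ended in an error.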