We use a Python wrapper to automate benchmarking. It configures a small C benchmarker that measures the runtime of each algorithm.
The following should be considered:
- What algorithm to run,
- What objective function to use,
- What input size to use (number of iterations, population size, dimensions, input ranges),
- How many measuring iterations to perform (how many times to run the entire algorithm end to end).
Ideally, the C benchmarking tool will be configurable to take these parameters and simply output the desired timings. The output should have a very simple format so that the Python wrapper can easily obtain the results.
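One possible shape for such a tool, given as a sketch only (the command-line argument order and the one-timing-per-line output format below are assumptions, not decisions already made):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Hypothetical invocation (flag order is an assumption):
 *   ./benchmark <algorithm> <objective> <size> <dim> <max_iterations> <repetitions>
 * For every repetition the tool prints one line with the measured runtime in
 * seconds, which keeps parsing on the wrapper side trivial. */
int main(int argc, char** argv) {
  if (argc != 7) {
    fprintf(stderr, "usage: %s <algorithm> <objective> <size> <dim> "
                    "<max_iterations> <repetitions>\n", argv[0]);
    return EXIT_FAILURE;
  }
  size_t repetitions = strtoul(argv[6], NULL, 10);
  for (size_t r = 0; r < repetitions; ++r) {
    clock_t start = clock();
    /* A real tool would dispatch on argv[1]/argv[2] here and run one full
     * end-to-end execution of the selected algorithm (placeholder only). */
    clock_t end = clock();
    printf("%f\n", (double)(end - start) / CLOCKS_PER_SEC);
  }
  return EXIT_SUCCESS;
}
```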
The Python wrapper should be able to perform the following automatically:
- Store the results of the benchmarking in a timestamped file in order to keep a history of results.
- Output runtime and performance graphs generated from the numbers returned by the C benchmarking tool.
- (Possibly output a convergence graph for correctness.)
An indentation of 2 spaces should be used throughout the code.
An algorithm function must have the following signature:
```c
double* algo_name(double (*objective_function)(const double* const, size_t),
                  size_t size,
                  size_t dim,
                  size_t max_iterations,
                  const double* const min_values,
                  const double* const max_values);
```

Please ensure that all algorithm implementations allow passing the objective function when calling them. The base signature of the objective functions is the following:

```c
double (*obj)(const double* const, size_t)
```

where the objective function obj takes an array of double arguments and the length of that array, and returns a single double value as the fitness value of the solution instance.
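As an illustration of how the two signatures fit together, here is a minimal sketch; the sphere objective is an assumption used purely for the example:

```c
#include <stddef.h>

/* Example objective function matching the base signature: the sphere
 * function, i.e. the sum of squared components (chosen only as an
 * illustration, not mandated by this page). */
double sphere(const double* const args, size_t dim) {
  double fitness = 0.0;
  for (size_t i = 0; i < dim; ++i) {
    fitness += args[i] * args[i];
  }
  return fitness;
}
```

An algorithm with the signature above could then be called as, for example, `double* solution = algo_name(sphere, size, dim, max_iterations, min_values, max_values);`, so swapping in a different objective function is a one-argument change.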
Such a design will ease testing and performance measurement on different types (and optimization levels) of objective functions.
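To make the expected structure concrete, here is a minimal sketch of an implementation respecting the signature and the 2-space indentation rule; the random-sampling body and the caller-frees-the-result convention are assumptions standing in for a real algorithm:

```c
#include <stdlib.h>
#include <string.h>

/* Placeholder body: uniform random sampling stands in for a real
 * optimization loop, purely to show how the parameters are meant to be used.
 * The returned array is heap-allocated and freed by the caller (an
 * assumption, not something this page specifies). */
double* algo_name(double (*objective_function)(const double* const, size_t),
                  size_t size,
                  size_t dim,
                  size_t max_iterations,
                  const double* const min_values,
                  const double* const max_values) {
  double* best = malloc(dim * sizeof(double));
  double* candidate = malloc(dim * sizeof(double));
  double best_fitness = 0.0;
  for (size_t iter = 0; iter < max_iterations * size; ++iter) {
    /* Draw each component uniformly from its [min, max] input range. */
    for (size_t d = 0; d < dim; ++d) {
      double u = (double)rand() / RAND_MAX;
      candidate[d] = min_values[d] + u * (max_values[d] - min_values[d]);
    }
    double fitness = objective_function(candidate, dim);
    if (iter == 0 || fitness < best_fitness) {
      best_fitness = fitness;
      memcpy(best, candidate, dim * sizeof(double));
    }
  }
  free(candidate);
  return best;
}
```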