Refactoring pairwise into pairwise and gemm #24
base: master
Conversation
Looks good, but GitHub says there are conflicts. Could you please resolve them and force-push a branch rebased on the current master?
@@ -0,0 +1,15 @@
# Authors: James Bergstra
# License: MIT
Please add a description of the benchmark and some motivation in the module-level docstring.
@ogrisel Thanks for the feedback, I think I fixed it all up.
Actually, I had thought that python-benchmarks-pyopencl would be an independent project and that python-benchmarks would no longer include any direct pyopencl benchmarks. Do you really want to make python-benchmarks depend on another benchmark project? I think adding such a dependency on another benchmark repo makes it unnecessarily complex for people to run the benchmarks on their own hardware.
> How is that different from just failing and ignoring it if […]

I'm not really sure what to do with that PyOpenCL code, to be honest. […]

Anyway, in terms of helping people run the benchmarks on their own […]

On Mon, Aug 19, 2013 at 12:25 PM, Olivier Grisel […]
The pairwise benchmark was contaminated by gemm-based solutions. I made a gemm benchmark directory, which tests the ability of various compilers to compete with BLAS libraries, and removed the gemm-based solutions from the pairwise benchmark. Currently the BLAS libraries easily outperform the compiled functions, but I am curious whether a readable Python base implementation that actually exposes a fast blocked algorithm could be compiled to run at a more competitive speed.
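A "readable Python base implementation exposing a blocked algorithm" might be sketched as below. This is only an illustration of the tiling structure being discussed, not code from the benchmark suite; the function name and tile size are my own choices:

```python
import numpy as np


def gemm_blocked(A, B, block=64):
    """Matrix multiply C = A @ B, computed one square tile at a time.

    Looping over tiles keeps each working set small, which is the
    cache-friendly structure a compiler would need to preserve to
    approach BLAS speed. The tile size of 64 is an illustrative
    guess, not a tuned value.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, n), dtype=np.result_type(A, B))
    for i in range(0, m, block):
        for j in range(0, n, block):
            for p in range(0, k, block):
                # Accumulate the (i, j) output tile from one tile of A
                # and one tile of B; slicing handles ragged edges when
                # the matrix dimensions are not multiples of `block`.
                C[i:i + block, j:j + block] += (
                    A[i:i + block, p:p + block] @ B[p:p + block, j:j + block]
                )
    return C
```

The inner tile product is delegated to NumPy here for brevity; in the benchmark context those innermost loops would instead be explicit, so a compiler can see (and block) the whole computation.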
For the pyopencl stuff, I moved it to a new repo (https://github.com/jaberg/python-benchmarks-pyopencl) because I am interested in developing pyopencl auto-tunable code generators, and development along that line should not clutter up this repo.