
Refactoring pairwise into pairwise and gemm #24

Open · wants to merge 12 commits into `master`
Conversation

@jaberg (Contributor) commented Aug 14, 2013

The pairwise benchmark was contaminated by gemm-based solutions. I made a gemm benchmark dir, which tests the ability of various compilers to compete with BLAS libraries, and removed the gemm-based solutions to the pairwise benchmark. Currently the BLAS libs destroy the compiled functions, but I am curious if a readable Python base implementation that actually exposes a fast blocked algorithm could be compiled to run at a more competitive speed.
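As a reference point for what "a readable Python base implementation that actually exposes a fast blocked algorithm" might look like, here is a minimal sketch in pure Python/NumPy. The function name, block size, and use of `np.dot` for the inner update are my assumptions for illustration, not code from this PR:

```python
import numpy as np

def gemm_blocked(A, B, block=64):
    """Readable blocked matrix multiply: C = A . B.

    Iterates over square tiles so each partial product stays small
    (cache-friendly when compiled); the inner tile update is left to
    NumPy for clarity.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, n), dtype=A.dtype)
    for i in range(0, m, block):
        for j in range(0, n, block):
            for p in range(0, k, block):
                # NumPy slicing clips at the array edge, so ragged
                # trailing tiles are handled automatically.
                C[i:i + block, j:j + block] += np.dot(
                    A[i:i + block, p:p + block],
                    B[p:p + block, j:j + block],
                )
    return C
```

The idea being tested is whether a compiler (Numba, Parakeet, etc.) handed this loop structure, rather than a naive triple loop, can close some of the gap to vendor BLAS.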

For the pyopencl stuff, I moved it to a new repo (https://github.com/jaberg/python-benchmarks-pyopencl) because I am interested in developing pyopencl auto-tunable code generators, and development along that line should not clutter up this repo.

@ogrisel (Contributor) commented Aug 15, 2013

Looks good, but GitHub says there are conflicts. Could you please resolve them and force-push a branch rebased on the current master?

@@ -0,0 +1,15 @@
# Authors: James Bergstra
# License: MIT
Contributor

Please add a description of the benchmark and some motivation in the module-level docstring.

@jaberg (Contributor, Author) commented Aug 15, 2013

@ogrisel Thanks for the feedback, I think I fixed it all up.

@ogrisel (Contributor) commented Aug 19, 2013

Actually, I had thought that python-benchmarks-pyopencl would be an independent project and that python-benchmarks would no longer include any direct pyopencl benchmarks. Do you really want to make python-benchmarks depend on another benchmark project? I think adding such a dependency on another benchmark repo makes it unnecessarily complex for people to run the benchmarks on their own hardware.

@jaberg (Contributor, Author) commented Aug 19, 2013

How is that different from just failing and ignoring it if the python-benchmarks-pyopencl repo isn't installed (the current behaviour)?

I'm not really sure what to do with that PyOpenCL code, to be honest. One idea is to try to merge the code generators into Reikna or some other project; another is to leave them where they are, give them more generic interfaces, and rename the repo. I think there's room for a suite of OpenCL code generators to support pipelines of the sort used by Theano, scikit-learn, and scikit-image.

Anyway, in terms of helping people run the benchmarks on their own hardware, how about just printing out the necessary pip command on ImportError?
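A minimal sketch of that suggestion (the helper name, message wording, and pip package name are assumptions for illustration, not code from either repo):

```python
def import_or_hint(module_name, pip_name=None):
    """Import an optional benchmark dependency.

    Returns the module on success; on ImportError, prints the pip
    command needed to install it and returns None so the caller can
    skip the benchmark instead of crashing.
    """
    try:
        return __import__(module_name)
    except ImportError:
        pip_name = pip_name or module_name
        print("Skipping benchmark: %r is not installed." % module_name)
        print("To enable it, run: pip install %s" % pip_name)
        return None
```

The benchmark runner could then do `mod = import_or_hint("pyopencl_benchmarks", "python-benchmarks-pyopencl")` and skip the suite when `mod` is None.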


3 participants