Parallel #113
Conversation
Testing is still failing, as the parallel fit is not going faster.
Also added Technion unit tests and code by accident on this branch.
This fails one specific test (4 organs) but passes the rest.
This is now running and ready for review :)
Quick look for now. Just a few comments - looking good.
##########
# code written by Oliver J Gurney-Champion
# code adapted from MATLAB code by Eric Schrauben: https://github.com/schrau24/XCAT-ERIC
# This code generates a 4D IVIM phantom as a nifti file


def phantom(bvalue, noise, TR=3000, TE=40, motion=False, rician=False, interleaved=False, T1T2=True):
    download_data()
This downloads data during testing now, correct? Is that slow or burdensome?
Yes and no. The volume, parallel, and AI tests simply take a long time, full stop. Downloading the data is relatively quick compared to that (less than a minute vs. hours).
I think there are two ways we can go here.
I could accelerate the volume fit by dramatically reducing the number of voxels; there is probably no need for high resolution.
For the parallel fit, the smaller I make the number of voxels/the volume, the less acceleration we get from going parallel and the harder it is to test this. For small volumes, the linear (non-parallel) fit typically goes much faster. That is why I ended up increasing the volume.
So either we make a large volume (then the download time is negligible) and accept long tests, or we go for a small volume and ignore the parallel timing. In that case, precalculating the volume and saving it will be faster (roughly as sketched below).
That said, the AI tests also take 5+ minutes per algorithm, and we must decide whether we want that to run on every merge...
Either way, all the building blocks are here now :)
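For the small-volume route, a minimal caching sketch of "precalculating the volume and saving it"; the file path, array name, and generator hook are illustrative, not the actual fixtures in this repo:

```python
import os
import numpy as np

CACHE_FILE = "tests/data/small_phantom.npz"  # illustrative path, not an existing fixture

def load_or_generate_small_phantom(bvalues, generate_fn):
    """Load a precomputed low-resolution phantom from disk, generating and caching it once."""
    if os.path.exists(CACHE_FILE):
        return np.load(CACHE_FILE)["signal"]
    signal = generate_fn(bvalues)  # e.g. a low-resolution phantom() call
    os.makedirs(os.path.dirname(CACHE_FILE), exist_ok=True)
    np.savez_compressed(CACHE_FILE, signal=signal)
    return signal
```

That would keep the serial/volume tests fast and deterministic, while the parallel timing comparison could stay behind a larger, optional volume.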
@@ -152,6 +156,83 @@ def test_bounds(bound_input, eng):
    assert passf, f"Fit still passes when initial guess f is out of fit bounds; potentially initial guesses not respected for: {name}"
    assert passDp, f"Fit still passes when initial guess Ds is out of fit bounds; potentially initial guesses not respected for: {name}" '''

def test_volume(algorithmlist, eng, threeddata):
Similar to the download comment, would it be good to tag these tests as something so people could skip the download during testing? Just a thought - maybe they deserve their own tags or none at all.
Let's discuss :)
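For reference, one way to tag such tests, assuming a hypothetical `slow` marker (the marker name and registration are not in the current code):

```python
# pytest.ini (or pyproject.toml) would register the marker, e.g.:
# [pytest]
# markers =
#     slow: tests that download data and/or take several minutes
import pytest

@pytest.mark.slow
def test_volume(algorithmlist, eng, threeddata):
    ...
```

Developers could then run `pytest -m "not slow"` locally and leave the full suite to CI.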
    assert np.allclose(fit_result['D'], fit_result2['D'], atol=1e-4), "Results differ between parallel and serial"
    assert np.allclose(fit_result['f'], fit_result2['f'], atol=1e-2), "Results differ between parallel and serial"
    assert np.allclose(fit_result['Dp'], fit_result2['Dp'], atol=1e-2), "Results differ between parallel and serial"
    if time_parallel * 1.3 > time_serial:
Time can be tricky because it can vary based on system load. It's probably a good choice that it's not a hard requirement, but this message probably won't ever get seen. I wonder if there's some report we could write to - perhaps in the future - for algorithm profiling purposes?
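A rough sketch of such a profiling report, purely illustrative (the file name, keys, and call site are made up):

```python
import json
import time

def record_timing(report_path, algorithm, label, seconds):
    """Append one timing entry as a JSON line so runs can be compared over time."""
    entry = {"algorithm": algorithm, "label": label, "seconds": seconds, "timestamp": time.time()}
    with open(report_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# e.g. record_timing("timings.jsonl", name, "parallel", time_parallel)
```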
Good idea! Let's discuss. I did give it a personalised tag, so it is findable. However, it does not appear in the final report at the end of testing.
I've now tried to:
- turn off all warnings
- turn my specific warning back on
- summarize warnings with -r w
Let's see what it does. This may be undesirable, as it suppresses all other warnings, but it will highlight the specific warnings we want (roughly as sketched below).
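As a sketch of that approach (the warning text is illustrative; `-r w` asks pytest to include captured warnings in the end-of-run summary):

```python
import warnings

# Instead of failing the test, emit a warning when the parallel fit is not
# meaningfully faster than the serial fit; pytest captures it and, with
# `-r w` in the addopts, lists it in the summary at the end of the run.
if time_parallel * 1.3 > time_serial:
    warnings.warn(
        f"Parallel fit ({time_parallel:.1f}s) was not faster than serial fit ({time_serial:.1f}s)",
        UserWarning,
    )
```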
Describe the changes you have made in this PR
- Added parallel computing and testing
- Added volume fitting and testing
- Added wrapper code
- Added SUPER-IVIM-DC fit methods (by accident in this PR)
Link this PR to an issue [optional]
Fixes #ISSUE-NUMBER
Checklist