Parallel #113

Open · wants to merge 22 commits into main
Conversation

Collaborator

@oliverchampion commented Jul 29, 2025

Describe the changes you have made in this PR

Added parallel computing and testing
Added volume fitting and testing
Added wrapper code
Added SUPER-IVIM-DC fit methods (by accident in this PR)

Link this PR to an issue [optional]

Fixes #ISSUE-NUMBER

Checklist

  • [x] Self-review of changed code
  • [x] Added automated tests where applicable
  • [ ] Update Docs & Guides

@oliverchampion
Collaborator Author

This is now running and ready for review :)

Contributor

@etpeterson left a comment

Quick look for now. Just a few comments - looking good.


##########
# code written by Oliver J Gurney-Champion
# code adapted from Matlab code by Eric Schrauben: https://github.com/schrau24/XCAT-ERIC
# This code generates a 4D IVIM phantom as a nifti file

def phantom(bvalue, noise, TR=3000, TE=40, motion=False, rician=False, interleaved=False, T1T2=True):
    download_data()
Contributor

This downloads data during testing now, correct? Is that slow or burdensome?

Collaborator Author

Yes and no. The volume, parallel, and AI tests simply take a long time, full stop. Downloading the data is relatively quick compared to that (less than a minute vs. hours).

I think there are two ways we can go here.

I think I could accelerate the volume fit by dramatically reducing the number of voxels; there is probably no need for high resolution.

For the parallel fit, the smaller I make the number of voxels/the volume, the less acceleration we get from going parallel and the harder it is to test. For small volumes, the serial (linear) fit is typically much faster. That is why I ended up increasing the volume.

So either we can use a large volume (then the download time is negligible) and run long tests, or we can go for a small volume and ignore the parallel timing. In that case, precalculating the volume and saving it would be faster.

That said, the AI tests also take 5+ minutes per algorithm, and we must decide whether we want that to run on every merge...

Either way, all building blocks are here now :)
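For illustration, the trade-off being discussed can be sketched as a small timing harness. This is not the PR's actual code: `fit_voxel` is a cheap stand-in for a real per-voxel IVIM fit, and a thread pool is used so the sketch stays self-contained (the real parallel path presumably uses processes). The acceptance logic mirrors the test under review: results must agree, timing only warns.

```python
import time
import warnings
from multiprocessing.pool import ThreadPool

import numpy as np


def fit_voxel(signal):
    # Stand-in for a per-voxel IVIM fit; a real fit would run an optimizer.
    return float(signal.mean())


def fit_volume(volume, workers=None):
    """Fit every voxel of a 4D volume, serially or with a pool."""
    voxels = list(volume.reshape(-1, volume.shape[-1]))
    if workers is None:
        return [fit_voxel(v) for v in voxels]
    with ThreadPool(workers) as pool:
        return pool.map(fit_voxel, voxels)


def compare_serial_parallel(volume, workers=4):
    """Time both paths; assert agreement, warn (not fail) on timing."""
    t0 = time.perf_counter()
    serial = fit_volume(volume)
    t_serial = time.perf_counter() - t0

    t0 = time.perf_counter()
    parallel = fit_volume(volume, workers=workers)
    t_parallel = time.perf_counter() - t0

    # Small volumes favour the serial path, so timing is only a warning.
    assert np.allclose(serial, parallel, atol=1e-4), "parallel != serial"
    if t_parallel * 1.3 > t_serial:
        warnings.warn("parallel fit gave no clear speedup on this volume")
    return t_serial, t_parallel
```

With a trivial `fit_voxel` like this one, the pool overhead dominates and the warning fires, which is exactly the effect described above: only a large volume (or an expensive per-voxel fit) makes the parallel path look good.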

@@ -152,6 +156,83 @@ def test_bounds(bound_input, eng):
assert passf, f"Fit still passes when initial guess f is out of fit bounds; potentially initial guesses not respected for: {name}"
assert passDp, f"Fit still passes when initial guess Ds is out of fit bounds; potentially initial guesses not respected for: {name}" '''

def test_volume(algorithmlist,eng, threeddata):
Contributor

Similar to the download comment, would it be good to tag these tests as something so people could skip the download during testing? Just a thought - maybe they deserve their own tags or none at all.

Collaborator Author

Let's discuss :)
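A tagging scheme like the one suggested could look like this `conftest.py` sketch. The marker name `needs_download` and the `--no-download` flag are made-up illustrations, not the project's actual tags:

```python
# conftest.py (sketch): register a marker and an opt-out flag so tests
# that fetch phantom data can be deselected during local runs.
import pytest


def pytest_configure(config):
    config.addinivalue_line(
        "markers", "needs_download: test fetches phantom data from the network"
    )


def pytest_addoption(parser):
    parser.addoption("--no-download", action="store_true",
                     help="skip tests that download data")


def pytest_collection_modifyitems(config, items):
    if not config.getoption("--no-download"):
        return
    skip = pytest.mark.skip(reason="--no-download given")
    for item in items:
        if "needs_download" in item.keywords:
            item.add_marker(skip)
```

Tests decorated with `@pytest.mark.needs_download` could then be skipped either via `pytest --no-download` or with the built-in `pytest -m "not needs_download"`.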

assert np.allclose(fit_result['D'], fit_result2['D'], atol=1e-4), "Results differ between parallel and serial"
assert np.allclose(fit_result['f'], fit_result2['f'], atol=1e-2), "Results differ between parallel and serial"
assert np.allclose(fit_result['Dp'], fit_result2['Dp'], atol=1e-2), "Results differ between parallel and serial"
if time_parallel * 1.3 > time_serial:
Contributor

Time can be tricky because it varies with system load. Probably a good choice that it's not a requirement, but this message probably won't ever get seen. I wonder if there's some report we could write to - perhaps in the future - for algorithm profiling purposes?

Collaborator Author

@oliverchampion commented Aug 8, 2025

Good idea! Let's discuss. I did give it a personalised tag, so it is findable. However, it does not appear on the final report at the end of testing.

I've now tried to:

  • turn off all warnings
  • turn back on my specific warning
  • summarize warnings with -r w

Let's see what it does. This may be undesirable as it suppresses all other warnings, but it will highlight the specific warnings we want.
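The filter order in the three bullets above can be sketched with Python's own warnings machinery (pytest's `filterwarnings` ini option applies the same last-registered-wins rules, and `-r w` adds warnings to the summary). The `ParallelSlowWarning` class is a hypothetical stand-in for whatever tag the test actually uses:

```python
import warnings


class ParallelSlowWarning(UserWarning):
    """Hypothetical tag for 'parallel fit was not faster than serial'."""


def configure_filters():
    # 1. turn off all warnings; 2. turn our specific warning back on.
    warnings.resetwarnings()
    warnings.simplefilter("ignore")
    warnings.filterwarnings("always", category=ParallelSlowWarning)


def check_speedup(t_serial, t_parallel):
    # Mirrors the timing check under review: warn, don't fail.
    if t_parallel * 1.3 > t_serial:
        warnings.warn("parallel fit not faster than serial",
                      ParallelSlowWarning)
```

Because `filterwarnings` prepends its rule, the specific warning is matched before the blanket ignore, so only `ParallelSlowWarning` surfaces while everything else stays silent - which is exactly the "suppresses all other warnings" caveat above.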
