Past, Present and Future of Open Science (Emergent session): Open, community-driven, software pipelines: a retrospective and future outlook #68
Comments
@jsheunis any chance I can get write access? I just realised there's a typo "pipelines take on" -> "pipelines taking on", and I didn't mention who our panelists are! It's Karolina Finc, Oscar Esteban, Satra Ghosh and Erin Dickie, right?
Hi @kfinc @oesteban @satra @edickie, in a last-minute panic I have created an abstract for our BrainHack panel session, registered as one of the "Emergent" sessions. The abstract is just OK, and worse, I realised I failed to name-check each of you! Please read and offer edits (as additional comments here) on this abstract, and hopefully @jsheunis can ensure that the final edits are incorporated.
Some thoughts added in bold. Open pipelines collect the best expertise, algorithms and planning into community standards, allowing wide access to top-of-the-range analysis pipelines. While some frameworks exist explicitly to allow users to build pipelines with a diverse set of tools (e.g. Nipype), others comprise specific pipelines offered as a particular best-practice solution (e.g. fmriprep as an example of the larger set of evolving nipreps, HCP-Pipelines). Is there a danger of these prepared pipelines taking on the hallowed role that individual software tools have previously held, such that their use becomes expected and their non-use needs to be justified? What limits do these pose on scientific questions? How do we approach continuous evaluation and validation and dissemination of such workflows? Do we have built-in procedures in these community standards and development procedures (e.g. niflows) that allow for critical evaluation? nipreps: https://www.nipreps.org/
Sorry, this notification was filtered by gmail to some unchecked folder - I just recovered it by chance. Therefore, what I am writing right now is not very well thought through. I'll come back later today. Just to trigger some brainstorming, I'll go ahead and post some ideas.
tl;dr followup to some pieces. i generated this for a different reason, but i think it is applicable here: this shows amygdala volumes computed by freesurfer and fsl on the same data (about 1800 cases). the intent of this figure is to show that consistency between our tools is missing even for very basic data elements. now compound these differences, say in a more complex workflow that uses said amygdala ROI in an autism study to look at genetics and fMRI integration. we are going to have a larger hyper-parameter space, and evaluating the implications of these different tools gets exponentially harder. therefore, as a field, we have to move closer and closer towards accepting that we don't always know what the "correct" answer is, but when we do (and as a community we agree we do) it is imperative that we quickly move to establish software that performs within those verifiable limits. we should be able to say: here is a validated workflow that measures amygdala volume within 5% of tolerance as measured via X (whatever we decide is the gold/silver standard).
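A minimal Python sketch of the kind of tolerance check described in the comment above. The file names, column names and the 5% threshold are placeholders, not anything defined in this thread; the point is only the shape of such a validation step.

```python
import pandas as pd

# Hypothetical inputs: per-subject amygdala volumes (mm^3) produced by
# FreeSurfer and by FSL on the same cohort. File and column names are
# placeholders, not outputs of any specific pipeline.
fs = pd.read_csv("freesurfer_amygdala.csv", index_col="subject_id")
fsl = pd.read_csv("fsl_amygdala.csv", index_col="subject_id")

# Keep only subjects present in both tables; overlapping columns get the suffixes.
merged = fs.join(fsl, lsuffix="_fs", rsuffix="_fsl", how="inner")

# Per-subject relative disagreement between the two tools.
rel_diff = (
    (merged["volume_mm3_fs"] - merged["volume_mm3_fsl"]).abs()
    / merged["volume_mm3_fsl"]
)

tolerance = 0.05  # the 5% figure above is itself only an example
fraction_within = (rel_diff <= tolerance).mean()

print(
    f"{fraction_within:.1%} of subjects agree within {tolerance:.0%} "
    f"(median relative difference: {rel_diff.median():.1%})"
)
```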
Fixed the typo and added the discussion participants. |
@jsheunis could we change the useful links to:
Some interesting points that we can include in the discussion & I'm happy to add some thoughts: Referring to Botvinik-Nezer et al. 2020:
Other points:
Open, community-driven, software pipelines: a retrospective and future outlook
By:
Thomas Nichols, University of Oxford, Big Data Institute
Karolina Finc
Oscar Esteban
Satra Ghosh
Erin Dickie
Abstract
Open pipelines collect the best expertise, algorithms and planning into community standards, allowing wide access to top-of-the-range analysis pipelines. While some frameworks exist explicitly to allow users to build pipelines with a diverse set of tools (e.g. Nipype), others comprise specific pipelines offered as a particular best-practice solution (e.g. fmriprep). Is there a danger of these prepared pipelines taking on the hallowed role that individual software tools have previously held, such that their use becomes expected and their non-use needs to be justified? How do we approach continuous evaluation of such workflows? Do we have built-in procedures in these community standards that allow for critical evaluation?
Useful Links
Public Mattermost channel for discussions prior to, during and after the session.
nipype
nipreps
niflows
Tagging @nicholst @kfinc @oesteban @satra @edickie