# config.yaml
# yaml-language-server: $schema=public/schemas/brainhack-config.json
event:
year: 2025
startDate:
day: 26
month: 3
year: 2025
endDate:
day: 28
month: 3
year: 2025
displaySections:
tutorial: true
schedule: true
twitterFeed: false
twitterUrl: https://twitter.com/uwobrainhack
faq:
- question: What is a "Brainhack"?
answer: |
Brainhack is an academic, open-science-oriented hackathon that connects researchers
from across many different disciplines to work together on innovative
projects related to neuroscience. For more details and examples of past
Brainhack projects click
[here](https://gigascience.biomedcentral.com/articles/10.1186/s13742-016-0121-x).
- question: Who can participate?
answer: |
Any students (undergraduate/graduate), post-docs, staff, faculty, or
clinicians with an interest in coding, neuroscience or neuroimaging are
encouraged to participate.
- question: Will the event be in-person or virtual?
answer: |
Brainhack Western 2025 will be in-person.
- question: |
I am interested in attending a specific tutorial in-person. How do I do
that?
answer: |
You will have the opportunity to build a schedule during registration by
specifying which tutorials you wish to attend. You should be able
to attend any tutorials you wish.
- question: What does registration include?
answer: |
In-person registration includes access to all in-person tutorials and the
ability to join an in-person hacking project group. In-person registration
also includes breakfast, lunch, and snacks on all three days.
- question: Do I need to register if I only want to hack?
answer: |
Yes! Anyone wishing to participate in any aspect of Brainhack Western
2025, even just hacking, needs to register.
- question: How much does it cost?
answer: The cost of the event is $5 CAD!
- question: |
Is it possible to attend if I am not affiliated with Western University?
answer: |
We are able to accommodate non-Western University affiliated attendees.
Please note that all attendees will need to follow Western University
COVID-19 campus rules. Note that Brainhack Western
does not offer travel stipends. Attendees traveling to London, ON are
wholly responsible for their own travel, lodging, and any meals not
already included with registration.
- question: Will there be food?
answer: |
Yes! Breakfast, lunch, snacks, non-alcoholic
beverages, and coffee will be provided throughout the event. Please
indicate any dietary restrictions when you register. Also, to cut down on
waste, please bring your own coffee mug and/or water bottle.
- question: What should I bring?
answer: |
Please bring your own laptop, cables, chargers, USB sticks, and some form
of identification (e.g. Student card or Driver's licence). Also bring
anything else you may need to work on your project (hardware, datasets
etc.).
- question: What if I don't know how to code?
answer: |
Learning is an important component of a Brainhack. There will be tutorials
and experts around to answer questions.
- question: What if I can't attend all three days?
answer: |
You are welcome to attend as much or as little as your schedule allows.
We understand that Brainhack Western 2025 is scheduled during a busy part
of the academic year, so we have tried to schedule breaks each day to
give attendees time to decompress and attend to their own work.
forms:
registration:
type: cognito
title: Brainhack 2025 Registration
shortTitle: Registration
embedTag: <script src="https://www.cognitoforms.com/f/seamless.js" data-key="dKVrRkF3yk6PnoRpFF7XGg" data-form="6"></script>
prefillMappings:
id: Calculations.VolunteerCode
registration:
cost: 5
url: /forms/registration
# This can be uncommented to allow users to enter email addresses.
# It should be a webhook that accepts the email as a simple json document with a
# single "email" key.
# In the past, I've used a make webhook to punt the email addresses on to a
# google docs form.
# Unless the dev is really committed however, it's probably not worth enabling this.
# Hardly anyone signed up for the email list, and those same individuals were students
# who would have got the registration email blast anyway.
# emailSignupTarget: https://hook.us1.make.com/hrdtv28jy9i55havi84l6db2xc1ieyol
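# If re-enabled, a minimal sketch of the request the webhook should accept,
# based on the description above (hypothetical values):
#   POST <emailSignupTarget>
#   Content-Type: application/json
#   { "email": "attendee@example.com" }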
openDate:
day: 19
month: 1
year: 2025
closeDate:
day: 19
month: 3
year: 2025
projectPitchUrl: https://github.com/BrainhackWestern/BrainhackWestern.github.io/issues/new?assignees=&labels=pitch&projects=&template=project-pitch.yml&title=%5BPitch%5D%3A+
location:
name: Western Interdisciplinary Research Building
street: Perth Dr
city: London
province: 'ON'
url: 'https://www.uwo.ca/bmi/about/wirb.html'
maps_id: ChIJCVSloavvLogR82_w3v729s0
organizers:
- Ali Tafakkor
- Peter Van Dyken
- Rafaela Platkin
- Anthony Cruz
- Matthew Suppa Modrusan
- Ricardo Alonso Rios Carrillo
- Ronak Mohammady
- Joud Abdulmajeed El-Shawa
- Tallulah Grace Nyland
- Ali Ghavampour
- Kiro Samaan
- Hanna Maekebay
- Armin Panjehpour
- Ali Khan
sponsors:
- img: '/img/brainscan_logo.png'
name: Brainscan
url: 'https://brainscan.uwo.ca'
# Can support multiple, named schedules. Each schedule is a separate list item under
# schedule:
# For color, you can use "red", "green", and "blue", or any hex color or javascript
# colorname
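# For example, all of the following are valid values for an event's color
# field (illustrative values only):
#   color: red
#   color: '#007c62'
#   color: rebeccapurple  # any CSS/JavaScript color name works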
schedule:
- name: Tutorials
startTime: 9
endTime: 18
days:
- day: 26
month: 3
year: 2025
events:
- name: Welcome & Project Pitches
time: '9:00'
duration: '1:00'
color: blue
room:
- WIRB 4190 (Lunchroom)
- tutorial: prereg
time: '10:00'
duration: '1:00'
color: '#007c62'
room:
- WIRB 4190 (Lunchroom)
priority: 1
# - tutorial: bayesian
# time: '10:00'
# duration: '1:00'
# color: green
# room:
# - TBD
# priority: 1
- tutorial: eeg-bids
time: '11:00'
duration: '1:00'
color: green
room:
- WIRB 4190 (Lunchroom)
- name: Lunch
time: '12:00'
duration: '1:00'
room: WIRB 4190 (Lunchroom)
- tutorial: hippunfold
time: '13:00'
duration: '1:00'
color: '#1f331e'
priority: 1
room:
- WIRB 4190 (Lunchroom)
- tutorial: 3dprint
time: '14:00'
duration: '1:00'
color: '#1f331e'
priority: 1
room:
- WIRB 4190 (Lunchroom)
- tutorial: tms
time: '15:00'
duration: '1:00'
color: '#1f331e'
priority: 1
room: WIRB 4190 (Lunchroom)
- day: 27
month: 3
year: 2025
events:
- name: Breakfast
time: '9:00'
duration: '1:00'
room: WIRB 4190 (Lunchroom)
- name: Hack Time
time: '10:00'
duration: '2:00'
room: WIRB 4190 (Lunchroom)
color: '#222233'
priority: 1
- tutorial: writing
time: '10:00'
duration: '1:00'
color: '#1f331e'
priority: 2
room: WIRB 3000
- tutorial: fnirs
time: '11:00'
duration: '1:00'
color: '#1f331e'
priority: 2
room: WIRB 3000
- name: Lunch
time: '12:00'
duration: '1:00'
room: WIRB 4190 (Lunchroom)
- name: Hack Time
time: '14:00'
duration: '2:00'
room: WIRB 4190 (Lunchroom)
color: '#222233'
priority: 1
- tutorial: mystmd
time: '13:00'
duration: '1:00'
color: '#1f331e'
priority: 2
room: WIRB 4190 (Lunchroom)
- tutorial: reprod
time: '14:00'
duration: '1:00'
color: '#007c62'
priority: 2
room: WIRB 3000
- tutorial: monai
time: '15:00'
duration: '1:00'
color: '#1f331e'
priority: 2
room: WIRB 3000
- day: 28
month: 3
year: 2025
events:
- name: Breakfast
time: '9:00'
duration: '1:00'
room: WIRB 4190 (Lunchroom)
- name: Hack Time
time: '10:00'
duration: '2:00'
room: WIRB 4190 (Lunchroom)
color: '#222233'
priority: 1
- tutorial: git
time: '10:00'
duration: '1:00'
color: '#007c62'
priority: 2
room: WIRB 3000
- tutorial: datacol
time: '11:00'
duration: '1:00'
color: '#1f331e'
priority: 2
room: WIRB 3000
- name: Lunch
time: '12:00'
duration: '1:00'
room: WIRB 4190 (Lunchroom)
- name: Hack Time
time: '13:00'
duration: '1:00'
room: WIRB 4190 (Lunchroom)
color: '#222233'
priority: 1
# - tutorial: dataman
# time: '13:00'
# duration: '1:00'
# color: '#007c62'
# priority: 2
# room: WIRB 3000
- tutorial: dti-mri
time: '14:00'
duration: '1:00'
color: '#1f331e'
priority: 2
room: WIRB 4190 (Lunchroom)
- tutorial: anat
time: '13:00'
duration: '2:00'
color: '#1f331e'
priority: 3
room: WIRB 3000
- name: Project presentations
time: '15:00'
duration: '1:00'
color: blue
room: WIRB 3000
- name: Grad Club Networking Event
time: '16:00'
duration: '1:30'
color: blue
room: Grad Club
tutorials:
- id: dti-mri
name: Introduction to DTI and MRI
organizer:
- Bradley Karat
image: /img/dfmri.png
description: |
This joint tutorial will cover the basics of both diffusion
MRI and functional MRI. For diffusion MRI, we will cover the basics of
processing data from MRI acquisitions through computing diffusion tensors
and tracking the structural connectome. For functional MRI, we will cover
what aspects of brain function fMRI measures and key considerations for
preprocessing fMRI data.
- id: eeg-bids
name: Using EEG-BIDS for data management of large EEG datasets
organizer: Marc Joanisse
image: /img/eegbids.png
description: |
EEG-BIDS is a relatively new standard for the orderly storage of raw EEG/iEEG/MEG datasets and derivatives.
I will demonstrate one possible workflow that uses the EEGLAB toolbox to convert raw EEG data into BIDS format,
and show how to integrate existing EEG-BIDS datasets into your analysis workflow.
- id: fnirs
name: Introduction to fNIRS
organizer:
- Matthew Kolisnyk
- Garima Gupta
image: /img/fnirs2.png
description: |
Participants will learn the basics of functional near-infrared spectroscopy (fNIRS). This includes a demo of the system, the underlying physics and experimental design, as well as some exciting current applications of fNIRS. Participants will also learn about handling and preprocessing fNIRS data -- including some hands-on coding!
- id: anat
name: 'Introduction to Neuroanatomy for Researchers'
organizer:
- Alaa Taha
- Violet Liu
image: '/img/afids.png'
description: |
This tutorial offers an interactive dive into neuroanatomy through a clinical case,
with a focus on subcortical regions of interest.
Participants will gain hands-on experience with segmentation tools like ITK-SNAP and learn to
visualize subcortical structures using ultra-high field MRI.
No previous experience with ITK-SNAP is required
(http://www.itksnap.org/pmwiki/pmwiki.php?n=Downloads.SNAP4)
- id: tms
name: 'Introduction to TMS'
organizer:
- Phivos Phylactou
image: '/img/tms2.png'
description: Transcranial Magnetic Stimulation is a safe, non-invasive tool that enables us to modulate the brain's regular activity with high spatial and temporal resolution. In the forty years since its introduction, TMS has been utilized in a variety of ways, from the lab all the way to the clinic. In this workshop, we will explore how TMS can be employed to causally study the brain's neural architecture, how it can be used to measure brain activity, and how it can help the maladaptive brain.
- id: prereg
name: 'Open Science: Pre-registration'
image: /img/prereg.png
organizer:
- Christine Moreau
description: In this hands-on workshop, participants will learn the fundamentals of preregistration and how it enhances research transparency and rigor using the Open Science Framework (OSF). Attendees are encouraged to bring their research protocols as there will be the opportunity to receive peer feedback to refine their preregistrations. The session will also cover handling protocol changes post-publication, emphasizing the flexibility of preregistration.
- id: git
name: 'Open Science: Research Code Management (Git)'
image: /img/git-logo.png
organizer:
- Ali Tafakkor
- Christine Moreau
description: |
This tutorial will cover best practices for managing research code with
the ultimate goal of sharing it. Aspects of Git and GitHub, including
version control, branching, collaborations, etc. will be covered.
# - id: dataman
# name: 'Open Science: Research Data Management (DataLad)'
# image: /img/datalad.png
# organizer: Ali Tafakkor
# description: |
# Handling data can be a challenge, and it's best to think about it before
# you need to organize it on a deadline. In this tutorial you'll learn best
# practices for collecting, storing, processing, and sharing your data.
# We'll go over writing a Data Management Plan, formatting your data with
# BIDS and other open standards, and version controlling your data with
# DataLad, and uploading to a public repository.
- id: reprod
name: 'Open Science: Reproducible Data Analysis (Snakemake)'
image: /img/snakemake_reproducable.png
organizer: Ali Khan
description: |
Using a workflow management tool to run your research analyses can make it
much easier to remember what you've done with your data, make small
adjustments, and describe your analyses to others in a reproducible way.
In this tutorial, you'll learn about what a workflow management tool is,
why you might want to use one, and how to write your analyses as
workflows. As an example, we'll go over SnakeBIDS, a Western-grown
workflow management tool for handling BIDS-formatted neuroimaging data.
# - id: mri
# name: A Brief Introduction to MRI
# image: /img/t1w.jpg
# organizer: Brad Karat
# description: |
# This tutorial will provide a basic introduction to MRI. We will cover
# where the MRI signal is originating from, what are the different
# contrasts, basic pulse sequences for acquiring an image, how to
# reconstruct and visualize an image from DICOM to NIFTI, and unique MRI
# contrasts such as diffusion or functional imaging.
- id: writing
name: Scientific Writing
organizer:
- Jessica Grahn
image: /img/writing.png
description: |
Want your scientific papers to be not just clear but also engaging?
Join our scientific writing style crash course at the hackathon!
Learn how to refine your tone, improve flow, and make your writing more compelling — without losing precision.
- id: hippunfold
name: HippUnfold Tutorial
image: /img/hippunfold.png
organizer:
- Ali Khan
- Jordan DeKraker
- Dhananjhay Bansal
- Mackenzie Snyder
description: |
HippUnfold is a toolbox for automatically modeling the topological folding structure of the human hippocampus.
This session will introduce its key features, including running HippUnfold, exploring volumetric and surface-based outputs,
and integrating results with tools like ITK-SNAP and Connectome Workbench.
We will also discuss applications to structural, fMRI, and dMRI data. No prior experience with HippUnfold is required.
Links: https://github.com/khanlab/hippunfold
- id: monai
name: MONAI Tutorial
organizer:
- Mahmoud Salman
image: /img/monai.png
description: |
A practical, no-theory guide to training deep learning models on neuroimaging data with no prior deep-learning experience. This tutorial covers MONAI, a leading AI framework for medical imaging, which simplifies tasks like segmentation and classification. With pre-trained models, automated pipelines, and the MONAI Model Zoo, you'll apply deep learning to neuroscience research with minimal friction; only Python experience is required.
- id: 3dprint
name: 3D Printing Tutorial
organizer:
- Mehrdad Kashefi
- Bassel Arafat
image: /img/3dprint2.png
description: |
Many experiments require rapid prototyping of experimental setups. While the traditional method of taping random parts together might work in a pinch, 3D printing offers a faster, more precise, and more robust solution. In this workshop, we’ll provide a brief overview of different 3D printing technologies and materials, followed by a hands-on tutorial on free, online 3D design software. As a fun bonus, we’ll show you how to turn your own MRI structural data into a 3D-printable model.
- id: mystmd
name: MystMD Tutorial
organizer:
- Ricardo Alonso Rios Carrillo
image: /img/mystmd.png
description: |
This tutorial will introduce MystMD (https://mystmd.org/). MyST extends Markdown for technical, scientific communication and publication. MyST is an ecosystem of open-source, community-driven tools designed to revolutionize scientific communication.
The tutorial will cover installation and live code-writing of a short scientific article as a starting example.
- id: datacol
name: 'Data Collection: Best Practices'
organizer:
- Sarah Hollywood
image: /img/datacol.png
description: This workshop centers around the best practices in the approach and conduct of data collection. This will include ethical considerations, how to effectively recruit participants and reduce attrition, improve your communication skills, and how these practices can make you a better researcher.
projects:
2021:
- title: BIDS - Creating the events.tsv file
organizers:
- name: Suzanne Witt
github: switt4
description: |
While most of the steps to create a valid BIDS (Brain Imaging Data
Structure) directory can be automated, the events.tsv file containing
all of the experimental task information still needs to be largely dealt
with manually. Making this process a bit more difficult is that for
tasks that cannot be easily parameterized using just reaction time and
correct responses, the current documentation is limited. For this
project, I would like to gather anyone who is at the point of needing to
make an events.tsv file for their own study and work on creating more
user-friendly documentation for completing the events.tsv file with more
real-world examples. And, if we decide it is needed, we will develop the
framework for an app that will help researchers take their existing
experimental output files and create a valid events.tsv file.
- title: Automated skull-stripping of fetal MRI data
organizers:
- name: Emily Nichols
description: |
My project has to do with skull-stripping fetal fMRI data - traditional
tools don't work on fetal fMRI because of the surrounding tissue and the
lack of a clear skull, which means 1000s of hours are spent manually
segmenting the brain. There are some tools available, but they don't
work well and still result in many hours spent fixing the automatic
segmentations. I'm hoping to improve upon available tools so that
automated segmentation works.
- title: |
Building a Jupyter Book tutorial for intracellular electrophysiology
analysis in python
organizers:
- name: Sam Mestern
github: sammestern
description: |
Intracellular electrophysiology experiments - specifically patch-clamp
and its associated methodology - produce immense datasets consisting of
highly structured time-series data. While many sub-fields of
neuroscience have adopted standardized beginner-friendly computational
analyses/tools, the intracellular patch-clamp field still lacks
standardized & beginner-friendly analyses. Many intracellular papers
rely on manual analysis with proprietary tools/code. For this brainhack
project, we will build a [jupyter
book](https://jupyterbook.org/intro.html) outlining some basic analysis
that can be done on patch-clamp data, with a focus on building pipelines
for batch analysis of patch-clamp data. The tutorials will utilize
open-source tools from various sources, including the Allen institute!
An early form of the notebooks can be found
[here](https://www.smestern.com/PCCA_demo/intro.html).
- title: Quality Control Visualization App
organizers:
- name: Peter Van Dyken
github: pvandyken
description: |
Many automated pipelines include some sort of HTML based Quality Control
output, allowing you to quickly scan through visualizations of the
results. These are all in-house, custom solutions; there's no easy way
to generate nice QC interfaces without manually creating html files. The
goal of this project is to design a ReactJS based webapp that reads a
JSON file containing a structured account of all the data to be
displayed. For instance, the json would have paths to all the images to
be displayed, organized by view, subject, contrast, task, or any other
arbitrary dimension. The web app would automatically organize all of the
data into a clean, modern web app interface, allowing the user to view
the data along any dimension. In other words, you could easily view all
the different contrasts from a single subject, with a given view, or
view all the subjects across a given condition, or any other arbitrary
"slice" of the data.
An app of this nature will be fairly complicated, so the goal during
Brainhack would be to build a very basic prototype. Knowledge of
Typescript (a typed superset of Javascript) and ReactJS will be helpful,
but anyone interested in learning web app development is welcome.
Knowledge or interest in graphic design (especially user interfaces)
would also be useful.
2022:
- title: Safer Multimodal Teleoperation of (Simulated) Robots
organizers:
- name: Pranshu Malik
github: 'pranshumalik14'
description: |
Being confused, freezing, or panicking while trying hard to stop,
re-direct, or stabilize a drone (or any such robot/toy) in sudden
counter-intuitive poses or environmental conditions is likely a
relatable experience for all of us. The idea here is about enhancing the
expression of our intent while controlling a robot remotely — either in
real life or on a computer screen (simulation) — while not replacing the
primary instrument of control (modality) but instead by also integrating
our brain states (thought) in the control loop as measured, for example,
through EEG. Specifically, for the scope of the hackathon, this could
mean developing a brain-machine interface for automatically assisting
the operator in emergency cases with “smart” control command reflexes or
“takeovers”. Such an approach can be beneficial in high-risk cases such
as remote handling of materials in nuclear facilities or it can also aid
the supervision of autonomous control, say in the context of
self-driving cars, to ultimately increase safety.
For now, we could pick a particular type of simulated robot (industrial
arms, RC cars, drones, etc.) and focus on designing and implementing a
paradigm for characterizing intended motion and surprise during
undesired motion in both autonomous (with no user control but robot's
self- and environmental influences) and semi-autonomous cases (including
user's control commands), i.e., we can aim to measure intent and
surprise given the user's control commands, the brain states, and robot
states during algorithmically curated cases of robot motion. This will
help us detect such situations and also infer desired reactions to
accordingly adjust control commands to achieve desired reactions during
emergencies and, more generally, to augment real-time active control to
match the desired motion. We can strive to keep the approach suitable
for generalizing well enough to robots of other types and/or
morphologies and to more unusual environments.
- title: 'Brain3DVis: AR/MR MRI Visualizer'
organizers:
- name: Liam Bilbie
github: LiamBilbie
description: |
An application with a web-based front end where users can submit brain
MRI volumes. The data is then processed on the application backend, a 3D
model is generated, and the result is made accessible in an AR/VR environment. We
are using the MERN stack for the web component and C#/Unity for the
AR/VR application. We would love to work with people who have experience
with MRI volumetric segmentation or are interested in VR/AR data
visualization. :slightly_smiling_face:
- title: 'Functional Atlas Explorer'
organizers:
- name: Caroline Nettekoven
github: carobellum
- name: Ladan Shahshahani
github: lshahsha
description: |
"What does the inferior frontal gyrus do?" Googling this question
returns 93,400,000 results. A skim read of paper abstracts will give you
some ideas about its functions. But wouldn't it be useful to have an
interactive map where you can explore these functions in the brain?
Imagine clicking on a brain region and getting a word cloud of functions
that this region is highly involved in. That's what we will build!
We will create an interactive tool, useful for anyone wanting to explore
their own functional data or openly available functional datasets.
Together with our functional fusion toolbox (in-house & soon to be
released), it lets you integrate information from many different openly
available datasets (Human Connectome Project, etc.).
During the project you will be able to use our existing toolbox to
explore these large datasets - as well as your own data! In addition,
this interactive map will provide a side by side view of your desired
brain regions and their connections with other brain regions. Click on a
region and you get a map of its functional connectivity to all other
regions in the brain.
The tool will help synthesize findings across studies (think Neurosynth,
but directly based on fMRI data) and aid interpretation of results as
well as planning of future experiments.
- title: 'DWI Fiducials'
organizers:
- name: Arun Thurairajah
description: |
Diffusion-Weighted Imaging (DWI) is a variant of the standard MRI
sequence examining the diffusion rate of water molecules. DWI or
Diffusion MRI has allowed for the improved study of white matter
pathways across both diseased and healthy patients.
The goal of the diffusion Fiducials or dFIDs project is to identify a
set of anatomical landmarks on a variety of Diffusion-Weighted MRI
images that are both salient and have functional significance. This
will be an extension of the previous Anatomical Fiducials (AFIDs)
project that has localized a set of 32 clinically-relevant landmarks in
humans, macaques, and PD patients in multiple imaging acquisitions (see
https://doi.org/10.1002/hbm.24693).
We are looking for students, researchers, and clinicians to determine
potential fiducials across various acquisition types and models in DWI.
Those with experience in acquiring diffusion images or who have an
interest in neuroimaging or anatomy are welcome to join!
2023:
- title: DiffSimViz
description: |
DiffSimViz (Diffusion Simulator Visualizer) will be a python toolbox
with the aim of providing intuitive visualizations of particle diffusion
in different media/microstructure. This will ideally include an interface
to choose particular media to visualize diffusion within as well as to
change parameters such as diffusion time and membrane permeability. The
use of such a visualizer will be didactic or for researchers to better
understand what their diffusion MRI signal might look like in particular
microstructural environments.
url: https://github.com/Bradley-Karat/DiffSimViz
organizers:
- name: Bradley Karat
github: Bradley-Karat
- title: FunctionalSight
description: |
What happens in the brain stem when we move our eyes? And how can fMRI
show us regions that control eye movements in different directions?
FunctionalSight will allow researchers to (1) use eye tracking data to
get components of eye movements and (2) regress these eye movement
components into high-resolution fMRI data to understand how the brain
stem controls the eyes. We will dive into high-resolution fMRI data
acquired at Western and combine it with in-scanner eye tracking data to
build a model that can show us the neural signatures of eye movements.
Researchers will be able to use the model for their own eye tracking and
fMRI data to better understand the neural basis of visual perception,
attention, or reading. A guaranteed eye-opening experience!
organizers:
- name: David Mekhaiel
github: DavidMekhaiel
- title: 'BrainLex: Engaging Neuroscience Glossary'
description: |
Navigating through neuroscience can be challenging as there are lots of
complex terms to reference. BrainLex aims to solve this problem with an
interactive online glossary suitable for all learners. Users can look up
neuroscience terms and receive simple definitions, visuals, and links to
related resources.
organizers:
- name: Parsa Abadi
github: parsaabadi
- title: Ciftipy
description: |
CiftiPy is a Python library designed to streamline the handling of complex CIFTI files used in neuroimaging research. CIFTI files represent gray matter as cortical surface vertices and subcortical volumes, which are often difficult and time-consuming to access and manipulate. CiftiPy simplifies this process by providing an intuitive numpy-like interface for loading, manipulating, or creating CIFTI files. Built on top of the established Nibabel library, CiftiPy offers researchers a powerful yet user-friendly solution for working with CIFTI files, enhancing the accessibility of advanced neuroimaging tools and workflows.
url: https://github.com/pvandyken/ciftipy
organizers:
- name: Mohamed Yousif
github: myousif9
2025:
- title: HippUnfold_v2.0
description: |
HippUnfold is a world-leading software for analysis of the hippocampus. The hippocampus is a "canary in a coal mine" brain region - a sensitive and early indicator of neurological conditions like dementia, epilepsy, and many others. It is also critical in memory and high-order cognition across all mammals. Some studies are already revealing HippUnfold's potential for translation to the clinic!
We are currently undertaking a major overhaul of HippUnfold (version 2.0) so that it runs more lightly and smoothly, using best practices including BIDS, Snakemake, and git-based continuous integration. We'd love to get input and/or help with outstanding issues on the new release, and would be especially keen to get feedback on its ease of use for new users and its documentation.
Come to the HippUnfold Tutorial at 1PM Wednesday for an introduction!
url: https://github.com/khanlab/hippunfold
organizers:
- name: Jordan DeKraker
github: jordandekraker
- name: Ali Khan
github: akhanf
- title: Label Augmented Modality Agnostic Registration (LAMAR)
description: |
Registration can be tricky when there is a low signal-to-noise ratio, signal dropout, or geometric distortions. This technique combines deep-learning-based modality-agnostic segmentation with conventional analytic registration methods to generate precise warpfields even in low-quality data. This type of registration can also be faster at higher resolutions, due to the simplicity of the label maps. The project will involve writing and optimizing Python code, writing tests, some documentation, and assessing the quality of registration. This project is a good fit for anyone eager to learn more about Python programming, registration, and convolutional neural networks, but even if you're brand new to everything, we can catch you up to speed!
url: https://github.com/MICA-MNI/LaMAR
organizers:
- name: Ian Goodall-Halliwell
github: Ian-Goodall-Halliwell
- title: Diffusion Fiducials
description: |
Neurosurgical procedures, such as deep brain stimulation or lesion resection, depend on the precise alignment of multiple types of scans to accurately target specific structures. Currently, clinical quality control relies on visual inspection, which is insensitive to subtle yet significant misalignments. To address this, our lab developed the anatomical fiducials (AFIDs) framework, comprising 32 easily identifiable anatomical landmarks to measure alignment accuracy at the millimetre scale. However, this protocol was only validated on T1-weighted (T1w) MRI scans showing anatomical information. In this project, we aim to extend and validate the AFIDs framework for diffusion MRI-derived fractional anisotropy (FA) maps, which show white matter tracts in the brain. We are looking for anyone interested in learning more about neuroanatomy to join our project and apply the anatomical fiducial protocol to annotate FA maps!
url: https://drive.google.com/drive/folders/1bfl6yQLSA4VBvbSgVSE7qjuQHfg8CVCy?usp=drive_link
organizers:
- name: Joy Zhao
github: joy-zhao