Keywords: Unsupervised Domain Adaptation, Pediatric Tumor Segmentation, Gradient Reversal Layer
This repository contains the source code accompanying the paper:
Fu, Jingru, et al. "Unsupervised Domain Adaptation for Pediatric Brain Tumor Segmentation", ADSMI @ MICCAI 2024.
We are excited to announce that the pre-trained weights are now available!
These weights enable researchers and practitioners to reproduce our results or fine-tune the models for their use cases.
You can download the weights from:
We utilized two datasets provided by the BraTS challenge organizers: the BraTS 2021 adult glioma dataset and the BraTS-PEDs 2023 dataset. If you need to convert datasets to nnU-Net style, refer to the example script provided here.
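As a hedged illustration of the nnU-Net naming scheme (the repository's example script is the authoritative reference), images go into `imagesTr` as `{CASE_ID}_{CHANNEL:04d}.nii.gz` and labels into `labelsTr` as `{CASE_ID}.nii.gz`. A minimal filename-mapping sketch, assuming BraTS-style names such as `BraTS-PED-00001-000-t1c.nii.gz`; the modality-to-channel ordering below is an assumption:

```python
# Hypothetical modality-to-channel assignment; adjust to your dataset.json.
MODALITIES = {"t1c": "0000", "t1n": "0001", "t2f": "0002", "t2w": "0003"}

def to_nnunet_name(filename: str) -> str:
    """Map a BraTS-style filename to the nnU-Net naming scheme.

    e.g. 'BraTS-PED-00001-000-t1c.nii.gz' -> 'BraTS-PED-00001-000_0000.nii.gz'
    Segmentations ('-seg') keep the bare case id, as nnU-Net expects in labelsTr.
    """
    stem = filename[: -len(".nii.gz")]
    case_id, modality = stem.rsplit("-", 1)
    if modality == "seg":
        return f"{case_id}.nii.gz"
    return f"{case_id}_{MODALITIES[modality]}.nii.gz"
```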
The main source code developed for this work resides in the `nnunetv2/training/nnUNetTrainer/customized` directory. The new nnUNetTrainer classes integrate seamlessly into the nnUNet framework.
We presented four strategies (associated with models 5-8 in the figure above) for transfer learning or fine-tuning within nnUNet:

- Train on segmentation-related layers only: `nnUNetTrainer_TL` in `nnUNetTrainer_TL`.
- Fine-tune with a smaller learning rate over 300 epochs: `nnUNetTrainer_TL_FT_1en5_300epochs` in `nnUNetTrainer_TL`.
- Fine-tune with a smaller learning rate over 300 epochs on the encoder only: `nnUNetTrainer_TL_FTen_1en5_300epochs` in `nnUNetTrainer_TL`.
- Fine-tune with a smaller learning rate over 300 epochs on the decoder only: `nnUNetTrainer_TL_FTde_1en5_300epochs` in `nnUNetTrainer_TL`.
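Conceptually, the encoder-only and decoder-only variants freeze the complementary part of the network and train the rest. A minimal PyTorch sketch of the idea — not the repository's trainer code; `TinyUNet` and its `encoder`/`decoder`/`seg_head` attribute names are illustrative stand-ins:

```python
import torch.nn as nn

class TinyUNet(nn.Module):
    """Stand-in for the nnU-Net backbone, with explicit encoder/decoder parts."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(nn.Conv3d(8, 8, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv3d(8, 2, 1)  # segmentation-related layer

    def forward(self, x):
        return self.seg_head(self.decoder(self.encoder(x)))

def freeze(module: nn.Module):
    """Exclude a submodule's parameters from gradient updates."""
    for p in module.parameters():
        p.requires_grad = False

model = TinyUNet()
# Decoder-only fine-tuning: freeze the encoder, keep decoder + head trainable.
freeze(model.encoder)
trainable = [name for name, p in model.named_parameters() if p.requires_grad]
```

In practice one would also pass only the trainable parameters to the optimizer (and use the smaller learning rate, e.g. 1e-5, as the trainer names suggest).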
To use DA-nnUNet, use the following example nnUNetTrainer:

- DA-nnUNet training with 500 epochs and 4 convolutional layers in the domain classifier, without deep supervision: `nnUNetTrainerDA_500ep_noDS_4Convs` in `nnUNetTrainer_DANN`.
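DA-nnUNet follows the DANN recipe: a domain classifier is attached to the backbone through a gradient reversal layer (GRL), which is the identity on the forward pass and negates (and scales) the gradient on the backward pass, so the features are trained to confuse the domain classifier. A minimal PyTorch sketch of a GRL, independent of the repository's implementation:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity forward, -alpha * grad backward (DANN)."""
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse and scale the gradient flowing into the feature extractor;
        # alpha itself receives no gradient.
        return -ctx.alpha * grad_output, None

def grad_reverse(x, alpha=1.0):
    return GradReverse.apply(x, alpha)

# Forward is the identity; backward flips the sign of the gradient.
x = torch.ones(3, requires_grad=True)
y = grad_reverse(x, alpha=0.5).sum()
y.backward()
# x.grad is -0.5 for each element (reversed, scaled gradient)
```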
To adapt DA-nnUNet to your specific problem, you may need to modify the following hard-coded parts:

- Update the code here: in this section, the domain is inferred from the filenames.
- Modify `nnUNetTrainerDA`: adjust `target_domain` and `domain_mapping` in `nnUNetTrainer_DANN.py`, lines 84-88.
Additionally, to correctly use the custom I/O reader (`SimpleITKDomainIO`), you need to specify the optional parameter in `dataset.json` as follows:

```json
{
    "overwrite_image_reader_writer": "SimpleITKDomainIO"
}
```
`SimpleITKDomainIO` is needed to extract the domain information from the filename and inject it into the nnUNet properties.
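If you prefer to patch `dataset.json` programmatically, a small standard-library sketch (the demo file below is a throwaway placeholder, not a real dataset):

```python
import json
import tempfile
from pathlib import Path

def set_reader_writer(dataset_json: Path, name: str = "SimpleITKDomainIO") -> None:
    """Add or overwrite the optional reader-writer key in an nnU-Net dataset.json."""
    cfg = json.loads(dataset_json.read_text())
    cfg["overwrite_image_reader_writer"] = name
    dataset_json.write_text(json.dumps(cfg, indent=4))

# Demo on a minimal throwaway dataset.json
tmp = Path(tempfile.mkdtemp()) / "dataset.json"
tmp.write_text(json.dumps({"channel_names": {"0": "T1c"}, "file_ending": ".nii.gz"}))
set_reader_writer(tmp)
cfg = json.loads(tmp.read_text())
```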
To train your model, run the following command:
```bash
nnUNetv2_train 142 3d_fullres_bs4 0 -tr nnUNetTrainerDA_500ep_noDS_4Convs --npz
```
Explore these components to experiment with DA-nnUNet:

- Domain-balanced dataloader: use `nnUNetDataLoader3D_Balanced` in `data_loader_3d_balanced` to balance the inputs to the domain classifier in each batch.
- Architecture: implementations for integrating the domain classifier into the nnUNet backbone are available in `unet_da`:
  - `PlainConvUNet_DA`: domain classifier placed in the bottleneck of the UNet.
  - `PlainConvUNet_DAonDecoder`: domain classifier placed in the decoder of the UNet.
- Schedulers: use `DALRScheduler` or `GRLAlphaScheduler` in `schedulers`.
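The repository's `GRLAlphaScheduler` is not reproduced here, but schedulers of this kind commonly follow the schedule from the DANN paper, ramping the reversal strength alpha from 0 toward 1 as training progresses so the adversarial signal is weak early on. A sketch of that schedule:

```python
import math

def dann_alpha(epoch: int, max_epochs: int, gamma: float = 10.0) -> float:
    """GRL strength schedule from Ganin et al. (DANN):
    alpha = 2 / (1 + exp(-gamma * p)) - 1, where p is training progress in [0, 1].
    Starts at 0 and saturates near 1 late in training.
    """
    p = epoch / max_epochs
    return 2.0 / (1.0 + math.exp(-gamma * p)) - 1.0
```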
We used the official metrics provided by the BraTS 2023 challenge to evaluate our results.
In our paper, we reported the mean and median DSC and HD95 metrics. Lesion-wise results are also available in the `results` folder. We employed a post-processing strategy (see code here) from the BraTS 2023 PED challenge winner, which redefines the ET region using an optimal ET/WT ratio threshold of 1 (shown in the last row of the table below). The summarized lesion-wise (LW) metrics are shown below:
If you find this code useful for your research, please consider citing:
```bibtex
@article{fu2024unsupervised,
  title={Unsupervised Domain Adaptation for Pediatric Brain Tumor Segmentation},
  author={Fu, Jingru and Bendazzoli, Simone and Smedby, {\"O}rjan and Moreno, Rodrigo},
  journal={arXiv preprint arXiv:2406.16848},
  year={2024}
}
```
This repository is based on nnU-Net. We appreciate their excellent work!