napari plugin to interactively train and test a StarDist model
This napari plugin was generated with Cookiecutter using @napari's cookiecutter-napari-plugin template.
This plugin provides tools for annotating spots in a 3D two-channel image (hdf5 input file), submitting tiles for StarDist model generation or model re-training, and refining initial annotations based on predictions (a human-in-the-loop approach).
The objects of interest in the image are sphere-like spots with a diameter of just a few pixels and are thus well suited for StarDist instance segmentation. The image dimensions are typically 1024x1024 pixels in xy and ≥ 64 sections in z.
With Python and pip installed (e.g., via miniconda or miniforge), it is recommended to create a new environment and install napari-spofi using pip:
pip install napari napari-spofi
Start napari and select "spot finder (napari-spofi)" from the "plugin" menu.
Model generation should start from processed images (background removed, denoised, ...). Go to the 'annotation' section of the widget and create a new directory for annotations. Add an image folder containing at least one h5 file with foreground and background signal datasets (e.g., 'ch1_processed' and 'ch2_processed'; the default names can be changed in resources/spofi_defaults.json). Select an image file, choose the foreground and background channels, and load the image file.
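As a sketch of what the two-channel h5 input might look like, the snippet below lists and loads channels with h5py. The helper names are assumptions for illustration; only the dataset names 'ch1_processed' and 'ch2_processed' come from the plugin's defaults.

```python
# Hypothetical helpers for inspecting a two-channel h5 input file.
import h5py


def list_channels(path):
    """Return the dataset names available in an h5 image file."""
    with h5py.File(path, "r") as f:
        return sorted(f.keys())


def load_channel(path, name):
    """Load one channel as a (z, y, x) numpy array."""
    with h5py.File(path, "r") as f:
        return f[name][...]
```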
Inspect the image for distinct regions. To help locate relevant tile positions, make the 'checkerboard' layer visible. While the 'tiles' layer is active, double-click a tile to add it to the list of tiles to process. This list will be used to generate a set of images and masks for training purposes.
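Conceptually, selecting a tile by double-click amounts to mapping a clicked (y, x) position onto a regular grid. The sketch below illustrates this; the 256-pixel tile size and the function names are assumptions, not the plugin's actual values.

```python
# Minimal sketch: map a double-click position to a tile on a checkerboard
# grid. The tile size of 256 px is an illustrative assumption.
def tile_at(y, x, tile_size=256):
    """Return the (row, col) tile containing the clicked (y, x) position."""
    return int(y) // tile_size, int(x) // tile_size


def tile_bounds(row, col, tile_size=256):
    """Return the (y0, y1, x0, x1) pixel bounds of a tile."""
    return (row * tile_size, (row + 1) * tile_size,
            col * tile_size, (col + 1) * tile_size)
```

For a 1024x1024 image this yields a 4x4 grid of tiles, each of which can be submitted for training.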
Switch to napari's 2D view. Navigate to the centre section of each spot in the active tile and annotate it by adding one point per spot to the 'true' points layer. Annotate tiles in one or multiple images, then click the 'extract spots' button to prepare training data. The algorithm first searches for the maximum intensity position in the manually assigned spot region. A spot mask is then generated by one of two procedures: either pixels whose relative intensity exceeds the intensity threshold define the spot area, or a watershed segmentation is applied to the difference-of-Gaussians filtered foreground image. In both cases, the procedures focus on a sub-region around the annotated spot positions.
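A numpy-only sketch of the threshold-based variant is shown below, assuming illustrative parameter names and values (the watershed variant is omitted). It refines an annotated point to the local intensity maximum in a small sub-region and keeps pixels above a relative intensity threshold.

```python
import numpy as np


def extract_spot_mask(image, point, radius=3, rel_threshold=0.5):
    """Threshold-based spot mask around one annotated point (sketch).

    Refines the annotated position to the local intensity maximum in a
    small sub-region, then keeps sub-region pixels whose intensity is at
    least rel_threshold times the peak intensity. Parameter names and
    defaults are illustrative, not the plugin's actual settings.
    """
    lo = [max(int(p) - radius, 0) for p in point]
    hi = [min(int(p) + radius + 1, s) for p, s in zip(point, image.shape)]
    region = tuple(slice(a, b) for a, b in zip(lo, hi))
    sub = image[region]
    # Refine the manual annotation to the brightest pixel nearby.
    peak = np.unravel_index(np.argmax(sub), sub.shape)
    refined = tuple(a + q for a, q in zip(lo, peak))
    # Keep pixels above the relative intensity threshold.
    mask = np.zeros_like(image, dtype=bool)
    mask[region] = sub >= rel_threshold * sub.max()
    return refined, mask
```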
There is a "suggest spots" option that uses a Laplacian of Gaussian method to locate spots. This may be used as a first step to aid manual annotation.
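The idea behind such a suggestion step can be sketched with scipy: a Laplacian of Gaussian filter responds strongly (negatively) at bright blobs, so flipping the sign and taking local maxima above a threshold yields candidate positions. The function name and parameter values below are assumptions, not the plugin's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter


def suggest_spots(image, sigma=2.0, threshold=0.05):
    """Suggest candidate spot positions via a Laplacian of Gaussian filter.

    Bright blobs give strong negative LoG responses, so the sign is
    flipped and local maxima above a threshold are returned as an
    (n, ndim) array of coordinates. Values here are illustrative.
    """
    response = -gaussian_laplace(image.astype(float), sigma)
    peaks = (response == maximum_filter(response, size=3)) & (response > threshold)
    return np.argwhere(peaks)
```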
Go to the 'training' section of the widget and adjust the "number of epochs". For a first check, 100 epochs is a good start. The plugin uses a simplified setup for StarDist configurations (see the StarDist documentation for a full discussion).
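For orientation, a simplified StarDist 3D setup might look like the config fragment below. This is a hedged sketch, not the plugin's actual configuration: it requires the stardist package, and all parameter values and names ("spofi_model", the patch size) are illustrative assumptions.

```python
# Hedged sketch of a simplified StarDist 3D configuration (requires the
# stardist package; values are illustrative, not the plugin's settings).
from stardist.models import Config3D, StarDist3D

conf = Config3D(
    n_channel_in=1,                  # single foreground channel
    train_patch_size=(32, 64, 64),   # must fit within the extracted tiles
    train_epochs=100,                # a good start for a first check
)
model = StarDist3D(conf, name="spofi_model", basedir="models")
# model.train(X, Y, validation_data=(X_val, Y_val))
```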
Start training and watch the 'loss' and 'val_loss' values, which should decrease steadily while their ratio remains roughly at 1 as training progresses.
The retrain option allows the selection of an existing model for retraining.
Go to the 'prediction' section of the widget to start spot prediction for the currently loaded image. Select the appropriate model from the given annotation directory. The 'threshold' value is calculated from the validation data and can be adjusted. Start a new prediction and, when the process has finished, load the predicted spots. (It is also possible to load an existing prediction.)
Predicted spots will be loaded into a new layer 'predicted'. The 'predicted' layer is not editable and gives an overview of the spots found. Check your annotation in the active tiles ('true' layer). Adjust the positions of the spots or remove any incorrect spots from the 'true' layer. Extract the spots and train a new model or retrain the model.
Contributions are very welcome.
Distributed under the terms of the BSD-3 license, "napari-spofi" is free and open source software.
If you encounter any problems, please file an issue along with a detailed description.