Can segmentation crop images rather than output the segmented regions? #58
-
Hi, Thanks for the positive comments and question!
If you are having trouble with over segmentation i.e segmentation going outside the border of the root, then I think the best way to fix this is with annotation. When interactively training the model keep your foreground (red) annotation within the root. What I mean by this is leave some space between your foreground annotation and the edge of the root. Even just a couple of pixels should be enough. This should help prevent annotation errors where foreground has gone over the background i.e outside of the root (encouraging the model to over segment roots). Then with the background (green) annotation you can push the boundary inwards to make sure the root is not over-segmented. Unfortunately you'd probably want to attempt this approach from the beginning of a project, as existing projects would be significantly influenced by the annotations you have done so far. If that's not clear I could add some images to help explain.
I'm not 100% sure I understand what you mean by 'crop out the areas of interest'. Could you provide more details of how you would want this to work?

I have an idea for one feature that might help: instead of providing the binary segmentation from RootPainter, i.e. each pixel being labelled as foreground or background, RootPainter could provide you with the prediction 'probabilities', where each pixel is a value between 0 and 1, e.g. 0.1 or 0.8. Then you could experiment with thresholding the images yourself (converting to binary) based on your own desired probability threshold. I'm open to adding an option to output the 'probabilities' so you could perform this thresholding yourself, instead of using 0.5, which is the current default.

But my suggestion for now would be to experiment with being more conservative with your foreground annotation during the interactive training process, as I suspect this could address the problem without modifications to the software. Fixing the problem by altering the annotation would also provide you with feedback during annotation, so you can see if it is fixed.

Kind regards,
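To illustrate the probability-output idea: RootPainter does not currently export these per-pixel probabilities (the 0.5 threshold is applied internally), so this is only a hypothetical sketch of what the user-side thresholding step could look like, assuming the probabilities were provided as a NumPy array with values in [0, 1]:

```python
import numpy as np

def threshold_probabilities(prob_map, threshold=0.5):
    """Convert a [0, 1] probability map to a binary (0/255) segmentation.

    A higher threshold keeps only pixels the model is more confident
    about, shrinking the segmented root and countering over-segmentation.
    """
    return (prob_map >= threshold).astype(np.uint8) * 255

# Toy 2x2 probability map standing in for a hypothetical model output.
probs = np.array([[0.1, 0.8],
                  [0.4, 0.95]])

binary = threshold_probabilities(probs, threshold=0.7)
# Only the pixels with probability >= 0.7 become foreground (255).
```

The resulting array could then be saved with any image library and sent on to ImageJ/RhizoVision as usual.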
-
BTW, if you do just want to crop to areas of interest, I'd suggest doing this as a pre-processing step. If your dataset is small enough, then doing this manually works fine (I have used Preview in the past). For larger datasets a more automatic approach would be required, i.e. a Python/ImageJ script.

If you want to use RootPainter to crop to areas of interest before you generate a more fine-detailed segmentation, then I can also provide advice on this (perhaps this is what you meant anyway?). There are a lot of different ways to do this, but what I advise in this situation is to train two models. First, train a localisation model on a reduced-resolution version of your images, say at 1/4 or 1/8. This model can be trained to predict the general area where the root is (don't worry about exact boundaries) and to exclude the larger background region. It's actually pretty quick to train this type of localisation model. Then you'd use the structure predicted by this model to generate crop coordinates, which would be used to produce a dataset that only includes your regions of interest. A high-resolution model could then be trained on just these areas of interest. Does that make sense? The above approach can be done now, but may require some scripting outside of RootPainter. I'd be open to better supporting it within the client software itself.

Another (perhaps simpler) approach would be to train one model and then use connected component analysis on the output, so you automatically exclude any structures that are not connected to the largest connected root structure in each image. Perhaps this could also be performed in RVE/ImageJ already without too much trouble?

Kind regards,
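The crop-coordinate step of the two-model approach above could be scripted roughly like this. This is a minimal sketch, not part of RootPainter itself: it assumes the low-resolution localisation mask has been loaded as a NumPy array (nonzero = predicted root region), and the scale factor and padding values are illustrative:

```python
import numpy as np

def crop_box_from_lowres_mask(mask, scale=4, pad=16):
    """Bounding box, in full-resolution coordinates, of the foreground
    in a low-resolution localisation mask.

    mask:  2D array at 1/scale resolution; nonzero = predicted root.
    scale: downscaling factor used when training the localisation model.
    pad:   extra full-resolution pixels of context around the box.
    Returns (y0, y1, x0, x1) suitable for slicing, or None if empty.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # no root detected in this image
    y0 = max(ys.min() * scale - pad, 0)
    y1 = (ys.max() + 1) * scale + pad
    x0 = max(xs.min() * scale - pad, 0)
    x1 = (xs.max() + 1) * scale + pad
    return (y0, y1, x0, x1)

# Toy low-res mask: root predicted in rows 2-4, columns 3-6.
lowres = np.zeros((10, 10), dtype=np.uint8)
lowres[2:5, 3:7] = 1

box = crop_box_from_lowres_mask(lowres, scale=4, pad=0)
# box -> (8, 20, 12, 28); crop the full-res image with
# full_res_image[box[0]:box[1], box[2]:box[3]]
```

The cropped full-resolution images would then form the dataset for the second, fine-detail model.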
-
Thought I'd share my experience with this: I was having some difficulty on a dataset of cotton/Palmer amaranth plants with multiple plants per image, so I tried the two-model approach based on @Abe404's advice. We trained one model to pick up the general region of each plant, and then a separate, higher-resolution model on the result of that first stage. The process I went through is roughly described below. Happy to share more if it's helpful!
-
Hi, I realized I never updated this discussion thread about what I ended up doing to solve my problem. I used some simple tools from OpenCV to create a mask from the segmented RootPainter output and crop out my areas of interest. Then some simple thresholding in ImageJ and/or RhizoVision seemed to (mostly) do the trick in refining the root architecture I wanted to isolate.

Sriram
-
Hello, I'm using RootPainter to segment rhizotron/rhizobox images and it's working great (I also really appreciate the incredibly clear documentation and instructions). However, the segmentation tends to overestimate the root area, so if I were to send it to RhizoVision, the diameter, network area, etc. would be greater than the true values. This is fine for treatment differences, but I'm looking to calibrate 3D models using this data.
Is there a feature for the segmentation to crop out the areas of interest, so that I can threshold them manually in ImageJ? Thanks in advance.