SAM Segmentator is a GUI application created for educational image segmentation purposes. Segmentator outputs:
- Original image
- Mask
- Txt annotation (for models like YOLO)
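The txt annotation follows the YOLO segmentation format: one line per object, a class id followed by normalized polygon coordinates. A minimal sketch of how such a line could be produced from a binary mask (the function, file names, and class id are illustrative, not the application's exact export code):

```python
import cv2
import numpy as np

def mask_to_yolo_txt(mask: np.ndarray, class_id: int, txt_path: str) -> None:
    """Write the largest contour of a binary mask as one normalized polygon line."""
    h, w = mask.shape[:2]
    contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return
    contour = max(contours, key=cv2.contourArea).reshape(-1, 2)
    coords = " ".join(f"{x / w:.6f} {y / h:.6f}" for x, y in contour)
    with open(txt_path, "w") as f:
        f.write(f"{class_id} {coords}\n")
```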
- Set up CUDA and cuDNN:
https://developer.nvidia.com/cuda-toolkit
https://developer.nvidia.com/cudnn
- Clone the repo:
git clone https://github.com/lukasiktar/SAM_segmentator.git
- Download the SAM model from:
https://github.com/facebookresearch/segment-anything.git
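Once segment-anything is installed (a later step), the downloaded checkpoint can be loaded through its model registry. A minimal sketch, assuming the ViT-H checkpoint file sam_vit_h_4b8939.pth (use whichever checkpoint you downloaded):

```python
from segment_anything import sam_model_registry, SamPredictor

# Assumed checkpoint file and model type; pick the registry key matching your download.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
sam.to(device="cuda")          # requires the CUDA-enabled torch build below
predictor = SamPredictor(sam)
```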
- Install the OpenCV library from:
https://github.com/opencv/opencv.git
- Build torch and torchvision (it is recommended to build from source with CUDA support for your CUDA version):
https://pytorch.org/get-started/previous-versions/
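After installing, a quick check that torch was built with CUDA support:

```python
import torch

# Both lines should report a CUDA version and True on a correctly built install.
print(torch.__version__, torch.version.cuda)
print(torch.cuda.is_available())
```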
- Install Tkinter
sudo apt install python3-tk
- Install segment-anything and albumentations
pip install segment-anything albumentations
- Clone the SAM-Med2D repository and store it in the working directory:
git clone https://github.com/OpenGVLab/SAM-Med2D.git
To start the application, build the Python executable and run it.
Choose either SAM or Custom model segmentation:
SAM:
- Choose the appropriate image
- Draw a bounding box around the specified object or click on it
- Perform segmentation using SAM (see the prompt sketch after this list)
- Edit the segmentation (if necessary)
- Save the segmentation results
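These steps map onto the segment-anything predictor API. A minimal sketch of box- and point-prompt segmentation, assuming the ViT-H checkpoint and an illustrative image path and coordinates (not the application's actual code):

```python
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Assumed checkpoint and image path, for illustration only.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
sam.to(device="cuda")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("image.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Box prompt: [x_min, y_min, x_max, y_max] drawn around the object.
masks, scores, _ = predictor.predict(box=np.array([100, 100, 400, 350]),
                                     multimask_output=False)

# Point prompt: a single foreground click (label 1 = foreground, 0 = background).
masks, scores, _ = predictor.predict(point_coords=np.array([[250, 225]]),
                                     point_labels=np.array([1]),
                                     multimask_output=False)
```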
Custom model:
- Load the dataset
- Check the given annotation and modify it if necessary (see the annotation-loading sketch after this list).
- If the annotation is not present, perform manual annotation.
- Save the segmentation results
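A minimal sketch of how an existing YOLO-style polygon annotation could be read back into a binary mask for checking and editing (the helper below is hypothetical, not the application's code):

```python
import cv2
import numpy as np

def yolo_txt_to_mask(txt_path: str, image_shape: tuple) -> np.ndarray:
    """Rasterize all polygon lines of a YOLO segmentation txt file into one mask."""
    h, w = image_shape[:2]
    mask = np.zeros((h, w), dtype=np.uint8)
    with open(txt_path) as f:
        for line in f:
            values = line.split()
            if len(values) < 7:          # class id + at least 3 points
                continue
            coords = np.array(values[1:], dtype=float).reshape(-1, 2)
            polygon = np.round(coords * [w, h]).astype(np.int32)
            cv2.fillPoly(mask, [polygon], 255)
    return mask
```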
SAM:
- Segment using Point prompt
- Segment using Box prompt
- Edit the segmentation
- Accept or Reject segmentation
Contour Editor:
- Automatic segmentation
- Manual segmentation