use class segmentation for % vegetation/canopy estimation #10

Open
danbjoseph opened this issue Feb 5, 2024 · 3 comments · May be fixed by #55

danbjoseph commented Feb 5, 2024

The current GVI calculation doesn't seem to be a good solution (at least not yet). See #29 (comment)

Something seems off (either with the methodology or with the code) based on a spot check of the 3 highest and 3 lowest GVI scores.

Additionally, it's likely that there are better ways to analyze the images than the green view index calculation from the original Treepedia. Can we download and run a transformer model to do class/semantic segmentation and calculate the % of pixels classified as a "vegetation" type category? (Noting that we want to run locally and not depend on an API/service for the analysis.)

For example https://huggingface.co/iammartian0/RoadSense_High_Definition_Street_Segmentation can segment out Nature-vegetation as a category.
[screenshot: sample segmented street scene, 2024-03-31]
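As a rough sketch of what local inference could look like (assuming the transformers, torch, and Pillow packages and the facebook/mask2former-swin-large-cityscapes-semantic checkpoint mentioned later in this thread; the image path is hypothetical, and weights are cached locally after the first download so the scoring itself runs offline):

```python
# Rough local-inference sketch; the image path is hypothetical.
from PIL import Image
import torch
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

MODEL_ID = "facebook/mask2former-swin-large-cityscapes-semantic"
processor = AutoImageProcessor.from_pretrained(MODEL_ID)
model = Mask2FormerForUniversalSegmentation.from_pretrained(MODEL_ID)

image = Image.open("example_street_image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Per-pixel class ids at the original image resolution.
seg = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]

# Cityscapes labels include both "vegetation" and "terrain"; count their pixels.
veg_ids = torch.tensor(
    [i for i, name in model.config.id2label.items() if name in {"vegetation", "terrain"}]
)
print("vegetation share:", torch.isin(seg, veg_ids).float().mean().item())
```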

objective

The easiest option at this time would seem to be to borrow from and build on what the creators of https://github.com/Spatial-Data-Science-and-GEO-AI-Lab/StreetView-NatureVisibility have done and...

  • crop off the bottom 20% band, as our images will likely have a car roof or helmet in frame, like this:
    [example image: GS010002_0_000149]
  • Apply semantic segmentation to assign labels to different regions or objects in the image.
  • I'm not sure why they say "Divide the image into four equal-width sections." after the segmentation step. My assumption was that the warping visible in the full-scene image above might reduce the accuracy of the segmentation. Do we want to divide the image into 4 images with a 90-degree field of view each, instead of 1 image with a 360-degree field of view, before the segmentation step? (See the scoring sketch after this list.)
  • calculate the % of the 360-degree image that is vegetation/trees/etc.
  • attach the score to each point
  • save out the geo-file following the conventions of the existing steps
  • if possible, consider having an argument to save out the X highest- and lowest-scoring GVI images so the user can spot check the results, in a format like they use:
    [example image: 7536-1]
  • see the sample segmented image above. there's a more pastel green (grassy looking area) and a more neon green (appears to be mostly trees). confirm the different possible labels related to vegetation (there are at least 2) and check back to discuss what we want to include.
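
To make the list above concrete, here is a hedged sketch of a per-image scoring function built on the processor and model loaded in the earlier snippet; the crop fraction, the four-way split, and counting both the "vegetation" and "terrain" Cityscapes labels are all assumptions to confirm, not settled decisions:

```python
from PIL import Image
import torch

def vegetation_fraction(image_path, processor, model,
                        vegetation_labels=("vegetation", "terrain")):
    """Return the share of pixels labelled as vegetation-like classes.

    Sketch only: crop fraction, four-way split, and label set are assumptions.
    """
    image = Image.open(image_path).convert("RGB")
    w, h = image.size

    # 1. Crop off the bottom 20% band (car roof / helmet).
    image = image.crop((0, 0, w, int(h * 0.8)))

    # 2. Split the 360-degree panorama into four 90-degree sections before
    #    segmentation (the open question in the list above).
    sections = [
        image.crop((i * w // 4, 0, (i + 1) * w // 4, image.height)) for i in range(4)
    ]

    veg_ids = torch.tensor(
        [i for i, name in model.config.id2label.items() if name in vegetation_labels]
    )

    green = total = 0
    for section in sections:
        inputs = processor(images=section, return_tensors="pt")
        with torch.no_grad():
            outputs = model(**inputs)
        seg = processor.post_process_semantic_segmentation(
            outputs, target_sizes=[section.size[::-1]]
        )[0]
        green += torch.isin(seg, veg_ids).sum().item()
        total += seg.numel()

    # 3. % of the (cropped) panorama that is vegetation, to attach to the point.
    return green / total
```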

other notes

  • this would be an additional file alongside assign_gvi_to_points.py - it would be an alternate script to run for that step. It would not replace that file. The script should read in a geospatial data file from step 2, check each point for data about an associated image, read the image from disk and analyze it, write the details of the analysis to a new column and save out a new geospatial data file.
  • the readme instructions should build on our existing readme.
  • this process should be a simple swap for our Treepedia-based option in step 3 (Assign a Green View score to each image/feature). For the 2 options in step 2, @dragonejt (in fix: add LocalImages Image source to assign local images to points using EXIF data #42) has gone the route of including an argument for the method: when running python -m src.download_images the user can include a MAPILLARY or LOCAL argument. Similarly, for the step 3 green view calculation, we may want to let users include an argument defining their preferred analysis method, something like TREEPEDIA or MASK2FORMER (see the sketch after this list).
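
A hypothetical shape for that argument-driven script (the script interface, the "image_path" column, and the "gvi" output column are assumptions for illustration, not the repo's actual names):

```python
# Hypothetical interface sketch only; column and argument names are assumptions.
import argparse
import geopandas as gpd


def score_image(image_path: str, method: str) -> float:
    """Placeholder: dispatch to the existing Treepedia GVI code or to a
    Mask2Former vegetation_fraction() like the one sketched above."""
    raise NotImplementedError


def main():
    parser = argparse.ArgumentParser(
        description="Step 3: assign a Green View score to each point."
    )
    parser.add_argument("method", choices=["TREEPEDIA", "MASK2FORMER"],
                        help="analysis method, mirroring the MAPILLARY/LOCAL pattern from step 2")
    parser.add_argument("points_file", help="geospatial data file produced by step 2")
    parser.add_argument("output_file", help="path for the new, scored geospatial data file")
    args = parser.parse_args()

    points = gpd.read_file(args.points_file)
    # "image_path" is an assumed column holding each point's associated image.
    points["gvi"] = [
        score_image(path, args.method) if path else None
        for path in points["image_path"]
    ]
    points.to_file(args.output_file)


if __name__ == "__main__":
    main()
```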
@danbjoseph converted this from a draft issue Feb 5, 2024
@danbjoseph changed the title from "explore other analysis options" to "use class segmentation for % vegetation/canopy estimation" Apr 4, 2024
@dragonejt
Contributor

I'm going to check out the iammartian0/RoadSense_High_Definition_Street_Segmentation and facebook/mask2former-swin-large-cityscapes-semantic models that the StreetView-NatureVisibility project uses. Instead of using HuggingFace transformers to download the model and run it locally, I might instead look at the HuggingFace Serverless Inference API.
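
A rough sketch of what the Serverless Inference API route could look like (the endpoint and response shape should be checked against the current Hugging Face docs; the HF_TOKEN environment variable and image path are assumptions):

```python
# Rough sketch only; verify endpoint and response format against current HF docs.
import os
import requests

MODEL_ID = "facebook/mask2former-swin-large-cityscapes-semantic"
API_URL = f"https://api-inference.huggingface.co/models/{MODEL_ID}"
headers = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}

with open("example_street_image.jpg", "rb") as f:  # hypothetical path
    response = requests.post(API_URL, headers=headers, data=f.read())
response.raise_for_status()

# Image-segmentation endpoints return a list of {"label", "score", "mask"} entries,
# where "mask" is a base64-encoded PNG.
for segment in response.json():
    print(segment["label"], segment.get("score"))
```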


danbjoseph commented Apr 25, 2024

Thanks for exploring segmentation for analysis. The initial target users are Red Cross Red Crescent National Societies, and they may have limited connectivity (e.g. expensive, intermittent, and/or slow). Eventually, I want an option for a fully local workflow (e.g. parsing a local folder of images instead of using Mapillary). Also, the image sets can be quite large, so I'd want to know about any potential limits of APIs - I am testing the Spatial-Data-Science-and-GEO-AI-Lab/StreetView-NatureVisibility project, and the image set for Semarang, Indonesia is 54,027 images.


danbjoseph commented May 8, 2024

(move these notes into the first comment of this thread so that the overall description for this issue is in one place.)
