use class segmentation for % vegetation/canopy estimation #10
I'm going to check out the iammartian0/RoadSense_High_Definition_Street_Segmentation and facebook/mask2former-swin-large-cityscapes-semantic models that the StreetView-NatureVisibility project uses. Instead of using HuggingFace transformers to download the model and run it locally, I might instead look at the HuggingFace Serverless Inference API.
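For reference, calling the Serverless Inference API for image segmentation is roughly the sketch below. The endpoint shape follows the HuggingFace docs at the time of writing; the token is a placeholder, and the response fields (`label`, `score`, base64 `mask`) should be verified against the current API documentation. Note this route hits the rate/connectivity limits discussed below, so it is only one option to evaluate.

```python
import requests  # assumes the `requests` package is available

# Model id as named above; the URL pattern is the documented Serverless Inference API form.
API_URL = ("https://api-inference.huggingface.co/models/"
           "facebook/mask2former-swin-large-cityscapes-semantic")

def query_segmentation(image_path: str, token: str):
    """POST raw image bytes to the Serverless Inference API.

    Per the API docs, image-segmentation models return a list of
    {"label": ..., "score": ..., "mask": <base64 PNG>} dicts.
    """
    with open(image_path, "rb") as f:
        data = f.read()
    resp = requests.post(API_URL,
                         headers={"Authorization": f"Bearer {token}"},
                         data=data)
    resp.raise_for_status()  # surfaces rate-limit / cold-start errors (e.g. HTTP 429/503)
    return resp.json()
```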
Thanks for exploring segmentation for analysis. The initial target users are Red Cross Red Crescent National Societies and they may have limited connectivity (e.g. expensive, intermittent, and/or slow). Eventually, I want an option for a fully local workflow (e.g. parsing a local folder of images instead of using Mapillary). Also, the image sets can be quite large so I'd want to know about any potential limits of APIs - I am testing the
(move these notes into the first comment of this thread so that the overall description for this issue is in one place.)
The current GVI calculation doesn't seem to be a good solution (at least not yet). See #29 (comment)
Additionally, it's likely that there are better ways to analyze than the green view index calculation from the original Treepedia. Can we download and run a transformer model to do class/semantic segmentation and calculate the % of pixels classified as a "vegetation" type category? (Noting that we want to run locally and not depend on an API/service for the analysis.)
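Once a semantic-segmentation model has produced a per-pixel label map, the "% vegetation" step is a simple pixel count. A minimal sketch, assuming the label map is a NumPy array of Cityscapes train ids (where id 8 is "vegetation"; other label sets will use different ids, and the function name is illustrative, not project code):

```python
import numpy as np

def vegetation_fraction(seg_map: np.ndarray, vegetation_ids=(8,)) -> float:
    """Fraction of pixels whose class id is a vegetation class.

    `seg_map` is an (H, W) integer label map, e.g. the per-pixel argmax
    from a Mask2Former semantic-segmentation head. The default id 8 is
    Cityscapes' "vegetation" train id; adjust for other label sets.
    """
    veg = np.isin(seg_map, vegetation_ids)  # boolean mask of vegetation pixels
    return float(veg.mean())                # mean of a boolean mask = fraction

# Toy 2x3 label map: two of six pixels are class 8 ("vegetation").
toy = np.array([[8, 0, 1],
                [2, 8, 3]])
print(vegetation_fraction(toy))  # prints 0.3333333333333333
```

Passing a tuple of ids makes it easy to fold in related classes (e.g. Cityscapes "terrain") if the definition of vegetation/canopy is widened later.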
For example, https://huggingface.co/iammartian0/RoadSense_High_Definition_Street_Segmentation can segment out Nature-vegetation as a category.

objective
The easiest option at this time would seem to be to borrow and build on what the creators of https://github.com/Spatial-Data-Science-and-GEO-AI-Lab/StreetView-NatureVisibility have done and...
other notes
assign_gvi_to_points.py - it would be an alternate script to run for that step. It would not replace that file. The script should read in a geospatial data file from step 2, check each point for data about an associated image, read the image from disk and analyze it, write the details of the analysis to a new column, and save out a new geospatial data file.

For python -m src.download_images, the user can include a MAPILLARY or LOCAL argument. Similarly, for step 3 (calculating green view), we may want to let users include an argument defining their preferred analysis method, something like TREEPEDIA or MASK2FORMER.
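The two proposed arguments could be wired up with argparse along these lines. Flag names and defaults here are illustrative, not the repo's actual interface; only the MAPILLARY/LOCAL and TREEPEDIA/MASK2FORMER choices come from the notes above.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """CLI sketch: one flag for the image source, one for the analysis method."""
    parser = argparse.ArgumentParser(description="Green view analysis options")
    parser.add_argument("--source", choices=["MAPILLARY", "LOCAL"],
                        default="MAPILLARY",
                        help="fetch images from Mapillary or read a local folder")
    parser.add_argument("--method", choices=["TREEPEDIA", "MASK2FORMER"],
                        default="TREEPEDIA",
                        help="green view calculation to apply in step 3")
    return parser

args = build_parser().parse_args(["--source", "LOCAL", "--method", "MASK2FORMER"])
print(args.source, args.method)  # prints: LOCAL MASK2FORMER
```

Using `choices` means argparse rejects unknown values with a usage message, which keeps the downstream scripts free of input validation for these options.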