This is a demo of GeoTrellis functionality. The demo consists of two parts: the tile ingest process and a demo server to query the ingested data.
- Java 8
- Apache Spark (optional - for `make ingest`)
- Docker (optional - for `make ingest-docker`)
See the Makefile for full details.
| Command | Action |
|---|---|
| `make build` | Build ingest/server code |
| `make ingest` | Ingest data for use by the server |
| `make ingest-docker` | Ingest via Docker |
| `make server` | Start a test server at localhost:8777 |
| `make image` | Generate a Docker image for deployment |
The demo covers Chattanooga with a set of `Byte` tiles. (In fact, each tile is essentially of type `Bit`, because it contains only the values {0, 1}.) Each tile is ingested into its own layer, and the resulting map consists of combinations of differently-weighted source layers (a weighted overlay).
- `gt/colors` - Color Ramps
- `gt/breaks` - Color Breaks
- `gt/tms/{zoom}/{x}/{y}` - Weighted Overlay
- `gt/sum` - Zonal Summary
List of available color ramps for coloring the weighted overlay:
- blue-to-orange
- green-to-orange
- blue-to-red
- green-to-red-orange
- light-to-dark-sunset
- light-to-dark-green
- yellow-to-red-heatmap
- blue-to-yellow-to-red-heatmap
- dark-red-to-yellow-heatmap
- purple-to-dark-purple-to-white-heatmap
- bold-land-use-qualitative
- muted-terrain-qualitative
GET parameters: `layers`, `weights`, `numBreaks`.
Calculates class breaks for the layers combined by the given weights, using the specified number of breaks.
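As a sketch, a `gt/breaks` request could be built like this. The host and port match the test server from `make server`; the layer names, weight values, and the comma-separated parameter format are assumptions:

```python
from urllib.parse import urlencode

# Query parameters for gt/breaks; the comma-separated encoding for
# layers/weights is an assumption, and the layer names are hypothetical.
params = {
    "layers": "wetlands,farmland",
    "weights": "2,1",
    "numBreaks": "10",
}
url = "http://localhost:8777/gt/breaks?" + urlencode(params)
print(url)
```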
GET parameters: `layers`, `weights`, `breaks`, `bbox`, `colors` [default: 4], `colorRamp` [default: "blue-to-red"], `mask`.
This is a TMS layer service: given `{zoom}/{x}/{y}` coordinates and a series of layer names and weights, it returns PNG TMS tiles of the weighted overlay. It also takes the breaks that were computed by the `gt/breaks` service. If the `mask` option is set to a polygon, the returned `{zoom}/{x}/{y}` tiles are masked by that polygon.
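Putting it together, a single tile request might look like the following sketch; the tile coordinates, layer names, and break values are all hypothetical:

```python
# Fill the TMS template {zoom}/{x}/{y} for one tile of the weighted overlay.
zoom, x, y = 13, 2225, 3260  # hypothetical tile coordinates
query = "layers=wetlands,farmland&weights=2,1&breaks=0,1,2,3"
url = f"http://localhost:8777/gt/tms/{zoom}/{x}/{y}?{query}"
print(url)
```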
GET parameters: `polygon`, `layers`, `weights`.
This service takes layers, weights, and a polygon, and computes a weighted summary of the area under the polygon.
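A `gt/sum` request could be sketched as follows; the GeoJSON encoding of the `polygon` parameter is an assumption (the demo does not document the exact format), as are the layer names and coordinates:

```python
import json
from urllib.parse import urlencode

# A small polygon near Chattanooga; GeoJSON encoding is assumed here.
polygon = {
    "type": "Polygon",
    "coordinates": [[
        [-85.4, 35.0], [-85.2, 35.0], [-85.2, 35.1],
        [-85.4, 35.1], [-85.4, 35.0],
    ]],
}
params = {
    "layers": "wetlands,farmland",  # hypothetical layer names
    "weights": "2,1",
    "polygon": json.dumps(polygon),
}
url = "http://localhost:8777/gt/sum?" + urlencode(params)
print(url)
```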
Running the Demo with a GeoDocker Cluster
Quick clarification:
- The ingest requires Spark.
- The server works without Spark (it uses the GeoTrellis Collections API).
This section is more generic and describes running the server with its Spark dependencies. To compile and run this demo, we prepared an environment: a slightly modified docker-compose.yml file to run the cluster.
To run the cluster:

```bash
docker-compose up
```
To check that the cluster is operating normally, check the availability of these pages:
- Hadoop http://localhost:50070/
- Accumulo http://localhost:50095/
- Spark http://localhost:8080/
To check the status of the containers, use the following command:

```bash
docker ps -a | grep geodocker
```
More information is available in the GeoDocker cluster repo.
Install and run this demo using the GeoDocker cluster:
Modify application.conf (a working example for the GeoDocker cluster):

```conf
geotrellis {
  port = 8777
  server.static-path = "../static"
  hostname = "spark-master"
  backend = "accumulo"
}

accumulo {
  instance = "accumulo"
  user = "root"
  password = "GisPwd"
  zookeepers = "zookeeper"
}
```
Modify backend-profiles.json (a working example for the GeoDocker cluster):

```json
{
  "name": "accumulo-local",
  "type": "accumulo",
  "zookeepers": "zookeeper",
  "instance": "accumulo",
  "user": "root",
  "password": "GisPwd"
}
```
Copy everything into the Spark master container:

```bash
cd ./geotrellis
./sbt assembly
docker exec geotrellischattademo_spark-master_1 mkdir -p /data/target/scala-2.10/
docker cp target/scala-2.11/GeoTrellis-Tutorial-Project-assembly-0.1-SNAPSHOT.jar geotrellischattademo_spark-master_1:/data/target/scala-2.10/GeoTrellis-Tutorial-Project-assembly-0.1-SNAPSHOT.jar
docker cp ../static geotrellischattademo_spark-master_1:/static
docker cp data/arg_wm/ geotrellischattademo_spark-master_1:/data/
docker cp conf geotrellischattademo_spark-master_1:/data/
docker cp ingest.sh geotrellischattademo_spark-master_1:/data/
docker cp run-server.sh geotrellischattademo_spark-master_1:/data/
```
Run the ingest and the server inside the container:

```bash
docker exec -it geotrellischattademo_spark-master_1 bash
cd /data/; make ingest   # to ingest data into Accumulo
cd /data/; make server   # to run the server
```
The demo will be installed into the /data directory inside the Spark master container.