Topography‑aware meshing pipeline for Atmospheric Boundary Layer (ABL) simulations.
This repository implements a production-capable pipeline to build high‑quality surface and hybrid (prismatic + tetrahedral) volume meshes adapted to complex terrain. The pipeline provides a high‑order (HO) topography approximant, metric-driven surface adaptation, memmap- and GeoTIFF-backed precompute of HO coefficients, adaptive and anisotropic background sampling for metric meshes, metric-complexity (β*) sizing to target node budgets, integration with Gmsh background-mesh meshing (Python bindings ≥ 4.14), and hybrid mesh optimization for prismatic/tetrahedral ABL meshes. Visualization helpers (PyVista) and CLI helpers are included.
This README provides a concise but complete developer and user quickstart. For an exhaustive overview of the project design, algorithms and rationale see PROJECT_OVERVIEW.md.
- Raster-based HO approximant (polynomial LS) with:
  - on-demand fits via `query_at((x, y)) -> z, grad, hess` (see the point-query sketch after this list)
  - parallel tiled memmap precompute for out-of-core use
  - multiband GeoTIFF export/import (one band per coefficient, monomial metadata)
- Metric construction: curvature (Hessian) and tangent (first fundamental form)
- Metric complexity integral and global scaling β* to target node budgets
- Background-mesh generation:
  - structured, gradient-driven adaptive, axis-aligned anisotropic, oblique anisotropic
  - Delaunay fallback and pluggable backends
- ZoneTensorMesher: writes the metric background `.msh` and calls `gmsh.model.mesh.setBackgroundMesh`
- Hybrid meshing: prismatic sweep for the SBL + tetrahedral fill, with optimization for quality
- PVVisualizer: PyVista helpers for mesh and refinement debug overlays
- CLI helpers:
  - `scripts/precompute_coeffs.py`: memmap precompute and GeoTIFF export
  - `examples/run_pipeline_raster.py`: end-to-end driver exposing adaptivity and precompute options
- Unit tests covering complexity, precompute, GeoTIFF roundtrip, anisotropic samplers
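As a quick taste of the HO approximant, the sketch below queries elevation, gradient and Hessian at a single point and derives a toy curvature-driven size. The import path `abl_mesh`, the coordinates, and the size formula are assumptions for illustration; only the constructors and the `query_at` signature listed under Key API below are taken from this README.

```python
# Point-query sketch (assumed import path `abl_mesh`; adjust to the real package name).
import numpy as np
from abl_mesh import RasterTopography, RasterHighOrderApproximant

rtopo = RasterTopography("tests/data/tiny_dem.tif", band=1, verbosity=0)
ho = RasterHighOrderApproximant(rtopo, degree=3, precompute=False)

# On-demand local least-squares fit at a single (x, y) location.
z, grad, hess = ho.query_at((350.0, 420.0))

# Toy curvature-driven size h ~ sqrt(eps / |kappa|_max), clipped to [hmin, hmax].
# The pipeline's real metric construction (Hessian + first fundamental form) is richer.
eps, hmin, hmax = 0.1, 1.0, 50.0
kappa_max = max(np.abs(np.linalg.eigvalsh(np.asarray(hess, dtype=float))).max(), 1e-12)
h = float(np.clip(np.sqrt(eps / kappa_max), hmin, hmax))
print(f"z = {z:.2f} m, |grad| = {np.linalg.norm(grad):.3f}, h = {h:.2f} m")
```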
- Clone the repo:

  ```bash
  git clone <repo-url> abl-mesh
  cd abl-mesh
  ```

- Create a Python 3.11+ virtual environment and install core dependencies:

  ```bash
  python -m venv .venv
  source .venv/bin/activate
  pip install -U pip
  pip install -e ".[dev]"   # installs core + dev extras
  ```

  To enable optional features:

  - Visualization: `pip install -e ".[viz]"`
  - Gmsh integration: `pip install -e ".[gmsh]"` (or install the gmsh system package; Python bindings >= 4.14 recommended)
  - Accelerators (numba/cython): `pip install -e ".[accel]"`

- Sanity check: run the unit tests (requires rasterio & shapely for some tests):

  ```bash
  pytest -q
  ```
Quick start — minimal run (tiny DEM)
- Generate or obtain a tiny DEM GeoTIFF (the unit tests provide tiny DEM generators you can reuse).

- Precompute coefficients in memory or to a memmap (recommended for large rasters):

  - memmap tiled precompute:

    ```bash
    python scripts/precompute_coeffs.py \
        --raster tests/data/tiny_dem.tif \
        --degree 3 \
        --do-precompute \
        --memmap-path /tmp/coeffs_p3.memmap \
        --tile-size 64,64 \
        --n-jobs 4
    ```

  - in-memory precompute + export to GeoTIFF:

    ```bash
    python scripts/precompute_coeffs.py \
        --raster tests/data/tiny_dem.tif \
        --do-precompute \
        --export-coeffs /tmp/coeffs_p3.tif
    ```

- End-to-end mesh run (use precomputed coefficients for speed):

  ```bash
  python examples/run_pipeline_raster.py \
      --raster tests/data/tiny_dem.tif \
      --load-coeffs /tmp/coeffs_p3.tif \
      --hmin 1.0 --hmax 50.0 \
      --bg-nx 200 --bg-ny 200 \
      --bg-adapt-anisotropic --bg-adapt-anisotropic-oblique \
      --anisotropy-threshold 2.0 --bg-adapt-max-levels 2 \
      --write-mesh final_surface.msh \
      --visualize
  ```
Notes:
- If using a memmap: instead of `--load-coeffs` you can load the memmap programmatically via `ho.load_coeffs_memmap(...)` (convenience helper).
- To drive β* scaling, add `--target-num-nodes 50000` (CLI mapping is present in the example script).
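A programmatic memmap-loading sketch (the import path and the exact arguments of `load_coeffs_memmap` are assumptions; check the function signature):

```python
# Sketch: reuse a tiled memmap produced by scripts/precompute_coeffs.py.
# Import path and argument names are assumptions; check load_coeffs_memmap's signature.
from abl_mesh import RasterTopography, RasterHighOrderApproximant

rtopo = RasterTopography("tests/data/tiny_dem.tif")
ho = RasterHighOrderApproximant(rtopo, degree=3, precompute=False)
ho.load_coeffs_memmap("/tmp/coeffs_p3.memmap")   # subsequent query_at calls reuse stored coefficients

z, grad, hess = ho.query_at((350.0, 420.0))
```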
- Precompute on HPC, export coefficients:
  - Run `scripts/precompute_coeffs.py` on a large machine with `--memmap-path` or `--do-precompute` and a large `--tile-size`.
  - Export to GeoTIFF (`--export-coeffs`) to produce a portable coefficients file.
  - Copy the GeoTIFF to the desktop / meshing node and run `examples/run_pipeline_raster.py --load-coeffs`.

- Metric complexity (target nodes):
  - Provide `--target-num-nodes` to compute β* (mapped into `ZoneTensorMesher.generate`). Adjust `complexity_nx`/`complexity_ny` if you prefer a different integration grid. A sketch of the scaling convention follows this list.

- Refinement polygons (per-turbine local refinement):
  - Prepare a vector file (Shapefile or GeoJSON) with polygon features and a numeric attribute `hmin` (or `h_min`).
  - Pass `--refinement-shapefile path/to/polygons.shp` to `examples/run_pipeline_raster.py`, or programmatically pass `refinement_polygons` to `ZoneTensorMesher.generate()`. A GeoJSON loading sketch follows this list.
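For orientation, the β* sizing follows the usual metric-complexity argument in 2-D: the complexity is C = ∬ sqrt(det M) dx dy, and scaling M → βM multiplies C by β, so β* = N_target / C. The sketch below illustrates that convention on a regular grid; the pipeline's actual integration grid, sampler call convention and normalization may differ.

```python
# Sketch of 2-D metric-complexity sizing (standard convention; the pipeline's
# own normalization and sampler interface may differ).
import numpy as np

def complexity_and_beta(metric_sampler, bbox, n_target, nx=200, ny=200):
    """metric_sampler((x, y)) is assumed to return a 2x2 SPD surface metric."""
    (xmin, ymin), (xmax, ymax) = bbox
    xs, ys = np.linspace(xmin, xmax, nx), np.linspace(ymin, ymax, ny)
    dx, dy = (xmax - xmin) / (nx - 1), (ymax - ymin) / (ny - 1)
    acc = 0.0
    for y in ys:
        for x in xs:
            m = np.asarray(metric_sampler((x, y)), dtype=float)
            acc += np.sqrt(max(np.linalg.det(m), 0.0))
    complexity = acc * dx * dy                 # simple Riemann sum of sqrt(det M)
    return complexity, n_target / complexity   # beta* such that beta* * C = N_target
```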
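And a refinement-polygon loading sketch using plain `json` + shapely; the file name and the `(polygon, hmin)` pair format are assumptions, so check the actual `refinement_polygons` structure expected by `ZoneTensorMesher.generate()`:

```python
# Sketch: build refinement polygons from a GeoJSON file (hypothetical file name).
# The (polygon, hmin) pair format is an assumption; check ZoneTensorMesher.generate().
import json
from shapely.geometry import shape

with open("turbines_refinement.geojson") as f:
    features = json.load(f)["features"]

refinement_polygons = []
for feat in features:
    props = feat["properties"]
    hmin = props.get("hmin", props.get("h_min"))   # numeric refinement size attribute
    refinement_polygons.append((shape(feat["geometry"]), float(hmin)))

# mesher.generate(..., refinement_polygons=refinement_polygons)
```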
- Precompute helper:

  ```
  scripts/precompute_coeffs.py --raster <raster.tif> [--degree 3] [--do-precompute]
      [--memmap-path PATH] [--load-memmap PATH] [--export-coeffs PATH]
      [--tile-size R,C] [--n-jobs N] [--use-tqdm]
  ```

- Full pipeline driver:

  ```
  examples/run_pipeline_raster.py --raster <raster.tif> --hmin <m> --hmax <m>
      [--do-precompute] [--memmap-path PATH] [--load-coeffs PATH] [--export-coeffs PATH]
      [--bg-nx N] [--bg-ny N] [--bg-adapt-gradient-threshold T] [--bg-adapt-max-levels L]
      [--bg-adapt-anisotropic] [--bg-adapt-anisotropic-oblique] [--anisotropy-threshold RATIO]
      [--target-num-nodes NODES] [--refinement-shapefile PATH]
      [--write-mesh final_surface.msh] [--visualize] [--visualize-debug]
  ```
- `RasterTopography(raster_path, band=1, verbosity=1)`
  - `bounds()`, `sample(xy)`, `xy_to_colrow`, `colrow_to_xy`
- `RasterHighOrderApproximant(rtopo, degree=3, precompute=False, ...)`
  - `query_at(xy) -> (z, grad, hess)`
  - `precompute_all_coeffs(..., memmap_path=..., tile_size=(r, c), n_jobs=...)`
  - `export_coeffs_geotiff(path)`, `load_coeffs_geotiff(path)`, `load_coeffs_memmap(path, ...)`
- `ZoneTensorMesher(ho, metric_sampler, bbox, verbosity=1, gmsh_init=True)`
  - `generate(...)` (see the full signature in docs/PROJECT_OVERVIEW.md)
  - `visualize_background()`, `visualize_mesh()`, `finalize()`
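A rough end-to-end programmatic sketch tying these classes together (the import path, the `metric_sampler` construction and the keyword arguments of `generate()` are assumptions mirroring the CLI flags; see docs/PROJECT_OVERVIEW.md for the authoritative signature):

```python
# End-to-end programmatic sketch (assumed import path and keyword names).
from abl_mesh import RasterTopography, RasterHighOrderApproximant, ZoneTensorMesher

rtopo = RasterTopography("tests/data/tiny_dem.tif", band=1)
ho = RasterHighOrderApproximant(rtopo, degree=3, precompute=False)
ho.load_coeffs_geotiff("/tmp/coeffs_p3.tif")       # reuse precomputed coefficients

metric_sampler = ...   # curvature/tangent metric sampler built from `ho`; see PROJECT_OVERVIEW.md
mesher = ZoneTensorMesher(ho, metric_sampler, bbox=rtopo.bounds(), verbosity=1, gmsh_init=True)

mesher.generate(
    hmin=1.0, hmax=50.0,            # assumed keyword names mirroring the CLI flags
    target_num_nodes=50_000,        # drives the beta* complexity scaling
)
mesher.visualize_mesh()
mesher.finalize()
```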
- Run tests:

  ```bash
  pytest -q
  ```

- Packaging / wheels:
  - This repo uses `setuptools` as the build backend (`pyproject.toml`). If you later add Cython / native extensions, define `setuptools.Extension` objects in `setup.cfg` or `setup.py` as required.
Performance & extension roadmap
- Short-term (already available): Numba-accelerated kernels for Vandermonde, polynomial eval and sqrt(det).
- Mid-term (recommended): Cython or pybind11/Eigen batched LS solver + efficient Vandermonde/eval to accelerate precompute_all_coeffs.
- Long-term: optional GPU kernels (CuPy/CUDA) for extremely large precompute workloads. Fallback pure-Python paths are kept for portability.
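For reference, the Numba path boils down to jitted monomial kernels; a representative (not the repository's actual) evaluation kernel could look like the sketch below, where the exponent ordering is an assumption:

```python
# Representative Numba-jitted bivariate monomial evaluation (illustrative only;
# the repository's kernels and exponent ordering may differ).
import numpy as np
from numba import njit

@njit(cache=True)
def eval_poly_2d(coeffs, exps_x, exps_y, x, y):
    """Evaluate sum_k coeffs[k] * x**exps_x[k] * y**exps_y[k]."""
    out = 0.0
    for k in range(coeffs.shape[0]):
        out += coeffs[k] * x ** exps_x[k] * y ** exps_y[k]
    return out

# Degree-3 bivariate basis: 10 monomials x^i * y^j with i + j <= 3.
exps = np.array([(i, j) for i in range(4) for j in range(4 - i)], dtype=np.int64)
coeffs = np.random.default_rng(0).normal(size=exps.shape[0])
z = eval_poly_2d(coeffs, exps[:, 0], exps[:, 1], 10.0, 5.0)
```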
- Gmsh:
  - Ensure Gmsh Python bindings >= 4.14 for `setBackgroundMesh`. If they are not available, the code falls back to inserting scalar sizes as points in the GEO model.
- RasterIO:
  - `rasterio` is required for reading GeoTIFF rasters and for GeoTIFF coefficient export/import.
- Shapely:
  - Oblique anisotropic sampling uses `shapely.ops.split`. If shapely is not installed, the code falls back to axis-aligned anisotropic splitting.
- Memory & disk:
- Memmap precompute writes a (H × W × num_coeffs) memmap; ensure disk space. Use float32 memmap dtype to reduce disk usage.
- Platform notes:
  - On Apple M1/M2 (aarch64), numba builds may be unavailable or slower; prefer Cython/pybind11 compiled packages, or use a float32 memmap and smaller tiles.
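A rough coefficient-memmap size estimate (assuming an example 10 000 × 10 000 raster; a degree-3 bivariate polynomial has 10 coefficients):

```python
# Rough disk-usage estimate for the (H x W x num_coeffs) coefficient memmap.
H, W = 10_000, 10_000                            # example raster size (assumption)
degree = 3
num_coeffs = (degree + 1) * (degree + 2) // 2    # 10 monomials for degree 3
size_f32 = H * W * num_coeffs * 4                # bytes at float32
print(f"{size_f32 / 1024**3:.1f} GiB")           # ~3.7 GiB; float64 doubles this
```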
- Use feature branches and open PRs with:
  - tests for new functionality
  - docs updates (PROJECT_OVERVIEW.md, README.md)
  - small, focused changes (feature + tests + docs)
- CI recommendation:
  - Add a GitHub Actions job that installs rasterio, shapely, scipy, and pytest and runs the unit tests; skip gmsh-dependent integration tests in CI, or run them in a separate integration job.
- MIT License. See LICENSE file.
- Primary authors & affiliations: see main_arxiv.tex frontmatter (A. Gargallo‑Peiró, M. Avila, A. Folch — Barcelona Supercomputing Center).