Merged (28 commits)
c502770
Moved high-level functions to their own module
zyjux Jan 23, 2025
e7d394a
Merge branch 'main' into code_refactor
zyjux Jan 23, 2025
63bf5fc
Included high_level_functions module
zyjux Jan 23, 2025
6845cef
Added package-level docstring
zyjux Jan 23, 2025
c727da9
Added docstring to metric_list
zyjux Jan 23, 2025
85774c3
Adjusted directory structure for src and added
zyjux Jan 23, 2025
ecc81d7
Updated author names
zyjux Jan 24, 2025
d371c38
Added docstring and tests for standard_metrics
zyjux Jan 24, 2025
c25b1d3
Updated docstrings with bivariate vs univariate
zyjux Jan 24, 2025
a42d1ea
Added docstrings and tests for gradient metrics
zyjux Jan 24, 2025
c98b36c
Added documentation for Fourier-based methods
zyjux Jan 27, 2025
e9a430b
Removed unused code
zyjux Jan 27, 2025
e166207
Removed unused code
zyjux Jan 27, 2025
12c0a31
Added tests for Fourier-based methods
zyjux Jan 27, 2025
eac8149
Updated formatting
zyjux Jan 27, 2025
6c4f2ba
Removed unused code and added docstrings and tests
zyjux Jan 27, 2025
6d65989
Added docstrings and tests for spec_slope
zyjux Jan 27, 2025
9ed36cd
Collected metric code into a single file
zyjux Jan 28, 2025
26a79dd
Updated formatting
zyjux Jan 28, 2025
4f44de7
Updated formatting
zyjux Jan 28, 2025
cb16704
Adjusted spec_slope tests to allow for floating point errors
zyjux Jan 28, 2025
6408559
Updated docstrings
zyjux Jan 28, 2025
a70dd42
Added additional experimentation utilities used for generating paper …
zyjux Jan 29, 2025
d812307
Added outdir parameter for experiment utilities
zyjux Jan 30, 2025
e9f9425
Merge branch 'main' into code_refactor
zyjux Feb 4, 2025
23656d2
Added parameter to invert heatmap y axes
zyjux Feb 4, 2025
963b41a
Updated heatmap code to save to specified directory correctly
zyjux Feb 5, 2025
de6a3b0
Updated readme and version, added paper_experiments notebook
zyjux Feb 5, 2025
9 changes: 9 additions & 0 deletions .gitignore
@@ -127,3 +127,12 @@ dmypy.json

# Pyre type checker
.pyre/

+# Generated figures
+
+media/OUTPUT_IMME
+notebooks/experiments_Imme_Apr_4.ipynb
+notebooks/experiments_Imme_July_5_2024.ipynb
+notebooks/experiments_imme_v2_active_for_Lander.ipynb
+notebooks/time_experiment.ipynb
+.vscode/settings.json
114 changes: 5 additions & 109 deletions README.md
@@ -1,5 +1,5 @@
# ai2es-sharpness
-This is a repository containing implementations and utility functions for a variety of metrics that can be used to analyze the sharpness of meteorological imagery, as well as transform functions and a selection of synthetic and real data for use in examples. This is work conducted under the umbrella of [The NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES)](https://www.ai2es.org/). To find out more about the metrics included and how they can be used, see the [accompanying paper](https://www.ai2es.org/sharpness/), and if you make use of this repository, please cite that paper.
+This is a repository containing implementations and utility functions for a variety of metrics that can be used to analyze the sharpness of meteorological imagery, as well as transform functions and a selection of synthetic and real data for use in examples. This is work conducted under the umbrella of [The NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES)](https://www.ai2es.org/). To find out more about the metrics included and how they can be used, see the [accompanying preprint](https://www.ai2es.org/sharpness/), and if you make use of this repository, please cite that paper; a peer-reviewed version is expected soon.

## Installation

@@ -11,118 +11,14 @@ To install with the specific versions used in the creation of this package, use

## Examples of use

-There are a number of example notebooks in the [notebooks](/notebooks) folder, which demonstrate the low-level interface of this package. For a simple introduction to how to run experiments using transforms, see [experiment_demo.ipynb](/notebooks/experiment_demo.ipynb), and for more in-depth experiments see [experiments_imme_v2.ipynb](/notebooks/experiments_imme_v2.ipynb).
+There are a number of example notebooks in the [notebooks](/notebooks) folder, which demonstrate the low-level interface of this package. For a simple introduction to how to run experiments using transforms, see [experiment_demo.ipynb](/notebooks/experiment_demo.ipynb), and for more in-depth experiments (including all those described in the accompanying paper), see [paper_experiments.ipynb](/notebooks/paper_experiments.ipynb).

## Python interface

-The principal utilities offered by this package can be found in [`src/__init__.py`](/src/__init__.py) and are the functions `compute_all_metrics_globally`, `compute_metric_globally`, `compute_all_metrics_locally`, and `compute_metric_locally`, all of which take in a pair of images `X` and `T` and compute either all metrics or a single specified metric. In either case, computing "globally" means that the metric is applied to the whole image (or pair of images) and a single number is returned, while computing "locally" means that a heatmap of local metric values is computed.
+The principal utilities offered by this package can be found in [`src/sharpness/high_level_functions.py`](/src/sharpness/high_level_functions.py) and are made available as base-level imports from the sharpness package. These are the functions `compute_all_metrics_globally`, `compute_metric_globally`, `compute_all_metrics_locally`, and `compute_metric_locally`, all of which take in a pair of images `X` and `T` and compute either all metrics or a single specified metric. In either case, computing "globally" means that the metric is applied to the whole image (or pair of images) and a single number is returned, while computing "locally" means that a heatmap of local metric values is computed.
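For orientation, here is a minimal sketch of calling these functions. The function names and the `(X, T)` image-pair signature come from the description above; the metric identifier `"grad"`, the argument position used to select a metric, and the return types are assumptions rather than confirmed API.

```python
import numpy as np
import sharpness

# Toy pair of float grayscale images: T is the "truth", X a smoothed copy.
rng = np.random.default_rng(0)
T = rng.random((128, 128))
X = 0.5 * (T + np.roll(T, 1, axis=0))  # crude vertical smoothing

# Global: each metric is applied to the whole image pair, one value apiece.
global_scores = sharpness.compute_all_metrics_globally(X, T)

# Local: the metric is computed over overlapping tiles, yielding a heatmap.
grad_heatmap = sharpness.compute_metric_locally(X, T, "grad")
```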

## Benchmark

This is a command line interface to compute and plot evaluations from different metrics and transformations on real or synthetic datasets.

#### Input requirements

At this point, all functions (including the `benchmark.py` script) take a single `n x m` grayscale image as input. If using these methods on multi-channel imagery, either convert the image to grayscale (if it is an RGB image) or loop over the channels and compute sharpness statistics for each channel individually; both options are sketched below.

Input data should be of float type; if the data are not already floats, the metrics that require float inputs will convert them internally.
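Both requirements can be handled with a few lines of NumPy. A sketch, assuming the package imports as `sharpness` and using hypothetical `rgb_X`/`rgb_T` arrays; the luminance weights are the standard ITU-R BT.601 values, not something this package prescribes:

```python
import numpy as np
import sharpness

# Hypothetical RGB image pair of shape (H, W, 3).
rgb_X = np.random.rand(64, 64, 3)
rgb_T = np.random.rand(64, 64, 3)

# Option 1: collapse RGB to grayscale with standard luminance weights.
weights = np.array([0.299, 0.587, 0.114])
gray_X = (rgb_X @ weights).astype(np.float64)
gray_T = (rgb_T @ weights).astype(np.float64)
scores = sharpness.compute_all_metrics_globally(gray_X, gray_T)

# Option 2: loop over the channels and compute statistics per channel.
per_channel = [
    sharpness.compute_all_metrics_globally(
        rgb_X[..., c].astype(np.float64), rgb_T[..., c].astype(np.float64)
    )
    for c in range(rgb_X.shape[-1])
]
```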

#### Example CLI Usage

From within the `src` directory:

```bash
$ python benchmark.py -h
usage: benchmark.py [-h] [-s {sinusoidal,gaussian,bw,xor}] [-i INPUT] [-t {vflip,hflip,blur,noise,brightness,crop}] [-m {all,mse,mae,rmse,grad,s1,psnr,ncc,grad-ds,grad-rmse,laplace-rmse,hist-int,hog-pearson,fourier-similarity,wavelet-similarity,tv,grad-tv,fourier-tv,wavelet-tv}] [--heatmap] [--visualize] [--overlay] [-o OUTPUT]

Sharpness Benchmarks

optional arguments:
-h, --help show this help message and exit
-s {sinusoidal,gaussian,bw,xor}, --synthetic {sinusoidal,gaussian,bw,xor}
generate synthetic data
-i INPUT, --input INPUT
name of input file to load data from
-t {vflip,hflip,blur,noise,brightness,crop}, --transformation {vflip,hflip,blur,noise,brightness,crop}
transformation to perform on data
-m {all,mse,mae,rmse,grad,s1,psnr,ncc,grad-ds,grad-rmse,laplace-rmse,hist-int,hog-pearson,fourier-similarity,wavelet-similarity,tv,grad-tv,fourier-tv,wavelet-tv}, --metric {all,mse,mae,rmse,grad,s1,psnr,ncc,grad-ds,grad-rmse,laplace-rmse,hist-int,hog-pearson,fourier-similarity,wavelet-similarity,tv,grad-tv,fourier-tv,wavelet-tv}
evaluation metric to compute
--heatmap compute sharpness heatmap(s) rather than global metric
--visualize visualize and save the operations
--overlay only relevant if both heatmap and visualize are true; plots heatmaps on top of input data
-o OUTPUT, --output OUTPUT
name of output file visualization
```

Note that with the `--heatmap` option, each metric is computed on small, overlapping tiles across the image. By default, these tiles are square with side length equal to ~1/8th the width of the input image, the stride between tiles is 1/4 of the tile side length, and the image is padded using the "reflect" method by a number of pixels equal to ~1/16th the width of the input image (see the sketch below). These parameters are adjustable by editing the appropriate function in `__init__.py`.
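To make the defaults concrete, here is the tile geometry as plain arithmetic (a sketch mirroring the description above, not the package's actual code); for a 256-pixel-wide input it reproduces the "blocks of size 32 ... padding of length 16" message shown in the heatmap examples below:

```python
# Default heatmap tile geometry for an input image of width W.
W = 256
tile_side = W // 8       # square tiles, side ~1/8 of the image width -> 32
stride = tile_side // 4  # stride between tiles, 1/4 of the tile side -> 8
pad = W // 16            # "reflect" padding width, ~1/16 of the image -> 16
print(tile_side, stride, pad)  # 32 8 16
```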

#### Examples

Generate synthetic data, apply a blurring transformation, compute all metrics, and visualize/save the output.

```bash
$ python benchmark.py -s xor -t blur -m all --visualize -o ../media/synthetic.png
=> mse: 150.1562378666429
=> mae: 7.141086141494917
=> rmse: 12.25382543806802
=> s1: (0.00339528769955455, 1.7763568394002505e-15)
=> psnr: 26.36536982276365
=> ncc: 0.9965476066368607
=> mgm: (44.19561294970306, 29.452084668360147)
=> grad-ds: 0.49620562145179087
=> grad-rmse: 71.91002205102615
=> laplace-rmse: 26.700794614133223
=> hist-int: 0.6448973445108177
=> hog-pearson: 0.5533226275817607
=> fourier-rmse: 278977716.3719768
=> wavelet-similarity: 0.485842832185395
=> tv: (524288.0, 272784.50942777185)
=> grad-tv: (3153888.0, 2149503.1447823923)
=> fourier-tv: (120947473.46739776, 120612974.58877043)
=> wavelet-tv: (4194304.0, 4248385.269167363)
```
![](media/synthetic.png)
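As a sanity check on these numbers, the standard relationships hold (assuming the textbook definitions, with 8-bit peak value 255 for PSNR; these formulas are conventions, not taken from this package's source):

```python
import math

mse = 150.1562378666429
rmse = math.sqrt(mse)               # 12.2538... matches the reported rmse
psnr = 20 * math.log10(255 / rmse)  # 26.3653... matches the reported psnr
print(rmse, psnr)
```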

We can re-run the above example, but with local heatmap computations overlaid on the input data instead of global metrics.

```bash
$ python benchmark.py -s xor -t blur -m all --heatmap --visualize --overlay -o ../media/synthetic_heatmaps.png
Heatmap will be computed with blocks of size 32, and has image padding of length 16
=> mse average: 39.881929874420166
=> mae average: 111.61909246444702
=> rmse average: 6.259764454401335
=> s1 averages: (2.7966713163144616, 4.416437443298626)
=> psnr average: 27.90985194547143
=> ncc average: 1.0957310987654965
=> grad averages: (0.025264954381721993, 0.019396214612508057)
=> grad-ds average: 0.4854998901219907
=> grad-rmse average: 54.209045009739974
=> laplace-rmse average: 20.67599301023201
=> hist-int average: 0.5660052760923319
=> hog-pearson average: 0.6746634025245817
=> fourier-similarity average: 1.0
=> wavelet-similarity average: 0.3293994978185485
=> tv averages: (253952.0, 176779.0)
=> grad-tv averages: (40465.7734375, 30567.400390625)
=> fourier-tv averages: (358239.7522150265, 318628.08900317416)
=> wavelet-tv averages: (65607.43359375, 65929.7275390625)
```
![](media/synthetic_heatmaps.png)

Load the default data example, apply a vertical flip, compute only the root-mean-square error, and visualize/save the output to the default name.

```bash
$ python benchmark.py -t vflip -m rmse --visualize
Loading data from ../data/kh_ABI_C13.nc (sample 0)
=> rmse: 10.005649078875036
```
![](media/output.png)

Generate synthetic data again, but only compute total variation as a heatmap.
```bash
$ python benchmark.py -s='xor' -t='blur' -m='tv' -o='../media/synth_tv.png' --heatmap --visualize
Heatmap will be computed with blocks of size 32, and has image padding of length 16
=> tv averages: (253952.0, 176779.0)
```
![](media/synth_tv.png)

+All functions take a single `n x m` grayscale image as input. If using these methods on multi-channel imagery, either convert the image to grayscale (if it is an RGB image) or loop over the channels and compute sharpness statistics for each channel individually.

+Input data should be of float type; if the data are not already floats, the metrics that require float inputs will convert them internally.