How to choose the appropriate aggregate sharpness metric #16

@djgagne

Description

In constructing a summary sharpness metric, we should be deliberate about how we compare the distributions of our chosen sharpness measures between the predicted and observed images. For a gradient-based sharpness metric, we want to reward a predicted image whose distribution of gradients resembles that of the target image, but we do not care whether the gradients occur in exactly the same locations. We should therefore use metrics that compare the predicted and observed gradient distributions aggregated over each full image, or over a reasonably large region of an image.
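As a concrete sketch of the aggregation step, the per-pixel gradient magnitudes of an image can be pooled into a single 1-D sample that represents that image's gradient distribution. The helper name below is hypothetical, not part of any existing codebase:

```python
import numpy as np

def gradient_magnitudes(image):
    """Flatten the per-pixel gradient magnitudes of a 2D image into a
    1D sample of its gradient distribution (hypothetical helper)."""
    # central finite differences in each direction
    gy, gx = np.gradient(image.astype(float))
    # magnitude of the gradient vector at every pixel
    return np.hypot(gx, gy).ravel()
```

Because the result is a flat sample with no spatial information, two images with similar texture but differently placed features yield similar samples, which is exactly the location-insensitivity described above.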

The most distribution-agnostic way to compare the two images would be to calculate the empirical CDF of the gradients for each image and then take either the integrated absolute difference between the two CDFs (analogous to the CRPS) or the maximum difference (the Kolmogorov–Smirnov statistic). The maximum difference would be the most aggressive metric for sharpness, since a single poorly matched quantile determines the score.
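A minimal sketch of both comparisons, assuming the gradient samples for each image have already been pooled into 1-D arrays (the function names are illustrative, not an existing API):

```python
import numpy as np

def ecdf(sample, points):
    """Empirical CDF of `sample` evaluated at `points`."""
    sample = np.sort(sample)
    return np.searchsorted(sample, points, side="right") / sample.size

def cdf_differences(pred_grads, obs_grads):
    """Integrated (CRPS-like) and maximum (K-S) differences between the
    empirical gradient CDFs of the predicted and observed images."""
    # evaluate both ECDFs on the pooled sample values
    pts = np.sort(np.concatenate([pred_grads, obs_grads]))
    diff = np.abs(ecdf(pred_grads, pts) - ecdf(obs_grads, pts))
    # trapezoidal integration of |F_pred - F_obs| over the pooled support
    total = np.sum(0.5 * (diff[1:] + diff[:-1]) * np.diff(pts))
    return total, diff.max()
```

Identical samples give (0, 0) for both metrics; as the distributions separate, the K-S term saturates at 1 much faster than the integrated term, which is what makes it the more aggressive choice.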

Would anyone be interested in implementing this or should I take a stab at it?
