Description

When I load a photo with an XMP sidecar containing face region metadata, the boxes around the faces are mispositioned for photos in vertical orientation (e.g. with orientation tag 6, i.e. 90° CW). They are drawn where the faces would be if the photo were rotated. The photo itself displays the right way.

The issue is specific to sidecars; everything works fine when using only embedded metadata.
Discussion

As far as I understand the MetadataLoader code flow:

- Pass 1 of `mapMetadata` reads embedded metadata:
  - swaps the dimensions of the image if required by orientation;
  - reads face regions, applies rotation to the face regions if required, and saves the regions into the internal metadata structure.
- Pass 2 of `mapMetadata` reads sidecar metadata:
  - there is no `ifd0.orientation` in the sidecar, so orientation is assumed to be normal/default (there is `tiff:Orientation` in the sidecar, but it is ignored);
  - reads face regions and saves them (into the internal metadata structure) without rotation, thus overwriting the regions info parsed on the first pass.

So in the presence of a sidecar: the image dimensions are recalculated correctly, but the region coordinates are not adjusted for orientation, because on pass 2 we get no orientation info.
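To make the coordinate problem concrete, here is a minimal sketch of the kind of remapping pass 1 performs and pass 2 skips. It assumes MWG-style regions normalized to [0, 1] with an `{x, y, w, h}` top-left representation; the project's internal region structure may differ.

```javascript
// Hedged sketch (not the actual MetadataLoader code): remap a normalized
// face region from stored-image coordinates to display coordinates.
// x/y is the region's top-left corner; all values are in [0, 1].
function rotateRegion(region, orientation) {
  const { x, y, w, h } = region;
  switch (orientation) {
    case 1: // normal: nothing to do
      return { x, y, w, h };
    case 3: // 180°
      return { x: 1 - x - w, y: 1 - y - h, w, h };
    case 6: // stored image must be rotated 90° CW for display
      return { x: 1 - y - h, y: x, w: h, h: w };
    case 8: // stored image must be rotated 90° CCW for display
      return { x: y, y: 1 - x - w, w: h, h: w };
    default: // mirrored orientations (2, 4, 5, 7) omitted for brevity
      return { x, y, w, h };
  }
}
```

If this step is skipped on the pass that wins (the sidecar pass), the boxes keep their stored-image coordinates while the photo itself is displayed rotated, which matches the mispositioning described above.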
Fixing
Locally I did a quick monkey patch:

- in `getOrientation`, I added a fallback to reading `exif.tiff?.Orientation` if `ifd0.orientation` is not present;
- in `mapImageDimensions`, I set a new flag `metadata.state.orientationApplied` to make `mapImageDimensions` idempotent across passes (otherwise it would swap the dimensions on every pass).
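The two patches could look roughly like this. This is a hedged sketch with assumed signatures, not the project's actual functions:

```javascript
// Sketch of the monkey patch (hypothetical function shapes).
// Fallback: prefer ifd0.orientation, else the sidecar's tiff:Orientation.
function getOrientation(exif) {
  return exif?.ifd0?.orientation ?? exif?.tiff?.Orientation ?? 1; // 1 = normal
}

// Idempotence guard: swap dimensions at most once across mapMetadata passes.
function mapImageDimensions(metadata, orientation) {
  const rotated90 = orientation >= 5 && orientation <= 8; // 90°/270° variants
  if (rotated90 && !metadata.state.orientationApplied) {
    [metadata.width, metadata.height] = [metadata.height, metadata.width];
    metadata.state.orientationApplied = true;
  }
}
```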
Another option I can think of is to refactor `mapMetadata`. Currently, for each metadata source (embedded, sidecar), it reads the metadata and performs mapping/actions (such as the dimensions swap). We could separate these steps: first read and merge metadata from all available sources (embedded, sidecar), then perform the mapping and actions (such as the dimensions swap).
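A minimal sketch of that two-phase flow, where `readSource`, `applyOrientation`, and `mapRegions` are invented stand-ins for the real helpers:

```javascript
// Invented stand-ins, only here to make the sketch self-contained.
const readSource = (source) => source.tags; // stand-in metadata reader
const applyOrientation = (md) => {
  if (md.orientation >= 5 && md.orientation <= 8) {
    [md.width, md.height] = [md.height, md.width];
  }
};
const mapRegions = (md) => { md.regionsMapped = true; }; // stand-in

// Sketch of the proposed two-phase mapMetadata.
function mapMetadata(sources) {
  // Phase 1: read and merge metadata from all sources (embedded first,
  // then sidecar, so sidecar values win where both define a field).
  const merged = {};
  for (const source of sources) Object.assign(merged, readSource(source));
  // Phase 2: perform mapping/actions exactly once, with full orientation
  // info available regardless of which source supplied it.
  applyOrientation(merged);
  mapRegions(merged);
  return merged;
}
```

The key property is that the dimensions swap and the region rotation run once, after the merge, so a sidecar without an orientation field can no longer shadow the embedded orientation.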
What do you think? I can try creating a PR based on your suggestions.
Workarounds
Possible workarounds:

- Orientation-related workaround: use an external editor to "physically" rotate the image according to the EXIF orientation tag, thus resetting the orientation to normal. (The coordinates of the face regions have to be adjusted accordingly, either by the editor or manually with exiftool.)
- Sidecar-related workaround: move the regions info from the sidecar into the image, making sure there are no face regions left in the sidecar.
Reproducing
I've created a repo https://github.com/skatsubo/exif-orientation-vs-face-regions with sample images
Photo that causes the bug
The photo and sidecar from the above-mentioned repo. Attaching them here for convenience: photo-and-sidecar.zip
Environment:
Used app version: