Conversation
```python
self.format_only = format_only
self.outfile_prefix = outfile_prefix


def add(self, predictions: Sequence[Dict], groundtruths: Sequence[Dict]) -> None:  # type: ignore # yapf: disable # noqa: E501
```
What about making `groundtruths` default to `None` so that it's easier for users to use it with `ann_file != None`? Creating an 'empty' `groundtruths` is ambiguous and inconvenient, since we require it to have the same length as `predictions`.
I think there should be some new design or discussions.
Our thought was to keep consistency with `COCODetection`; maybe this needs further discussion with the team.
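To make the reviewer's suggestion concrete, here is a minimal sketch of what an optional `groundtruths` argument could look like. This is not the actual `COCOPoseMetric` implementation — the class name, attribute names, and error messages below are hypothetical, and the real metric delegates evaluation to COCOeval rather than just buffering inputs.

```python
from typing import Dict, List, Optional, Sequence


class COCOPoseMetricSketch:
    """Hypothetical sketch: let ``groundtruths`` default to None so users
    who load annotations from ``ann_file`` need not build placeholders."""

    def __init__(self, ann_file: Optional[str] = None) -> None:
        self.ann_file = ann_file
        self._predictions: List[Dict] = []
        self._groundtruths: List[Dict] = []

    def add(self, predictions: Sequence[Dict],
            groundtruths: Optional[Sequence[Dict]] = None) -> None:
        if groundtruths is None:
            # Only legal when annotations come from ann_file instead.
            if self.ann_file is None:
                raise ValueError(
                    'groundtruths can only be omitted when ann_file is set')
            groundtruths = [{} for _ in predictions]  # empty placeholders
        if len(predictions) != len(groundtruths):
            raise ValueError(
                'predictions and groundtruths must have the same length')
        for prediction, groundtruth in zip(predictions, groundtruths):
            self._predictions.append(prediction)
            self._groundtruths.append(groundtruth)
```

With this shape, `metric.add(predictions)` works when `ann_file` was given, while the current two-argument call keeps working unchanged.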
```python
        - `num_keypoints`: it is necessary when
          `self.iou_type` is set as `keypoints_crowd`.
    """
    for prediction, groundtruth in zip(predictions, groundtruths):
```
`predictions` are instance-level while `groundtruths` are image-level. Usually they have different lengths, i.e. N predictions in 1 image. How can they be made the same length? I think the design of the input type is not reasonable.
What's the meaning of `id` in `predictions`? Is there only one instance in each image, or might there be multiple instances in an image? How do you match your detections with groundtruths without knowing the groundtruths, i.e. guarantee the same number of detections as groundtruths?
I'm not familiar with the format, but it seems like `raw_ann_info` in a groundtruth covers all instances' labels in a single image. Please clarify if I misunderstood.
There might be multiple instances in an image. For example, in coco_pose_sample.json, there are three entries with "image_id": 40083, because in the image, there are three people. The three instances have different and unique ids.
In compute_metric, groundtruths (a list of groundtruth instances) are represented by a COCO format json file, while predicted instances of the same image are grouped together and dumped into another json file. The evaluation results are calculated by COCOeval by reading the above two files.
Each `ann` in `raw_ann_info` represents a single instance, containing the groundtruth keypoints, bboxes, etc.
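The reply above can be illustrated with a toy COCO-style annotation list. `image_id` 40083 comes from the discussion (three people in that image of `coco_pose_sample.json`); all `id`, keypoint, and bbox values here are made up, and the grouping step only mirrors the idea that `compute_metric` groups predicted instances of the same image before dumping them to JSON.

```python
from collections import defaultdict

# Each entry is one *instance*, so a single image can own several entries.
# image_id 40083 is cited in the discussion; the rest of the values are toy.
annotations = [
    {'id': 1, 'image_id': 40083, 'keypoints': [427, 170, 1], 'bbox': [385, 60, 214, 297]},
    {'id': 2, 'image_id': 40083, 'keypoints': [201, 150, 2], 'bbox': [150, 80, 100, 250]},
    {'id': 3, 'image_id': 40083, 'keypoints': [90, 160, 2], 'bbox': [40, 90, 80, 230]},
    {'id': 4, 'image_id': 50000, 'keypoints': [10, 20, 1], 'bbox': [0, 0, 50, 100]},
]

# Group instance-level annotations by image: three instances for 40083,
# one for 50000, each instance keeping its unique id.
instances_per_image = defaultdict(list)
for ann in annotations:
    instances_per_image[ann['image_id']].append(ann)
```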


Thanks for your contribution and we appreciate it a lot. The following instructions would make your pull request more healthy and more easily get feedback. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers.
Motivation
Add `COCOPoseMetric` for the pose estimation task. PR in mmpose: open-mmlab/mmpose#1777, about the results verification.
Modification
- Add `COCOPoseMetric` under `mmeval/metrics/coco_pose.py`.
- Add `tests/test_metrics/data/coco_pose_sample.json`, `tests/test_metrics/data/crowdpose_sample.json`, `tests/test_metrics/data/ap10k_sample.json` and unit tests.
- Add `nms` function under `mmeval/metrics/utils/nms.py`.

BC-breaking (Optional)
Does the modification introduce changes that break the backward-compatibility of the downstream repos?
If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.
Use cases (Optional)
If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.
Checklist
Comments
- `mmeval/metrics/coco_pose.py`: add case and simplify input
- The `add` function aims at keeping consistent with that of `COCODetection`; after discussion, no modification is made
- `yapf` hook
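For readers following the `add` discussion above, here is a hedged sketch of what a single `(prediction, groundtruth)` pair might look like. The field names (`img_id`, `keypoints`, `raw_ann_info`, `num_keypoints`) follow the review comments, but the exact schema and all values are hypothetical; `check_pair` is an illustrative helper, not part of mmeval.

```python
# One instance-level prediction paired with the image-level groundtruth of
# the same image. All values are toy data for illustration.
prediction = {
    'id': 1,
    'img_id': 40083,
    'keypoints': [320.0, 238.0, 0.9],  # hypothetical (x, y, score) triple
}
groundtruth = {
    'img_id': 40083,
    'width': 640,
    'height': 480,
    'raw_ann_info': [  # one dict per annotated instance in the image
        {'id': 1, 'image_id': 40083,
         'keypoints': [320, 238, 2], 'bbox': [300, 200, 60, 120]},
    ],
}


def check_pair(pred, gt, iou_type='keypoints'):
    """Illustrative validation of one pair, per the docstring discussed
    above: num_keypoints is only required for CrowdPose-style evaluation."""
    if pred['img_id'] != gt['img_id']:
        raise ValueError('prediction and groundtruth belong to different images')
    if iou_type == 'keypoints_crowd' and 'num_keypoints' not in gt:
        raise ValueError(
            'num_keypoints is required when iou_type is keypoints_crowd')
    return True
```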