Fix bfloat16 numpy conversion#34

Open
ram-from-tvl wants to merge 3 commits into mathpluscode:main from ram-from-tvl:fix-bfloat16-numpy-conversion

Conversation

@ram-from-tvl
No description provided.

Copilot AI review requested due to automatic review settings March 9, 2026 20:08
Copilot AI left a comment

Pull request overview

Updates example inference scripts to avoid NumPy conversion failures when running models with torch.bfloat16 (and/or CUDA tensors), ensuring outputs can be converted to NumPy for downstream formatting/printing.

Changes:

  • Cast landmark coordinate outputs to CPU float32 before calling .numpy().
  • Cast classification probability tensors to float32 before calling .numpy() when building probs_dict.

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.

Changed files:

  • cinema/examples/inference/landmark_coordinate.py: Move the landmark coordinate tensor to CPU and cast to float32 before NumPy conversion.
  • cinema/examples/inference/classification_vendor.py: Cast the probability tensor to float32 before NumPy conversion for dict construction.
  • cinema/examples/inference/classification_sex.py: Cast the probability tensor to float32 before NumPy conversion for dict construction.
  • cinema/examples/inference/classification_cvd.py: Cast the probability tensor to float32 before NumPy conversion for dict construction.


- Convert BFloat16 tensors to float32 before calling .numpy()
- Fixes TypeError: Got unsupported ScalarType BFloat16
- Affects classification_cvd.py, classification_sex.py, and classification_vendor.py
- NumPy doesn't support BFloat16 directly; the tensor must be converted to float32 first
- Add .cpu().float() before .numpy() to handle BFloat16 tensors
- Fixes TypeError: Got unsupported ScalarType BFloat16 in landmark detection
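The commit messages above describe the fix pattern. A minimal sketch of it (assuming PyTorch is installed; the tensor values are illustrative, not taken from the repository):

```python
import torch

# Illustrative bfloat16 logits, e.g. from a model run with torch.bfloat16.
logits = torch.tensor([1.5, -0.5, 0.25], dtype=torch.bfloat16)

# Direct conversion fails: NumPy has no native bfloat16 dtype, so PyTorch
# raises "TypeError: Got unsupported ScalarType BFloat16".
try:
    logits.numpy()
except TypeError as exc:
    print(exc)

# The fix from this PR: move to CPU and upcast to float32 before .numpy().
probs = torch.softmax(logits.cpu().float(), dim=0).numpy()
print(probs.dtype)  # float32
```

The `.cpu()` call also covers the CUDA case, since `.numpy()` only works on CPU tensors; chaining `.cpu().float()` handles both failure modes in one place.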
Copilot AI left a comment

Pull request overview

Copilot reviewed 4 out of 4 changed files in this pull request and generated no new comments.



- Fix np.argmax(logits) in classification scripts
- Fix heatmap_soft_argmax output in landmark_heatmap.py
- Ensure all tensor to numpy conversions use .cpu().float().numpy()
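The np.argmax fix follows the same pattern: NumPy functions called on a torch tensor implicitly invoke .numpy() via the array protocol, so a bfloat16 input hits the same TypeError. A sketch under that assumption (the logits here are illustrative):

```python
import numpy as np
import torch

# Illustrative classifier logits; in the PR these come from the model output.
logits = torch.tensor([[0.2, 1.1, -0.3]], dtype=torch.bfloat16)

# np.argmax(logits) would fail for bfloat16 because NumPy converts the
# tensor via .numpy() under the hood. Cast to CPU float32 explicitly first.
pred = int(np.argmax(logits.cpu().float().numpy(), axis=-1)[0])
print(pred)  # index of the largest logit
```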