[FEA] Supporting DLPacks #122

Open · isVoid opened this issue Feb 7, 2025 · 0 comments
Labels: feature request (New feature or request)

isVoid commented Feb 7, 2025

Problem:

Currently, Numba-CUDA kernels interoperate with external arrays via the CUDA Array Interface (CAI). CAI is an extension of the existing NumPy array interface standard and reuses NumPy data types to describe an array's contents. NumPy dtypes currently lack exotic data types such as bfloat16, fp8, etc. This bars interoperability between DL objects and Numba kernels (torch.bfloat16 tensors, for example).
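
For illustration, a minimal sketch of the status quo (assuming a CUDA-capable PyTorch install; the kernel itself is illustrative):

```python
import torch
from numba import cuda

@cuda.jit
def scale(arr, factor):
    i = cuda.grid(1)
    if i < arr.size:
        arr[i] *= factor

# float32 works: torch exposes __cuda_array_interface__ and Numba consumes it.
t = torch.ones(1024, dtype=torch.float32, device="cuda")
scale[4, 256](t, 2.0)

# bfloat16 does not: CAI's "typestr" field reuses NumPy type strings,
# which have no bfloat16 encoding, so the export (and the launch) fails.
t_bf16 = torch.ones(1024, dtype=torch.bfloat16, device="cuda")
# scale[4, 256](t_bf16, 2.0)  # raises: no CAI representation for bfloat16
```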

The community has moved towards extension packages for NumPy dtypes, such as ml_dtypes. While these have seen some adoption, most of the time the solution does not come as plug and play (issues).
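
For example, ml_dtypes registers bfloat16 as a NumPy extension dtype, but Numba's type system has no mapping for it, so such arrays are not directly usable in kernels (a minimal sketch):

```python
import numpy as np
import ml_dtypes

x = np.zeros(8, dtype=ml_dtypes.bfloat16)
print(x.dtype)  # bfloat16 -- valid as a NumPy extension dtype

# Numba does not recognize this extension dtype, so transferring the array
# to the device / typing it inside a kernel fails rather than "just working".
# from numba import cuda
# d_x = cuda.to_device(x)  # -> typing error: unsupported dtype
```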

Proposed Solution:

Numba-CUDA kernels should interoperate with DLPack-conformant objects.
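
A hypothetical sketch of what the consumer side could look like (cuda.from_dlpack is not an existing Numba-CUDA API; the name is illustrative only). The producer side is already standard: frameworks implement __dlpack__ / __dlpack_device__, whose dtype encoding is independent of NumPy:

```python
import torch

t = torch.ones(1024, dtype=torch.bfloat16, device="cuda")

# Producer side, already standard across frameworks:
dev_type, dev_id = t.__dlpack_device__()  # e.g. (2, 0) -> kDLCUDA, device 0
capsule = t.__dlpack__()  # PyCapsule over a DLManagedTensor; its DLDataType
                          # encodes bfloat16 natively (code=kDLBfloat, bits=16)

# Hypothetical consumer side in Numba-CUDA (API name is illustrative):
# from numba import cuda
# arr = cuda.from_dlpack(t)   # zero-copy device array view
# scale[4, 256](arr, 2.0)     # kernel sees bfloat16 elements, no NumPy dtype needed
```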

Rationale

This should enable more DL users to adopt Numba as their "last-resort" tool for custom kernel development in the SIMT model. DLPack is widely adopted by various frameworks and actively adds support for new data types: it added support for bfloat16 in 2020 and is actively adding support for fp8.
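
As a concrete data point, bfloat16 already round-trips through DLPack between frameworks today, which CAI cannot express (a minimal sketch using PyTorch's own DLPack utilities):

```python
import torch
from torch.utils.dlpack import to_dlpack, from_dlpack

t = torch.ones(8, dtype=torch.bfloat16, device="cuda")
capsule = to_dlpack(t)    # DLManagedTensor carrying code=kDLBfloat, bits=16
u = from_dlpack(capsule)  # zero-copy view; the dtype survives the round trip
assert u.dtype == torch.bfloat16
```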
