Problem:
Currently, Numba-CUDA kernels interoperate with external arrays via the CUDA Array Interface (CAI). CAI is an extension of the existing NumPy array interface standard and reuses NumPy data types to describe an array's element type. NumPy dtypes currently lack exotic data types such as bfloat16, fp8, etc., which bars interoperability between DL objects and Numba kernels (torch.bfloat16 tensors, for example). The community has moved toward third-party NumPy dtype extensions such as ml_dtypes; while these have seen adoption, the solution is often not plug-and-play (see existing issues).
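
A minimal sketch of the limitation, assuming PyTorch with CUDA support is installed alongside Numba; the exact error raised for the bfloat16 case may differ by version:

```python
import torch
from numba import cuda

@cuda.jit
def scale(x, factor):
    i = cuda.grid(1)
    if i < x.size:
        x[i] *= factor

# float32 works: CUDA tensors expose __cuda_array_interface__ (CAI), and
# Numba maps the "f4" typestr to a NumPy dtype it understands.
t32 = torch.ones(1024, device="cuda", dtype=torch.float32)
scale[4, 256](t32, 2.0)

# bfloat16 does not: NumPy has no bfloat16 dtype, so there is no valid CAI
# typestr for it and the handoff to the kernel fails.
t16 = torch.ones(1024, device="cuda", dtype=torch.bfloat16)
try:
    scale[4, 256](t16, 2.0)
except Exception as exc:
    print("bfloat16 tensor rejected:", exc)
```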
Proposed Solution:
Numba-CUDA kernels should interoperate with DLPack-conformant objects.
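
A hypothetical usage sketch of what this could look like from the user's side. `cuda.from_dlpack` does not exist in Numba-CUDA today; the names below are assumptions about a possible DLPack-based path, not a description of a real API:

```python
import torch
from numba import cuda

@cuda.jit
def scale(x, factor):
    i = cuda.grid(1)
    if i < x.size:
        x[i] *= factor

t = torch.ones(1024, device="cuda", dtype=torch.bfloat16)

# The consumer would call t.__dlpack__() / t.__dlpack_device__() (the
# standard DLPack protocol) and build a device array view whose element
# type comes from the DLDataType (kDLBfloat, 16 bits) instead of NumPy.
d_t = cuda.from_dlpack(t)   # hypothetical helper, not a real Numba API
scale[4, 256](d_t, 2.0)     # kernel sees a bfloat16 element type
```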
Rationale:
This should help more DL users adopt Numba as their "last-resort" tool for custom kernel development in the SIMT model. DLPack is widely adopted across frameworks and actively adds support for new data types: it added bfloat16 support in 2020 and is actively adding support for fp8.