Hi, the documentation says that this implementation is compatible only with MI200 and MI300 GPUs. But what about the MI100 GPU?
The code contains a check whose conditions formally also match the MI100 (gfx908 architecture, which reports major 9 / minor 0):
```cpp
bool is_gfx90x = dprops->major == 9 && dprops->minor == 0;
bool is_gfx94x = dprops->major == 9 && dprops->minor == 4;
TORCH_CHECK(is_gfx90x || is_gfx94x, "FlashAttention only supports AMD MI200 GPUs or newer.");
```
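For context, here is a minimal standalone HIP sketch (my own illustration, not code from this repository) that queries the same device properties. If I understand correctly, on an MI100 it would report `major == 9` and `minor == 0`, so the `is_gfx90x` check above passes; distinguishing gfx908 from gfx90a seems to require inspecting `gcnArchName` instead:

```cpp
// Sketch: show that major/minor alone cannot exclude the MI100, since
// gfx908 (MI100) and gfx90a (MI200) both report major == 9, minor == 0.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <cstring>

int main() {
    hipDeviceProp_t props;
    if (hipGetDeviceProperties(&props, /*device=*/0) != hipSuccess) {
        std::fprintf(stderr, "failed to query device 0\n");
        return 1;
    }
    // Same condition as the repository's check: true for gfx908 AND gfx90a.
    bool is_gfx90x = props.major == 9 && props.minor == 0;
    // Only the architecture name string identifies the MI100 specifically.
    bool is_gfx908 = std::strncmp(props.gcnArchName, "gfx908", 6) == 0;
    std::printf("arch=%s major=%d minor=%d passes_check=%d is_mi100=%d\n",
                props.gcnArchName, props.major, props.minor,
                (int)is_gfx90x, (int)is_gfx908);
    return 0;
}
```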
Will this code work on the MI100 in practice? If not, are there plans to add such support, or what is keeping you from supporting the MI100?