
[MLIR][NVVM] Add support for f6x2 conversion #136537


Open · wants to merge 1 commit into main
54 changes: 54 additions & 0 deletions mlir/include/mlir/Dialect/LLVMIR/NVVMOps.td
@@ -1066,6 +1066,60 @@ def NVVM_CvtFloatToTF32Op : NVVM_Op<"cvt.float.to.tf32"> {
}];
}

def CVTFP6E2M3 : I32EnumAttrCase<"E2M3", 0, "e2m3">;
@durga4github (Contributor) commented on Apr 22, 2025:

The FP6 types themselves can be used in other places/contexts too.
So we do not need to attach the "CVT" context to these types; keep them generic, as "FP6E2M3" etc.
(Maybe even FP6_E2M3 for better readability.)

Author (Contributor) replied:

I did this here because I see that for the wgmma.mma_async Op we have types like WGMMATypeF8E4M3 to represent the operand types. If we introduce a more general enumeration, should we change the usage in wgmma.mma_async too (since we will also be adding cvt Ops for the FP8 types)?

def CVTFP6E3M2 : I32EnumAttrCase<"E3M2", 1, "e3m2">;

def CVTFP6Type : I32EnumAttr<"CVTFP6Type", "NVVM CVTFP6Type kind",
[CVTFP6E2M3, CVTFP6E3M2]> {
let genSpecializedAttr = 0;
let cppNamespace = "::mlir::NVVM";
}
def CVTFP6TypeAttr : EnumAttr<NVVM_Dialect, CVTFP6Type, "cvt_fp6_type"> {
let assemblyFormat = "`<` $value `>`";
}

def NVVM_CvtToF6x2Op : NVVM_Op<"cvt.to.f6x2"> {
let summary = "Convert a pair of float inputs to f6x2";
let description = [{
This Op converts each of the given float inputs to the specified fp6 type.
The result `dst` is represented either as an i16 type or as a vector
of two i8 types.
Contributor commented:

Please also include the details on the nature of the result (from the spec):

...the converted values are packed in the destination operand d such that the value converted from input a is stored in the upper 8 bits of d with 2 MSB bits padded with zeros and the value converted from input b is stored in the lower 8 bits of d with 2 MSB bits padded with zeros.

Author (Contributor) replied:

Updated the description in the latest revision, thanks!

If `dst` is returned as an i16 type, the converted values are packed such
that the value converted from `a` is stored in the upper 8 bits of `dst`,
with the 2 MSBs padded with zeros, and the value converted from `b` is
stored in the lower 8 bits of `dst`, also with the 2 MSBs padded with zeros.
If `dst` is returned as a vector type, each converted value is stored as an
i8 element in the vector.
The `relu` attribute, when set, lowers to the '.relu' variant of
the cvt instruction.

[For more information, see PTX ISA](https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions-cvt)
}];
let results = (outs AnyTypeOf<[I16, VectorOfLengthAndType<[2], [I8]>]>:$dst);
let arguments = (ins
CVTFP6TypeAttr:$type,
F32:$a,
F32:$b,
DefaultValuedAttr<BoolAttr, "false">:$relu);
let assemblyFormat = "$type $a `,` $b attr-dict `:` type($dst)";

let extraClassDeclaration = [{
static llvm::Intrinsic::ID getIntrinsicID(NVVM::CVTFP6Type,
bool hasRelu);
bool isReturnVectorType();
}];

string llvmBuilder = [{
auto intId = NVVM::CvtToF6x2Op::getIntrinsicID($type, $relu);
llvm::Value *packedI16 = createIntrinsicCall(builder, intId, {$a, $b});
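// Both fp6x2 intrinsics return the converted pair packed into an i16; if the
// op's result type is vector<2xi8>, reinterpret those same 16 bits as two i8s.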
if(op.isReturnVectorType())
$dst = builder.CreateBitCast(packedI16,
llvm::FixedVectorType::get(llvm::Type::getInt8Ty(builder.getContext()), 2));
else
$dst = packedI16;
}];
}

//===----------------------------------------------------------------------===//
// NVVM MMA Ops
//===----------------------------------------------------------------------===//
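For illustration only (this sketch is not part of the patch, and the helper names are hypothetical), the C++ below mirrors the packing described in the cvt.to.f6x2 description above: the fp6 value converted from `a` occupies the upper 8 bits of the i16 result and the value converted from `b` the lower 8 bits, with the top 2 bits of each byte zeroed; the vector<2xi8> result form carries the same bits viewed as two bytes.

#include <cstdint>
#include <cstring>

// Hypothetical helper: pack two 6-bit fp6 payloads as the op description specifies
// (value from `a` -> upper byte, value from `b` -> lower byte, 2 MSBs of each byte zero).
static uint16_t packF6x2(uint8_t fp6FromA, uint8_t fp6FromB) {
  uint16_t hi = static_cast<uint16_t>(fp6FromA & 0x3F) << 8; // bits 15..8
  uint16_t lo = static_cast<uint16_t>(fp6FromB & 0x3F);      // bits 7..0
  return hi | lo;
}

// The vector<2xi8> form is a reinterpretation of the same 16 bits (the bitcast in
// the llvmBuilder above does this at the LLVM IR level); with memcpy on a
// little-endian host, element 0 ends up holding the value converted from `b`.
static void viewAs2xI8(uint16_t packed, uint8_t out[2]) {
  std::memcpy(out, &packed, sizeof(packed));
}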
22 changes: 22 additions & 0 deletions mlir/lib/Dialect/LLVMIR/IR/NVVMDialect.cpp
@@ -33,6 +33,7 @@
#include "llvm/AsmParser/Parser.h"
#include "llvm/IR/Attributes.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/IntrinsicsNVPTX.h"
#include "llvm/IR/Type.h"
#include "llvm/Support/Casting.h"
@@ -133,6 +134,10 @@ LogicalResult CvtFloatToTF32Op::verify() {
return success();
}

bool CvtToF6x2Op::isReturnVectorType() {
return !getDst().getType().isInteger(16);
}

LogicalResult BulkStoreOp::verify() {
if (getInitVal() != 0)
return emitOpError("only 0 is supported for initVal, got ") << getInitVal();
@@ -1290,6 +1295,23 @@ llvm::Intrinsic::ID CvtFloatToTF32Op::getIntrinsicID(NVVM::FPRoundingMode rnd,
}
}

#define CVT_TO_F6X2_ID_IMPL(type, relu, relu_str) \
relu ? llvm::Intrinsic::nvvm_ff_to_##type##_rn##relu_str##_satfinite \
: llvm::Intrinsic::nvvm_ff_to_##type##_rn_satfinite

llvm::Intrinsic::ID CvtToF6x2Op::getIntrinsicID(NVVM::CVTFP6Type type,
bool hasRelu) {
switch (type) {
case NVVM::CVTFP6Type::E2M3:
return CVT_TO_F6X2_ID_IMPL(e2m3x2, hasRelu, _relu);
case NVVM::CVTFP6Type::E3M2:
return CVT_TO_F6X2_ID_IMPL(e3m2x2, hasRelu, _relu);
default:
break;
}
llvm_unreachable("Invalid CVTFP6Type for CvtToF6x2Op");
}

llvm::Intrinsic::ID
Tcgen05AllocOp::getIntrinsicIDAndArgs(Operation &op,
LLVM::ModuleTranslation &mt,
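For readability (again, not part of the patch; it assumes the same headers as NVVMDialect.cpp), here is a hand-expanded equivalent of the CVT_TO_F6X2_ID_IMPL selection above, with the intrinsic names written out as the macro produces them:

// Illustrative hand-expansion of CVT_TO_F6X2_ID_IMPL; behaviorally identical
// to CvtToF6x2Op::getIntrinsicID above.
static llvm::Intrinsic::ID selectCvtToF6x2Intrinsic(NVVM::CVTFP6Type type,
                                                    bool hasRelu) {
  switch (type) {
  case NVVM::CVTFP6Type::E2M3:
    return hasRelu ? llvm::Intrinsic::nvvm_ff_to_e2m3x2_rn_relu_satfinite
                   : llvm::Intrinsic::nvvm_ff_to_e2m3x2_rn_satfinite;
  case NVVM::CVTFP6Type::E3M2:
    return hasRelu ? llvm::Intrinsic::nvvm_ff_to_e3m2x2_rn_relu_satfinite
                   : llvm::Intrinsic::nvvm_ff_to_e3m2x2_rn_satfinite;
  }
  llvm_unreachable("Invalid CVTFP6Type for CvtToF6x2Op");
}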
22 changes: 22 additions & 0 deletions mlir/test/Target/LLVMIR/nvvm/cvt_fp6x2.mlir
@@ -0,0 +1,22 @@
// RUN: mlir-translate -mlir-to-llvmir %s | FileCheck %s

// CHECK-LABEL: @convert_float_to_fp6x2_packed
llvm.func @convert_float_to_fp6x2_packed(%srcA : f32, %srcB : f32) {
//CHECK: %{{.*}} = call i16 @llvm.nvvm.ff.to.e2m3x2.rn.satfinite(float %{{.*}}, float %{{.*}})
%res1 = nvvm.cvt.to.f6x2 <e2m3> %srcA, %srcB : i16
//CHECK: %{{.*}} = call i16 @llvm.nvvm.ff.to.e3m2x2.rn.satfinite(float %{{.*}}, float %{{.*}})
%res2 = nvvm.cvt.to.f6x2 <e3m2> %srcA, %srcB : i16
llvm.return
}

// CHECK-LABEL: @convert_float_to_fp6x2_vector
llvm.func @convert_float_to_fp6x2_vector(%srcA : f32, %srcB : f32) {
//CHECK: %[[res0:.*]] = call i16 @llvm.nvvm.ff.to.e2m3x2.rn.satfinite(float %{{.*}}, float %{{.*}})
//CHECK-NEXT: %{{.*}} = bitcast i16 %[[res0]] to <2 x i8>
%res1 = nvvm.cvt.to.f6x2 <e2m3> %srcA, %srcB : vector<2xi8>
//CHECK: %[[res1:.*]] = call i16 @llvm.nvvm.ff.to.e3m2x2.rn.satfinite(float %{{.*}}, float %{{.*}})
//CHECK-NEXT: %{{.*}} = bitcast i16 %[[res1]] to <2 x i8>
%res2 = nvvm.cvt.to.f6x2 <e3m2> %srcA, %srcB : vector<2xi8>
llvm.return
}
