This chapter specifies the deterministic lowering of canonical Core IR into MLIR for the feature-gate that ships with the public compiler. The rules cover only the stable, publicly implemented subset.
- Lowering is feature-gated (e.g. via an `mlir-lowering` feature flag in the compiler).
- The input module MUST be verified and canonicalised per Core IR.
- Lowering produces deterministic MLIR textual IR suitable for snapshot testing. The emitted text may change slightly between MIND versions but is stable within a release.
- Core v1 MLIR lowering rules target the CPU backend. Any GPU-specific MLIR dialects or pipelines are experimental and outside the scope of this chapter (see Runtime).
- Scalars and tensors lower to `arith.constant` with explicit MLIR tensor types.
- Zero-initialised tensors are represented using `tensor.empty` followed by `linalg.fill`, fed by a separate `arith.constant` for the fill value.
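As an illustrative sketch (SSA names and the `4x8` f32 shape are hypothetical), a zero-initialised tensor lowers along these lines:

```mlir
// Fill value as an explicit scalar constant.
%zero = arith.constant 0.0 : f32
// Uninitialised destination with the tensor's static shape.
%empty = tensor.empty() : tensor<4x8xf32>
// Fill the destination; the result is the zero-initialised tensor.
%init = linalg.fill ins(%zero : f32) outs(%empty : tensor<4x8xf32>) -> tensor<4x8xf32>
```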
- `MatMul` lowers to `linalg.matmul` in destination-passing style:
  - A `tensor.empty` destination is created with the inferred result shape.
  - The destination is passed via `outs`, and the filled tensor is the operation result.
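A plausible sketch of the pattern (shapes and SSA names hypothetical; `%lhs` is `4x8`, `%rhs` is `8x16`):

```mlir
// Destination with the inferred 4x16 result shape, zero-filled.
%dest0 = tensor.empty() : tensor<4x16xf32>
%c0 = arith.constant 0.0 : f32
%dest = linalg.fill ins(%c0 : f32) outs(%dest0 : tensor<4x16xf32>) -> tensor<4x16xf32>
// Destination-passing style: the result is the tensor threaded through outs.
%mm = linalg.matmul ins(%lhs, %rhs : tensor<4x8xf32>, tensor<8x16xf32>)
                    outs(%dest : tensor<4x16xf32>) -> tensor<4x16xf32>
```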
- `Conv2d` lowers to `linalg.conv_2d_nhwc_hwcf` with NHWC input and HWCF filter.
  - Destination-passing mirrors matmul: allocate with `tensor.empty`, feed through `outs`.
  - Verification ensures input and filter channels match before lowering.
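A sketch under assumed shapes (a `1x28x28x3` NHWC input, a `3x3x3x8` HWCF filter, unit strides and dilations; all names hypothetical):

```mlir
// Zero-filled accumulator with the inferred 1x26x26x8 output shape.
%acc0 = tensor.empty() : tensor<1x26x26x8xf32>
%cst = arith.constant 0.0 : f32
%acc = linalg.fill ins(%cst : f32) outs(%acc0 : tensor<1x26x26x8xf32>) -> tensor<1x26x26x8xf32>
// NHWC input and HWCF filter; the channel counts (3) must match.
%conv = linalg.conv_2d_nhwc_hwcf
          {dilations = dense<1> : tensor<2xi64>, strides = dense<1> : tensor<2xi64>}
          ins(%input, %filter : tensor<1x28x28x3xf32>, tensor<3x3x3x8xf32>)
          outs(%acc : tensor<1x26x26x8xf32>) -> tensor<1x26x26x8xf32>
```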
- Elementwise `BinOp` instructions lower to MLIR arithmetic ops on broadcasted tensors using MLIR canonical broadcasting utilities.
- `Sum` and `Mean` lower to reduction patterns that preserve `keepdims` semantics; `Mean` divides by the reduced element count explicitly.
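One plausible shape of the reduction pattern, sketched for a `Mean` over axis 1 of a hypothetical `4x8` tensor `%x` (names and shapes are illustrative only):

```mlir
// Zero-filled accumulator for the reduced result.
%i0 = tensor.empty() : tensor<4xf32>
%z = arith.constant 0.0 : f32
%acc = linalg.fill ins(%z : f32) outs(%i0 : tensor<4xf32>) -> tensor<4xf32>
// Sum over dimension 1.
%sum = linalg.reduce ins(%x : tensor<4x8xf32>) outs(%acc : tensor<4xf32>) dimensions = [1]
  (%in: f32, %out: f32) {
    %s = arith.addf %in, %out : f32
    linalg.yield %s : f32
  }
// Mean: divide by the reduced element count (8) explicitly.
%cnt = arith.constant dense<8.0> : tensor<4xf32>
%mean = arith.divf %sum, %cnt : tensor<4xf32>
```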
- Given canonical IR, the emitted MLIR text is deterministic. Ordering follows IR instruction order and canonical operand ordering rules.
- The textual form is stable within a compiler release. Future releases MAY evolve op selections or attributes but MUST preserve semantics for the defined Core v1 operations.