Introduce MLNumber for specifying numeric inputs of any type #647
Conversation
For some `MLGraphBuilder` methods the type of a numeric input can vary - e.g. for `constant()` an explicit `MLOperandDataType` is provided; for `clamp()` and `pad()` the data type is implied by input operands. In these cases, specifying the numeric value as either a float/double or int64 type runs into accuracy or range issues - you can't accurately represent all int64 values as a double, and you can't represent the full range of floats as int64. (You also can't represent all int64 values as a long long either - over 2^53 things get weird. But that's a digression.)

Per discussion in whatwg/webidl#1388, this change introduces a union between a bigint type and unrestricted double called `MLNumber`. Callers can pass a JS number (`1234`, `1.1234e38`) or a JS bigint (`9007199254740993n`), and the implementation will treat it properly based on the explicit or implicit `MLOperandDataType`. Usage of this type should be limited to only those cases.

Fixes webmachinelearning#442

Note that webmachinelearning#492 proposes changes to the constant sequential filling operation; this just adds IDL to match the current spec prose. Some of the concerns raised in webmachinelearning#325 are addressed (e.g. `clamp()`'s options). However, several other options are still specified as "float", and should maybe be "double" - but `MLNumber` is likely not appropriate for those, so they are not updated here.
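As a rough usage sketch (method signatures paraphrased from the spec draft, not copied verbatim):

```js
// Rough sketch only - argument shapes paraphrased, not authoritative.
const context = await navigator.ml.createContext();
const builder = new MLGraphBuilder(context);

// constant(): an explicit MLOperandDataType is given, so the MLNumber is
// interpreted as that type. A BigInt keeps full int64/uint64 precision.
const bigScalar = builder.constant('int64', 9007199254740993n);

// A plain JS number works for floating point types.
const floatScalar = builder.constant('float32', 1.1234e38);

// clamp(): the data type is implied by the input operand (float32 here),
// so the minValue/maxValue options are converted to float32.
const clamped = builder.clamp(floatScalar, { minValue: 0, maxValue: 6 });
```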
FYI @fdwr @huningxin @zolkis - marked as "draft" but worth looking at.
index.bs (Outdated)

@@ -1133,6 +1135,16 @@ The {{MLOperand}} objects are created by the methods of {{MLGraphBuilder}}, inte
To <dfn for="MLGraphBuilder">validate operand</dfn> given {{MLGraphBuilder}} |builder| and {{MLOperand}} |operand|, return true if |operand|.{{MLOperand/[[builder]]}} is |builder|, and false otherwise.
</p>

#### {{MLNumber}} #### {#api-mlnumber-typedef}

<dfn typedef>MLNumber</dfn> is used when specifying the type of a numeric option for an {{MLOperand}} which can be of any {{MLOperandDataType}}, including both 64-bit integer types ({{MLOperandDataType/"uint64"}} and {{MLOperandDataType/"int64"}}) and 32-bit floating point ({{MLOperandDataType/"float32"}}). Implementations must process the value according to the corresponding {{MLOperandDataType}}.
> Implementations must process the value according to the corresponding {{MLOperandDataType}}.

What if there is no corresponding `MLOperandDataType`? For example, `clamp()` can be built as an `MLActivation` without needing to specify any particular dtype. What is the dtype of the activation's associated operator?
The concept of an activation's operator is a bit hand-wavy in the spec, but it's very concrete in the Chromium implementation and the data type must be known when we pass the activation as an input to some other builder method anyways (we need to check that the types match, assuming we don't want to allow mixed precision #283 (comment))
Presumably the data type of the eventual operator's input? i.e. for the impl we'd need to hold onto the union in the MLActivation until the operator is constructed?
We need to improve the spec text, but I want to understand the implementation first.
> Presumably the data type of the eventual operator's input?

Fused activations take the operator's output as their own input, although for operators that support fusion the output's data type is usually the same as the input's.

> i.e. for the impl we'd need to hold onto the union in the MLActivation until the operator is constructed?

I think so.
Added a short note in 67b5a68 but we can probably improve it.
> i.e. for the impl we'd need to hold onto the union in the `MLActivation` until the operator is constructed?
>
> I think so.

Does this make sense? What happens if an `MLActivation` is passed to multiple operators with different data types?
`minValue <= maxValue` is particularly interesting depending on the conversions. If you pass `{minValue: -1, maxValue: 128}` this looks valid, but if the data type ends up being `"uint8"`, does this turn into `{minValue: 0xFF, maxValue: 0x80}`? Ooof.
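For illustration, here's the difference between the two possible integer conversion behaviors for that example (helper names are hypothetical, not spec algorithms):

```js
// Hypothetical helpers - just showing the two behaviors for a "uint8" target.
const castUint8Wrap = (x) => ((Math.trunc(x) % 256) + 256) % 256;        // modulus wrapping
const castUint8Clamp = (x) => Math.min(255, Math.max(0, Math.trunc(x))); // clamp to [0, 255]

// {minValue: -1, maxValue: 128} as "uint8":
console.log(castUint8Wrap(-1), castUint8Wrap(128));   // 255 128 -> minValue > maxValue (the "Ooof" case)
console.log(castUint8Clamp(-1), castUint8Clamp(128)); // 0 128   -> the range stays valid
```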
> with the idea that an MLActivation has one operator slot

Agreed that "the whole 'operator' concept is pretty hand-wavy anyways". An `MLActivation`'s internal [[operator]] slot is not the specific operator instance of the `MLOperand` that the activation is fused with, and is probably more like an operator type than a specific instance. So I don't think there's a conflict, but we could definitely improve the text - and explicitly mention that `MLActivation`s can be re-used when creating multiple `MLOperand`s and even with different data types.
To be explicit, here's where I'm envisioning the conversion/validation happens (see the rough sketch after this list):

- for `constant()` - immediate conversion based on the explicitly passed `MLOperandDataType`
- for `MLOperand` `clamp()` - immediate conversion based on the input's data type
- for `MLActivation` `clamp()` - when the activation is passed to an operand-vending method
- for `pad()` - immediate conversion based on the input's data type
See #649 (comment) for a framework where activation validation could be plugged in.
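A minimal sketch of that eager-vs-deferred split, with entirely hypothetical names (the real cast algorithm lives in the spec text):

```js
// Hypothetical illustration only - not spec algorithms.
function castToDataType(value, dataType) {
  // Stand-in for the spec's cast steps (clamp semantics, never throws).
  switch (dataType) {
    case 'float32': return Math.fround(Number(value));
    case 'uint8':   return Math.min(255, Math.max(0, Math.trunc(Number(value))));
    default:        return Number(value);
  }
}

// MLOperand clamp(): the input's data type is known, so convert eagerly.
function makeClampOperand(input, options = {}) {
  return {
    dataType: input.dataType,
    minValue: castToDataType(options.minValue ?? -Infinity, input.dataType),
    maxValue: castToDataType(options.maxValue ?? +Infinity, input.dataType),
  };
}

// MLActivation clamp(): no data type yet - hold the raw MLNumber values and
// convert only when the activation is fused by an operand-vending method.
function makeClampActivation(options = {}) {
  return {
    fuseWith(outputDataType) {
      return {
        minValue: castToDataType(options.minValue ?? -Infinity, outputDataType),
        maxValue: castToDataType(options.maxValue ?? +Infinity, outputDataType),
      };
    },
  };
}
```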
question: Does this mean, for example for `gemm`, if the inputs `a` and `b` are passed as float16, and if you pass `{alpha: MLNumber(3.14123452435)}`, it will automatically convert that to float16 as 3.141?
I guess under the hood - yes (although it'd be just `{alpha: 3.14123452435}`).

Presumably that's what the spec/implementation implicitly does today, even if we didn't introduce `MLNumber`?
Under the hood it preserves the fp32 precision right now (as you are suggesting). But I'd prefer we explicitly set the expectation that it will be cast to the same precision as the input type.

In the `gemm` example, since it's emulated as `alpha * matmul(a,b) + beta * c` in CoreML, and CoreML requires all these binary ops to have the same type, if `a` and `b` are fp16 then it's better to cast `alpha` to fp16 before multiplying them. Otherwise we need to cast the inputs to fp32 (which doesn't get executed on the ANE).

So right now we just don't support fp16 inputs for `gemm` on CoreML until this is sorted out.
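For a rough sense of the numbers (this assumes an engine with `Math.f16round`, which is ES2025; `Math.fround` is long-standing):

```js
const alpha = 3.14123452435;
console.log(Math.fround(alpha));   // nearest float32: ~3.1412345
console.log(Math.f16round(alpha)); // nearest float16: 3.140625 - the "3.141"-ish value discussed above
```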
- Introduce a "cast" definition that takes a number and a type, and returns the number cast to that type.
- Invoke cast during MLOperand and MLActivation creation.

TODO:

- Passing restrictions
  - Floating point - allow Infinities/NaNs or not?
  - Integer - throw or clamp if out of range?
- Simplify supported restrictions
- resample2d sizes option - is this part of the op data or not?
Cross-linking this async update from today's agenda: webmachinelearning/meetings#24 (comment) (Thanks @inexorabletash!)
No MLActivations take MLNumber, so this paragraph doesn't make sense. Also, add a brief intro to the cast algorithm section.
Okay - I think this is ready for a review. Probably best not to worry about the commit history (and definitely squash on merge, and edit down the comment!) - I can squash/force-push if desired. @a-sully has a work-in-progress CL for the Chromium prototype, and we'll want WPTs too.
LGTM 👍
Great - thank you Joshua :). Just one question and one clarity request. (update: oops, found one more)
Co-authored-by: Dwayne Robinson <[email protected]>
Co-authored-by: Ningxin Hu <[email protected]>
Since it's relevant here and related to switching `pad()` to match `constant()`: for the places that still use [...], I can't think of a case where those are ever useful, even if the behavior is well defined. Maybe leave it as [...]?
@inexorabletash: tl;dr - I don't feel strongly either way.

Does it make sense? No (especially not for `epsilon`). Though I also generally feel we shouldn't unnecessarily restrict the API more so than whatever the equivalent op decomposition would produce. So, if [...] then it's nice to be consistent with the math it produces, NaNs and infinities alike; but then in practice, I don't see how these values would be useful, and so I'm fine with just [...].
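As a concrete example of "consistent with the math it produces", using the usual batchNormalization decomposition `(x - mean) / sqrt(variance + epsilon)` (formula stated here for illustration, not quoted from this thread):

```js
// What the decomposition math does with "useless" epsilon values.
const bn = (x, mean, variance, epsilon) => (x - mean) / Math.sqrt(variance + epsilon);
console.log(bn(1, 0, 0, 1e-5)); // ~316.23 - a normal epsilon
console.log(bn(1, 0, 0, 0));    // Infinity - epsilon of 0 still "works", it just produces Infinity
console.log(bn(1, 0, 0, NaN));  // NaN - propagates rather than erroring
```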
LGTM!
👍 Updates appreciated. Do you want a re-review from Phillis and Austin first before merging?
I think it's good to merge. Just remember to squash and tidy up the commit message!
For some `MLGraphBuilder` methods the type of a numeric input can vary - e.g. for `constant()` an explicit `MLOperandDataType` is provided; for `clamp()` and `pad()` the data type is implied by input operands. In these cases, specifying the numeric value as either a float/double or int64 type runs into accuracy or range issues - you can't accurately represent all int64 values as a double, and you can't represent the full range of floats as int64. (You also can't represent all int64 values as a long long either - over 2^53 things get weird. But that's a digression.)

- `MLNumber` - a "variant" (union) of either a JS Number (equivalent to a double a.k.a. float64) or BigInt (for full precision int64/uint64 use) - is used for:
  - `constant()` (scalar overload) - `MLNumber`
  - `clamp()` (min/max options) - `MLNumber`
  - `pad()` (padding value) - `MLNumber`
- Places with a `float` argument/option now take a `double`. These are all ops that only operate on floating point types, so no need for `MLNumber`, though it would be harmless to use it there to allow BigInt inputs. This follows web API best practices, and allows full precision conversion to float16, since float64 to float32 to float16 can yield different results than float64 directly to float16 (see the sketch at the end of this description):
  - `batchNormalization()`, `instanceNormalization()`, `layerNormalization()` - `epsilon` option
  - `elu()`, `gemm()`, `hardSigmoid()`, `leakyRelu()`, `linear()` - `alpha` and `beta` options

Casting algorithms are spelled out, always have "clamp" semantics (i.e. no weird modulus wrapping), and never fail.

- For `MLOperand`-vending methods, the conversion can be done eagerly.
- For `MLActivation`s, the conversion is done at "fusion time"; it's notable that the same `MLActivation` instance could be fused with multiple ops, and cast differently for each.
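Here's the double-rounding sketch referenced above; it assumes an engine with `Math.f16round` (ES2025), while `Math.fround` (float32 rounding) is long-standing:

```js
// float64 -> float16 directly vs. float64 -> float32 -> float16 (double rounding).
const x = 1 + 2 ** -11 + 2 ** -24;          // exactly representable as a float64
console.log(Math.f16round(x));              // 1.0009765625 - rounds up when going straight to float16
console.log(Math.f16round(Math.fround(x))); // 1 - ties-to-even applied twice rounds down via float32
```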