
Use same implementation for Quantize lowering as onnx reference implementation and use all integer types supported by tosa-mlir #151

Open
wants to merge 2 commits into base: feature/fused-ops

Conversation

jorickert
Contributor

No description provided.

…entation and use integer types supported by tosa-mlir
@jorickert jorickert force-pushed the jrickert.quant_improvements branch from 0a58071 to ab36fa5 Compare February 26, 2025 21:59
@jorickert jorickert changed the title Use same implementation of Quantize lowering as onnx reference implementation and use integer types supported by tosa-mlir Use same implementation of Quantize lowering as onnx reference implementation and use all integer types supported by tosa-mlir Feb 26, 2025
@jorickert jorickert changed the title Use same implementation of Quantize lowering as onnx reference implementation and use all integer types supported by tosa-mlir Use same implementation for Quantize lowering as onnx reference implementation and use all integer types supported by tosa-mlir Feb 27, 2025
@jorickert
Contributor Author

  • Wait with merging until support for folding casts is added in tosa-mlir
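For context on what matching the ONNX reference implementation means here: the ONNX operator QuantizeLinear is specified as saturate(round(x / scale) + zero_point), with round-half-to-even. The sketch below is an illustrative NumPy version of that semantics, not code from this PR; the function name `quantize_linear` and its signature are assumptions for the example.

```python
import numpy as np

def quantize_linear(x, scale, zero_point, dtype=np.int8):
    # Illustrative sketch of ONNX QuantizeLinear semantics:
    # y = saturate(round(x / scale) + zero_point)
    # np.rint rounds halfway cases to the nearest even integer,
    # matching the rounding mode used by the ONNX reference.
    info = np.iinfo(dtype)
    q = np.rint(x / scale) + zero_point
    # Saturate to the full range of the target integer type.
    return np.clip(q, info.min, info.max).astype(dtype)

# Example: values outside the int8 range saturate rather than wrap.
print(quantize_linear(np.array([0.0, 1.0, -1.0, 300.0]), 0.5, 0))
```

Supporting all integer types that tosa-mlir accepts then amounts to parameterizing the `dtype` (and the clipping bounds derived from it) rather than hard-coding int8.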
