Commit e5a0b88

mengluy0125 authored and facebook-github-bot committed
Support activation quantization without scaling (#2607)
Summary:

X-link: pytorch/pytorch#148380

We enable activation quantization in the forward pass, and users can customize the dtype they want to quantize to.

Differential Revision: D70522237
1 parent 6eb17f1 commit e5a0b88
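
The commit message describes the feature only at a high level. As a rough illustration of what "activation quantization without scaling" can mean, the sketch below casts activations to a user-chosen dtype in the forward pass, with no scale factor or zero point; the wrapper class, default dtype, and parameter names are hypothetical and are not the API introduced by this commit.

import torch
import torch.nn as nn

class QuantizeActivations(nn.Module):
    # Hypothetical wrapper: quantize activations by a plain dtype cast,
    # with no scaling, before running the wrapped module.
    def __init__(self, inner: nn.Module, quant_dtype: torch.dtype = torch.bfloat16):
        super().__init__()
        self.inner = inner
        self.quant_dtype = quant_dtype  # user-customizable target dtype

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_q = x.to(self.quant_dtype)        # quantize: cast only, no scale factor
        return self.inner(x_q.to(x.dtype))  # cast back for the wrapped module

layer = QuantizeActivations(nn.Linear(16, 16), quant_dtype=torch.bfloat16)
out = layer(torch.randn(2, 16))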

File tree

1 file changed: +4 −0 lines changed

  • userbenchmark/dynamo/dynamobench/_dynamo/utils.py

userbenchmark/dynamo/dynamobench/_dynamo/utils.py

@@ -4566,3 +4566,7 @@ def maybe_disable_inference_mode_for_fake_prop() -> Generator[None, None, None]:
         yield
     else:
         yield
+
+
+def is_node_meta_valid(node: Optional[torch.fx.Node]):
+    return node is None or "example_value" in node.meta or "val" in node.meta
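
For context on the helper added above: Dynamo's fake-tensor propagation records an "example_value" or "val" entry in node.meta, and this predicate treats a node as valid when that metadata is present, or when there is no node at all. A standalone sketch of how it behaves (the meta entry is set by hand here to simulate what propagation would record):

import torch
import torch.fx

# Copy of the helper added in this commit, for a self-contained demo.
def is_node_meta_valid(node):
    return node is None or "example_value" in node.meta or "val" in node.meta

class Add(torch.nn.Module):
    def forward(self, x):
        return x + 1

gm = torch.fx.symbolic_trace(Add())
node = next(iter(gm.graph.nodes))

print(is_node_meta_valid(None))    # True: a missing node counts as valid
print(is_node_meta_valid(node))    # False: plain symbolic_trace leaves meta empty

node.meta["val"] = torch.empty(1)  # simulate what fake-tensor propagation records
print(is_node_meta_valid(node))    # True: "val" is now present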

Comments (0)