Commit 7b6e8cc

mengluy0125 authored and facebook-github-bot committed
Support activation quantization without scaling (#2607)
Summary: X-link: pytorch/pytorch#148380 We enable activation quantization in the forward pass, and users can customize the dtype they want to quantize to. Differential Revision: D70522237
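The commit title says "without scaling", i.e. the activation is converted to a lower-precision dtype directly, with no scale factor computed. A minimal sketch of that idea, assuming NumPy and a hypothetical `quantize_activation` helper (neither the function name nor its signature comes from this commit):

```python
import numpy as np

def quantize_activation(x: np.ndarray, dtype=np.float16) -> np.ndarray:
    # Hypothetical sketch: "quantization without scaling" here just means
    # casting the activation to a user-chosen lower-precision dtype,
    # without computing a per-tensor scale factor first.
    return x.astype(dtype)

acts = np.array([0.1234567, 1.5, -2.25], dtype=np.float32)
q = quantize_activation(acts)
print(q.dtype)  # float16
```

The dtype parameter mirrors the commit's point that users can customize the target dtype; the actual PyTorch change lives in pytorch/pytorch#148380.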
1 parent 87d954e commit 7b6e8cc

File tree

1 file changed: +4 −0 lines changed
  • userbenchmark/dynamo/dynamobench/_dynamo
userbenchmark/dynamo/dynamobench/_dynamo/utils.py

@@ -4579,3 +4579,7 @@ def maybe_disable_inference_mode_for_fake_prop() -> Generator[None, None, None]:
         yield
     else:
         yield
+
+
+def is_node_meta_valid(node: Optional[torch.fx.Node]):
+    return node is None or "example_value" in node.meta or "val" in node.meta
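The added helper treats a node's metadata as valid when the node is absent, or when its `meta` dict carries either an `"example_value"` or a `"val"` entry. A self-contained sketch of the same logic, using a minimal `FakeNode` stand-in for `torch.fx.Node` (an assumption so the example runs without PyTorch; only the `.meta` dict matters to the check):

```python
from typing import Optional

class FakeNode:
    """Minimal stand-in for torch.fx.Node; only .meta is used here."""
    def __init__(self, meta: dict):
        self.meta = meta

def is_node_meta_valid(node: Optional[FakeNode]) -> bool:
    # Same predicate as the diff above: None is vacuously valid, otherwise
    # the node must have recorded an "example_value" or a "val" in its meta.
    return node is None or "example_value" in node.meta or "val" in node.meta

print(is_node_meta_valid(None))                  # True
print(is_node_meta_valid(FakeNode({"val": 3})))  # True
print(is_node_meta_valid(FakeNode({})))          # False
```

In torch.fx, `node.meta` is where tracing passes stash per-node facts such as faked example values, so this predicate is a cheap guard that such metadata was actually populated before downstream code reads it.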
