
[Operator] support instance_norm #296

Merged · 13 commits · Dec 10, 2024
Conversation

@ZQPei (Contributor) commented Nov 18, 2024

PR Category

Operator

Type of Change

New Feature

Description

Support instance_norm with forward and backward passes, with and without running stats.
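For readers unfamiliar with the operator: instance norm normalizes each (sample, channel) slice over its spatial dimensions, optionally applying a per-channel affine transform. Below is a minimal pure-Python sketch of the forward semantics only, not the Triton kernel added in this PR; `instance_norm_ref` and its argument names are illustrative.

```python
import math

def instance_norm_ref(x, weight=None, bias=None, eps=1e-5):
    """Reference instance norm over nested lists of shape (N, C, L):
    each (n, c) slice is normalized with its own mean and variance."""
    out = []
    for sample in x:                       # loop over N
        out_sample = []
        for c, channel in enumerate(sample):  # loop over C
            m = sum(channel) / len(channel)
            # biased variance, as used for normalization in training mode
            var = sum((v - m) ** 2 for v in channel) / len(channel)
            inv = 1.0 / math.sqrt(var + eps)
            w = weight[c] if weight is not None else 1.0
            b = bias[c] if bias is not None else 0.0
            out_sample.append([(v - m) * inv * w + b for v in channel])
        out.append(out_sample)
    return out

y = instance_norm_ref([[[1.0, 2.0, 3.0], [4.0, 8.0, 12.0]]])
# each normalized channel has (near-)zero mean
assert abs(sum(y[0][0])) < 1e-6
assert abs(sum(y[0][1])) < 1e-6
```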

Issue

Progress

  • Change is properly reviewed (1 reviewer required, 2 recommended).
  • Change responds to an issue.
  • Change is fully covered by unit tests (UT).

Performance

Environment:
A100 NVLink 80G
PyTorch 1.13
CUDA 11.8
(benchmark results attached as an image)

@StrongSpoon (Collaborator) left a comment:

Thanks for your contribution! Please modify the code or respond to the comments.

Review threads (resolved):
  • benchmark/test_norm_perf.py
  • tests/test_named_ops.py
  • tests/test_norm_ops.py
  • src/flag_gems/ops/instancenorm.py
@StrongSpoon (Collaborator) left a comment:

review done.

tests/test_norm_ops.py (resolved)
@@ -70,9 +88,15 @@ def layernorm_input_fn(shape, dtype, device):
layernorm_input_fn,
marks=pytest.mark.layer_norm,
),
pytest.param(
"instance_norm",
torch.instance_norm,
A collaborator commented on the diff:
torch.nn.functional.instance_norm

The author (ZQPei) replied:

I found that torch.nn.functional.instance_norm uses a different API than torch.instance_norm.
The API used here is exactly torch.instance_norm.
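To illustrate the mismatch: per the PyTorch documentation, the two entry points accept the same set of arguments, but in different positional orders, so a benchmark harness that passes arguments positionally for one cannot be reused for the other. The stubs below are placeholders that only mirror the documented signatures (verify the orders against your PyTorch version); they do not call PyTorch.

```python
def aten_instance_norm_args(input, weight, bias, running_mean, running_var,
                            use_input_stats, momentum, eps, cudnn_enabled):
    """Positional order of torch.instance_norm (the ATen-level op)."""
    return dict(locals())

def functional_instance_norm_args(input, running_mean=None, running_var=None,
                                  weight=None, bias=None, use_input_stats=True,
                                  momentum=0.1, eps=1e-5):
    """Positional order of torch.nn.functional.instance_norm."""
    return dict(locals())

# Same argument names, different positions: an ATen-style positional call
# fed to the functional API would bind weight/bias where it expects
# running_mean/running_var.
a = aten_instance_norm_args("x", "w", "b", "rm", "rv", True, 0.1, 1e-5, False)
f = functional_instance_norm_args("x", "rm", "rv", "w", "b")
assert a["weight"] == f["weight"] == "w"
assert a["running_mean"] == f["running_mean"] == "rm"
```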

The collaborator replied:
I thought they were the same.
(screenshot attached)

eps,
HAS_WEIGHT_BIAS=has_weight_bias,
)
if has_running_stats and use_input_stats: # update running stats
A collaborator commented on the diff:
redundant condition

The author (ZQPei) replied:

Actually, no.
has_running_stats means running_mean and running_var are not None.
use_input_stats means the mean and rstd are computed from the input; otherwise the externally supplied running_mean and running_var are used.
Running stats should be updated only when has_running_stats and use_input_stats are both True.
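The guarded update can be sketched as the usual exponential moving average (hypothetical helper name; PyTorch additionally averages the per-instance statistics over the batch and uses the unbiased variance when updating, which is omitted here for brevity):

```python
def update_running_stats(running_mean, running_var, batch_mean, batch_var,
                         momentum=0.1):
    """Exponential update: new = (1 - momentum) * old + momentum * batch."""
    new_mean = [(1 - momentum) * rm + momentum * bm
                for rm, bm in zip(running_mean, batch_mean)]
    new_var = [(1 - momentum) * rv + momentum * bv
               for rv, bv in zip(running_var, batch_var)]
    return new_mean, new_var

has_running_stats = True   # running_mean / running_var were provided
use_input_stats = True     # normalize with input statistics (training mode)
rm, rv = [0.0], [1.0]
if has_running_stats and use_input_stats:  # update only when both hold
    rm, rv = update_running_stats(rm, rv, [2.0], [0.5])
assert abs(rm[0] - 0.2) < 1e-9 and abs(rv[0] - 0.95) < 1e-9
```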

The collaborator replied:
use_input_stats is already checked at line 500, so there is no need to check it again.

tests/test_norm_ops.py (resolved)
@Bowen12992 (Collaborator) left a comment:

You can pull the latest code to pass the CI.

@ZQPei (Contributor, Author) commented Dec 2, 2024:

> You can pull the latest code to pass the CI.

OK, I will rebase onto the latest master soon.

@StrongSpoon (Collaborator) left a comment:

lgtm

@StrongSpoon StrongSpoon merged commit 900744d into FlagOpen:master Dec 10, 2024
8 of 9 checks passed
Successfully merging this pull request may close these issues.

Code Contribution: 【Lv3】【Operator Development】instance_norm
3 participants