
Fix: Ensure ViT includes the final hidden state in its output #2610


Open · wants to merge 2 commits into main

Conversation

tkdcjf159

Context

What is the purpose of this PR? Is it to

  • fix a bug

When using the out_indices parameter to select specific hidden states from the Vision Transformer (ViT) by layer index, the implementation fails to return the hidden state from the final transformer layer, even when the index corresponding to the final layer is explicitly included in the out_indices list.
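
A minimal, self-contained sketch of the failure mode (the module and dimensions below are hypothetical; only the names layers, out_indices, and hidden_states mirror the torchtune code under discussion):

```python
import torch
from torch import nn

class TinyEncoder(nn.Module):
    """Toy stand-in for the ViT transformer loop, not torchtune's class."""

    def __init__(self, num_layers: int = 4, dim: int = 8, out_indices=(1, 3)):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_layers)])
        self.out_indices = set(out_indices)

    def forward(self, x):
        hidden_states = []
        for layer_idx, layer in enumerate(self.layers):
            if layer_idx in self.out_indices:
                hidden_states.append(x)  # captured BEFORE the layer runs
            x = layer(x)
        # Bug being fixed: nothing after the loop appends the final layer's
        # output, so no value in out_indices can ever select it.
        return x, hidden_states

enc = TinyEncoder(out_indices=(3,))  # request the last layer's index
x, hs = enc(torch.randn(2, 8))
print(torch.equal(hs[0], x))  # False: hs[0] is the INPUT to layer 3, not the final output
```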

Changelog

What are the changes made in this PR?
  • Update VisionTransformer.forward so the final transformer layer's hidden state is included in hidden_states when requested via out_indices

Test plan

Please make sure to do each of the following if applicable to your PR. If you're unsure about any one of these just ask and we will happily help. We also have a contributing page for some guidance on contributing.

  • run pre-commit hooks and linters (make sure you've first installed via pre-commit install)
  • add unit tests for any new functionality
  • update docstrings for any new or updated methods or classes
  • run unit tests via pytest tests
  • run recipe tests via pytest tests -m integration_test
  • manually run any new or modified recipes with sufficient proof of correctness
  • include relevant commands and any other artifacts in this summary (pastes of loss curves, eval results, etc.)

UX

If your function changed a public API, please add a dummy example of what the user experience will look like when calling it.
Here is a docstring example
and a tutorial example

  • I did not change any public API
  • I have added an example to docs or docstrings


pytorch-bot bot commented Apr 18, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/2610

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 2 New Failures, 4 Cancelled Jobs

As of commit 3117b82 with merge base 1075d9c:

NEW FAILURES - The following jobs have failed:

CANCELLED JOBS - The following jobs were cancelled. Please retry:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot

Hi @tkdcjf159!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

@facebook-github-bot

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@facebook-github-bot added the CLA Signed label Apr 18, 2025
@pbontrager (Contributor) left a comment


Thanks for adding this. It was assumed that you'd get the last hidden layer from the x output, but it makes sense that you might want it before the layer norm and positional embeddings. Please put the enumerate back to keep things a bit more compact, but otherwise it's good to go.

```diff
@@ -376,11 +376,16 @@ def forward(
         # transformer with optional hidden layer outputs
         x = x.reshape(bsz_and_n_imgs, n_tiles * n_tokens, embed_dim)
-        for layer_idx, transformer_layer in enumerate(self.layers):
+        layer_idx = 0
+        for transformer_layer in self.layers:
```
Contributor


You can still keep the enumerate, the layer_idx variable is still accessible after the for loop
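
For reference, a small self-contained sketch (plain Python, not torchtune code) of that scoping behavior: the variable bound by enumerate persists after the loop finishes.

```python
layers = ["a", "b", "c"]
for layer_idx, layer in enumerate(layers):
    pass  # per-layer work would go here
print(layer_idx)  # prints 2: loop variables remain bound after a for loop
```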

Author


@pbontrager Fixed it!
Initially, I misunderstood the range of out_indices to be 1 to num_layers rather than 0 to num_layers - 1, so my first change was based on that misconception. Since the range is actually 0 to num_layers - 1, after the for loop I just need to check whether layer_idx is included in self.out_indices and append the hidden state if it is. I've made the corrections according to your advice.
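
For context, a minimal sketch of the post-loop append being discussed (an illustration only: it keeps the in-loop capture as in the diff above, and it assumes the final layer's output is requested via index len(self.layers); the exact index convention in the final commit may differ):

```python
for layer_idx, transformer_layer in enumerate(self.layers):
    if layer_idx in self.out_indices:
        hidden_states.append(x)  # hidden state entering layer `layer_idx`
    x = transformer_layer(x)

# Append the final layer's output so it can also be selected via
# out_indices; this runs before the post-norm, matching the reviewer's
# note about wanting the state prior to the layer norm.
# Assumption: index len(self.layers) addresses the final output.
if len(self.layers) in self.out_indices:
    hidden_states.append(x)
```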

@pbontrager pbontrager self-assigned this Apr 18, 2025
@pbontrager
Copy link
Contributor

You need to fix the test and this should be good to land.
