Update occlusion to new method of constructing ablated batches #1616

Open
wants to merge 1 commit into base: master

Conversation

sarahtranfb
Contributor

Differential Revision: D76483214

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D76483214

sarahtranfb added a commit to sarahtranfb/captum that referenced this pull request Jun 23, 2025
…ch#1616)

Summary: Pull Request resolved: pytorch#1616

Differential Revision: D76483214
…ch#1616)

Summary:

`FeaturePermutation` and `Occlusion` are subclasses of `FeatureAblation`, which contains the bulk of the logic for iterating through and processing perturbed inputs.
Previously, `FeatureAblation` constructed perturbed inputs by looking at each input tensor individually, as there was no explicit use case that needed cross-tensor feature grouping. However, this behavior has been extended to support cross-tensor masking: different sparse features at Meta are represented by different tensors by the time they are passed to Captum, and various Ads workstreams need to group features across both tensors and feature types.
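To make the grouping behavior concrete, here is a minimal, framework-free sketch (the helper name and list-based "tensors" are illustrative, not Captum's actual implementation): positions that share a feature ID in the masks are ablated together in the same perturbed sample, even when they live in different input tensors.

```python
def ablate_cross_tensor(inputs, masks, baseline=0):
    """Build one perturbed copy of `inputs` per distinct feature ID.

    `inputs` is a list of flat lists (stand-ins for tensors); `masks`
    assigns a feature ID to every position. Positions sharing an ID,
    even across different tensors, are replaced by `baseline` together.
    """
    feature_ids = sorted({fid for mask in masks for fid in mask})
    batches = []
    for fid in feature_ids:
        perturbed = [
            [baseline if m == fid else v for v, m in zip(inp, mask)]
            for inp, mask in zip(inputs, masks)
        ]
        batches.append((fid, perturbed))
    return batches

# Feature ID 0 spans both tensors, so it is ablated in a single pass:
inputs = [[1.0, 2.0], [3.0, 4.0, 5.0]]
masks = [[0, 1], [0, 2, 2]]
print(ablate_cross_tensor(inputs, masks))
```

Under the old per-tensor behavior, ID 0 in the first tensor and ID 0 in the second would have been ablated in separate passes; cross-tensor masking collapses them into one group.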
The new behavior, controlled by the `enable_cross_tensor_attribution` flag, is mostly rolled out internally. We do not want to support both behaviors indefinitely.

`Occlusion` does not use the custom-masks parameter exposed by `FeatureAblation`; it constructs its masks internally. No use case requires cross-tensor masking for occlusion, so `Occlusion` is not strictly required to follow the `FeatureAblation` logic. It is nevertheless adapted in this diff to reuse the `FeatureAblation` code path, since the logic specific to `enable_cross_tensor_attribution=False` will be going away.
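For context, the masks that occlusion builds internally follow a sliding window over the input. A minimal 1-D sketch of that window generation (hypothetical helper, not the code touched by this diff; the clipping of the final window is one common convention, not necessarily Captum's exact edge handling):

```python
def occlusion_windows(length, window, stride):
    """Yield (start, end) index pairs for a 1-D sliding occlusion window.

    The final window is clipped to the input length, so trailing
    features that a full stride would skip are still covered.
    """
    start = 0
    windows = []
    while start < length:
        windows.append((start, min(start + window, length)))
        start += stride
    return windows

# Windows of size 3, stride 2, over 7 features:
print(occlusion_windows(7, 3, 2))  # [(0, 3), (2, 5), (4, 7), (6, 7)]
```

Each such window defines one group of features to ablate together, which is why occlusion can be expressed on top of the generic `FeatureAblation` masking machinery.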

Differential Revision: D76483214