
Add Remote LLM Support for Perturbation-Based Attribution via RemoteLLMAttribution and VLLMProvider #1544


Closed

Conversation

@saichandrapandraju
Contributor

saichandrapandraju commented Apr 15, 2025

This PR introduces support for applying Captum's perturbation-based attribution algorithms to remotely hosted large language models (LLMs). It enables users to perform interpretability analyses on models served via APIs, such as those using vLLM, without requiring access to model internals.

Motivation:

Captum’s current LLM attribution framework requires access to local models, limiting its usability in production and hosted environments. With the rise of scalable remote inference backends and OpenAI-compatible APIs, this PR allows Captum to be used for black-box interpretability with hosted models, as long as they return token-level log probabilities.

This integration also aligns with ongoing efforts like llama-stack, which aims to provide a unified API layer for inference (and also for RAG, Agents, Tools, Safety, Evals, and Telemetry) across multiple backends—further expanding Captum’s reach for model explainability.

Key Additions:

  • RemoteLLMProvider Interface:
    A generic interface for fetching log probabilities from remote LLMs, making it easy to plug in various inference backends.
  • VLLMProvider Implementation:
    A concrete subclass of RemoteLLMProvider tailored for models served using vLLM, handling the specifics of communicating with vLLM endpoints to retrieve necessary data for attribution.
  • RemoteLLMAttribution class:
    A subclass of LLMAttribution that overrides internal methods to work with remote providers. It enables all perturbation-based algorithms (e.g., Feature Ablation, Shapley Values, KernelSHAP) using only the output logprobs from a remote LLM.
  • OpenAI-Compatible API Support:
    Uses the openai client under the hood to query remote models, as many LLM serving solutions now support the OpenAI-compatible API format (e.g., the vLLM OpenAI server and projects like llama-stack; see here for ongoing work related to this). A minimal usage sketch follows this list.
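For illustration, here is a minimal end-to-end usage sketch. The import locations, the VLLMProvider and RemoteLLMAttribution constructor arguments, and the model/tokenizer names below are assumptions for this sketch; the merged API may differ in detail.

from transformers import AutoTokenizer

from captum.attr import FeatureAblation, TextTokenInput  # existing Captum APIs
from captum.attr import RemoteLLMAttribution, VLLMProvider  # added by this PR (assumed import path)

# Point the provider at an OpenAI-compatible endpoint, e.g. a local vLLM server.
provider = VLLMProvider(api_url="http://localhost:8000/v1")

# The tokenizer must match the remotely served model.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

# No local model is needed; the placeholder stands in for the forward function.
attr_method = FeatureAblation(RemoteLLMAttribution.placeholder_model)
llm_attr = RemoteLLMAttribution(attr_method, tokenizer, provider)

inp = TextTokenInput("The capital of France is", tokenizer)
result = llm_attr.attribute(inp, target="Paris")
print(result.seq_attr)  # per-input-token attribution scores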

Issue(s) related to this:

… hosted models that provide logprobs (like vLLM)
@facebook-github-bot
Contributor

Hi @saichandrapandraju!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (eg your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

@facebook-github-bot
Contributor

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@saichandrapandraju
Contributor Author

Hi @vivekmig @yucu @aobo-y, could you please review this PR? Let me know if I need to change anything.

@aobo-y
Contributor

aobo-y commented Apr 18, 2025

Thank you @saichandrapandraju for the great effort! Generally, I agree this idea makes a lot of sense.
But our team may need some time to look into the code changes and get back to you.

@craymichael, can you take a look at it, since you have studied the integration with llama-stack before?

@saichandrapandraju
Contributor Author

saichandrapandraju commented Apr 18, 2025

Thank you for the positive feedback, @aobo-y! Happy to hear the direction makes sense. Please take your time reviewing; I'll be around to clarify or iterate on anything as needed. Looking forward to it!

@emaadmanzoor

Is there any update on this? Seems like a great idea!

@craymichael
Contributor

Hi, sorry for the delay on this! I actually have it on my to-do list to dive into this PR next week.

Contributor

@craymichael left a comment

I just wrapped up my review of the diff; it's really in great shape, thank you for contributing this! Moreover, the remote provider abstraction is generic enough to support other frameworks, including Llama Stack (which supports logprob configs and prompt_logprobs, depending on the provider). I left some comments and questions (and some thinking out loud...). Let's discuss and iterate, and we can land this contribution soon!

attr_kws["n_samples"] = n_samples

# In remote mode, we don't need the actual model, this is just a placeholder
placeholder_model = torch.nn.Module()
Contributor

For a test this is fine. In release notes/an example notebook we can instead use a simpler placeholder like

lambda *_: 0

since the forward function has signature Callable[..., Union[int, float, Tensor, Future[Tensor]]]. I'm thinking as well that for convenience we can have the above lambda available as a class attribute of RemoteLLMAttribution with name placeholder_model so the syntax looks like:

AttrClass(RemoteLLMAttribution.placeholder_model)

Contributor

As part of these changes, we should update the LLM attribution tutorial (or create a counterpart to it) and update the docs. I'm happy to do this once this feature lands as well.

Contributor Author

For a test this is fine. In release notes/an example notebook we can instead use a simpler placeholder like

lambda *_: 0

since the forward function has signature Callable[..., Union[int, float, Tensor, Future[Tensor]]]. I'm thinking as well that for convenience we can have the above lambda available as a class attribute of RemoteLLMAttribution with name placeholder_model so the syntax looks like:

AttrClass(RemoteLLMAttribution.placeholder_model)

Added placeholder_model as a class attribute, but not as the lambda, because BaseLLMAttribution expects a device from the model. So I created a simple class (_PlaceholderModel, with a device attribute) whose instances return 0 when called (similar to the lambda).
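A minimal sketch of that placeholder idea, assuming LLMAttribution only reads a device attribute off the model (the exact merged definition, with full type annotations, appears later in this thread):

import torch


class _PlaceholderModel:
    """Stands in for a local model when attribution is computed via a remote provider."""

    def __init__(self):
        # BaseLLMAttribution reads a device from the model; CPU is a safe default.
        self.device = torch.device("cpu")

    def __call__(self, *args, **kwargs):
        # Behaves like `lambda *_: 0`; the value is unused because the remote
        # provider supplies the log probabilities.
        return 0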

from abc import ABC, abstractmethod
from typing import Any, List, Optional
from captum._utils.typing import TokenizerLike
from openai import OpenAI
Contributor

Since this is being imported into attr.py, let's move this import into the VLLMProvider init, wrapped in a try-except that tells the user to install the openai package if it isn't already installed. We'll keep it as an optional dependency, as you have it in setup.py.

Contributor Author

Thank you for pointing this out. Fixed this.
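A sketch of the deferred-import pattern discussed above; aside from the try/except around the openai import, the details (class body, error message) are illustrative rather than the merged code:

import os


class VLLMProvider:  # in the PR this subclasses RemoteLLMProvider
    def __init__(self, api_url: str) -> None:
        try:
            # Import here rather than at module level so `openai` stays an
            # optional dependency and attr.py can be imported without it.
            from openai import OpenAI
        except ImportError as e:
            raise ImportError(
                "VLLMProvider requires the `openai` package. "
                "Install it with `pip install openai`."
            ) from e

        self.api_url = api_url
        self.client = OpenAI(
            base_url=self.api_url,
            api_key=os.getenv("OPENAI_API_KEY", "EMPTY"),  # dummy key if unset
        )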

"""
# Parameter normalization
if 'max_tokens' not in gen_args:
gen_args['max_tokens'] = gen_args.pop('max_new_tokens', 25)
Contributor

nit: add default to docstring

Contributor Author

Done


try:
    self.client = OpenAI(base_url=self.api_url,
                         api_key=os.getenv("OPENAI_API_KEY", "EMPTY")
Contributor

nit: add the env var to the docstring.
Also, it looks like we don't need to handle this logic ourselves:
https://github.com/openai/openai-python/blob/main/src/openai/_client.py#L114

Contributor Author

Added the env var to the docstring. Instead of forcing the user to set this env var (which the OpenAI client expects here), I'm setting a dummy string.

prompt=prompt,
temperature=0.0,
max_tokens=1,
extra_body={"prompt_logprobs": 0}
Contributor

Should prompt_logprobs be 1 here? I thought 0 would result in an exception, as it needs to be at least 1.

Contributor Author

Initially I thought the same, but it turns out prompt_logprobs (similar to logprobs) only needs to be non-negative (here).

Once prompt_logprobs (same for logprobs) is set to a non-negative value, say k, the response will have up to k+1 logprobs per token, because the sampled token is always included. Explained here.

In our case, if we set prompt_logprobs to 0, we get the logprobs of all the tokens in the input string and up to 0 additional logprobs.
If it is 1, we get logprobs for all input tokens and up to 1 additional logprob, which makes it up to 2 logprobs for each token.

Contributor

Ah, interesting! Thanks for investigating that, that's not super intuitive but it makes sense.

for probs in response.choices[0].prompt_logprobs[1:]:
    if not probs:
        raise ValueError("Empty probability data in API response")
    prompt_logprobs.append(list(probs.values())[0]['logprob'])
Contributor

If I understand correctly, prompt_logprobs is a list[dict[str, dict[str, float]]] where each element corresponds to a prompt token, and its contents map each of the k prompt_logprobs candidate tokens to that token's generation data, including the logprob. If so, maybe we should assert that the length of probs is always 1?

Also, iiuc, I can nitpick:

  • We can iterate over the final num_target_str_tokens only
  • Can replace list(probs.values())[0] with next(iter(probs.values()))

Contributor Author

If I understand correctly, prompt_logprobs is a list[dict[str, dict[str, float]]] where each element corresponds to a prompt token, and its contents map each of the k prompt_logprobs candidate tokens to that token's generation data, including the logprob. If so, maybe we should assert that the length of probs is always 1?

Here's a sample schema of the prompt_logprobs response for 2 tokens in the prompt:

[
    {
        "<token_id_as_str>": {
            "logprob": <float_value>,
            "rank": <int_value>,
            "decoded_token": "<str_token>"
        }
    },
    {
        "<token_id_as_str>": {
            "logprob": <float_value>,
            "rank": <int_value>,
            "decoded_token": "<str_token>"
        }
    }
]

Since we're setting prompt_logprobs = 0, we can assert that the length of each token's logprob map is 1. Added this.

Also iiuc, I can nitpick:

  • We can iterate over the final num_target_str_tokens only
  • Can replace list(probs.values())[0] with next(iter(probs.values()))

Modified to include these.
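A sketch of the revised extraction reflecting those suggestions; response, num_target_str_tokens, and prompt_logprobs come from the excerpt above, and the merged code may differ slightly:

prompt_logprobs = []
# Only the last num_target_str_tokens entries correspond to the target string.
for probs in response.choices[0].prompt_logprobs[-num_target_str_tokens:]:
    if not probs:
        raise ValueError("Empty probability data in API response")
    # With prompt_logprobs=0, each entry should hold exactly one token's data.
    assert len(probs) == 1, "Expected exactly one logprob entry per prompt token"
    token_info = next(iter(probs.values()))
    prompt_logprobs.append(token_info["logprob"])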

Contributor

Thanks for the clarification!

if not target_str:
    raise ValueError("Target string cannot be empty")

num_target_str_tokens = len(tokenizer.encode(target_str, add_special_tokens=False))
Contributor

I think each response contains token-level information, so it's possible that, when we generate the response ourselves (if none is provided), we might not need to require a tokenizer, which would minimize tokenizer-model mismatch. However, I think we can just make a note of this and keep it as future work.

Contributor Author

I'll make a note and address this later, thank you!

@saichandrapandraju
Contributor Author

saichandrapandraju commented May 9, 2025

Thank you for the comments @craymichael . I'll work on these and get back to you soon!

@cinjon

cinjon commented May 11, 2025

Btw, I tried using this with perturbations_per_eval > 1 plus the placeholder model and ran into an error. It would be helpful to include this case in the tests.

AssertionError: When perturbations_per_eval > 1, forward_func's output should be a tensor whose 1st dim grow with the input batch size: when input batch size is 1, the output shape is torch.Size([1, 70]); when input batch size is 8, the output shape is torch.Size([1, 553])

@craymichael
Contributor

craymichael commented May 12, 2025

Btw, I tried using this with perturbations_per_eval > 1 plus the placeholder model and ran into an error. It would be helpful to include this case in the tests.

This is also an issue in the other LLM attribution implementations outside of the remote LLM attribution. The tutorial notebook (Llama2_LLM_Attribution.ipynb) also breaks with perturbations_per_eval > 1. @cinjon do you mind opening a separate issue for this? We can deal with the underlying issue in a separate thread.

@saichandrapandraju
Contributor Author

Hi @craymichael, I made the required changes and re-requested review. Thank you!

Contributor

@craymichael left a comment

Thanks for making the changes! This is in great shape, and I don't have anything to add right now. I'm going to kick off some CI tests before we go about merging.

@craymichael
Contributor

Ok we're almost good. Looks like the following small items need to be addressed:

Once those are fixed I think we're good to go!

@facebook-github-bot
Contributor

@craymichael has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@craymichael
Contributor

We're so close - one more annoyance. Internally, we use the pyre type checker on top of mypy. We're currently running into these last cleanup items:

pytorch/captum/captum/attr/_core/llm_attr.py:906:4 Missing return annotation [3]: Returning `None` but no return type is specified.
pytorch/captum/captum/attr/_core/llm_attr.py:909:4 Missing return annotation [3]: Returning `int` but no return type is specified.
pytorch/captum/captum/attr/_core/llm_attr.py:909:24 Missing parameter annotation [2]: Parameter `*args` has no type specified.
pytorch/captum/captum/attr/_core/llm_attr.py:909:32 Missing parameter annotation [2]: Parameter `**kwargs` has no type specified.
pytorch/captum/tests/attr/test_llm_attr.py:1141:8 Incompatible attribute type [8]: Attribute `device` declared in class `captum.attr._core.llm_attr._PlaceholderModel` has type `device` but is used as type `str`.
pytorch/captum/tests/attr/test_llm_attr.py:1190:8 Incompatible attribute type [8]: Attribute `device` declared in class `captum.attr._core.llm_attr._PlaceholderModel` has type `device` but is used as type `str`.
pytorch/captum/tests/attr/test_llm_attr.py:1279:8 Incompatible attribute type [8]: Attribute `device` declared in class `captum.attr._core.llm_attr._PlaceholderModel` has type `device` but is used as type `str`.

I think these changes need to be made:

class _PlaceholderModel:
    """
    ...
    """

    def __init__(self) -> None:
        self.device: str | torch.device = torch.device("cpu")

    def __call__(self, *args: Any, **kwargs: Any) -> int:
        return 0

@saichandrapandraju
Contributor Author

Oh, my bad. Let me fix that right away! (Can we add pyre checks to the Captum CI?)

@craymichael
Contributor

Honestly that's a great point, haha. I will investigate adding that.

@facebook-github-bot
Contributor

@craymichael has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@saichandrapandraju
Contributor Author

What is that 'one last thing'? 😅

@craymichael
Contributor

Hang on, it is currently a false positive. Still waiting for internal tests to finish. Fingers crossed!

@facebook-github-bot
Contributor

@craymichael merged this pull request in 9c261c0.

@craymichael
Contributor

@saichandrapandraju Thanks for your great contribution! The changes have been merged. I am also adding pyre to the CI this week so we can reduce this back and forth in the future.

@saichandrapandraju
Contributor Author

saichandrapandraju commented May 21, 2025

Yay 🎉!

Thank you for the help, @craymichael! We (TrustyAI) are a group of applied engineers/researchers who believe every ML model prediction should be interpretable to ensure trust. As LLMs are increasingly served remotely, this PR offers a solid starting point for enabling explanations for remote models (currently via vLLM, for black-box methods). We appreciate your continued support. Let me know if you'd like any help updating the docs for this 😊

Thank you for initiating this @aobo-y !

Thanks for your interest @emaadmanzoor & @cinjon !
