Add Remote LLM Support for Perturbation-Based Attribution via RemoteLLMAttribution and VLLMProvider #1544
Conversation
… hosted models that provide logprobs (like vLLM)
Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. Once the CLA is signed, our tooling will perform checks and validations, and the pull request will be tagged accordingly.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

Thank you @saichandrapandraju for the great effort! Generally, I agree this idea makes a lot of sense. @craymichael can you take a look at it, since you have studied the integration with llama-stack before?

Thank you for the positive feedback @aobo-y! Happy to hear the direction makes sense. Please take your time reviewing — I'll be around to clarify or iterate on anything as needed. Looking forward to it!

Is there any update on this? Seems like a great idea!

Hi, sorry for the delay on this! I actually have on my todos to dive into this PR next week.
I just wrapped up review of the diff - it's really in great shape, thank you for contributing this! Moreover, the remote provider abstraction is generic enough to support other frameworks, including Llama Stack (which includes support for log prob configs and prompt_logprobs depending on provider). I left some comments and questions (and some thinking out loud...). Let's discuss/iterate and we can land this contribution soon!
tests/attr/test_llm_attr.py (Outdated)

attr_kws["n_samples"] = n_samples

# In remote mode, we don't need the actual model, this is just a placeholder
placeholder_model = torch.nn.Module()
For a test this is fine. In release notes/an example notebook we can instead use a simpler placeholder like lambda *_: 0, since the forward function has signature Callable[..., Union[int, float, Tensor, Future[Tensor]]]. I'm thinking as well that for convenience we can have the above lambda available as a class attribute of RemoteLLMAttribution with the name placeholder_model, so the syntax looks like: AttrClass(RemoteLLMAttribution.placeholder_model)
As part of these changes, we should update (or create a counterpart to) the LLM attribution tutorial and update the docs. I'm happy to do this once this feature lands as well.
> For a test this is fine. In release notes/an example notebook we can instead use a simpler placeholder like lambda *_: 0, since the forward function has signature Callable[..., Union[int, float, Tensor, Future[Tensor]]]. I'm thinking as well that for convenience we can have the above lambda available as a class attribute of RemoteLLMAttribution with the name placeholder_model, so the syntax looks like: AttrClass(RemoteLLMAttribution.placeholder_model)
Added a placeholder_model as a class attribute, but not the lambda, because BaseLLMAttribution expects device from the model. So I created a simple class (_PlaceholderModel, which has a device) whose instances return 0 when called (similar to the lambda).
captum/attr/_core/remote_provider.py (Outdated)

from abc import ABC, abstractmethod
from typing import Any, List, Optional
from captum._utils.typing import TokenizerLike
from openai import OpenAI
Since this is being imported into attr.py, let's move this import into the VLLMProvider init, wrapped in a try-except telling the user to install the openai package if it isn't already installed. We'll keep it as an optional dependency, as you have it in setup.py.
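For reference, a minimal sketch of that lazy-import pattern (illustrative only, not the exact code in this PR; the RemoteLLMProvider base class and the rest of the provider logic are omitted):

import os


class VLLMProvider:  # base class omitted for brevity
    def __init__(self, api_url: str) -> None:
        try:
            # Imported lazily so openai remains an optional dependency.
            from openai import OpenAI
        except ImportError as e:
            raise ImportError(
                "VLLMProvider requires the openai package. "
                "Please install it, e.g. pip install openai."
            ) from e
        self.api_url = api_url
        self.client = OpenAI(
            base_url=self.api_url,
            api_key=os.getenv("OPENAI_API_KEY", "EMPTY"),
        )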
Thank you for pointing this out. Fixed this.
captum/attr/_core/remote_provider.py (Outdated)

"""
# Parameter normalization
if 'max_tokens' not in gen_args:
    gen_args['max_tokens'] = gen_args.pop('max_new_tokens', 25)
nit: add default to docstring
Done
captum/attr/_core/remote_provider.py (Outdated)

try:
    self.client = OpenAI(base_url=self.api_url,
                         api_key=os.getenv("OPENAI_API_KEY", "EMPTY")
nit: add env var to docstring
Looks like we don't need to handle this logic: https://github.com/openai/openai-python/blob/main/src/openai/_client.py#L114
Added env var to docstring. Instead of forcing the user to set this env var (which OpenAI expects here), I'm setting a dummy string.
captum/attr/_core/remote_provider.py (Outdated)

prompt=prompt,
temperature=0.0,
max_tokens=1,
extra_body={"prompt_logprobs": 0}
Should prompt_logprobs be 1 here? I thought 0 would result in an exception, as it needs to be > 1.
Initially I thought the same, but it turns out prompt_logprobs (similar to logprobs) is non-negative (here).

Once prompt_logprobs (same for logprobs) is set to a non-negative value, say k, the response will have up to k+1 logprobs per token, because the sampled token itself is always included. Explained here.

In our case, if we set prompt_logprobs to 0, we get the logprob of every token in the input string and up to 0 additional logprobs. If it is 1, we get logprobs for all input tokens plus up to 1 additional logprob, which makes it up to 2 logprobs for each token.
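As a concrete illustration (with hypothetical token IDs and values), a single prompt position with prompt_logprobs=1 could carry up to two entries: the prompt token itself plus the top-ranked alternative:

# Hypothetical entry for one prompt position when prompt_logprobs=1
{
    "4587": {"logprob": -2.31, "rank": 3, "decoded_token": " palm"},  # actual prompt token
    "1169": {"logprob": -0.87, "rank": 1, "decoded_token": " the"},   # top-1 alternative
}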
Ah, interesting! Thanks for investigating that, that's not super intuitive but it makes sense.
captum/attr/_core/remote_provider.py (Outdated)

for probs in response.choices[0].prompt_logprobs[1:]:
    if not probs:
        raise ValueError("Empty probability data in API response")
    prompt_logprobs.append(list(probs.values())[0]['logprob'])
If I understand correctly, prompt_logprobs is a list[dict[str, dict[str, float]]] where each element corresponds to a prompt token, with contents being a map from each of the k prompt_logprobs candidate tokens to that token's generation data, including the logprob. If so, maybe we should assert that the length of probs is always 1?

Also, iiuc, I can nitpick:
- We can iterate over only the final num_target_str_tokens
- Can replace list(probs.values())[0] with next(iter(probs.values()))
> If I understand correctly, prompt_logprobs is a list[dict[str, dict[str, float]]] where each element corresponds to a prompt token, with contents being a map from each of the k prompt_logprobs candidate tokens to that token's generation data, including the logprob. If so, maybe we should assert that the length of probs is always 1?
Here's a sample schema of the prompt_logprobs response for 2 tokens in the prompt:
[
    {
        "<token_id_as_str>": {
            "logprob": <float_value>,
            "rank": <int_value>,
            "decoded_token": "<str_token>"
        }
    },
    {
        "<token_id_as_str>": {
            "logprob": <float_value>,
            "rank": <int_value>,
            "decoded_token": "<str_token>"
        }
    }
]
Since we're setting prompt_logprobs = 0, we can assert that the length of each token's logprob map is 1. Added this.
> Also, iiuc, I can nitpick:
> - We can iterate over only the final num_target_str_tokens
> - Can replace list(probs.values())[0] with next(iter(probs.values()))
Modified to include these.
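For reference, a minimal sketch of the parsing after those changes (a hypothetical helper; it assumes the vLLM response shape shown above and a num_target_str_tokens computed from the tokenizer):

def target_logprobs(response, num_target_str_tokens: int) -> list:
    """Collect logprobs for the final num_target_str_tokens prompt positions."""
    logprobs = []
    for probs in response.choices[0].prompt_logprobs[-num_target_str_tokens:]:
        if not probs:
            raise ValueError("Empty probability data in API response")
        # With prompt_logprobs=0, each position holds exactly one entry:
        # the logprob of the prompt token itself.
        assert len(probs) == 1
        logprobs.append(next(iter(probs.values()))["logprob"])
    return logprobs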
Thanks for the clarification!
captum/attr/_core/remote_provider.py (Outdated)

if not target_str:
    raise ValueError("Target string cannot be empty")

num_target_str_tokens = len(tokenizer.encode(target_str, add_special_tokens=False))
I think that each response contains token-level information. It's possible that, when we generate a response because none is provided, we might not need to require a tokenizer at all, which would minimize tokenizer-model mismatch. However, I think we can just make a note of this and keep it as future work.
I'll make a note and address this later, thank you!
Thank you for the comments @craymichael. I'll work on these and get back to you soon!
Btw, I tried using this with perturbations_per_eval > 1 plus the placeholder model and ran into an error. It would be helpful to include this in the tests.

This is also an issue in the other LLM attribution implementations outside of the remote LLM attribution. The tutorial notebook (Llama2_LLM_Attribution.ipynb) also breaks with it.
Hi @craymichael, made the required changes and re-requested the review. Thank you!
Thanks for making the changes! This is in great shape and I don't have anything to add right now. I'm going to kick off some CI tests before we can go about merging.

Ok, we're almost good. Looks like the following small items need to be addressed:

Once those are fixed I think we're good to go!
@craymichael has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
We're so close - one more annoyance. Internally, we use the pyre type checker on top of mypy. We're currently running into these last cleanup items:

I think these changes need to be made:

class _PlaceholderModel:
    """
    ...
    """

    def __init__(self) -> None:
        self.device: str | torch.device = torch.device("cpu")

    def __call__(self, *args: Any, **kwargs: Any) -> int:
        return 0
Oh, my bad. Let me fix right away! (Can we add pyre checks to Captum CI..?)

Honestly that's a great point, haha. I will investigate adding that.

@craymichael has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

What is that 'one last thing' 😅

Hang on, it is currently a false positive. Still waiting for internal tests to finish. Fingers crossed!

@craymichael merged this pull request in 9c261c0.

@saichandrapandraju Thanks for your great contribution! Changes have been merged. I am also adding pyre to the CI this week so we can reduce more of this back and forth in the future.

Yay 🎉! Thank you for the help @craymichael! We (TrustyAI) are a group of applied engineers/researchers who believe every ML model prediction should be interpretable to ensure trust. As LLMs are increasingly served remotely, this PR offers a solid starting point for enabling explanations for remote models (currently via vLLM for blackbox methods). We appreciate your continued support. Let me know if you want any help updating the docs for this 😊

Thank you for initiating this @aobo-y! Thanks for your interest @emaadmanzoor & @cinjon!
This PR introduces support for applying Captum's perturbation-based attribution algorithms to remotely hosted large language models (LLMs). It enables users to perform interpretability analyses on models served via APIs, such as those using vLLM, without requiring access to model internals.
Motivation:
Captum’s current LLM attribution framework requires access to local models, limiting its usability in production and hosted environments. With the rise of scalable remote inference backends and OpenAI-compatible APIs, this PR allows Captum to be used for black-box interpretability with hosted models, as long as they return token-level log probabilities.
This integration also aligns with ongoing efforts like llama-stack, which aims to provide a unified API layer for inference (and also for RAG, Agents, Tools, Safety, Evals, and Telemetry) across multiple backends—further expanding Captum’s reach for model explainability.
Key Additions:

- RemoteLLMProvider interface: A generic interface for fetching log probabilities from remote LLMs, making it easy to plug in various inference backends.
- VLLMProvider implementation: A concrete subclass of RemoteLLMProvider tailored for models served using vLLM, handling the specifics of communicating with vLLM endpoints to retrieve the data necessary for attribution.
- RemoteLLMAttribution class: A subclass of LLMAttribution that overrides internal methods to work with remote providers. It enables all perturbation-based algorithms (e.g., Feature Ablation, Shapley Values, KernelSHAP) using only the output logprobs from a remote LLM.

Used the openai client under the hood for querying remote models, as many LLM serving solutions now support the OpenAI-compatible API format (e.g., the vLLM OpenAI server and projects like llama-stack; see here for ongoing work related to this). A minimal usage sketch is included at the end of this description.

Issue(s) related to this:
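For illustration only, a rough end-to-end usage sketch (class names follow this PR's description, but the import paths, constructor signature of RemoteLLMAttribution, endpoint URL, and tokenizer choice are assumptions and may differ from the merged code):

from transformers import AutoTokenizer
from captum.attr import FeatureAblation, TextTokenInput
# Assumed export locations for the classes added in this PR:
from captum.attr import RemoteLLMAttribution, VLLMProvider

# Use the tokenizer of the model actually served by the remote vLLM instance.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Hypothetical vLLM OpenAI-compatible endpoint.
provider = VLLMProvider(api_url="http://localhost:8000/v1")

# No local model is needed: the placeholder only supplies a device attribute,
# since every forward pass is executed by the remote server.
fa = FeatureAblation(RemoteLLMAttribution.placeholder_model)
llm_attr = RemoteLLMAttribution(fa, tokenizer, provider)

inp = TextTokenInput("Palm trees grow best in", tokenizer)
attr_result = llm_attr.attribute(inp, target="tropical climates")
attr_result.plot_token_attr()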