
[ Request ] Multimodal model generation #132

Closed
WoutDeRijck opened this issue Aug 16, 2024 · 4 comments · May be fixed by huggingface/transformers#36489

Comments

@WoutDeRijck

Currently it is only possible to pass text to an enforced model. Some tasks require multimodal models, for example image processing. It would be awesome if it were possible to enforce a multimodal model to follow a particular JSON schema.

@ayylemao

ayylemao commented Aug 22, 2024

If you use Hugging Face's transformers library it is possible. Just pass the prefix function to the generate method and it will work:

# prefix_func is a prefix_allowed_tokens_fn built with lm-format-enforcer
generated_ids = model.generate(**inputs,
                               max_new_tokens=256,
                               do_sample=False,
                               prefix_allowed_tokens_fn=prefix_func)
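
For reference, here is a minimal sketch of how prefix_func can be built with lm-format-enforcer before calling generate. The AnswerFormat schema is only an illustrative placeholder; substitute your own Pydantic model and tokenizer:

from pydantic import BaseModel
from lmformatenforcer import JsonSchemaParser
from lmformatenforcer.integrations.transformers import build_transformers_prefix_allowed_tokens_fn

class AnswerFormat(BaseModel):  # hypothetical example schema
    answer: str

# tokenizer is the same tokenizer used by the model you call generate() on
parser = JsonSchemaParser(AnswerFormat.schema())
prefix_func = build_transformers_prefix_allowed_tokens_fn(tokenizer, parser)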

@WoutDeRijck
Author

I am currently working with MiniCPM-V-2.6:

import torch
from transformers import AutoModel, AutoTokenizer
from lmformatenforcer import JsonSchemaParser
from lmformatenforcer.integrations.transformers import build_transformers_prefix_allowed_tokens_fn

model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True,
    attn_implementation='flash_attention_2', torch_dtype=torch.bfloat16)  # sdpa or flash_attention_2, no eager
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True)

# InvoiceFormat is a Pydantic model defining the target JSON schema
parser = JsonSchemaParser(InvoiceFormat.schema())
prefix_function = build_transformers_prefix_allowed_tokens_fn(tokenizer, parser)

When I pass this prefix function to model.generate, I get errors like this:

File /opt/conda/lib/python3.10/site-packages/transformers/generation/logits_process.py:1337, in PrefixConstrainedLogitsProcessor.__call__(self, input_ids, scores)
   1334 @add_start_docstrings(LOGITS_PROCESSOR_INPUTS_DOCSTRING)
   1335 def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
   1336     mask = torch.full_like(scores, -math.inf)
-> 1337     for batch_id, beam_sent in enumerate(input_ids.view(-1, self._num_beams, input_ids.shape[-1])):
   1338         for beam_id, sent in enumerate(beam_sent):
   1339             prefix_allowed_tokens = self._prefix_allowed_tokens_fn(batch_id, sent)

RuntimeError: cannot reshape tensor of 0 elements into shape [-1, 1, 0] because the unspecified dimension size -1 can be any value and is ambiguous

@WoutDeRijck
Author

Here is a small notebook I made to show where it goes wrong.

lmformatenforcer.ipynb.zip

@noamgat
Owner

noamgat commented Oct 16, 2024

There is a new sample that shows multimodal extraction working:
https://github.com/noamgat/lm-format-enforcer/blob/main/samples/colab_llama32_vision_enforcer.ipynb
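
For readers who just want the shape of the solution, the pattern is the standard transformers vision flow with the prefix function attached to generate(). Below is a minimal sketch only, assuming a Llama-3.2-Vision checkpoint and an illustrative ReceiptFormat schema (names are placeholders; see the linked notebook for the working version):

import torch
from pydantic import BaseModel
from transformers import AutoProcessor, MllamaForConditionalGeneration
from lmformatenforcer import JsonSchemaParser
from lmformatenforcer.integrations.transformers import build_transformers_prefix_allowed_tokens_fn

class ReceiptFormat(BaseModel):  # placeholder schema for illustration
    vendor: str
    total: float

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

# Build the JSON-schema-constrained prefix function from the processor's tokenizer
prefix_fn = build_transformers_prefix_allowed_tokens_fn(processor.tokenizer, JsonSchemaParser(ReceiptFormat.schema()))

# `image` is a PIL.Image, `prompt` is a chat-template-formatted string containing the image token
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, prefix_allowed_tokens_fn=prefix_fn)
print(processor.decode(output_ids[0], skip_special_tokens=True))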
