
huggingface: use AutoModel instead of SentenceTransformer #1250

Closed
wants to merge 4 commits

Conversation

l4b4r4b4b4
Contributor

Switch SentenceTransformer for AutoModel in order to set trust_remote_code, which is needed to use the encode method with embedding models like jina-v2

Description

This PR fixes #

Notes for Reviewers

Signed commits

  • Yes, I signed my commits.

Switch SentenceTransformer for AutoModel in order to set trust_remote_code, which is needed to use the encode method with embedding models like jina-v2

Signed-off-by: Lucas Hänke de Cansino <[email protected]>
@mudler
Owner

mudler commented Nov 7, 2023

the failure looks genuine @l4b4r4b4b4 - is that compatible with sentence transformers? does it require the models to be pulled manually?

Maybe it makes sense to have a separate backend for this instead

@lunamidori5
Collaborator

> the failure looks genuine @l4b4r4b4b4 - is that compatible with sentence transformers? does it require the models to be pulled manually?
>
> Maybe it makes sense to have a separate backend for this instead

We need to make sure that trust_remote_code is settable in the yaml file if we can, @mudler
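For illustration, a model YAML along these lines is what the suggestion would amount to. This is a hypothetical sketch: the `trust_remote_code` key shown here did not exist as a config option at the time of this discussion, and the file/model names are placeholders.

```yaml
# Hypothetical ./models/jina-embeddings.yaml sketch.
# The trust_remote_code key is illustrative only; it is the
# option being requested, not an existing one.
name: jina-embeddings
backend: huggingface-embeddings
embeddings: true
parameters:
  model: jinaai/jina-embeddings-v2-base-en
trust_remote_code: true
```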

@l4b4r4b4b4
Contributor Author

l4b4r4b4b4 commented Nov 7, 2023

> the failure looks genuine @l4b4r4b4b4 - is that compatible with sentence transformers? does it require the models to be pulled manually?
>
> Maybe it makes sense to have a separate backend for this instead

@mudler it was my understanding that it's compatible with transformers, since it's a component of Hugging Face's Transformers library. It could also be implemented as a fallback option for the case where the normal SentenceTransformer does not work because it doesn't expose the trust_remote_code property.
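The fallback idea could be sketched like this. The loader callables are stand-ins for the real `SentenceTransformer(...)` and `AutoModel.from_pretrained(..., trust_remote_code=True)` calls, so the control flow runs without network access; this is an assumption about how the fallback might be wired, not the PR's actual implementation.

```python
# Sketch of the fallback idea: try the SentenceTransformer path first,
# and fall back to an AutoModel-style loader that can pass
# trust_remote_code=True. The loaders here are toy stand-ins.

def load_embedder(model_id, st_loader, automodel_loader):
    """Return an embedder, preferring the SentenceTransformer path."""
    try:
        return st_loader(model_id)
    except Exception:
        # Models shipping custom code (e.g. jina-embeddings-v2) need
        # trust_remote_code, which SentenceTransformer did not expose.
        return automodel_loader(model_id, trust_remote_code=True)


# Toy stand-ins that only exercise the control flow:
def st_loader(model_id):
    if "jina" in model_id:
        raise ValueError("custom code requires trust_remote_code")
    return f"st:{model_id}"


def automodel_loader(model_id, trust_remote_code=False):
    return f"automodel:{model_id}:trc={trust_remote_code}"


print(load_embedder("all-MiniLM-L6-v2", st_loader, automodel_loader))
# -> st:all-MiniLM-L6-v2
print(load_embedder("jinaai/jina-embeddings-v2", st_loader, automodel_loader))
# -> automodel:jinaai/jina-embeddings-v2:trc=True
```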

@l4b4r4b4b4
Contributor Author

> the failure looks genuine @l4b4r4b4b4 - is that compatible with sentence transformers? does it require the models to be pulled manually?
>
> Maybe it makes sense to have a separate backend for this instead

ah and no, you don't have to download anything manually. Simply put the yml in the ./models folder as before and it downloads the model and infers embedding vectors successfully.

So I don't think a separate backend is actually needed 🤷‍♂️

@mudler
Owner

mudler commented Nov 9, 2023

> the failure looks genuine @l4b4r4b4b4 - is that compatible with sentence transformers? does it require the models to be pulled manually?
> Maybe it makes sense to have a separate backend for this instead
>
> ah and no, you don't have to download anything manually. Simply put the yml in the ./models folder as before and it downloads the model and infers embedding vectors successfully.
>
> So I don't think a separate backend is actually needed 🤷‍♂️

I do agree 100% with you here, let's keep to one if possible, but it's weird: the test failed complaining that it could not find the model, maybe it's missing some option then?

@mudler mudler changed the title from "Update huggingface.py" to "huggingface: use AutoModel instead of SentenceTransformer" on Nov 11, 2023
@mudler
Owner

mudler commented Nov 19, 2023

@l4b4r4b4b4 friendly ping, are you looking into the failures, or shall I help here?

@l4b4r4b4b4
Contributor Author

l4b4r4b4b4 commented Nov 19, 2023

> @l4b4r4b4b4 friendly ping, are you looking into the failures, or shall I help here?

@mudler That would be great! In my local installation it compiles successfully and does inference with the jina model, but I'm not sure why it can't do that in the build test. Maybe it calls for GPU inference by default? 🤔

@mudler
Owner

mudler commented Nov 20, 2023

it seems in some cases AutoModel indeed requires quite a few additional steps, see:

https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2#usage-huggingface-transformers

> @l4b4r4b4b4 friendly ping, are you looking into the failures, or shall I help here?
>
> @mudler That would be great! In my local installation it compiles successfully and does inference with the jina model, but I'm not sure why it can't do that in the build test. Maybe it calls for GPU inference by default? 🤔

The tests-linux pipeline runs only on CPU, no GPU is involved here
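The "additional steps" the linked model card describes boil down to attention-mask-aware mean pooling over the token embeddings, which SentenceTransformer performs internally but a raw AutoModel forward pass does not. A pure-Python sketch of that pooling step on toy data (real code would operate on torch tensors from the model output):

```python
# Attention-mask-aware mean pooling: average only the token vectors
# whose mask is 1, so padding tokens do not skew the sentence embedding.
# Toy nested lists stand in for the model's token embedding tensor.

def mean_pool(token_embeddings, attention_mask):
    """Average token vectors, ignoring padding positions (mask == 0)."""
    dim = len(token_embeddings[0])
    sums = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask:
            count += 1
            for i, v in enumerate(vec):
                sums[i] += v
    return [s / max(count, 1) for s in sums]


# Two real tokens plus one padding token (mask 0):
tokens = [[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]]
mask = [1, 1, 0]
print(mean_pool(tokens, mask))  # -> [2.0, 3.0]
```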

@mudler
Owner

mudler commented Nov 20, 2023

the fact is, as per #1126 we will have a transformers backend, and maybe it makes sense to use AutoModel there? We could have the transformers backend use AutoModel for embeddings as well; see also #1015

@mudler
Owner

mudler commented Nov 20, 2023

Closing as #1308 was merged, thanks @l4b4r4b4b4 !

@mudler mudler closed this Nov 20, 2023