[Bug]: During local embedding, RAGFlow sends too much text at once, exceeding the embedding model's maximum token limit, so the model cannot read the full input. #4683
Labels
bug
Is there an existing issue for the same bug?
RAGFlow workspace code commit ID
main
RAGFlow image version
v0.15.1, nightly
Other environment information
Actual behavior
When using embedding models with a lower maximum-input-token capacity, such as bge-large and conan-embedding-v1 (both limited to 512 input tokens), RAGFlow sends more than 512 tokens in a single request, and Ollama raises an error.
I've found the cause of the error here: https://github.com/ollama/ollama/issues/7288#issuecomment-2591709109
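For reference, a minimal reproduction sketch against a local Ollama instance (the model tag `bge-large`, the default port, and the filler text are assumptions; depending on the Ollama version the request either errors out or silently truncates):

```python
import requests

# Build an input well past bge-large's 512-token limit (illustrative only).
long_text = "lorem ipsum " * 1000

# Ollama's embedding endpoint. With the model's context length below the
# input size, the request fails instead of embedding the full text.
resp = requests.post(
    "http://localhost:11434/api/embed",
    json={"model": "bge-large", "input": long_text},
)
print(resp.status_code, resp.text)
```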
Although I can adjust the model's maximum input limit in Ollama, RAGFlow's text then gets truncated, resulting in incomplete embeddings. Additionally, I am unable to locate a setting within RAGFlow to control the maximum input sent to the embedding model.
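As a client-side workaround until such a setting exists, the text could be split into chunks that respect the model's input limit before embedding. A sketch, not RAGFlow code; the whitespace split is a crude stand-in for the model's real tokenizer, and the 512-token budget is taken from bge-large:

```python
def split_by_token_budget(text: str, max_tokens: int = 512) -> list[str]:
    # Whitespace split approximates tokenization; in practice the actual
    # tokenizer for the embedding model should be used to count tokens.
    words = text.split()
    return [
        " ".join(words[i : i + max_tokens])
        for i in range(0, len(words), max_tokens)
    ]

# Example: a long document becomes several requests, each within the limit.
document = "lorem ipsum " * 1000
chunks = split_by_token_budget(document, max_tokens=512)
print(len(chunks), "chunks, longest:", max(len(c.split()) for c in chunks))
```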
When adding a model, the max token setting controls the maximum output rather than the input, so it does not apply to embedding models.
The same issue of an ineffective max token option also exists when adding reranker models.
Expected behavior
Please add a setting to RAGFlow that controls the maximum number of tokens sent to the embedding model per request, and also fix the bug where the max token limit is ineffective when adding reranker models.
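For illustration, a per-request cap might be applied like this (a sketch of the requested behavior, not an existing RAGFlow option: `EMBED_MAX_INPUT_TOKENS`, the model tag, and the endpoint are assumptions, and it reuses `split_by_token_budget` from the sketch above):

```python
import requests

EMBED_MAX_INPUT_TOKENS = 512  # the configurable cap this issue asks for

def embed_document(text: str) -> list[list[float]]:
    # One request per bounded chunk keeps every call within the model's
    # input limit, so nothing is truncated or rejected.
    vectors: list[list[float]] = []
    for chunk in split_by_token_budget(text, EMBED_MAX_INPUT_TOKENS):
        resp = requests.post(
            "http://localhost:11434/api/embed",
            json={"model": "bge-large", "input": chunk},
        )
        resp.raise_for_status()
        vectors.extend(resp.json()["embeddings"])
    return vectors
```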
Steps to reproduce
Additional information
No response