docs: fix semantic similarity description (cross-encoder -> bi-encoder) #1910
This PR updates the documentation to correctly describe the semantic similarity metric.
Issue
The documentation previously stated that a cross-encoder was used for computing the semantic similarity score. However, after reviewing the implementation, it is clear that the current approach follows a bi-encoder strategy:
A cross-encoder would typically process both texts together in a single forward pass (e.g., concatenating them before encoding), which is not the case in the current implementation.
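For reference, a cross-encoder call typically looks like the sketch below: the pair is scored in a single forward pass. The library and model name are chosen purely for illustration and are not part of this project.

```python
from sentence_transformers import CrossEncoder

# A cross-encoder receives both texts together and returns a single
# similarity score from one forward pass over the concatenated pair.
model = CrossEncoder("cross-encoder/stsb-roberta-base")

ground_truth = "The capital of France is Paris."
response = "Paris is France's capital city."

# The pair is passed as one input; no separate embeddings are produced.
score = model.predict([(ground_truth, response)])[0]
print(score)
```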
Current Implementation
For example, the current implementation follows this pattern:
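A minimal sketch of that pattern (illustrative only; the embedding model and variable names are placeholders, not the project's actual code):

```python
from sentence_transformers import SentenceTransformer, util

# A bi-encoder embeds each text independently.
model = SentenceTransformer("all-MiniLM-L6-v2")

ground_truth = "The capital of France is Paris."
response = "Paris is France's capital city."

# Two separate forward passes: one per text.
gt_embedding = model.encode(ground_truth)
resp_embedding = model.encode(response)

# Similarity is computed afterwards on the embeddings, not inside the model.
score = util.cos_sim(gt_embedding, resp_embedding).item()
print(score)
```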
In the actual implementation, the ground truth and response are likewise encoded separately, and their similarity is computed with cosine similarity, which is characteristic of a bi-encoder approach.
Fix
The term "cross-encoder" has been corrected to "bi-encoder" in the documentation to ensure consistency with the actual implementation.