[✓] I checked the documentation and related resources and couldn't find an answer to my question.
Your Question
I wanted to get some clarification on how Context Precision is calculated. I thought it compares the retrieved context chunks to the user input to determine the precision/relevance of the retrieved context. However, the documentation suggests that the context chunks are compared to the response generated by the LLM, so is it calculating whether the retrieved context is used by the LLM in generating a response?
Documentation
LLM Based Context Precision
The following metrics use an LLM to identify whether a retrieved context is relevant or not.
Context Precision without reference
The LLMContextPrecisionWithoutReference metric can be used when you have retrieved contexts associated with a user_input but no reference contexts. To estimate whether a retrieved context is relevant, this method uses the LLM to compare each context chunk in retrieved_contexts with the response.
Follow-up
Is there a metric that checks to see the relevance of the retrieved context based on the user input?
I wanted to get some clarification on how Context Precision is calculated - I thought it compares the retrieved context chunks to the user input to determine the precision / relevance of the retrieved context. However, the documentation suggests that the context chunks are compared to the response generated by the LLM - so is it calculating whether the retrieved context is used by the LLM in generating a response?
Yes, you are absolutely right. The Context Precision metric compares the retrieved context chunks with the response generated by the LLM to check if they were useful in forming the response.
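In other words, each chunk in retrieved_contexts gets a binary verdict from the LLM judge on whether it was useful for the response, and Context Precision@K is the mean of Precision@k weighted by those verdicts. A minimal sketch of the aggregation step (the verdicts here are assumed to already come from the LLM judge, which this sketch does not include):

```python
def context_precision(verdicts: list[int]) -> float:
    """Aggregate per-chunk usefulness verdicts (1 = chunk judged useful
    for the response, 0 = not) into a Context Precision@K score.

    Score = sum over k of (Precision@k * verdict_k) / (total relevant chunks in top K).
    """
    if not any(verdicts):
        return 0.0
    score = 0.0
    relevant_so_far = 0
    for k, v in enumerate(verdicts, start=1):
        relevant_so_far += v
        # Precision@k contributes only at positions where chunk k itself is relevant
        score += (relevant_so_far / k) * v
    return score / sum(verdicts)

# Chunks 1 and 3 judged useful, chunk 2 not:
print(context_precision([1, 0, 1]))  # (1/1 + 2/3) / 2 = 0.8333...
```

Note that the score rewards ranking useful chunks near the top: `[1, 0, 1]` scores lower than `[1, 1, 0]` even though both have two useful chunks.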
Is there a metric that checks to see the relevance of the retrieved context based on the user input?
If you're looking to check the relevance of the retrieved context based on the user input, you can create a custom metric. For a binary metric, you could use Aspect Critic (Simple Criteria Scoring). If you're looking for something more detailed, you can consider using RubricsScore (Rubric-based Metrics).
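As a rough illustration of what such a custom binary metric would compute (this is not the actual ragas API; `judge` is a hypothetical stand-in for the LLM call that an Aspect Critic criteria prompt would make):

```python
from typing import Callable

def context_relevance(user_input: str,
                      retrieved_contexts: list[str],
                      judge: Callable[[str, str], int]) -> float:
    """Hypothetical metric: fraction of retrieved chunks that the judge
    deems relevant to the user input. `judge` returns 1 (relevant) or 0."""
    if not retrieved_contexts:
        return 0.0
    verdicts = [judge(user_input, chunk) for chunk in retrieved_contexts]
    return sum(verdicts) / len(verdicts)

# Toy keyword-overlap judge, purely for demonstration in place of an LLM:
toy_judge = lambda q, c: int(any(w in c.lower() for w in q.lower().split()))

score = context_relevance(
    "capital of France",
    ["Paris is the capital of France.", "Bananas are yellow."],
    toy_judge,
)
print(score)  # 0.5 (one of the two chunks overlaps the query)
```

The key difference from Context Precision is the comparison target: the judge sees the user_input rather than the generated response.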