Description
[x] I checked the documentation and related resources and couldn't find an answer to my question.
Your Question
I wanted to get some clarification on how Context Precision is calculated. I thought it compared the retrieved context chunks to the user input to determine the precision / relevance of the retrieved context. However, the documentation suggests that the context chunks are compared to the response generated by the LLM, so is it actually measuring whether the retrieved context was used by the LLM in generating the response?
Documentation
LLM Based Context Precision
The following metrics use an LLM to identify whether a retrieved context is relevant or not.
Context Precision without reference
The LLMContextPrecisionWithoutReference metric can be used when you have both retrieved contexts and also reference contexts associated with a user_input. To estimate whether a retrieved context is relevant or not, this method uses the LLM to compare each context or chunk present in retrieved_contexts with the response.
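For context, this is roughly how I'm calling the metric. It is only a minimal sketch, assuming ragas 0.2+, langchain-openai, and an OpenAI API key; the model name and sample strings are just placeholders I made up:

```python
import asyncio

from langchain_openai import ChatOpenAI
from ragas import SingleTurnSample
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import LLMContextPrecisionWithoutReference


async def main():
    # Placeholder evaluator LLM: swap in whatever model/wrapper you use.
    evaluator_llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o-mini"))
    context_precision = LLMContextPrecisionWithoutReference(llm=evaluator_llm)

    # The sample carries user_input, response, and retrieved_contexts,
    # but no reference field.
    sample = SingleTurnSample(
        user_input="Where is the Eiffel Tower located?",
        response="The Eiffel Tower is located in Paris.",
        retrieved_contexts=[
            "The Eiffel Tower is located in Paris.",
            "The Colosseum is located in Rome.",
        ],
    )

    score = await context_precision.single_turn_ascore(sample)
    print(score)  # value between 0 and 1


asyncio.run(main())
```

From the docstring it looks like each chunk in retrieved_contexts is judged against the response here, which is exactly the part I'm unsure about.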
Follow-up
Is there a metric that checks the relevance of the retrieved context against the user input?