Description
- [ ] I have checked the documentation and related resources and couldn't resolve my bug.
Describe the bug
I copied the code exactly from this reference video: https://www.youtube.com/watch?v=Ts2wDG6OEko&t=190s
Running the model directly with `ollama run llama3` works, but when the `evaluate` function starts, every job fails with a `ResponseError`:
Evaluating: 0%| | 0/2 [00:00<?, ?it/s]Exception raised in Job[0]: ResponseError()
Evaluating: 50%|█████████████████████████████████████████████████████████ | 1/2 [02:00<02:00, 120.95s/it]Exception raised in Job[1]: ResponseError()
Evaluating: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [02:11<00:00, 65.58s/it]
Code to Reproduce:
from datasets import load_dataset
from langchain_community.chat_models import ChatOllama
from langchain_community.embeddings import OllamaEmbeddings
from ragas import evaluate
from ragas.metrics import context_precision
from ragas.run_config import RunConfig

dataset = load_dataset("explodinggradients/amnesty_qa", "english_v2", trust_remote_code=True)
dataset_subset = dataset["eval"].select(range(2))

llm = ChatOllama(model="llama3")
embedding_model = OllamaEmbeddings(model="llama3")

result = evaluate(
    dataset=dataset_subset,
    llm=llm,
    embeddings=embedding_model,
    metrics=[context_precision],
    run_config=RunConfig(timeout=600.0, max_workers=128),
)
Ragas version:
Python version:
Expected behavior
The evaluation should complete and return a context_precision score for both samples instead of raising ResponseError on every job.