Description
I'm trying to get paper-qa working locally, but the documentation doesn't properly show how that should be done. I'm working from the documentation for local usage, which calls for using the `ask` function.
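For reference, this is roughly the pattern the docs suggest; everything below uses placeholder names, and the config dict follows LiteLLM's Router `model_list` format (the same structure the snippet from `helpers_.py` further down indexes into):

```python
from paperqa import Settings, ask

# hypothetical local OpenAI-compatible endpoint (e.g. llama.cpp or Ollama)
local_llm_config = {
    "model_list": [
        {
            "model_name": "my-local-model",
            "litellm_params": {
                "model": "my-local-model",
                "api_base": "http://localhost:8080/v1",
                "api_key": "sk-no-key-required",
            },
        }
    ]
}

answer = ask(
    "<YOUR_QUERY>",
    settings=Settings(
        llm="my-local-model",
        llm_config=local_llm_config,
        summary_llm="my-local-model",
        summary_llm_config=local_llm_config,
        paper_directory="<YOUR_PAPER_DIR>",
    ),
)
```

Against a local endpoint this fails, because the default agent ignores the LLM settings, as described below.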
The documentation states that the LLM agent is the default, but `fake` is the default. The `AgentSettings` definition in `settings.py` specifies as much:

```python
agent_type: str = Field(
    default="fake",
    description="Type of agent to use",
)
```
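A quick check confirms this (a minimal sketch, assuming `Settings` is importable from the package root as in the README):

```python
from paperqa import Settings

# the shipped default is "fake", not the LLM agent the docs describe
print(Settings().agent.agent_type)  # -> "fake"
```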
The documentation isn't clear about what the `fake` agent does, but it doesn't respect any settings passed to it. The `fake` agent simply performs a search query.
In `main.py`, no LLM settings are passed besides the model name:

```python
for search in await litellm_get_search_query(
    question, llm=query.settings.llm, count=3
):
```
Then `litellm_get_search_query` tries to spin up a LiteLLM model (for the `fake` agent) with no parameters from the user except the model name. This is already a problem if you're trying to use the model locally, as LiteLLM will try to use OpenAI as the provider. In `litellm_get_search_query`, contained in `helpers_.py`:

```python
model = LiteLLMModel(name=llm)
model.config["model_list"][0]["litellm_params"].update({"temperature": temperature})
result = await model.run_prompt(
    prompt=search_prompt,
    data={"question": question, "count": count},
    skip_system=True,
)
```
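For local endpoints to work here, the user's `llm_config` would have to be threaded through to this call. A hypothetical sketch of what that could look like (the extra `llm_config` parameter is my invention, not the current signature, and I'm assuming `LiteLLMModel` accepts a config at construction, which the `model.config` access above suggests):

```python
# in main.py (sketch): pass the user's config down alongside the model name
for search in await litellm_get_search_query(
    question, llm=query.settings.llm, llm_config=query.settings.llm_config, count=3
):
    ...

# in helpers_.py (sketch): use the config when constructing the model
model = LiteLLMModel(name=llm, config=llm_config)
```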
Solution: For local usage, the `ask` call needs `AgentSettings` manually defined (also note the selection of `ToolSelector` for the agent type, presumably the expected default):

```python
from paperqa import Settings, ask
from paperqa.settings import AgentSettings, AnswerSettings

answer = ask(
    "<YOUR_QUERY>",
    settings=Settings(
        llm="<YOUR_MODEL_NAME>",
        llm_config=local_llm_config,
        summary_llm="<YOUR_MODEL_NAME>",
        summary_llm_config=local_llm_config,
        paper_directory="<YOUR_PAPER_DIR>",
        agent=AgentSettings(
            agent_llm="<YOUR_MODEL_NAME>",
            agent_llm_config=local_llm_config,
            agent_type="ToolSelector",
        ),
        answer=AnswerSettings(evidence_k=3),  # optional
    ),
)
```
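Here `local_llm_config` is the same Router-style `model_list` dict as in the reproduction at the top. With `agent_type="ToolSelector"`, the agent (including its search-query generation) runs through the LLM you configure via `agent_llm`/`agent_llm_config`, so the local endpoint is actually used rather than the `fake` agent's hard-coded path.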