Common Issues

Parker Combs edited this page Dec 18, 2025 · 14 revisions

Common considerations

Start here if your agent/model is not behaving as intended. Keep in mind that LLMs are an emerging, rapidly developing technology and may not be suitable for every use case.

  • Prompt engineering can make a big difference. If a model is behaving too rigidly or ignoring your input, try relaxing or changing the prompt supplied to it. Some models may attempt to follow their instructions too strictly or literally, causing them to behave unexpectedly.
  • If your model is not producing output in a requested format (e.g., JSON, a list), try switching to a different model - not all models support structured output.
  • If your model is not using supplied tools, try switching to a different model - not all models support tool use. For instance, o3-deep-research will not use tools.
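When a model's structured output is unreliable, it can help to validate the reply before using it rather than assuming it is well formed. A minimal sketch (the reply strings below are illustrative, not real model output):

```python
import json

def parse_json_reply(reply: str):
    """Try to parse a model reply as JSON; return None if it isn't valid."""
    try:
        return json.loads(reply)
    except json.JSONDecodeError:
        return None  # caller can retry, switch models, or fall back

# A well-formed reply parses cleanly...
assert parse_json_reply('{"authors": ["A", "B"]}') == {"authors": ["A", "B"]}
# ...while conversational filler is caught instead of crashing downstream code.
assert parse_json_reply('Sure! Here is the JSON you asked for.') is None
```

A `None` result is a useful signal that the current model may not support the format you need.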

Problems using or accessing models

"Publisher Model was not found or your project does not have access to it."

This error indicates that the model you are attempting to access is either not supported by ToxPipe or no longer available (i.e., it was supported previously and has since been removed). Try a different model. A list of available models can be found here.

Rate limit errors

You may encounter rate limit errors when attempting to quickly and/or repeatedly access an AI model (e.g., when using a script to query a model automatically). There are a few options to try in this case:

  • Try a different model. Different models have different quotas.
  • Try a locally-hosted model via Ollama.
  • Evaluate how much of your workflow actually needs an AI model. Can some of the work be done with "traditional" code?
  • Implement retry with exponential backoff for queries to the model.
  • After exhausting the options above, contact the administrator of your ToxPipe instance to request a quota increase or more specialized support. Please include the name(s) of the model(s) you are using in your request.
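The retry-with-exponential-backoff option above can be sketched in a few lines. The `query_fn` callable is a stand-in for your actual model call; in practice you would catch your client library's specific rate-limit exception rather than a bare `Exception`:

```python
import random
import time

def query_with_backoff(query_fn, max_retries=5, base_delay=1.0):
    """Call query_fn, retrying on failure with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return query_fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error to the caller
            # Wait base_delay * 1, 2, 4, ... seconds, with random jitter
            # so that many clients don't all retry at the same instant.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

The jitter term matters when a script fans out many queries at once: without it, all failed requests retry in lockstep and hit the rate limit again together.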

Prompt/query considerations

LLMs are AI language models - although they may produce content that appears humanlike, they do not "think" the way humans do to generate it. Instead, they follow idiosyncrasies that can seem counterintuitive compared to how a human would solve a problem, so you may need to rephrase a prompt to better align with how the model processes it. This section covers common prompting issues and possible solutions.

The model can't reference a specific page of an uploaded document

Due to current architecture limitations, LLMs do not preserve the structure of an uploaded file, so they cannot reliably refer to individual pages by page number or individual sections by title. We recommend rewording your queries to be content-based rather than page/structure-based. For example:

Analyze the Authors section from pages 3-4 and give me a list of all the contributing authors listed there.

Should instead be:

Give me a list of all the contributing authors in this paper.

The model is not following my formatting instructions

In some cases, a model (especially an older or smaller one) may not format its output according to the instructions in your prompt. A few changes that may remedy this are listed below:

  • Providing examples to an AI model as part of its prompt can help it better understand how to respond to a request of a certain format, e.g., Format your answer as a semicolon-delimited list with no other text. For example: item1;item2;item3;...;itemN.
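As a complement to the example-in-prompt approach above, it also helps to parse the reply defensively, since even well-prompted models sometimes add stray whitespace or trailing delimiters. A small sketch for the semicolon-delimited format used in the example (the input strings are illustrative):

```python
def parse_delimited_list(reply: str, sep: str = ";"):
    """Split a model reply on sep, trimming whitespace and dropping empties."""
    return [item.strip() for item in reply.split(sep) if item.strip()]

# The ideal reply parses as expected...
assert parse_delimited_list("item1;item2;item3") == ["item1", "item2", "item3"]
# ...and common model quirks (padding, trailing delimiter) are tolerated.
assert parse_delimited_list(" item1 ; item2 ;") == ["item1", "item2"]
```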

The model is producing blank output

In certain circumstances, such as when using a model as part of an agent, the model may generate a blank response. This is most common with newer models that have large output token windows, such as GPT-5. In these cases, try increasing the maximum output token size and running the model/agent again.
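If this happens in a script, the increase-and-retry step can be automated. A minimal sketch, assuming a hypothetical `run_agent(prompt, max_output_tokens=...)` callable - the real function and parameter names vary by provider and client library:

```python
def retry_until_nonblank(run_agent, prompt, budgets=(1024, 4096, 16384)):
    """Re-run an agent with progressively larger output-token limits
    until it returns non-blank text; return None if every attempt is blank."""
    for max_tokens in budgets:
        reply = run_agent(prompt, max_output_tokens=max_tokens)
        if reply and reply.strip():
            return reply
    return None  # still blank at the largest budget; escalate manually
```

Pick the `budgets` tuple to suit your model's actual output window; the values above are illustrative.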
