```
instructor.exceptions.InstructorRetryException:
litellm.BadRequestError: litellm.ContextWindowExceededError: ContextWindowExceededError: OpenAIException - Error code: 400
{
  'error': {
    'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 150820 tokens (150655 in the messages, 165 in the functions). Please reduce the length of the messages or functions.",
    'type': 'invalid_request_error',
    'param': 'messages',
    'code': 'context_length_exceeded'
  }
}
```
A check should be implemented for the length of the request messages before we send them. We can use tiktoken to count the tokens, as in the sketch below.
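A minimal pre-flight check might look like this. The model name, the 128k limit constant, and the per-message overhead heuristic are assumptions taken from the error above, not from this repo's code:

```python
import tiktoken

MAX_CONTEXT_TOKENS = 128_000  # limit reported in the error message above

def count_message_tokens(messages: list[dict], model: str = "gpt-4o") -> int:
    """Approximate the token count of a chat messages list."""
    try:
        enc = tiktoken.encoding_for_model(model)
    except KeyError:
        enc = tiktoken.get_encoding("cl100k_base")  # fallback encoding
    tokens = 0
    for message in messages:
        tokens += 4  # rough per-message framing overhead (OpenAI cookbook heuristic)
        for value in message.values():
            tokens += len(enc.encode(str(value)))
    return tokens + 2  # priming for the assistant reply

def check_request(messages: list[dict]) -> None:
    used = count_message_tokens(messages)
    if used > MAX_CONTEXT_TOKENS:
        raise ValueError(
            f"Request is {used} tokens; exceeds the "
            f"{MAX_CONTEXT_TOKENS}-token context window."
        )
```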
Requests that exceed the limit should be chunked into smaller requests, for example:
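One possible chunking strategy, assuming the oversized content is a single large text payload in the messages (the helper name and window size are illustrative):

```python
def chunk_text(text: str, max_tokens: int, model: str = "gpt-4o") -> list[str]:
    """Split text into pieces of at most max_tokens tokens each."""
    try:
        enc = tiktoken.encoding_for_model(model)
    except KeyError:
        enc = tiktoken.get_encoding("cl100k_base")
    ids = enc.encode(text)
    # Slice the token ids into fixed-size windows and decode each back to text.
    return [
        enc.decode(ids[start : start + max_tokens])
        for start in range(0, len(ids), max_tokens)
    ]
```

Each chunk could then be sent as its own request and the results merged afterwards; how to merge depends on the task and is left open here.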
From SyncLinear.com | COG-948