Hi! I used to be a llama-cpp-python[server] user, but now I am migrating to SGLang!
I know the v1/openai standard is very limited, so each inference server extends its routes to make its users' lives easier.
So, as a new user, I would like to request/suggest two new routes for the SGLang server:
tokenize: very simple and direct, returning the array of tokens.
tokenize/count: optional but useful, returning the total number of tokens (if not, a simple len() of the tokenize result on the client side will be enough).
Thanks in advance,
Celso
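To illustrate what I mean, here is a minimal sketch of the two proposed routes as plain handler functions. Everything here is hypothetical: the function names, the response shapes, and the `DummyTokenizer` stand-in are my own illustrations, not SGLang's actual API (a real implementation would use the tokenizer SGLang already loads for the model).

```python
# Hypothetical sketch of the two proposed routes. All names are
# illustrative, not SGLang's real API.

from dataclasses import dataclass
from typing import List


@dataclass
class DummyTokenizer:
    """Stand-in for the server's real tokenizer (e.g. one loaded
    from Hugging Face)."""

    def encode(self, text: str) -> List[int]:
        # Toy whitespace "tokenizer" so the sketch is runnable.
        return [hash(word) % 50000 for word in text.split()]


tokenizer = DummyTokenizer()


def tokenize(text: str) -> dict:
    """Proposed /tokenize route: return the array of token ids."""
    return {"tokens": tokenizer.encode(text)}


def tokenize_count(text: str) -> dict:
    """Proposed /tokenize/count route: return only the total."""
    return {"count": len(tokenizer.encode(text))}
```

Even if only the first route lands, the client-side fallback is just `len(tokenize(text)["tokens"])`, which is the workaround I mention above.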