
generate_sql function gives direct answers instead of a SQL #754

Open
Praj-17 opened this issue Jan 18, 2025 · 0 comments
Comments

Praj-17 commented Jan 18, 2025

I’m using Vanna.ai in my architecture with ChromaDB, Gemini, and BigQuery. Users chat via a WebSocket, and I send the chat history as context to the LLM through vn.ask() (I am calling the four underlying functions in sequence, as described in the documentation).

The first time, it works fine and gives correct answers, but as the conversation history grows, I start getting direct natural-language answers (which are wrong) instead of SQL queries. I have a retry mechanism in place that tries up to 5 times if the generated SQL is invalid, but the output remains the same on every attempt.
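For reference, the retry behaviour described above can be reproduced with a minimal sketch. This is not the actual Vanna API: `generate_sql` here is a hypothetical stand-in for `vn.generate_sql`, and the validity check is a deliberately naive one. The key point it illustrates is that the prompt (question + history) is identical on every attempt, so a deterministic backend will return the same text each time, which matches the symptom reported here.

```python
import re

def looks_like_sql(text: str) -> bool:
    """Naive validity check: does the output start with a SQL keyword?"""
    return bool(re.match(r"^\s*(SELECT|WITH)\b", text, re.IGNORECASE))

def generate_sql_with_retry(generate_sql, question, history, max_retries=5):
    """Call the LLM up to max_retries times; return the first SQL-looking output.

    `generate_sql` is a hypothetical stand-in for vn.generate_sql.
    Note that the inputs (question, history) are unchanged between
    attempts, so a deterministic model call will produce the same
    non-SQL answer all five times.
    """
    last = None
    for _ in range(max_retries):
        last = generate_sql(question, history)
        if looks_like_sql(last):
            return last
    return last  # still the same direct answer after all retries
```

If the retries never vary the prompt (or the sampling temperature), retrying cannot change the outcome; that is one plausible explanation for the identical outputs.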

I’m wondering:

Why does the generate_sql() function return direct answers instead of SQL queries?
Why does the retry mechanism always return the same output?
Could this be due to caching or model hallucinations, or is there another reason? Any suggestions on how to fix this?
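One mitigation worth trying, since the problem appears only as the history grows: bound the amount of chat history passed as context. The sketch below assumes history is a list of turns (the exact structure used in the reporter's setup is not shown); keeping only the most recent turns is a common way to stop a long conversational context from pulling the model away from "emit SQL" toward "answer conversationally".

```python
def trim_history(history, max_turns=3):
    """Keep only the most recent turns of chat history.

    `history` is assumed to be a list of turns (e.g. (role, text)
    tuples); this structure is an assumption, not part of the Vanna
    API. Bounding the context keeps the SQL-generation instruction
    dominant in the prompt.
    """
    return history[-max_turns:]
```

Trimmed history would then be passed as the context argument in place of the full transcript.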

I have attached my model architecture for better understanding.

Any help would be very useful; it would be even better if someone from Vanna could get in touch with us directly.

[Image: model architecture diagram]
