Conversation

@Le09 Le09 commented Apr 24, 2025

Fairly straightforward. In the spirit of the repo, I've kept things simple rather than also automatically adding everything (like optional Anthropic libraries or others).

I think asking beginners to modify call_llm directly risks them pushing their private keys to GitHub, so it's better to have them learn how to use environment variables and avoid taking chances.
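For illustration, a minimal sketch of what that looks like, assuming an OPENAI_API_KEY variable and the openai client (the provider and model name here are illustrative, not part of this PR):

import os
from openai import OpenAI

def call_llm(prompt: str) -> str:
    # Read the key from the environment instead of hard-coding it,
    # so it never ends up in version control.
    api_key = os.environ.get("OPENAI_API_KEY")
    if not api_key:
        raise RuntimeError("Set OPENAI_API_KEY in your environment (e.g. via a .env file)")
    client = OpenAI(api_key=api_key)
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content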

@zachary62 (Member)

Thank you! I'm a bit worried that environment variables will add friction for semi-technical people, but I will add notes about the security issue.

@eliliam eliliam left a comment

Nice and clean, very extensible too if anyone wanted to add new providers down the line.

@redliu312

Hi @Le09, do you think it is necessary to add

from dotenv import load_dotenv
load_dotenv()

at the top of call_llm.py? I tried your branch, and this seems to be needed to load the .env file when the script is executed directly.

@Le09 Le09 (Author) commented Apr 30, 2025

@redliu312 Oh, I see.
Environment variables are variables set in your terminal session, so a standard development workflow is:

workon Tutorial-Codebase-Knowledge-venv  # activate your virtual environment with the correct set of Python dependencies
source .env  # load the API keys
python file.py

Done this way, the variables are available to everything you run in that terminal, so both python main.py and python utils/call_llm.py would get them.
If you make the call from main, then main loads the variables first, so call_llm should get them.
So the issue is that calling utils/call_llm.py directly does not get them from main; since that entry point is for testing purposes, load_dotenv() should indeed be added to this file, but under the if __name__ == "__main__": line.
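For illustration, a minimal sketch of that placement (the function body is a stub; only the location of load_dotenv() is the point):

import os

def call_llm(prompt: str) -> str:
    # Stub: the real function dispatches to a provider; this just
    # shows the key being read from the environment.
    if not os.environ.get("OPENAI_API_KEY"):
        raise RuntimeError("No API key found in the environment")
    return "stubbed response"

if __name__ == "__main__":
    # Only direct test runs need dotenv; when main.py is the entry
    # point, the variables are already loaded there.
    from dotenv import load_dotenv
    load_dotenv()
    print(call_llm("Hello, how are you?"))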

@taqtiqa-mark (Contributor)

@zachary62 I pulled this into my clone of the repo, and set .env values for XAI:

❯ ./python utils/call_llm.py
Making call...
Response: Hello! I'm doing great—I'm an AI, so I'm always powered up and ready to chat. How about you? What can I help with today?

Would be great to see this merged.

@zachary62 (Member)

Could the conflicts be resolved? I will merge it then. Thank you!

@taqtiqa-mark (Contributor)

> Could the conflicts be resolved? I will merge it then. Thank you!

Hmm, I just did git fetch origin pull/50/head:pr-50 and everything worked fine... I'll look into the conflicts.

zachary62 added a commit that referenced this pull request Oct 24, 2025
Automatically switch provider based on environment variables, Ollama support: closes #13 & #50
@Le09 Le09 (Author) commented Oct 24, 2025

Beat me to it :-)

@Le09 Le09 closed this Oct 24, 2025
@taqtiqa-mark (Contributor)

> Beat me to it :-)

Yeah, full credit to you... my git fu didn't allow me to preserve you in the commit history. Apologies for that.

@Le09 Le09 (Author) commented Oct 25, 2025

@taqtiqa-mark The standard flow is to add the fork's repository as a new remote and fetch it so that you can cherry-pick the commit (the original author is preserved, and you appear as the committer). Perhaps a bit simpler, you can check out the GitHub PR directly (you can do so with the gh CLI tool). Anyway, cheers!
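For reference, a sketch of both options (the remote name and fork URL are placeholders; PR #50 is this one):

git remote add le09 https://github.com/Le09/<fork>.git  # placeholder URL for the fork
git fetch le09
git cherry-pick <commit-sha>  # original author preserved; you become the committer

gh pr checkout 50  # or check out the PR directly with the GitHub CLI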
