> [!NOTE]
> Still under development! The code is a bit spaghetti-ish right now (we've all been there 🍝), but it works! Feel free to contribute and help make it better!
Chat with your PDFs using a local LLM.
- Clone the repository:

  ```bash
  git clone https://github.com/onurravli/docuchat.git
  ```

- Install the dependencies:

  ```bash
  pip install -r requirements.txt
  # or
  uv sync
  ```

- Create a `pdfs` directory in the root of the project:

  ```bash
  mkdir pdfs
  ```

- Add your PDFs to the `pdfs` directory.

- Start the chat:

  ```bash
  python main.py
  ```
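The chat uses a local Ollama model (`llama3.2` by default), so Ollama should be installed and the model pulled before you start. A minimal sketch using the standard Ollama CLI:

```bash
# pull the default model before the first run
ollama pull llama3.2
```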
Running `python main.py --help` prints the available options:

```
usage: main.py [-h] [--model MODEL] [--logging] [--quiet] [--temperature TEMPERATURE] [--streaming]

Chat with your PDFs using a local LLM.

options:
  -h, --help            show this help message and exit
  --model MODEL         Ollama model name to use (default: llama3.2)
  --logging             Enable logging (default: False)
  --quiet               Enable quiet mode (default: False)
  --temperature TEMPERATURE
                        Temperature for the LLM (default: 0.7)
  --streaming           Enable streaming (default: True)
```
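For example, to run with a different Ollama model at a lower temperature and with logging enabled (the model name here is just a placeholder for any model you have pulled):

```bash
# "mistral" is a placeholder; substitute any Ollama model you have pulled
python main.py --model mistral --temperature 0.3 --logging
```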
This project is licensed under the MIT License. See the LICENSE.md file for details.
Contributions are welcome! Please open an issue or submit a pull request.
> [!CAUTION]
> This project is not affiliated with Ollama or LangChain. I accept no responsibility for the use of this project, the content it generates, or any other content.