Note

Still under development! The code is a bit spaghetti-ish right now (we've all been there 🍝), but it works! Feel free to contribute and help make it better!

DocuChat

Chat with your PDFs using a local LLM.

Get Started

  1. Clone the repository

     git clone https://github.com/onurravli/docuchat.git

  2. Install the dependencies

     pip install -r requirements.txt

     # or

     uv sync

  3. Create a pdfs directory in the root of the project

     mkdir pdfs

  4. Add your PDFs to the pdfs directory

  5. Start the chat

     python main.py
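Conceptually, the steps above amount to: collect the PDFs from the pdfs directory, index them, then answer questions against that index. A minimal stdlib-only sketch of that flow (the helper names here are hypothetical illustrations, not the project's actual API):

```python
from pathlib import Path


def collect_pdfs(pdf_dir: str = "pdfs") -> list[Path]:
    """Gather every PDF the chat will index (mirrors the `mkdir pdfs` step)."""
    return sorted(Path(pdf_dir).glob("*.pdf"))


def build_index(pdfs: list[Path]) -> dict[str, str]:
    """Hypothetical placeholder: a real build would extract and embed text.

    Here we just map each file's stem to its path so the sketch stays runnable.
    """
    return {p.stem: str(p) for p in pdfs}
```

In the real project, main.py would hand this index to the local LLM to ground its answers; this sketch only shows the shape of the pipeline.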

Usage

usage: main.py [-h] [--model MODEL] [--logging] [--quiet] [--temperature TEMPERATURE] [--streaming]

Chat with your PDFs using a local LLM.

options:
  -h, --help            show this help message and exit
  --model MODEL         Ollama model name to use (default: llama3.2)
  --logging             Enable logging (default: False)
  --quiet               Enable quiet mode (default: False)
  --temperature TEMPERATURE
                        Temperature for the LLM (default: 0.7)
  --streaming           Enable streaming (default: True)
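The option surface above can be reproduced with a short argparse definition. This is a sketch inferred from the help text, not necessarily how main.py actually declares its arguments:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Mirror the --help output shown above; defaults are taken from that text."""
    parser = argparse.ArgumentParser(
        description="Chat with your PDFs using a local LLM."
    )
    parser.add_argument("--model", default="llama3.2",
                        help="Ollama model name to use (default: llama3.2)")
    parser.add_argument("--logging", action="store_true",
                        help="Enable logging (default: False)")
    parser.add_argument("--quiet", action="store_true",
                        help="Enable quiet mode (default: False)")
    parser.add_argument("--temperature", type=float, default=0.7,
                        help="Temperature for the LLM (default: 0.7)")
    # The help text lists streaming as defaulting to True, so the flag is
    # effectively a no-op unless the real parser also offers a way to disable it.
    parser.add_argument("--streaming", action="store_true", default=True,
                        help="Enable streaming (default: True)")
    return parser
```

For example, `python main.py --model llama3.2 --temperature 0.3 --quiet` would run with a lower temperature in quiet mode.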

License

This project is licensed under the MIT License. See the LICENSE.md file for details.

Contributing

Contributions are welcome! Please open an issue or submit a pull request.

Acknowledgements

Disclaimer

Caution

This project is not affiliated with Ollama or LangChain. The author accepts no responsibility for the use of this project, any generated content, or any other output.
