- Install Ollama
- Download the language model. We are pulling tinyllama, a very compact model that should run on most machines.
ollama pull tinyllama:latest
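A quick way to confirm the model responds is to call Ollama's REST API from Python. This is only a minimal sketch, assuming the requests package is installed; the repository's own scripts may use a different client.

```python
# Smoke test: ask tinyllama a question through Ollama's /api/generate endpoint.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "tinyllama:latest",
        "prompt": "In one sentence, what is retrieval-augmented generation?",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```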
- Download the model for creating embeddings. all-minilm is a sentence-embedding model trained on very large sentence-level datasets with a self-supervised contrastive learning objective.
ollama pull all-minilm:latest
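To see what the embedding model returns, you can request a vector for a single sentence. Again a minimal sketch against the REST API, assuming requests is installed.

```python
# Generate a sentence embedding with all-minilm via Ollama's /api/embeddings endpoint.
import requests

resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "all-minilm:latest", "prompt": "Ollama runs models locally."},
    timeout=60,
)
resp.raise_for_status()
embedding = resp.json()["embedding"]
print(len(embedding), embedding[:5])  # all-minilm returns 384-dimensional vectors
```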
- Start Ollama and make sure it is available at http://localhost:11434/
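If you prefer to check this from Python rather than a browser, a couple of GET requests are enough; the listed models should include the two pulled above.

```python
# Verify the Ollama server is up and list the locally available models.
import requests

base = "http://localhost:11434"
print(requests.get(base, timeout=5).text)  # expected: "Ollama is running"
models = requests.get(f"{base}/api/tags", timeout=5).json().get("models", [])
print([m["name"] for m in models])
```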
- Install the packages in requirements.txt, e.g. pip install -r requirements.txt
- Run the indexing script
export OLLAMA_HOST='0.0.0.0';python index_content.py
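index_content.py itself is not reproduced here; the sketch below only illustrates the kind of work an indexing step like this does: split content into chunks, embed each chunk with all-minilm, and persist the vectors for the search step. The documents and the index.json file name are hypothetical.

```python
# Minimal indexing sketch: embed text chunks and save them to a JSON file.
import json
import requests

OLLAMA = "http://localhost:11434"

def embed(text):
    resp = requests.post(
        f"{OLLAMA}/api/embeddings",
        json={"model": "all-minilm:latest", "prompt": text},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["embedding"]

# Hypothetical content; the real script would load it from files or a site.
documents = [
    "Ollama runs large language models locally.",
    "tinyllama is a compact model suited to modest hardware.",
]

index = [{"text": doc, "embedding": embed(doc)} for doc in documents]

with open("index.json", "w") as f:  # hypothetical output file
    json.dump(index, f)
```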
- Test searching
export OLLAMA_HOST='0.0.0.0';python search.py
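search.py is likewise not shown here; a typical flow is to embed the query with the same embedding model, rank the indexed chunks by cosine similarity, and hand the best match to tinyllama as context. The index.json file matches the hypothetical indexing sketch above.

```python
# Minimal search sketch: retrieve the closest chunk and answer with tinyllama.
import json
import math
import requests

OLLAMA = "http://localhost:11434"

def embed(text):
    resp = requests.post(
        f"{OLLAMA}/api/embeddings",
        json={"model": "all-minilm:latest", "prompt": text},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["embedding"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

with open("index.json") as f:
    index = json.load(f)

query = "Which model works on modest hardware?"
q_emb = embed(query)
best = max(index, key=lambda item: cosine(q_emb, item["embedding"]))

answer = requests.post(
    f"{OLLAMA}/api/generate",
    json={
        "model": "tinyllama:latest",
        "prompt": f"Answer using this context:\n{best['text']}\n\nQuestion: {query}",
        "stream": False,
    },
    timeout=120,
).json()["response"]
print(answer)
```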
- Play around by changing the models and other parameters
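For example, Ollama's generate endpoint accepts an options object for sampling parameters; the values below are only illustrative, and you can also swap in any other model you have pulled.

```python
# Illustrative parameter tweaks; try different pulled models here as well.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "tinyllama:latest",  # swap for any other pulled model
        "prompt": "Summarise what a vector index is in two sentences.",
        "stream": False,
        "options": {
            "temperature": 0.2,  # lower values give more deterministic output
            "num_ctx": 2048,     # context window size
        },
    },
    timeout=120,
)
print(resp.json()["response"])
```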