# Ollama Docker image

### CPU only

```bash
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```
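
Once the container is up, you can sanity-check that the server is reachable (a quick smoke test; it assumes you kept the default port mapping above):

```bash
# Should return a small JSON payload with the server version,
# e.g. {"version":"0.1.x"}, if the container is healthy.
curl http://localhost:11434/api/version
```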

### Nvidia GPU
Install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html#installation).

#### Install with Apt
1. Configure the repository
```bash
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \
    | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
    | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
    | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
```
2. Install the NVIDIA Container Toolkit packages
```bash
sudo apt-get install -y nvidia-container-toolkit
```
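
As an optional sanity check (this assumes the package installed cleanly and put `nvidia-ctk` on your `PATH`), confirm the toolkit CLI is available:

```bash
# Prints the installed NVIDIA Container Toolkit CLI version.
nvidia-ctk --version
```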

#### Install with Yum or Dnf
1. Configure the repository

```bash
curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo \
    | sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
```

2. Install the NVIDIA Container Toolkit packages

```bash
sudo yum install -y nvidia-container-toolkit
```

#### Configure Docker to use Nvidia driver

```bash
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```
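
To confirm Docker can now reach the GPU, run a throwaway container and call `nvidia-smi` inside it (a common smoke test; the binary is injected from the host driver):

```bash
# If this lists your GPU, the NVIDIA runtime is wired up correctly.
docker run --rm --gpus=all ubuntu nvidia-smi
```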

#### Start the container

```bash
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```
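
To verify that Ollama detected the GPU, check the server's startup logs (the exact wording varies between releases):

```bash
# The startup logs typically mention the detected GPU;
# if none was found, Ollama falls back to CPU and logs that instead.
docker logs ollama
```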

### AMD GPU

To run Ollama using Docker with AMD GPUs, use the `rocm` tag and the following command:

```bash
docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm
```
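
If the container cannot see the GPU, first check that the ROCm device nodes passed through above actually exist on the host (they are created by the AMD GPU driver):

```bash
# Both device nodes must be present for the container to access the GPU.
ls -l /dev/kfd /dev/dri
```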

### Run a model locally

Now you can run a model:

```bash
docker exec -it ollama ollama run llama3
```
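
The container also exposes Ollama's REST API on the published port, so you can prompt the model over HTTP instead of the interactive CLI:

```bash
# Send a one-off generation request to the model started above.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?"
}'
```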

### Try different models

More models can be found in the [Ollama library](https://ollama.com/library).
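
To try one, pull and run it by name inside the container (`mistral` here is just an example):

```bash
# Download and start an interactive session with a different model.
docker exec -it ollama ollama run mistral
```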