README.md (29 additions, 0 deletions)
@@ -49,6 +49,33 @@ Invoke is available in two editions:
More detail, including hardware requirements and manual install instructions, is available in the [installation documentation][installation docs].
+
+## Docker Container
+
+We publish official container images in the GitHub Container Registry: https://github.com/invoke-ai/InvokeAI/pkgs/container/invokeai. Both CUDA and ROCm images are available. Check the above link for relevant tags.
+
+> [!IMPORTANT]
+> Ensure that Docker is set up to use the GPU. Refer to the [NVIDIA][nvidia docker docs] or [AMD][amd docker docs] documentation.
+
+### Generate!
+
+Run the container, modifying the command as necessary:
+
+```bash
+docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
+```
+
+Then open `http://localhost:9090` and install some models using the Model Manager tab to begin generating.
+
+For ROCm, add `--device /dev/kfd --device /dev/dri` to the `docker run` command.
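
Putting those pieces together, a full ROCm invocation might look like the following sketch (the `main-rocm` tag mirrors the ROCm image referenced in `docker/README.md` below; adjust it to the tag you actually pull):

```bash
docker run --device /dev/kfd --device /dev/dri --publish 9090:9090 ghcr.io/invoke-ai/invokeai:main-rocm
```
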
+
+### Persist your data
+
+You will likely want to persist your workspace outside of the container. Use the `--volume /home/myuser/invokeai:/invokeai` flag to mount a local directory (using its **absolute** path) to the `/invokeai` path inside the container. Your generated images and models will reside there. You can use this directory with other InvokeAI installations, or switch between runtime directories as needed.
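
For example, extending the `docker run` command above to keep models and outputs under `/home/myuser/invokeai` on the host (an illustrative path):

```bash
docker run --runtime=nvidia --gpus=all \
  --publish 9090:9090 \
  --volume /home/myuser/invokeai:/invokeai \
  ghcr.io/invoke-ai/invokeai
```
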
+
+### DIY
+
+Build your own image and customize the environment to match your needs using our `docker-compose` stack. See [README.md](./docker/README.md) in the [docker](./docker) directory.
+
## Troubleshooting, FAQ and Support
Please review our [FAQ][faq] for solutions to common installation problems and other issues.

docker/README.md (52 additions, 18 deletions)
@@ -1,51 +1,85 @@
-# InvokeAI Containerized
+# Invoke in Docker

-All commands should be run within the `docker` directory: `cd docker`
+- Ensure that Docker can use the GPU on your system
+- This documentation assumes Linux, but should work similarly under Windows with WSL2
+- We don't recommend running Invoke in Docker on macOS at this time. It works, but very slowly.

-## Quickstart :rocket:
+## Quickstart :lightning:

-On a known working Linux+Docker+CUDA (Nvidia) system, execute `./run.sh` in this directory. It will take a few minutes - depending on your internet speed - to install the core models. Once the application starts up, open `http://localhost:9090` in your browser to Invoke!
+No `docker compose`, no persistence, just a simple one-liner using the official images:

-For more configuration options (using an AMD GPU, custom root directory location, etc): read on.
+**CUDA:**

-## Detailed setup
+```bash
+docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
+```
+
+**ROCm:**
+
+```bash
+docker run --device /dev/kfd --device /dev/dri --publish 9090:9090 ghcr.io/invoke-ai/invokeai:main-rocm
+```
+
+Open `http://localhost:9090` in your browser once the container finishes booting, install some models, and generate away!
+
+> [!TIP]
+> To persist your data (including downloaded models) outside of the container, add a `--volume/-v` flag to the above command, e.g.: `docker run --volume /some/local/path:/invokeai <...the rest of the command>`
+
+## Customize the container
+
+We ship the `run.sh` script, which is a convenient wrapper around `docker compose` for cases where custom image build args are needed. Alternatively, the familiar `docker compose` commands work just as well.
+
+```bash
+cd docker
+cp .env.sample .env
+# edit .env to your liking if you need to; it is well commented.
+./run.sh
+```
+
+It will take a few minutes to build the image the first time. Once the application starts up, open `http://localhost:9090` in your browser to Invoke!
+
+## Docker setup in detail

#### Linux

1. Ensure BuildKit is enabled in the Docker daemon settings (`/etc/docker/daemon.json`)
2. Install the `docker compose` plugin using your package manager, or follow a [tutorial](https://docs.docker.com/compose/install/linux/#install-using-the-repository).
-    - The deprecated `docker-compose` (hyphenated) CLI continues to work for now.
+    - The deprecated `docker-compose` (hyphenated) CLI probably won't work. Update to a recent version.
3. Ensure the Docker daemon is able to access the GPU.
-    - You may need to install [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
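
For items 1 and 3, a concrete sketch on an NVIDIA system (this assumes the NVIDIA Container Toolkit is already installed; AMD/ROCm setup differs, so follow the ROCm docs instead):

```bash
# 1. BuildKit is toggled in /etc/docker/daemon.json, e.g.:
#      { "features": { "buildkit": true } }
cat /etc/docker/daemon.json

# 3. Register the NVIDIA runtime with Docker, then restart the daemon:
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```
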
+> You'll be better off installing Invoke directly on your system, because Docker can not use the GPU on macOS.
+
+If you are still reading:
+
1. Ensure Docker has at least 16GB RAM
2. Enable VirtioFS for file sharing
3. Enable `docker compose` V2 support

-This is done via Docker Desktop preferences
+This is done via Docker Desktop preferences.

-### Configure Invoke environment
+### Configure the Invoke Environment

-1. Make a copy of `.env.sample` and name it `.env` (`cp .env.sample .env` (Mac/Linux) or `copy example.env .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
-  a. the desired location of the InvokeAI runtime directory, or
-  b. an existing, v3.0.0 compatible runtime directory.
+1. Make a copy of `.env.sample` and name it `.env` (`cp .env.sample .env` (Mac/Linux) or `copy .env.sample .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to the desired location of the InvokeAI runtime directory. It may be an existing directory from a previous installation (post 4.0.0).
1. Execute `run.sh`

The image will be built automatically if needed.

-The runtime directory (holding models and outputs) will be created in the location specified by `INVOKEAI_ROOT`. The default location is `~/invokeai`. The runtime directory will be populated with the base configs and models necessary to start generating.
+The runtime directory (holding models and outputs) will be created in the location specified by `INVOKEAI_ROOT`. The default location is `~/invokeai`. Navigate to the Model Manager tab and install some models before generating.

### Use a GPU

- Linux is *recommended* for GPU support in Docker.
- WSL2 is *required* for Windows.
- Only `x86_64` architecture is supported.

-The Docker daemon on the system must be already set up to use the GPU. In case of Linux, this involves installing `nvidia-docker-runtime` and configuring the `nvidia` runtime as default. Steps will be different for AMD. Please see Docker documentation for the most up-to-date instructions for using your GPU with Docker.
+The Docker daemon on the system must already be set up to use the GPU. On Linux, this involves installing `nvidia-docker-runtime` and configuring the `nvidia` runtime as default. Steps will be different for AMD. Please see the Docker/NVIDIA/AMD documentation for the most up-to-date instructions for using your GPU with Docker.

-To use an AMD GPU, set `GPU_DRIVER=rocm` in your `.env` file.
+To use an AMD GPU, set `GPU_DRIVER=rocm` in your `.env` file before running `./run.sh`.

## Customize
@@ -59,10 +93,10 @@ Values are optional, but setting `INVOKEAI_ROOT` is highly recommended. The defa
INVOKEAI_ROOT=/Volumes/WorkDrive/invokeai
HUGGINGFACE_TOKEN=the_actual_token
CONTAINER_UID=1000
-GPU_DRIVER=nvidia
+GPU_DRIVER=cuda
```

-Any environment variables supported by InvokeAI can be set here - please see the [Configuration docs](https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/) for further detail.
+Any environment variables supported by InvokeAI can be set here. See the [Configuration docs](https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/) for further detail.

docs/installation/040_INSTALL_DOCKER.md (18 additions, 41 deletions)
@@ -4,50 +4,37 @@ title: Installing with Docker
# :fontawesome-brands-docker: Docker

-!!! warning "macOS and AMD GPU Users"
+!!! warning "macOS users"

-    We highly recommend to Install InvokeAI locally using [these instructions](INSTALLATION.md),
-    because Docker containers can not access the GPU on macOS.
-
-!!! warning "AMD GPU Users"
-
-    Container support for AMD GPUs has been reported to work by the community, but has not received
-    extensive testing. Please make sure to set the `GPU_DRIVER=rocm` environment variable (see below), and
-    use the `build.sh` script to build the image for this to take effect at build time.
+    Docker can not access the GPU on macOS, so your generation speeds will be slow. [Install InvokeAI](INSTALLATION.md) instead.

!!! tip "Linux and Windows Users"

-    For optimal performance, configure your Docker daemon to access your machine's GPU.
+    Configure Docker to access your machine's GPU.
    Docker Desktop on Windows [includes GPU support](https://www.docker.com/blog/wsl-2-gpu-support-for-docker-desktop-on-nvidia-gpus/).
-    Linux users should install and configure the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
+    Linux users should follow the [NVIDIA](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) or [AMD](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html) documentation.

-## Why containers?
+## TL;DR

-They provide a flexible, reliable way to build and deploy InvokeAI.
-See [Processes](https://12factor.net/processes) under the Twelve-Factor App
-methodology for details on why running applications in such a stateless fashion is important.
+Ensure your Docker setup is able to use your GPU. Then:

-The container is configured for CUDA by default, but can be built to support AMD GPUs
-by setting the `GPU_DRIVER=rocm` environment variable at Docker image build time.
+```bash
+docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
+```

-Developers on Apple silicon (M1/M2/M3): You
-[can't access your GPU cores from Docker containers](https://github.com/pytorch/pytorch/issues/81224)
-and performance is reduced compared with running it directly on macOS but for
-development purposes it's fine. Once you're done with development tasks on your
-laptop you can build for the target platform and architecture and deploy to
-another environment with NVIDIA GPUs on-premises or in the cloud.
+Once the container starts up, open http://localhost:9090 in your browser, install some models, and start generating.

-## TL;DR
+## Build-It-Yourself

-This assumes properly configured Docker on Linux or Windows/WSL2. Read on for detailed customization options.
+All the Docker materials are located inside the [docker](https://github.com/invoke-ai/InvokeAI/tree/main/docker) directory in the Git repo.

```bash
-# docker compose commands should be run from the `docker` directory
cd docker
+cp .env.sample .env
docker compose up
```

-## Installation in a Linux container (desktop)
+We also ship the `run.sh` convenience script. See the `docker/README.md` file for detailed instructions on how to customize the Docker setup to your needs.

### Prerequisites
@@ -58,18 +45,9 @@ Preferences, Resources, Advanced. Increase the CPUs and Memory to avoid this
[Issue](https://github.com/invoke-ai/InvokeAI/issues/342). You may need to
increase Swap and Disk image size too.

-#### Get a Huggingface-Token
-
-Besides the Docker Agent you will need an Account on
-[huggingface.co](https://huggingface.co/join).
-
-After you succesfully registered your account, go to
-a token and copy it, since you will need in for the next step.
-
### Setup

-Set up your environmnent variables. In the `docker` directory, make a copy of `.env.sample` and name it `.env`. Make changes as necessary.
+Set up your environment variables. In the `docker` directory, make a copy of `.env.sample` and name it `.env`. Make changes as necessary.

Any environment variables supported by InvokeAI can be set here - please see the [CONFIGURATION](../features/CONFIGURATION.md) for further detail.
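
For instance, a minimal `.env` might contain just the runtime-directory location and the GPU driver (the values below are placeholders; see `docker/README.md` for the full list of options):

```bash
# docker/.env (illustrative)
INVOKEAI_ROOT=/home/myuser/invokeai
GPU_DRIVER=cuda
```
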
@@ -103,10 +81,9 @@ Once the container starts up (and configures the InvokeAI root directory if this
## Troubleshooting / FAQ

- Q: I am running on Windows under WSL2, and am seeing a "no such file or directory" error.
-- A: Your `docker-entrypoint.sh` file likely has Windows (CRLF) as opposed to Unix (LF) line endings,
-  and you may have cloned this repository before the issue was fixed. To solve this, please change
-  the line endings in the `docker-entrypoint.sh` file to `LF`. You can do this in VSCode
+- A: Your `docker-entrypoint.sh` might have Windows (CRLF) line endings, depending on how you cloned the repository.
+  To solve this, change the line endings in the `docker-entrypoint.sh` file to `LF`. You can do this in VSCode
  (`Ctrl+P` and search for "line endings"), or by using the `dos2unix` utility in WSL.
  Finally, you may delete `docker-entrypoint.sh` followed by `git pull; git checkout docker/docker-entrypoint.sh`
  to reset the file to its most recent version.
-  For more information on this issue, please see the[Docker Desktop documentation](https://docs.docker.com/desktop/troubleshoot/topics/#avoid-unexpected-syntax-errors-use-unix-style-line-endings-for-files-in-containers)
+  For more information on this issue, see the [Docker Desktop documentation](https://docs.docker.com/desktop/troubleshoot/topics/#avoid-unexpected-syntax-errors-use-unix-style-line-endings-for-files-in-containers)
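
In practice, the fix looks something like this (run from the repository root inside WSL; a sketch using the utilities mentioned above):

```bash
# Convert the script to Unix (LF) line endings in place:
dos2unix docker/docker-entrypoint.sh

# Or discard the local copy and restore the file from the repository:
git checkout -- docker/docker-entrypoint.sh
```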
0 commit comments