Releases: invoke-ai/InvokeAI
InvokeAI Version 2.2.4 - A Stable Diffusion Toolkit
With InvokeAI 2.2, this project now provides enthusiasts and professionals a robust workflow solution for creating AI-generated and human-facilitated compositions. Additional enhancements have been made as well, improving safety, ease of use, and installation.
Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows/Linux/Mac (M1 & M2).
You can see the release video here, which introduces the main WebUI enhancement for version 2.2 - The Unified Canvas. This new workflow is the biggest enhancement added to the WebUI to date, and unlocks a stunning amount of potential for users to create and iterate on their creations. The following sections describe what's new for InvokeAI.
Version 2.2.4 is a bugfix release. The major user-visible change is that we have overhauled the installation experience to make it faster and more stable. Please see Installation Overview for instructions on using the new installer, and see the .zip files in the Assets section below for the installer for your preferred platform. Note that you will need to install Python 3.9 or 3.10 to use the new installation method.
The new installers are located here. They were updated on 13 December to prevent a segfault crash on certain Macintosh systems.
There are a number of installation-related changes that previous InvokeAI users should be aware of:
Everything now lives in the `invokeai` directory.

Previously there were two directories to worry about: the directory that contained the InvokeAI source code and the launcher scripts, and the `invokeai` directory that contained the model files, embeddings, configuration, and outputs. With the 2.2.4 release, this dual system is done away with, and everything, including the `invoke.bat` and `invoke.sh` launcher scripts, now lives in a directory named `invokeai`. By default this directory is located in your home directory (e.g. `\Users\yourname` on Windows), but you can select where it goes at install time.
- InvokeAI-installer-2.2.4-p5-linux.zip
- InvokeAI-installer-2.2.4-p5-mac.zip
- InvokeAI-installer-2.2.4-p5-windows.zip
After installation, you can delete the install directory (the one that the zip file creates when it unpacks). Do not delete or move the `invokeai` directory!
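For orientation, here is an illustrative sketch of the consolidated layout. The launcher scripts and the models/embeddings/configuration/outputs contents are named in the notes above, but the exact subdirectory names are assumptions, so expect minor differences on your system:

```
invokeai/
├── invoke.sh          # Linux/Mac launcher script
├── invoke.bat         # Windows launcher script
├── invokeai.init      # startup options (see below)
├── models/            # model weights
├── embeddings/        # textual-inversion embeddings
├── configs/           # configuration files
└── outputs/           # generated images
```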
The `.invokeai` initialization file has been renamed `invokeai/invokeai.init`.

You can place frequently-used startup options in this file, such as the default number of steps or your preferred sampler. To keep everything in one place, this file has been moved into the `invokeai` directory and is named `invokeai.init`.
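For illustration, a minimal `invokeai.init` might look like the sketch below, assuming the file simply holds one command-line switch per line; the values are placeholders rather than recommended defaults:

```
# invokeai.init -- frequently-used startup options
--steps=30
--sampler=k_lms
--outdir=/home/yourname/invokeai/outputs
```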
To update from Version 2.2.3
The easiest route is to download and unpack one of the 2.2.4 installer files. When it asks you for the location of the `invokeai` runtime directory, respond with the path to the directory that contains your 2.2.3 `invokeai`. That is, if `invokeai` lives at `C:\Users\fred\invokeai`, answer with `C:\Users\fred`, and answer "Y" when asked if you want to reuse the directory.
The `update.sh` (`update.bat`) script that came with the 2.2.3 source installer does not know about the new directory layout and won't be fully functional.
To update to 2.2.5 (and beyond), there's now an update path.
As they become available, you can update to more recent versions of InvokeAI using an `update.sh` (`update.bat`) script located in the `invokeai` directory. Running it without any arguments installs the most recent version of InvokeAI. Alternatively, you can install a specific release by passing the script the URL of that release's zip file, which you can find by clicking on the green "Code" button on this repository's home page. Here are some examples:
```
# 2.2.4 release
update.sh https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v2.2.4.zip

# 2.2.5 release (don't try; it doesn't exist yet!)
update.sh https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v2.2.5.zip

# current development version
update.sh https://github.com/invoke-ai/InvokeAI/archive/main.zip

# feature branch 3d-movies (don't try; it doesn't exist yet!)
update.sh https://github.com/invoke-ai/InvokeAI/archive/3d-movies.zip
```
Other 2.2.4 Improvements
- Fix InvokeAI GUI initialization by @addianto in #1687
- fix link in documentation by @lstein in #1728
- Fix broken link by @ShawnZhong in #1736
- Remove reference to binary installer by @lstein in #1731
- documentation fixes for 2.2.3 by @lstein in #1740
- Modify installer links to point closer to the source installer by @ebr in #1745
- add documentation warning about 1650/60 cards by @lstein in #1753
- Fix Linux source URL in installation docs by @andybearman in #1756
- Make install instructions discoverable in readme by @damian0815 in #1752
- typo fix by @ofirkris in #1755
- Non-interactive model download (support `HUGGINGFACE_TOKEN`) by @ebr in #1578 (see the sketch after this list)
- fix(srcinstall): shell installer - `cp` scripts instead of linking by @tildebyte in #1765
- stability and usage improvements to binary & source installers by @lstein in #1760
- fix off-by-one bug in cross-attention-control by @damian0815 in #1774
- Eventually update APP_VERSION to 2.2.3 by @spezialspezial in #1768
- invoke script cds to its location before running by @lstein in #1805
- Make PaperCut and VoxelArt models load again by @lstein in #1730
- Fix --embedding_directory / --embedding_path not working by @blessedcoolant in #1817
- Clean up readme by @hipsterusername in #1820
- Optimized Docker build with support for external working directory by @ebr in #1544
- disable pushing the cloud container by @mauwii in #1831
- Fix `docker push` github action and expand with additional metadata by @ebr in #1837
- Fix Broken Link To Notebook by @VedantMadane in #1821
- Account for flat models by @spezialspezial in #1766
- Update invoke.bat.in isolate environment variables by @lynnewu in #1833
- Arch Linux Specific PatchMatch Instructions & fixing conda install on linux by @SammCheese in #1848
- Make force free GPU memory work in img2img by @addianto in #1844
- New installer by @lstein
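A note on the non-interactive model download (#1578) flagged above: the downloader can read your Hugging Face token from the `HUGGINGFACE_TOKEN` environment variable instead of prompting for a login. A minimal sketch, assuming the standard `scripts/preload_models.py` entry point and a placeholder token:

```
# Supply the token via the environment so the download needs no interactive login
export HUGGINGFACE_TOKEN=hf_xxxxxxxxxxxxxxxx   # placeholder; substitute your own token
python scripts/preload_models.py
```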
New Contributors
- @ebr made their first contribution in #1727
- @addianto made their first contribution in #1687
- @ShawnZhong made their first contribution in #1736
- @andybearman made their first contribution in #1756
- @ofirkris made their first contribution in #1755
- @VedantMadane made their first contribution in #1821
- @lynnewu made their first contribution in #1833
- @SammCheese made their first contribution in #1848
Full Changelog: v2.2.3...v2.2.4
InvokeAI 2.2.3
Note: This point release removes references to the binary installer from the installation guide. The binary installer is not stable at the current time. First-time users are encouraged to use the "source" installer as described in Installing InvokeAI with the Source Installer.
With InvokeAI 2.2, this project now provides enthusiasts and professionals a robust workflow solution for creating AI-generated and human-facilitated compositions. Additional enhancements have been made as well, improving safety, ease of use, and installation.
Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows/Linux/Mac (M1 & M2).
You can see the release video here, which introduces the main WebUI enhancement for version 2.2 - The Unified Canvas. This new workflow is the biggest enhancement added to the WebUI to date, and unlocks a stunning amount of potential for users to create and iterate on their creations. The following sections describe what's new for InvokeAI.
Update 1 December 2022:
- The Unified Canvas: The Web UI now features a fully fitted infinite canvas capable of outpainting, inpainting, img2img, and txt2img, so you can streamline and extend your creative workflow. The canvas was rewritten to greatly improve performance and bring support for a variety of features like Paint Brushing, Unlimited History, Real-Time Progress displays, and more.
- Embedding Management: Easily pull from the top embeddings on Hugging Face directly within Invoke, using the embed token to generate the exact style you want. With the ability to use multiple embeds simultaneously, you can easily import and explore different styles within the same session!
- Viewer: The Web UI now also features a Viewer that lets you inspect your invocations in greater detail. No more opening images in your external file explorer, even for large upscaled images!
- 1-Click Installer Launch: With our official 1-click installation launch, using our tool has never been easier. Our OS-specific bundles (Mac M1/M2, Windows, and Linux) will get everything set up for you. Click and get going - it's now simple to get started with InvokeAI. See Installation.
- Model Safety: A checkpoint scanner (picklescan) has been added to the initialization process for new models, helping with security against malicious and evil pickles. (A sketch of the equivalent check follows this list.)
- DPM++2 Experimental Samplers: New samplers have been added! Please note that these are experimental and subject to change as we continue to enhance our backend system.
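For the curious, the safety check is equivalent to running the `picklescan` tool over a checkpoint before loading it. A minimal sketch, assuming picklescan's command-line interface and a hypothetical model path:

```
# Scan a checkpoint for dangerous pickle opcodes before loading it;
# picklescan exits with a non-zero status if it finds anything suspicious.
picklescan --path models/ldm/stable-diffusion-v1/model.ckpt
```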
First-time Installation
For those installing InvokeAI for the first time, please use this recipe:
For automated installation, open up the "Assets" section below and download one of the `InvokeAI-*.zip` files. The instructions in the Installation section of the InvokeAI docs will guide you to the right file to download and what to do with it.

For manual installation, download one of the "Source Code" archive files located in the Assets below. Unpack the file, and enter the InvokeAI directory that it creates. Alternatively, you may clone the source code repository using the command `git clone http://github.com/invoke-ai/InvokeAI` and follow the instructions in Manual Installation.
Upgrading
For those wishing to upgrade from an earlier version, please use this recipe:
Download one of the "Source Code" archive files located in the Assets below.
Unpack the file, and enter the InvokeAI directory that it creates.
Alternatively, if you have previously cloned the InvokeAI repository, you may update it by entering the InvokeAI directory and running `git checkout main`, followed by `git pull`.
Select the appropriate environment file for your operating system and GPU hardware. A number of files can be found in a new `environments-and-requirements` directory:
```
environment-lin-amd.yml   # Linux with an AMD (ROCm) GPU
environment-lin-cuda.yml  # Linux with an NVIDIA CUDA GPU
environment-mac.yml       # Macintoshes with MPS acceleration
environment-win-cuda.yml  # Windows with an NVIDIA CUDA GPU
```
Important Step that developers tend to miss! Either copy this environment file to the root directory with the name `environment.yml`, or make a symbolic link from `environment.yml` to the selected environment file:

Macintosh and Linux, using a symbolic link:
```
ln -sf environments-and-requirements/environment-xxx-yyy.yml environment.yml
```
Replace `xxx` and `yyy` with the appropriate OS and GPU codes.

Windows:
```
copy environments-and-requirements\environment-win-cuda.yml environment.yml
```
When this is done, confirm that a file `environment.yml` has been created in the InvokeAI root directory and that it points to the correct file in the `environments-and-requirements` directory.
Now run the following commands in the InvokeAI directory:
```
conda env update
conda activate invokeai
python scripts/preload_models.py
```
Additional installation information, including recipes for installing without Conda, can be found in Manual Installation.
Known Bugs
- If you use the binary installer, the autocomplete function will not work in the command-line client due to limitations of the version of Python that the installer uses. However, all other functions of the command-line client, and all features of the web UI, will function perfectly well.
- The PyPatchMatch module, which provides excellent outpainting and inpainting results, does not currently work on Macintoshes. It will work on Linux after a support library is added to the system. See Installing PyPatchMatch.
- InvokeAI 2.2.0 does not support the Stable Diffusion 2.0 model at the current time, but is expected to provide full support in the near future.
- The 1650 and 1660 Ti GPU cards only run in full-precision mode, which greatly limits the size of the models you can load and the images you can generate with InvokeAI (a workaround sketch follows this list).
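If you have one of these cards, the practical workaround is to force 32-bit precision at launch. A minimal sketch, assuming the long-standing `--full_precision` switch; check `python scripts/invoke.py --help` for the spelling your version uses:

```
# Force full 32-bit precision on cards that mis-render at half precision
python scripts/invoke.py --full_precision
```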
Contributing
Please see CONTRIBUTORS for a list of the many individuals who contributed to this project. Also many thanks to the dozens of patient testers who flushed out bugs in this release before it went live.
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so. If you are unfamiliar with how
to contribute to GitHub projects, here is a
Getting Started Guide. Unlike previous versions of InvokeAI, we have now moved all development to the `main` branch, so please make your pull requests against this branch.
Support
For support, please use this repository's GitHub Issues tracking service. Live support is also available on the InvokeAI Discord server.
InvokeAI Version 2.2.2 - A Stable Diffusion Toolkit
Note: The binary installer is not ready for prime time. First-time users are recommended to install via the "source" installer accessible through the links at the bottom of this page.
With InvokeAI 2.2, this project now provides enthusiasts and professionals a robust workflow solution for creating AI-generated and human-facilitated compositions. Additional enhancements have been made as well, improving safety, ease of use, and installation.
Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows/Linux/Mac (M1 & M2).
You can see the release video here, which introduces the main WebUI enhancement for version 2.2 - The Unified Canvas. This new workflow is the biggest enhancement added to the WebUI to date, and unlocks a stunning amount of potential for users to create and iterate on their creations. The following sections describe what's new for InvokeAI.
Update 1 December 2022:
- The Unified Canvas: The Web UI now features a fully fitted infinite canvas capable of outpainting, inpainting, img2img, and txt2img, so you can streamline and extend your creative workflow. The canvas was rewritten to greatly improve performance and bring support for a variety of features like Paint Brushing, Unlimited History, Real-Time Progress displays, and more.
- Embedding Management: Easily pull from the top embeddings on Hugging Face directly within Invoke, using the embed token to generate the exact style you want. With the ability to use multiple embeds simultaneously, you can easily import and explore different styles within the same session!
- Viewer: The Web UI now also features a Viewer that lets you inspect your invocations in greater detail. No more opening images in your external file explorer, even for large upscaled images!
- 1-Click Installer Launch: With our official 1-click installation launch, using our tool has never been easier. Our OS-specific bundles (Mac M1/M2, Windows, and Linux) will get everything set up for you. Click and get going - it's now simple to get started with InvokeAI. See Installation.
- Model Safety: A checkpoint scanner (picklescan) has been added to the initialization process for new models, helping with security against malicious and evil pickles.
- DPM++2 Experimental Samplers: New samplers have been added! Please note that these are experimental and subject to change as we continue to enhance our backend system.
First-time Installation
For those installing InvokeAI for the first time, please use this recipe:
For automated installation, open up the "Assets" section below and download one of the `InvokeAI-*.zip` files. The instructions in the Installation section of the InvokeAI docs will guide you to the right file to download and what to do with it.

For manual installation, download one of the "Source Code" archive files located in the Assets below. Unpack the file, and enter the InvokeAI directory that it creates. Alternatively, you may clone the source code repository using the command `git clone http://github.com/invoke-ai/InvokeAI` and follow the instructions in Manual Installation.
Upgrading
For those wishing to upgrade from an earlier version, please use this recipe:
Download one of the "Source Code" archive files located in the Assets below.
Unpack the file, and enter the InvokeAI directory that it creates.
Alternatively, if you have previously cloned the InvokeAI repository, you may update it by entering the InvokeAI directory and running `git checkout main`, followed by `git pull`.
Select the appropriate environment file for your operating system and GPU hardware. A number of files can be found in a new `environments-and-requirements` directory:
```
environment-lin-amd.yml   # Linux with an AMD (ROCm) GPU
environment-lin-cuda.yml  # Linux with an NVIDIA CUDA GPU
environment-mac.yml       # Macintoshes with MPS acceleration
environment-win-cuda.yml  # Windows with an NVIDIA CUDA GPU
```
Important Step that developers tend to miss! Either copy this environment file to the root directory with the name `environment.yml`, or make a symbolic link from `environment.yml` to the selected environment file:

Macintosh and Linux, using a symbolic link:
```
ln -sf environments-and-requirements/environment-xxx-yyy.yml environment.yml
```
Replace `xxx` and `yyy` with the appropriate OS and GPU codes.

Windows:
```
copy environments-and-requirements\environment-win-cuda.yml environment.yml
```
When this is done, confirm that a file `environment.yml` has been created in the InvokeAI root directory and that it points to the correct file in the `environments-and-requirements` directory.
Now run the following commands in the InvokeAI directory:
```
conda env update
conda activate invokeai
python scripts/preload_models.py
```
Additional installation information, including recipes for installing without Conda, can be found in Manual Installation.
Known Bugs
- If you use the binary installer, the autocomplete function will not work in the command-line client due to limitations of the version of Python that the installer uses. However, all other functions of the command-line client, and all features of the web UI, will function perfectly well.
- The PyPatchMatch module, which provides excellent outpainting and inpainting results, does not currently work on Macintoshes. It will work on Linux after a support library is added to the system. See Installing PyPatchMatch.
- InvokeAI 2.2.0 does not support the Stable Diffusion 2.0 model at the current time, but is expected to provide full support in the near future.
- The 1650 and 1660 Ti GPU cards only run in full-precision mode, which greatly limits the size of the models you can load and the images you can generate with InvokeAI.
Contributing
Please see CONTRIBUTORS for a list of the many individuals who contributed to this project. Also many thanks to the dozens of patient testers who flushed out bugs in this release before it went live.
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so. If you are unfamiliar with how
to contribute to GitHub projects, here is a
Getting Started Guide. Unlike previous versions of InvokeAI, we have now moved all development to the `main` branch, so please make your pull requests against this branch.
Support
For support, please use this repository's GitHub Issues tracking service. Live support is also available on the InvokeAI Discord server.
InvokeAI Version 2.2.0
With InvokeAI 2.2, this project now provides enthusiasts and professionals a robust workflow solution for creating AI-generated and human-facilitated compositions. Additional enhancements have been made as well, improving safety, ease of use, and installation.
Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows/Linux/Mac (M1 & M2).
You can see the release video here, which introduces the main WebUI enhancement for version 2.2 - The Unified Canvas. This new workflow is the biggest enhancement added to the WebUI to date, and unlocks a stunning amount of potential for users to create and iterate on their creations. The following sections describe what's new for InvokeAI.
Update 1 December 2022:
- The Unified Canvas: The Web UI now features a fully fitted infinite canvas capable of outpainting, inpainting, img2img, and txt2img, so you can streamline and extend your creative workflow. The canvas was rewritten to greatly improve performance and bring support for a variety of features like Paint Brushing, Unlimited History, Real-Time Progress displays, and more.
- Embedding Management: Easily pull from the top embeddings on Hugging Face directly within Invoke, using the embed token to generate the exact style you want. With the ability to use multiple embeds simultaneously, you can easily import and explore different styles within the same session!
- Viewer: The Web UI now also features a Viewer that lets you inspect your invocations in greater detail. No more opening images in your external file explorer, even for large upscaled images!
- 1-Click Installer Launch: With our official 1-click installation launch, using our tool has never been easier. Our OS-specific bundles (Mac M1/M2, Windows, and Linux) will get everything set up for you. Click and get going - it's now simple to get started with InvokeAI. See Installation.
- Model Safety: A checkpoint scanner (picklescan) has been added to the initialization process for new models, helping with security against malicious and evil pickles.
- DPM++2 Experimental Samplers: New samplers have been added! Please note that these are experimental and subject to change as we continue to enhance our backend system.
First-time Installation
For those installing InvokeAI for the first time, please use this recipe:
For automated installation, open up the "Assets" section below and download one of the `InvokeAI-*.zip` files. The instructions in the Installation section of the InvokeAI docs will guide you to the right file to download and what to do with it.

For manual installation, download one of the "Source Code" archive files located in the Assets below. Unpack the file, and enter the InvokeAI directory that it creates. Alternatively, you may clone the source code repository using the command `git clone http://github.com/invoke-ai/InvokeAI` and follow the instructions in Manual Installation.
Upgrading
For those wishing to upgrade from an earlier version, please use this recipe:
Download one of the "Source Code" archive files located in the Assets below.
Unpack the file, and enter the InvokeAI directory that it creates.
Alternatively, if you have previously cloned the InvokeAI repository, you may update it by entering the InvokeAI directory and running `git checkout main`, followed by `git pull`.
Select the appropriate environment file for your operating system and GPU hardware. A number of files can be found in a new `environments-and-requirements` directory:
```
environment-lin-amd.yml   # Linux with an AMD (ROCm) GPU
environment-lin-cuda.yml  # Linux with an NVIDIA CUDA GPU
environment-mac.yml       # Macintoshes with MPS acceleration
environment-win-cuda.yml  # Windows with an NVIDIA CUDA GPU
```
Important Step that developers tend to miss! Either copy this environment file to the root directory with the name `environment.yml`, or make a symbolic link from `environment.yml` to the selected environment file:

Macintosh and Linux, using a symbolic link:
```
ln -sf environments-and-requirements/environment-xxx-yyy.yml environment.yml
```
Replace `xxx` and `yyy` with the appropriate OS and GPU codes.

Windows:
```
copy environments-and-requirements\environment-win-cuda.yml environment.yml
```
When this is done, confirm that a file `environment.yml` has been created in the InvokeAI root directory and that it points to the correct file in the `environments-and-requirements` directory.
Now run the following commands in the InvokeAI directory:
```
conda env update
conda activate invokeai
python scripts/preload_models.py
```
Additional installation information, including recipes for installing without Conda, can be found in Manual Installation.
Known Bugs
- If you use the binary installer, the autocomplete function will not work in the command-line client due to limitations of the version of Python that the installer uses. However, all other functions of the command-line client, and all features of the web UI, will function perfectly well.
- The PyPatchMatch module, which provides excellent outpainting and inpainting results, does not currently work on Macintoshes. It will work on Linux after a support library is added to the system. See Installing PyPatchMatch.
- InvokeAI 2.2.0 does not support the Stable Diffusion 2.0 model at the current time, but is expected to provide full support in the near future.
- The 1650 and 1660 Ti GPU cards only run in full-precision mode, which greatly limits the size of the models you can load and the images you can generate with InvokeAI.
Contributing
Please see CONTRIBUTORS for a list of the many individuals who contributed to this project. Also many thanks to the dozens of patient testers who flushed out bugs in this release before it went live.
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so. If you are unfamiliar with how
to contribute to GitHub projects, here is a
Getting Started Guide. Unlike previous versions of InvokeAI, we have now moved all development to the `main` branch, so please make your pull requests against this branch.
Support
For support, please use this repository's GitHub Issues tracking service. Live support is also available on the InvokeAI Discord server.
InvokeAI Version 2.1.3 - A Stable Diffusion Toolkit
Welcome! Click here to get the latest release!
Read below for the old 2.1.3 release.
InvokeAI 2.1.3
The invoke-ai team is excited to share the release of InvokeAI 2.1 - A Stable Diffusion Toolkit, a project that aims to provide both enthusiasts and professionals a suite of robust image creation tools. Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows/Linux/Mac (M1 & M2).
InvokeAI was one of the earliest forks of the core CompVis repo (formerly lstein/stable-diffusion), and recently evolved into a full-fledged, community-driven, open-source Stable Diffusion toolkit. Version 2.1 of the tool introduces multiple new features and performance enhancements.
This 14-minute YouTube video introduces you to some of the new features contained in this release. The following sections describe what's new in the Web interface (WebGUI) and the command-line interface (CLI).
Version 2.1.3 is primarily a bug fix release that improves the installation process and provides enhanced stability and usability.
Update 22 November - updated `invokeAI-src-installer-mac.zip` to correct an error downloading the micromamba distribution.
New features
- A choice of installer scripts that automate installation and configuration. See Installation.
- A streamlined manual installation process that works for both Conda and PIP-only installs. See Manual Installation.
- The ability to save frequently-used startup options (model to load, steps, sampler, etc.) in a `.invokeai` file. See Client.
- Support for AMD GPU cards (non-CUDA) on Linux machines.
- Multiple bugs and edge cases squashed.
Installation
For those installing InvokeAI for the first time, please use this recipe:
- For automated installation, open up the "Assets" section below and download one of the `InvokeAI-*.zip` files. The instructions in the Installation section of the [InvokeAI docs](https://invoke-ai.github.io/InvokeAI) will guide you to the right file to download and what to do with it.
- For manual installation, download one of the "Source Code" archive files located in the Assets below.
  - Unpack the file, and enter the `InvokeAI` directory that it creates.
  - Alternatively, you may clone the source code repository using the command `git clone http://github.com/invoke-ai/InvokeAI`.
  - Follow the instructions in Manual Installation.
Upgrading
For those wishing to upgrade from an earlier version, please use this recipe:
- Download one of the "Source Code" archive files located in the Assets below.
- Unpack the file, and enter the `InvokeAI` directory that it creates.
- Alternatively, if you have previously cloned the InvokeAI repository, you may update it by entering the InvokeAI directory and running `git checkout main`, followed by `git pull`.
- Select the appropriate environment file for your operating system and GPU hardware. A number of files can be found in a new `environments-and-requirements` directory:
```
environment-lin-amd.yml   # Linux with an AMD (ROCm) GPU
environment-lin-cuda.yml  # Linux with an NVIDIA CUDA GPU
environment-mac.yml       # Macintoshes with MPS acceleration
environment-win-cuda.yml  # Windows with an NVIDIA CUDA GPU
```
- Important Step that developers tend to miss! Either copy this environment file to the root directory with the name `environment.yml`, or make a symbolic link from `environment.yml` to the selected environment file:
  - Macintosh and Linux, using a symbolic link:
    ```
    ln -sf environments-and-requirements/environment-xxx-yyy.yml environment.yml  # Replace `xxx` and `yyy` with the appropriate OS and GPU codes.
    ```
  - Windows:
    ```
    copy environments-and-requirements\environment-win-cuda.yml environment.yml
    ```
When this is done, confirm that a file `environment.yml` has been created in the InvokeAI root directory and that it points to the correct file in the `environments-and-requirements` directory.
- Now run the following commands in the InvokeAI directory:
```
conda env update
conda activate invokeai
python scripts/preload_models.py
```
Additional installation information, including recipes for installing without Conda, can be found in Manual Installation.
Contributing
Please see CONTRIBUTORS for a list of the many individuals who contributed to this project. Also many thanks to the dozens of patient testers who flushed out bugs in this release before it went live.
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so. If you are unfamiliar with how
to contribute to GitHub projects, here is a
Getting Started Guide.
The most important thing to know about contributing code is to make your pull request against the "development" branch, and not against "main". This will help keep public breakage to a minimum and will allow you to propose more radical changes.
Support
For support, please use this repository's GitHub Issues tracking service. Live support is also available on the InvokeAI Discord server.
InvokeAI Version 2.1 - A Stable Diffusion Toolkit
The invoke-ai team is excited to share the release of InvokeAI 2.1 - A Stable Diffusion Toolkit, a project that aims to provide both enthusiasts and professionals a suite of robust image creation tools. Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows/Linux/Mac (M1 & M2).
InvokeAI was one of the earliest forks of the core CompVis repo (formerly lstein/stable-diffusion), and recently evolved into a full-fledged, community-driven, open-source Stable Diffusion toolkit. Version 2.1 of the tool introduces multiple new features and performance enhancements.
This 14-minute YouTube video introduces you to some of the new features contained in this release. The following sections describe what's new in the Web interface (WebGUI) and the command-line interface (CLI).
Major new features
- Inpainting support in the WebGUI
- Greatly improved navigation and user experience in the WebGUI
- The prompt syntax has been enhanced with prompt weighting, cross-attention and prompt merging.
- You can now load multiple models and switch among them quickly without leaving the CLI or WebGUI (a sketch of the CLI session follows this list).
- The installation process (via `scripts/preload_models.py`) now lets you select among several popular Stable Diffusion models, and downloads and installs them on your behalf. Among other models, this script will install the current Stable Diffusion 1.5 model as well as a StabilityAI variational autoencoder (VAE) which improves face generation.
- Tired of struggling with photo editors to get the masked region for inpainting just right? Let the AI make the mask for you using text masking. This feature allows you to specify the part of the image to paint over using just English-language phrases.
- Tired of seeing the heads of your subjects cropped off? Uncrop them in the CLI with the outcrop feature.
- Tired of seeing your subjects' bodies duplicated or mangled when generating larger-dimension images? Check out the `--hires` option in the CLI, or select the corresponding toggle in the WebGUI.
- We now support textual inversion and fine-tuned .bin styles and subjects from the Hugging Face archive of SD Concepts. Load a .bin file using the `--embedding_path` option. (The next version will support merging and loading of multiple simultaneous models.)
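To make the model-switching and generation options above concrete, here is an illustrative `invoke>` session. The `!models` and `!switch` commands and the model name are assumptions made for this sketch; `--hires` and `--embedding_path` are the options named in the list:

```
# python scripts/invoke.py --embedding_path my-style.bin   (assumed startup invocation)
invoke> !models                             # list the installed models (assumed command)
invoke> !switch stable-diffusion-1.5        # hot-swap models without restarting (assumed command)
invoke> "a lighthouse at dawn" --hires      # reduce duplicated subjects at large sizes
```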
Installation
To install InvokeAI from scratch, please see the Installation section of the InvokeAI docs.
Upgrading
For those wishing to upgrade from an earlier version, please use the following recipe from within the InvokeAI directory:
Mac users:
```
conda deactivate
git checkout main
git pull
rm -rf src
conda env update -f environment-mac.yml
conda activate invokeai
python scripts/preload_models.py
```
Windows users:
```
conda deactivate
git checkout main
git pull
rmdir src /s
conda env update
conda activate invokeai
python scripts\preload_models.py
```
Linux users:
```
conda deactivate
git checkout main
git pull
rm -rf src
conda env update
conda activate invokeai
python scripts/preload_models.py
```
Contributing
Please see CONTRIBUTORS for a list of the many individuals who contributed to this project. Also many thanks to the dozens of patient testers who flushed out bugs in this release before it went live.
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so. If you are unfamiliar with how
to contribute to GitHub projects, here is a
Getting Started Guide.
The most important thing to know about contributing code is to make your pull request against the "development" branch, and not against "main". This will help keep public breakage to a minimum and will allow you to propose more radical changes.
Support
For support, please use this repository's GitHub Issues tracking service. Live support is also available on the InvokeAI Discord server.
Full change log since 2.0.2
- update mac instructions to use invokeai for env name by @willwillems in #1030
- Update .gitignore by @blessedcoolant in #1040
- reintroduce fix for m1 from #579 missing after merge by @skurovec in #1056
- Update Stable_Diffusion_AI_Notebook.ipynb (Take 2) by @ChloeL19 in #1060
- Print out the device type which is used by @manzke in #1073
- Hires Addition by @hipsterusername in #1063
- fix for "1 leaked semaphore objects to clean up at shutdown" on M1 by @skurovec in #1081
- Forward dream.py to invoke.py using the same interpreter, add deprecation warning by @db3000 in #1077
- fix noisy images at high step counts by @lstein in #1086
- Generalize facetool strength argument by @db3000 in #1078
- Enable fast switching among models at the invoke> command line by @lstein in #1066
- Fix Typo, committed changing ldm environment to invokeai by @jdries3 in #1095
- Update generate.py by @unreleased in #1109
- Update 'ldm' env to 'invokeai' in troubleshooting steps by @19wolf in #1125
- Fixed documentation typos and resolved merge conflicts by @rupeshs in #1123
- Fix broken doc links, fix malaprop in the project subtitle by @majick in #1131
- Only output facetool parameters if enhancing faces by @db3000 in #1119
- Update gitignore to ignore codeformer weights at new location by @spezialspezial in #1136
- fix links to point to invoke-ai.github.io #1117 by @mauwii in #1143
- Rework-mkdocs by @mauwii in #1144
- add option to CLI and pngwriter that allows user to set PNG compression level by @lstein in #1127
- Fix img2img DDIM index out of bound by @wfng92 in #1137
- Fix gh actions by @mauwii in #1128
- Add text prompt to inpaint mask support by @lstein in #1133
- Respect http[s] protocol when making socket.io middleware by @damian0815 in #976
- WebUI: Adds Codeformer support by @psychedelicious in #1151
- Skips normalizing prompts for web UI metadata by @psychedelicious in https://github.com/invoke-ai/Invo...
InvokeAI Version 2.0.2 - A Stable Diffusion Toolkit
The invoke-ai team is excited to share the release of InvokeAI 2.0 - A Stable Diffusion Toolkit, a project that aims to provide both enthusiasts and professionals a suite of robust image creation tools. Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows/Linux/Mac (M1 & M2).
InvokeAI was one of the earliest forks of the core CompVis repo (formerly lstein/stable-diffusion), and recently evolved into a full-fledged, community-driven, open-source Stable Diffusion toolkit named InvokeAI. Version 2.0.0 of the tool introduces an entirely new WebUI Front-end with a Desktop mode, and an optimized back-end server that can be interacted with via CLI or extended with your own fork.
Release 2.0.2 updates three Python dependencies that were recently reported to have critical security holes, and enhances documentation. Otherwise, the feature set is identical to 2.0.1.
This version of the app improves in-app workflows, leveraging GFPGAN and Codeformer for face restoration and RealESRGAN for upscaling. Additionally, the CLI supports a large variety of features:
- Inpainting
- Outpainting
- Negative Prompts (prompt unconditioning)
- Fast online model switching
- Textual Inversion
- Improved Quality for Hi-Resolution Images (Embiggen, Hi-res Fixes, etc.)
- And more...
Planned future updates include UI-driven outpainting/inpainting, robust Cross Attention support, and an advanced node workflow for automating and sharing your workflows with the community.
What's Changed
- update mac instructions to use invokeai for env name by @willwillems in #1030
- Update .gitignore by @blessedcoolant in #1040
- reintroduce fix for m1 from #579 missing after merge by @skurovec in #1056
- Update Stable_Diffusion_AI_Notebook.ipynb (Take 2) by @ChloeL19 in #1060
- Print out the device type which is used by @manzke in #1073
- Hires Addition by @hipsterusername in #1063
- fix for "1 leaked semaphore objects to clean up at shutdown" on M1 by @skurovec in #1081
- Forward dream.py to invoke.py using the same interpreter, add deprecation warning by @db3000 in #1077
- fix noisy images at high step counts by @lstein in #1086
- Generalize facetool strength argument by @db3000 in #1078
- Enable fast switching among models at the invoke> command line by @lstein in #1066
- Fix Typo, committed changing ldm environment to invokeai by @jdries3 in #1095
- Update generate.py by @unreleased in #1109
- Update 'ldm' env to 'invokeai' in troubleshooting steps by @19wolf in #1125
- Fixed documentation typos and resolved merge conflicts by @rupeshs in #1123
- Fix broken doc links, fix malaprop in the project subtitle by @majick in #1131
- Only output facetool parameters if enhancing faces by @db3000 in #1119
- Update gitignore to ignore codeformer weights at new location by @spezialspezial in #1136
- fix links to point to invoke-ai.github.io #1117 by @mauwii in #1143
- Rework-mkdocs by @mauwii in #1144
- add option to CLI and pngwriter that allows user to set PNG compression level by @lstein in #1127
- Fix img2img DDIM index out of bound by @wfng92 in #1137
- Fix gh actions by @mauwii in #1128
New Contributors
- @willwillems made their first contribution in #1030
- @ChloeL19 made their first contribution in #1060
- @manzke made their first contribution in #1073
- @unreleased made their first contribution in #1109
- @rupeshs made their first contribution in #1123
- @majick made their first contribution in #1131
- @wfng92 made their first contribution in #1137
Full Changelog: v2.0.1...v2.0.2
InvokeAI Version 2.0.1 - A Stable Diffusion Toolkit
The invoke-ai team is excited to share the release of InvokeAI 2.0 - A Stable Diffusion Toolkit, a project that aims to provide both enthusiasts and professionals a suite of robust image creation tools. Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows/Linux/Mac (M1 & M2).
InvokeAI was one of the earliest forks of the core CompVis repo (formerly lstein/stable-diffusion), and recently evolved into a full-fledged, community-driven, open-source Stable Diffusion toolkit named InvokeAI. Version 2.0.0 of the tool introduces an entirely new WebUI Front-end with a Desktop mode, and an optimized back-end server that can be interacted with via CLI or extended with your own fork.
Release 2.0.1 corrects an error that was causing the k* samplers to produce noisy images at high step counts. Otherwise the feature set is the same as 2.0.0.
This version of the app improves in-app workflows, leveraging GFPGAN and Codeformer for face restoration and RealESRGAN for upscaling. Additionally, the CLI supports a large variety of features:
- Inpainting
- Outpainting
- Negative Prompts (prompt unconditioning)
- Fast online model switching
- Textual Inversion
- Improved Quality for Hi-Resolution Images (Embiggen, Hi-res Fixes, etc.)
- And more...
Planned future updates include UI-driven outpainting/inpainting, robust Cross Attention support, and an advanced node workflow for automating and sharing your workflows with the community.
What's Changed
- hotfix(venv): rename 'ldm' -> 'invokeai' by @tildebyte in #1014
- Fix two broken links in README by @Miserlou in #1026
- fixed old reference to ldm on activate env by @nunocoracao in #1029
- Fix the link to the weights download page by @Hkmu in #1041
- reintroduce fix for m1 from #579 missing after merge by @skurovec in #1055
- More mkdocs updates by @mauwii in #1057
- Add back old `dream.py` as `legacy_api.py` by @CapableWeb in #1070
New Contributors
- @Miserlou made their first contribution in #1026
- @nunocoracao made their first contribution in #1029
- @Hkmu made their first contribution in #1041
Full Changelog: v2.0.0...v2.0.1
InvokeAI Version 2.0.0 - A Stable Diffusion Toolkit
The invoke-ai team is excited to share the release of InvokeAI 2.0 - A Stable Diffusion Toolkit, a project that aims to provide both enthusiasts and professionals a suite of robust image creation tools. Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows/Linux/Mac (M1 & M2).
InvokeAI was one of the earliest forks of the core CompVis repo (formerly lstein/stable-diffusion), and recently evolved into a full-fledged, community-driven, open-source Stable Diffusion toolkit named InvokeAI. Version 2.0.0 of the tool introduces an entirely new WebUI Front-end with a Desktop mode, and an optimized back-end server that can be interacted with via CLI or extended with your own fork.
This version of the app improves in-app workflows, leveraging GFPGAN and Codeformer for face restoration and RealESRGAN for upscaling. Additionally, the CLI supports a large variety of features:
- Inpainting
- Outpainting
- Negative Prompts (prompt unconditioning)
- Textual Inversion
- Improved Quality for Hi-Resolution Images (Embiggen, Hi-res Fixes, etc.)
- And more...
Planned future updates include UI-driven outpainting/inpainting, robust Cross Attention support, and an advanced node workflow for automating and sharing your workflows with the community.
SD-Dream Version 1.14.1
This is identical to release 1.14 except that it reverts the name of the conda environment from "sd-ldm" back to the original "ldm".
Features from 1.14:
- Memory optimizations for small-RAM cards. 512x512 now possible on 4 GB GPUs.
- Full support for Apple hardware with M1 or M2 chips.
- Add "seamless mode" for circular tiling of the image (prixt). Generates beautiful effects; a sketch follows this list.
- Inpainting support.
- Improved web server GUI.
- Lots of code and documentation cleanups.
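As an illustration of the seamless mode above: in this era of the project the CLI was still `dream.py`, and tiling is toggled with the `--seamless` switch. The prompt is a placeholder:

```
# Generate an image whose edges tile circularly in both directions
dream> "cobblestone texture, top-down photograph" --seamless
```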