docs: improve installation instructions and prerequisites #36
base: main
Conversation
- Add Prerequisites section with Docker setup requirements
- Add explicit Llama2 model download instructions
- Update Quick Start section with clearer steps
- Fix outdated references to Llama3.2
Hi @edu-ap, thank you for your PR and for adding consistency to the quotes and spaces.
I do have a couple of comments. The main one is that I don't think we should be offering a step-by-step guide for Docker setup; this repository assumes you already have it set up correctly before diving in. Maybe we could link to some official documentation that explains this, but I don't think we should be managing that here.
Also, I noticed that you added a step to download the Llama models, but that should happen automatically the first time you run the project. Please let me know if that didn't work for you, and we can look into it.
2. **Docker Compose V2**: Required for running the multi-container setup
```bash
# Check Docker Compose installation
docker-compose --version
```
If not installed, you can install it using:
```bash
sudo curl -L "https://github.com/docker/compose/releases/download/v2.24.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```
The built-in `docker compose` command should be packaged with your Docker install. `docker-compose` is the v1 syntax, and was deprecated a long time ago.
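For reference, a quick way to check which syntax is available (assuming a reasonably recent Docker install that bundles the Compose plugin):

```bash
# Compose V2 is a Docker CLI plugin; this prints something like "Docker Compose version v2.x.x"
docker compose version

# The deprecated v1 standalone binary, if present, responds to this instead
docker-compose --version
```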
3. **Docker Permissions**: Ensure your user has permissions to run Docker commands
```bash
# Add your user to the docker group
sudo usermod -aG docker $USER
```
Note: You'll need to log out and back in for this change to take effect, or you can use `sudo` with Docker commands.
This really is dependent on your platform -- e.g., with Docker Desktop on Mac, this is not necessary.
This whole section can probably be condensed to a preamble like "Make sure Docker is set up correctly on your machine". We shouldn't really be providing a detailed step-by-step guide on how to install Docker, but should instead point to somewhere else that offers one.
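As a sketch of what that condensed preamble could point readers to, a minimal "is Docker working?" check might look like this (these are standard Docker CLI commands, but the exact wording in the README is up for discussion):

```bash
# Confirms the Docker daemon is running and your user can reach it
docker info

# Optional end-to-end test: pulls and runs a tiny throwaway container
docker run --rm hello-world
```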
3. Before running the workflow, ensure the Llama3.2 model is downloaded:
```bash
# If using sudo with Docker:
sudo docker exec -it ollama ollama pull llama3.2:latest
# If your user is in the docker group:
docker exec -it ollama ollama pull llama3.2:latest
```
This may take a few minutes depending on your internet connection. The model requires at least 16GB of RAM for optimal performance.
The `docker-compose.yml` should make sure that Llama 3.2 is downloaded. Did that not work for you?
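In case it helps debug, one way to check whether the automatic pull happened (assuming the Ollama container is named `ollama`, as in the snippet above) would be:

```bash
# List the models already present inside the Ollama container
docker exec -it ollama ollama list

# Inspect the compose logs for the pull step (service name assumed to be "ollama")
docker compose logs ollama
```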
@@ -11,7 +11,7 @@ quickly get started with building self-hosted AI workflows.
 > [!TIP]
 > [Read the announcement](https://blog.n8n.io/self-hosted-ai/)

-### What’s included
+### What's included
We don't need to revert it, but I just want to point out that `U+2019` is a valid apostrophe, and in a readme file like this, it was better suited than `U+0027`.
An official statement from the Unicode Consortium says:
> U+2019 ’ RIGHT SINGLE QUOTATION MARK is preferred where the character is to represent a punctuation mark, as for contractions: "we’ve", and the code is also referred to as a punctuation apostrophe.

I understand that we as programmers like to stick to ASCII characters for code, but we don't have to treat non-code files the same way.
Whenever we see `U+2019` being used in a project's documentation, we should assume that it was intentional, and try not to change them all to `U+0027` 🙏🏽
 This project is licensed under the Apache License 2.0 - see the
 [LICENSE](LICENSE) file for details.

-## 💬 Support
+## 💬 Support
Same here. `U+0020` is fine, but this was intentionally `U+00A0` before, because we want the space here to be non-breaking, to avoid weird wrapping issues when the readme is rendered in a narrower viewport.
Again, we don't necessarily need to revert this, but we should also not assume stuff like this is a mistake.
This PR improves the documentation by:
- Adding a Prerequisites section with detailed Docker setup requirements
- Adding explicit Llama2 model download instructions
- Updating Quick Start section with clearer steps
- Fixing outdated references to Llama3.2

These changes make the setup process clearer and help prevent common installation issues.