
Commit 84f0951

Add Docker steps
1 parent 924fd1a commit 84f0951

1 file changed: source/FULLSTACK.md (+153 -5)

@@ -247,34 +247,180 @@ Be careful not to run these commands (or anything else in this section) as the `

> Throughout the rest of this section, **`<type>`** refers to either `preview` or `preprint`.

### Install prerequisites

* Update the package lists

```
sudo apt-get update
```

* Install some basic dependencies in one line

```
sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates gpg
```

### Install Redis

Simply follow [these instructions](https://redis.io/docs/getting-started/installation/install-redis-on-linux/) to install Redis on Ubuntu.

Our server will use Redis both as the message broker and as the result backend for the Celery asynchronous task manager. What a weird sentence, isn't it? The responsibilities of these components are explained above.
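
For convenience, here is a rough sketch of what the linked instructions looked like at the time of writing (defer to the official page if they have changed), followed by a quick sanity check that the Redis server is up:

```
# Add the official Redis APT repository and install the server
curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/redis.list
sudo apt-get update
sudo apt-get install -y redis

# Make sure Redis starts on boot and responds
sudo systemctl enable --now redis-server
redis-cli ping   # should print PONG
```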

### Install Docker

#### Why?

Docker containers are a critical component of the NeuroLibre workflow. These containers, created by `repo2docker` as part of the BinderHub builds, store all the dependencies for respective preprints in Docker images located in NeuroLibre's private container registry (https://binder-registry.conp.cloud).

**On the test (preview) server**, Docker is required to pull these images from the registry to build MyST-formatted articles by spawning a Jupyter Hub. These operations are managed by the `myst_libre` Python package. As of May 2024, Jupyter-Book-based builds are handled by BinderHub. If ongoing support for Jupyter Books is needed, these builds should also be managed using `myst_libre`.

**On the production (preprint) server**, Docker is necessary to pull images and archive them on Zenodo.
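
As an illustration, pulling one of these preprint images by hand might look like the sketch below; the repository name and tag are hypothetical placeholders, and the registry credentials are whatever was configured for https://binder-registry.conp.cloud:

```
# Log in to NeuroLibre's private registry (credentials not shown here)
docker login binder-registry.conp.cloud

# Pull a preprint image; the repository/tag below is a made-up example
docker pull binder-registry.conp.cloud/some-preprint-repo:latest
```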

#### Installation steps

These steps are adapted from the [official documentation](https://docs.docker.com/engine/install/ubuntu) for Ubuntu 24.04. Please keep this section up to date as the Ubuntu version changes.

* Set up Docker's apt repository:

```
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update
```

* Select the OLDEST version among the available versions in the stable repository (just a convention):

```
VERSION_DOCKER=$(apt-cache madison docker-ce | awk '{ print $3 }' | sort -V | head -n 1)
VERSION_CONTAINERD=$(apt-cache madison containerd.io | awk '{ print $3 }' | sort -V | head -n 1)
```
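
As a quick sanity check (not part of the original steps), confirm that both variables were populated; the exact version strings depend on the repository contents at the time:

```
echo "docker-ce: $VERSION_DOCKER"
echo "containerd.io: $VERSION_CONTAINERD"
```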

* Install docker and contai_nerd_ (container runtime):

```
sudo apt install -y containerd.io=$VERSION_CONTAINERD docker-ce=$VERSION_DOCKER docker-ce-cli=$VERSION_DOCKER
```
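
Optionally (not part of the original steps), verify the installation and pin the selected versions so that a routine `apt upgrade` does not silently bump them:

```
# Verify that the daemon works end to end
sudo docker run --rm hello-world

# Hold the installed versions to avoid unintended upgrades
sudo apt-mark hold docker-ce docker-ce-cli containerd.io
```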

#### Docker configurations

To see the current Docker configuration, you can run `docker info`.

One important setting is the root directory where Docker will save the images. To see it, run `docker info | grep 'Docker Root Dir'`.

By default, this is set to `/var/lib/docker`, which is on the system partition with limited disk space. As the number of Docker images grows (e.g., when there are 10 ongoing submissions), this will cause problems.
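
To compare how much space is left on the system partition versus the ephemeral storage mentioned below, something like the following can help:

```
# Compare free space on the system partition vs. the ephemeral volume
df -h / /mnt
```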

On OpenStack, you can use the ephemeral storage (the HDD allocation that is destroyed when the instance is destroyed), which is mounted at the `/mnt` directory, for this purpose. To achieve that, edit the Docker daemon configuration:

```
sudo nano /etc/docker/daemon.json
```

with the following content:

```
{
  "data-root": "/mnt"
}
```

* Stop the docker service:

```
sudo systemctl stop docker
```

Ideally, you should perform these modifications **before** pulling any images; if that is the case, skip the next step. Otherwise, copy the existing images over to the new location so that they are not lost (this may take some time):

```
sudo rsync -axPS /var/lib/docker/ /mnt
```

After this, you can run `sudo rm -r /var/lib/docker` to reclaim that space.

* Start the docker service:

```
sudo systemctl start docker
```

* Confirm the changes have been applied by running `docker info | grep 'Docker Root Dir'`.
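
If the new data root is in effect, the output should point at the ephemeral volume:

```
docker info | grep 'Docker Root Dir'
# Expected: Docker Root Dir: /mnt
```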

### Flask, Gunicorn, Celery, and other Python dependencies

This documentation assumes that the server host is an Ubuntu VM. To install Python dependencies, we are going to use virtual environments.

Check which version of Python is installed, and where:

```
which python3
python3 --version
```

For the server app, you may need a specific version of Python that differs from the one bundled with your Ubuntu distribution. We will manage this using virtual environments.

##### Installing virtualenv

To install virtualenv, use the following command:

```
sudo apt install python3-virtualenv
```

##### Installing an Older Python Version

If you need an older version of Python, such as Python `3.8` on Ubuntu `24.04`, you might need to add the deadsnakes PPA (Personal Package Archive).

First, check if the desired Python version (e.g., `3.8`) is available in the default repositories. You can do this by trying to install it directly:

```
sudo apt install python3.8
```

If you encounter an error like the one below, you will need to add the 💀🐍 deadsnakes PPA (or another PPA of your choice that has the desired Python distribution):

```
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package python3.8
E: Couldn't find any package by glob 'python3.8'
```

Follow these steps to add the deadsnakes PPA and install the required Python version:

* Add the deadsnakes PPA:

```
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
```

* Now install the geriatric Python version you want:

```
sudo apt install python3.8
```

* Confirm the installed location:

```
which python3.8
```

##### Creating a virtual env

Create a new folder (`venv`) under the home directory and, inside that folder, create a virtual environment named `neurolibre`:

```
mkdir ~/venv
cd ~/venv
virtualenv neurolibre --python=/usr/bin/python3.8
```

> Note: Please do not replace the virtual environment name above (`neurolibre`) with something else. You can take a look at the `systemd/neurolibre-<type>.service` configuration files as to why.
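
Once created, you can activate the environment and confirm that it uses the intended interpreter (a quick usage check, not part of the original steps):

```
source ~/venv/neurolibre/bin/activate
python --version   # should report Python 3.8.x
deactivate
```
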
@@ -739,6 +885,8 @@ dokku report my-dashboard

| Ingress | IPv4 | UDP | 1 - 65535 | - |
| Ingress | IPv4 | UDP | 1 - 65535 | 192.168.73.30/32 |

Note: This is a fairly loose security group example. Depending on the type of connections you expect to the instance, please revise these rules before applying them.

* Each application on Dokku will run in a container (what Heroku calls a "dyno"). If you connect this VM to NewRelic (see the instructions above), you can monitor each container/application and its load, and set alert conditions.

* Permanent redirect from `*.dashboards.neurolibre.org` to `*.db.neurolibre.org`
