Master class "Introduction to CI/CD"
Theoretical materials (the list is preliminary and may change or be expanded)
What CI/CD is and why it's needed at all. Metrics for the success of your CI/CD
How it works under the hood, explained in simple terms
Basic CI concepts: pipeline, stage, steps and the dependencies between them
The first, simplest steps: code linting, formatting, etc.
Preparing the project on the code side (environment variables)
Environment variables and project security
Docker as the CI execution environment: why, and what to do with it
Manual jobs: why they're needed and how to live with them
Preparing the "hardware" for deployment
The simplest deployment "to bare metal"
Caching and why it's used
Artifacts and how they differ from caches. When artifacts are used
Review apps: the simplest case and its manual implementation
Pitfalls of Continuous Delivery for JavaScript projects
Webpack, lazy-load, and chunks missing after a deploy
CI/CD horror stories: how to lose everything, or almost everything, with a single line
Tools we didn't get to talk about
Setting up and organizing CI/CD for a Node.js + frontend project from scratch (a clean server) all the way to automated deployment onto "bare metal"
Optimizing the speed and reliability of the pipeline built in the first seminar, and adding new capabilities: previewing the changes made in specific branches
Theoretical materials (the list is preliminary and may change or be expanded)
What's wrong with the result of the previous master class, and how can it be improved?
Building your own Docker images and using repositories (npm, docker, etc.)
Why any repository must be tightly integrated with CI
Building dependent projects. Communication between pipelines (triggering them with environment variables)
How and where to speed things up when caching is no longer enough (DAG, parallelizing certain jobs by splitting them into chunks, using jest tests as an example)
Maximum integration between the merge request UI and the pipeline
When the "integrations" you need don't exist: working with the GitLab API to implement your own wishes
Kubernetes' place in the life of CI/CD and how it makes things easier
The world's smallest Kubernetes crash course for deployment
Why is Helm important?
Organizing blue/green deployment
GitLab's premium CI perks and which pains they solve
Speed is not the only characteristic of an effective CI/CD. We will use every tool the DevOps ecosystem in general, and GitLab in particular, has to offer to build a pipeline you can be proud of.
Kubernetes has become the mainstream of the cloud world. We will use it to send our code to the clouds, all of it driven by the pipeline, and from the most "inconvenient position" possible: when the frontend and backend live in separate repositories.
timeline ~00:40:00
For GitHub Actions, add a shell directive before the run: directive, like so:
shell: bash -l -eo pipefail {0}
run: nvm install $NODE_VERSION
These commands will become irrelevant later, once Docker images are used, but for the moment they are needed to carry on installing Node via nvm.
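A minimal sketch of how these directives might fit into a whole workflow job (the job layout, step name and NODE_VERSION value here are illustrative, not the course's actual .yml, and it assumes nvm is available on the runner):
jobs:
  build:
    runs-on: ubuntu-latest
    env:
      NODE_VERSION: 16
    steps:
      - uses: actions/checkout@v2
      # a login shell (-l) so nvm gets loaded; -eo pipefail makes the step fail fast on errors
      - name: Install Node via nvm
        shell: bash -l -eo pipefail {0}
        run: nvm install $NODE_VERSION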
timeline 2:00:00
How to create an EC2 instance
How to connect to your remote instance
To be able to set chmod permissions from WSL, open the wsl.conf file:
sudo nano /etc/wsl.conf
Paste the snippet below into the file. A wsl --shutdown is required afterwards for the change to take effect.
[automount]
options = "metadata"
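After wsl --shutdown and reopening the terminal, a quick way to confirm that chmod now sticks on the Windows drive (the test file name and path are arbitrary):
touch /mnt/c/users/<USERNAME>/chmod-test
chmod 600 /mnt/c/users/<USERNAME>/chmod-test
ls -l /mnt/c/users/<USERNAME>/chmod-test
rm /mnt/c/users/<USERNAME>/chmod-test
If the metadata option is active, ls -l shows -rw------- instead of the default -rwxrwxrwx.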
My key is stored in the user directory %userprofile%/.ssh/; I use the path to the key in order to connect to the remote host:
ssh -i %userprofile%/.ssh/<nameofyourkey>.pem ec2-user@REMOTE_IP_ADDRESS
To avoid further confusion, switch your terminal to a Linux environment. Just type bash in the terminal and hit Enter. Note that you have to install WSL first.
timeline 2:04:00
Copy the key you downloaded from AWS into your Linux user's directory. The key is needed to connect to the remote EC2 server for the first time. Let's say it's a master key:
cp /mnt/c/users/<USERNAME>/.ssh/<nameofyourkey>.pem ~/.ssh
Create a new key pair for SSH sessions (on the client side, meaning locally in Linux). The encryption method shown in the course is unsuitable for the running EC2 instance, so we change it here to meet its requirements.
ssh-keygen -f ~/.ssh/ci-key -t rsa -b 4096 -C "[email protected]"
The key will be named ci-key. Change the email to your real one if necessary.
Set permissions on the master key: chmod 600 ~/.ssh/<nameofyourkey>.pem
Connect to a remote EC2 server (ec2-user is a default EC2 user)
ssh -i ~/.ssh/<nameofyourkey>.pem ec2-user@PUBLIC_IP4_ADDRESS
Add a new user
sudo useradd -s /bin/bash -m -d /home/deploy -c "deploy" deploy
Set a new password for the new user
sudo passwd deploy
Copy the public key from your local ~/.ssh to the EC2 instance (in an EC2 terminal, switch to the deploy user with sudo su - deploy).
I personally was unable to copy the keys automatically from my local host to the EC2 instance; I consistently received a permission error (share your experience with others if you succeeded at this step). So, the manual way:
In your local Linux terminal, copy the public key to the Windows clipboard:
clip.exe < ~/.ssh/ci-key.pub
Provided you've got a live connection to the EC2 instance and are logged in as the deploy user:
mkdir .ssh
(create the .ssh directory)
cd .ssh
(enter it)
touch authorized_keys
(create a new file)
nano authorized_keys
(open it with the nano editor, paste your key with a right mouse click)
Press CTRL + O, then CTRL + X
(save the file and exit)
The authorized_keys file is needed so that our deploy user can be reached via SSH with the custom key.
Type exit and hit Enter as many times as needed to get back to your local Bash terminal. Then try to reconnect to the remote host with the new key as the deploy user:
ssh -i ~/.ssh/ci-key deploy@PUBLIC_IP4_ADDRESS
If there is an error at any step - God, bless you! I wish you luck!
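If the manual copying keeps failing, here is an alternative route that pushes the public key through the ec2-user account instead. This is only a sketch: it assumes your <nameofyourkey>.pem still works and that ec2-user has passwordless sudo (the Amazon Linux default).
From your local Linux terminal:
scp -i ~/.ssh/<nameofyourkey>.pem ~/.ssh/ci-key.pub ec2-user@PUBLIC_IP4_ADDRESS:/tmp/ci-key.pub
Then on the instance, as ec2-user:
sudo mkdir -p /home/deploy/.ssh
sudo bash -c 'cat /tmp/ci-key.pub >> /home/deploy/.ssh/authorized_keys'
sudo chown -R deploy:deploy /home/deploy/.ssh
sudo chmod 700 /home/deploy/.ssh
sudo chmod 600 /home/deploy/.ssh/authorized_keys
rm /tmp/ci-key.pub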
Log in to your main account (ec2-user):
ssh -i ~/.ssh/<nameofyourkey>.pem ec2-user@PUBLIC_IP4_ADDRESS
Switch to root: sudo su -
curl https://rpm.nodesource.com/setup_16.x | bash -
sudo yum install -y nodejs
sudo amazon-linux-extras install nginx1
npm install -g pm2
Set up pm2 to run at system startup for the deploy user:
env PATH=$PATH:/usr/bin pm2 startup systemd -u deploy --hp /home/deploy
Open the AWS Console home page, go to EC2 and click on the instance you've created; there will be a Security tab where you can see that the SSH inbound port is open. Now, in the panel on the left, click Network & Security -> Security Groups, select your security group, then Actions -> Edit inbound rules and add the following rules:
22 TCP 0.0.0.0/0 <NameOfYourRule>
80 TCP 0.0.0.0/0 <NameOfYourRule>
80 TCP ::/0 <NameOfYourRule>
Fetch a postgres distro:
sudo amazon-linux-extras install postgresql13
Install postgres server
sudo yum install postgresql-server -y
Create postgres data dir
sudo /usr/bin/postgresql-setup --initdb
Create a new pg_hba.conf to allow postgres to use password Auth
echo "local all all peer" > ./pg_hba.conf
echo "host all all 127.0.0.1/32 password" >> ./pg_hba.conf
echo "host all all ::1/128 ident" >> ./pg_hba.conf
Change the owner to postgres and move it into place:
sudo chown postgres:postgres ./pg_hba.conf
sudo mv ./pg_hba.conf /var/lib/pgsql/data/pg_hba.conf
Start postgres service
sudo systemctl start postgresql
sudo systemctl enable postgresql
Login using psql as user postgres
sudo -i -u postgres psql
Set a password for the postgres user once the psql terminal is available:
\password
Press Enter; it will ask you to set a new password. Then quit the database terminal:
\q
Create a new database named realworld:
sudo -i -u postgres createdb realworld
Check whether the database has been created. Enter the psql terminal:
sudo -i -u postgres psql
List all databases: \l
Create a new user: CREATE USER realworld WITH ENCRYPTED PASSWORD 'realworld';
Grant all access to a new user: GRANT ALL PRIVILEGES ON DATABASE realworld TO realworld;
Quit the Postgres terminal with \q, then return to the root shell with exit.
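As a quick sanity check that password authentication works with the pg_hba.conf rules above, you can open a TCP connection with the freshly created credentials (a hedged example using the realworld user, password and database from the previous steps):
psql "postgresql://realworld:realworld@127.0.0.1:5432/realworld" -c '\conninfo'
It should report that you are connected to database realworld as user realworld on host 127.0.0.1.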
sudo mkdir /etc/nginx/sites-available
sudo mkdir /etc/nginx/sites-enabled
Create a new config for realworld:
nano /etc/nginx/sites-available/realworld.conf
Copy the text below, paste the config into the file, save it, and exit. Don't forget to change PUBLIC_IP4_ADDRESS to your EC2 instance's public IP address.
upstream backend {
server 127.0.0.1:3000;
keepalive 64;
}
server {
listen 80 default_server;
listen [::]:80 ipv6only=on default_server;
server_name app.PUBLIC_IP4_ADDRESS.nip.io;
index index.html;
root /home/deploy/realworld/public;
location /api {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $http_host;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass http://backend;
proxy_redirect off;
proxy_read_timeout 240s;
}
}
Link the config to sites-enabled:
ln -s /etc/nginx/sites-available/realworld.conf /etc/nginx/sites-enabled/
ll /etc/nginx/sites-enabled/
Open the main configuration file: nano /etc/nginx/nginx.conf
Find the place designated for such insertions and add the line include /etc/nginx/sites-enabled/*; as demonstrated:
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*; ### ADD THIS LINE ONLY ###
server {
Execute all the commands in the following order (watch out for any errors):
service nginx restart
service nginx status
nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
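At this point you can also poke nginx from the outside (a rough check; the exact response codes will vary, and a 403/404 on the root and a 502 on /api are expected until the front-end build and the Node application are actually deployed):
curl -I http://app.PUBLIC_IP4_ADDRESS.nip.io/
curl -I http://app.PUBLIC_IP4_ADDRESS.nip.io/api/tags
The /api/tags path is used here only as an example endpoint of the RealWorld API.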
Create a new ssh connection
ssh -i ~/.ssh/ci-key deploy@IP_ADDRESS
Create a directory and then enter into it
mkdir realworld
cd realworld
Generate a sample ecosystem.config.js via the command:
pm2 ecosystem
Then open it to make changes:
nano ecosystem.config.js
The content you need to place into the file:
module.exports = {
apps : [{
name: 'realworld',
script: 'lib/server.js',
env: {
NODE_ENV: 'production',
DB_NAME: 'realworld',
DB_USER: 'realworld',
DB_PASSWORD: 'realworld',
SECRET: 'realworld'
}
}],
};
Execute pm2 start ~/realworld/ecosystem.config.js
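A few optional pm2 commands to confirm the app came up and to persist the process list, so that the pm2 startup configuration from earlier can restore it after a reboot:
pm2 status
pm2 logs realworld --lines 20
pm2 save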
Open a Windows Command Prompt; note we need Linux, so type bash and hit Enter. Don't be suspicious that we use clip.exe in a Linux environment: it simply copies the contents of a file into the Windows clipboard:
clip.exe < ~/.ssh/ci-key
(Alternatively, open the ci-key file with a text editor and copy the key to the clipboard.) The key must then be added as an environment variable in the GitLab CI/CD settings, under Variables: on the left panel click Settings -> CI/CD -> Variables.
Key: SSH_PRIVATE_KEY, Value: <Paste the key from the clipboard>
Don't close the GitLab tab with the variables in your browser. If you're still connected to your EC2 instance, disconnect: just type exit and hit Enter as many times as needed until you find yourself in a local terminal. If you closed the tab with the SSH connection, that's okay; type bash to get back to the Linux environment.
Remove the known_hosts file (it's a local copy on your computer):
rm ~/.ssh/known_hosts
In another tab (open one), type bash once again and establish a new connection with your EC2 instance:
ssh -i ~/.ssh/ci-key deploy@IP_ADDRESS
There should be a message like this:
The authenticity of host 'IP_ADDRESS (IP_ADDRESS)' can't be established.
ECDSA key fingerprint is SHA256:o74AATWsN8g8ydFUNysdfsdfsdf1oyVcB/lF9rVuqFvKpM.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'IP_ADDRESS' (ECDSA) to the list of known hosts.
Last login: Wed Dec 22 15:25:51 2021 from IP_ADDRESS_ISP_PROVIDER
Type yes
Now we need to copy the newly inserted host entry from the known_hosts file. Execute the following command in your first tab (with the local terminal):
clip.exe < ~/.ssh/known_hosts
That's it. Create a new variable in the GitLab tab in your browser:
Key: SSH_KNOWN_HOSTS, Value: <Paste the result from your clipboard>
To avoid exposing the public EC2 IP address of our instance, one more variable is required:
Key: REMOTE_HOST, Value: <Paste the public IP address of your EC2 instance>
Remove the Protected flag from REMOTE_HOST, SSH_KNOWN_HOSTS and SSH_PRIVATE_KEY; otherwise you won't be able to inject those variables into pipelines running for unprotected branches.
deploy:
  image: ubuntu:latest
  stage: deploy
  script:
    - apt-get -qq update && apt-get install -qqy openssh-client rsync
    - eval $(ssh-agent -s)
    - echo "${SSH_PRIVATE_KEY}" | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "${SSH_KNOWN_HOSTS}" >> ~/.ssh/known_hosts
    - rsync -a --progress
      --human-readable --delete
      --exclude-from '.gitignore'
      --exclude .gitignore
      --exclude .git
      . deploy@$REMOTE_HOST:~/realworld/
In theory, if you switch to your open terminal with the EC2 instance, you should see all of these files once you push changes to GitLab and the pipeline (including the deploy stage) passes:
[deploy@ip-SOME_NUMBERS ~]$ ls -alF ~/realworld/
drwxrwxrwx 10 deploy deploy 4096 Dec 22 16:22 ./
drwx------ 5 deploy deploy 124 Dec 20 20:57 ../
-rw-rw-rw- 1 deploy deploy 14 Dec 22 16:22 .dockerignore
-rw-rw-rw- 1 deploy deploy 25 Dec 22 16:22 .eslintignore
-rw-rw-rw- 1 deploy deploy 239 Dec 22 16:22 .eslintrc
drwxrwxrwx 3 deploy deploy 23 Dec 22 16:22 .github/
-rw-rw-rw- 1 deploy deploy 1020 Dec 22 16:22 .gitlab-ci.yml
drwxr-xr-x 3 deploy deploy 59 Dec 16 20:53 .npm/
-rw-rw-rw- 1 deploy deploy 17 Dec 22 16:22 .npmrc
-rw-rw-rw- 1 deploy deploy 9 Dec 22 16:22 .nvmrc
-rw-rw-rw- 1 deploy deploy 11 Dec 22 16:22 .prettierignore
-rw-rw-rw- 1 deploy deploy 79 Dec 22 16:22 .prettierrc.json
-rw-rw-rw- 1 deploy deploy 217 Dec 22 16:22 Dockerfile
-rw-rw-rw- 1 deploy deploy 15292 Dec 22 16:22 README.md
drwxrwxrwx 2 deploy deploy 56 Dec 22 16:22 bin/
-rw-rw-rw- 1 deploy deploy 48 Dec 22 16:22 codecov.yml
drwxrwxrwx 2 deploy deploy 41 Dec 22 16:22 config/
drwxrwxrwx 4 deploy deploy 77 Dec 22 16:22 db/
-rw-rw-rw- 1 deploy deploy 311 Dec 22 16:22 docker-compose.development.yml
-rw-rw-rw- 1 deploy deploy 441 Dec 22 16:22 docker-compose.test.yml
-rw-rw-rw- 1 deploy deploy 435 Dec 22 16:22 docker-compose.yml
drwxrwxrwx 4 deploy deploy 55 Dec 22 16:22 docs/
drwxrwxrwx 12 deploy deploy 203 Dec 22 16:22 lib/
-rw-rw-rw- 1 deploy deploy 684418 Dec 22 16:22 package-lock.json
-rw-rw-rw- 1 deploy deploy 2387 Dec 22 16:22 package.json
-rw-rw-rw- 1 deploy deploy 240 Dec 22 16:22 renovate.json
drwxrwxrwx 2 deploy deploy 139 Dec 22 16:22 test-support/
There might be some annoying errors related to the Prettier stage. If they persist, try running npm run format and npm run check:format in the backend directory. As a radical measure, just delete the stage if you can't get it to pass for no apparent reason.
Add the line to the deploy stage:
- ssh deploy@$REMOTE_HOST "pushd ~/realworld && npm install && pm2 restart realworld && popd"
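Once a pipeline with this line has passed, a rough way to verify the restart end to end (assuming /api/tags, one of the public endpoints of the RealWorld API, exists in your backend):
ssh -i ~/.ssh/ci-key deploy@PUBLIC_IP4_ADDRESS "pm2 status"
curl -i http://app.PUBLIC_IP4_ADDRESS.nip.io/api/tags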
If you're experiencing an issue with the HTTP 403 Forbidden response status code, connect to your EC2 instance as ec2-user, then switch to root and check the permissions along the whole directory chain:
namei -om "/home/deploy/realworld/public"
Basically, set deploy:deploy ownership at every level:
chown -R deploy:deploy "/home"
chown -R deploy:deploy "/home/deploy"
chown -R deploy:deploy "/home/deploy/realworld"
chown -R deploy:deploy "/home/deploy/realworld/public"
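After changing the ownership, re-run the check and restart nginx. Keep in mind that nginx also needs execute (x) permission on every parent directory in the chain, so if namei still shows directories without it, that is the remaining culprit:
namei -om "/home/deploy/realworld/public"
systemctl restart nginx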
In your front-end project in GitLab, go to Settings -> CI/CD -> Variables and add one more environment variable to the three you already have:
Key: REACT_APP_BACKEND_URL, Value: http://app.PUBLIC_IP4_ADDRESS.nip.io/api
There are some issues with the .yml from the course, since the service was unavailable at the time and the tutor couldn't have tested it.
Open the .yml and familiarize yourself with its content. In the build stage I've added a new task responsible for filtering out all unnecessary files: everything except the build/ directory is written into a file listing the irrelevant files and directories, which we will then exclude while synchronizing with our remote machine at the deploy stage.
- name: Creating a Sanitizer list
  run: |
    apk -U add findutils
    find -maxdepth 1 -mindepth 1 -not -name "build" -printf "%P\\n" > build/sanitizer.txt
    cat build/sanitizer.txt
In the GitHub project, go to Settings, then Secrets, and add all the environment variables: REACT_APP_BACKEND_URL, REMOTE_HOST, SSH_KNOWN_HOSTS, SSH_PRIVATE_KEY. Repeat the procedure for the back-end project.
We then pass REACT_APP_BACKEND_URL, pointing at the IP address of our back-end server, when the project gets built.
- name: Building our application
  run: REACT_APP_BACKEND_URL=${{ secrets.REACT_APP_BACKEND_URL }} npm run build
After injecting our key:
run: |
  mkdir -p ~/.ssh
  chmod 700 ~/.ssh
  echo "${SSH_PRIVATE_KEY}" > ~/.ssh/id_rsa
  echo "${SSH_KNOWN_HOSTS}" > ~/.ssh/known_hosts
new permissions must be set on the key file, because the system used by GitHub Actions will complain otherwise:
chmod 400 ~/.ssh/id_rsa
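For the ${SSH_PRIVATE_KEY} and ${SSH_KNOWN_HOSTS} shell variables above (and ${REMOTE_HOST} in the rsync step below) to be populated, the step has to map the GitHub secrets into its environment. The course's .yml most likely already does this; if yours doesn't, a sketch of the complete step could look like this (the step name is arbitrary):
- name: Inject SSH key and known hosts
  env:
    SSH_PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}
    SSH_KNOWN_HOSTS: ${{ secrets.SSH_KNOWN_HOSTS }}
  run: |
    mkdir -p ~/.ssh
    chmod 700 ~/.ssh
    echo "${SSH_PRIVATE_KEY}" > ~/.ssh/id_rsa
    echo "${SSH_KNOWN_HOSTS}" > ~/.ssh/known_hosts
    chmod 400 ~/.ssh/id_rsa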
Once the process comes to the rsync step, we have to change our old snippet to:
run: |
  rsync -a --progress \
    --human-readable --delete \
    --exclude={'.git','sanitizer.txt'} \
    --exclude-from='sanitizer.txt' \
    . deploy@${REMOTE_HOST}:~/realworld/public/
All listed files and directories in sanitizer.txt would be omitted.