My personal notes, projects and configurations.
Aditya Hajare (Linkedin).
WIP (Work In Progress)!
Open-sourced software licensed under the MIT license.
- Must Check Links
- Docker Installation Tips
+ Installing on Windows 10 (Pro or Enterprise) + Installing on Windows 7, 8, or 10 Home Edition + Installing on Mac + Installing on Linux + Play With Docker (PWD) Online + Install using get.docker.com
- Theory
- Important Points To Remember - Difference between Containers and Virtual Machines (VMs) - To see what's going on in containers - Docker networks concepts for Private and Public communications - Docker networks CLI management of Virtual Networks - Docker networks: Default Security - What are Images - Image Layers - Docker Image tagging and pushing to Docker Hub - Dockerfile - Inside Dockerfile - To build Image from Dockerfile
- Cleaning Up Docker
+ To cleanup all dangling images: + To cleanup everything: + To see space usage:
- Container Lifetime And Persistent Data
+ Data Volumes - Named Volumes - When would we ever want to use 'docker volume create' command? - Data Volumes: Important Docker Commands + Bind Mounts - Bind Mounts: Important Docker Commands
- Docker Compose - The Multi-Container Tool
+ docker-compose.yml + docker-compose CLI + docker-compose to build Images at runtime
- Docker Swarm - Built-In Orchestration
+ How to check if swarm mode is activated and how to activate it + What happens behind the scenes when we run docker swarm init? + Key Concepts + Creating a 3-node Swarm Cluster
- Swarm - Scaling Out With Virtual Networking
+ Overlay Network Driver + Example: Drupal with Postgres as Services
- Swarm - Routing Mesh
+ Docker service logs to see logs from different nodes
- Swarm - Stacks
+ How to deploy Swarm stack using compose file?
- Swarm - Secret Storage
+ What is a Secret? + How to create a Secret? + How to decrypt a Secret? + How to remove a Secret?
- Swarm - Service Updates Changing Things In Flight
+ Swarm Update Examples
- Docker Healthchecks
+ Where do we see Docker Healthcheck status? + Healthcheck Docker Run Example + Healthcheck in Dockerfile
- Container Registries
+ Docker Hub + Running Docker Registry + Running A Private Docker Registry + Registry And Proper TLS + Private Docker Registry In Swarm
- Kubernetes
+ What is Kubernetes + Why Kubernetes + Kubernetes vs. Swarm
- Kubernetes Installation And Architecture
+ Kubernetes Installation - Docker Desktop - Docker Toolbox on Windows - Linux or Linux VM in Cloud - Kubernetes In A Browser + Kubernetes Architecture Terminology
- Kubernetes Container Abstractions
+ Kubernetes Container Abstractions + Kubernetes Run, Create and Apply
- Kubernetes - Basic Commands
+ Creating First Pods - nginx + Scaling Replica Sets - Apache Httpd + Inspecting Kubernetes Objects - Apache Httpd
- Kubernetes Services
+ Kubernetes Services - ClusterIP (default) + Kubernetes Services - NodePort + Kubernetes Services - LoadBalancer
- Kubernetes Management Techniques
+ Run, Create, Expose Generators + Generators Example + Imperative vs. Declarative + Imperative Kubernetes + Declarative Kubernetes + Three Management Approaches
- DevOps Style Kubernetes Using YAML
+ Using kubectl apply + Kubernetes Configuration YAML + How To Build YAML File + Dry Runs With Apply YAML + Labels And Annotations
- Kubernetes FAQ
+ What is Kubernetes + Difference between Docker Swarm and Kubernetes
- Generic Examples
+ Running 3 Containers: nginx (80:80), mysql (3306:3306), httpd (Apache Server - 8080:80) + To clean up apt-get cache + To get a Shell inside Container + To create a temp POD in cluster and get an interactive shell in it + Docker Swarm - Create Our First Service and Scale it Locally + Creating a 3-Node Swarm Cluster + Scaling Out with Overlay Networking + Scaling Out with Routing Mesh + Create a Multi-Service Multi-Node Web App + Swarm Stacks and Production Grade Compose + Using Secrets in Swarm Services + Using Secrets with Swarm Stacks + Create A Stack with Secrets and Deploy + Service Updates: Changing Things In Flight + Healthchecks in Dockerfile
- How DNS works? DNS basics:
- Round-Robin DNS, what is it:
- Official Docker Image specifications:
- List of official Docker Images:
- The Cloud Native Trail Map is CNCF's recommended path through the cloud native landscape. The cloud native landscape, serverless landscape, and member landscape are dynamically generated on this website:
- The 12-Factor App. Key to Cloud Native App Design, Deployment, and Operation.
- 12 Fractured Apps.
- `YAML` quick reference: https://yaml.org/refcard.html
- Sample `yaml` file. Generic: https://yaml.org/start.html
- `docker-compose` tool download for Linux:
- Only one host for production environment. What to use: docker-compose or single node swarm?
- An introduction to immutable infrastructure.
- MacOS shell tweaking:
- MacOS - Commands for getting into local Docker VM:
- Windows - Commands for getting into local Docker Moby VM:
- Docker Internals - Cgroups, namespaces, and beyond: what are containers made from?:
- Windows Containers and Docker: 101:
- Heart of the SwarmKit Topology Management (Youtube & slides):
- Swarm Mode Deep Dive:
- Raft Consensus Visualization (Our Swarm DB and how it stays in sync across nodes):
- Docker Swarm Firewall Ports:
- How To Configure Custom Connection Options for your SSH Client:
- Create and Upload a SSH Key to Digital Ocean:
- Kubernetes Components:
- `MicroK8s` for Linux Hosts:
- `Minikube` Download:
- Install `kubectl` on Windows when you don't have `Docker Desktop`:
- `Kubernetes Service`:
- `Kubernetes Namespaces`:
- `Kubernetes Pod Overview`:
- `kubectl` for `Docker Users`:
- `kubectl` Cheat Sheet:
- `Stern` (Multi pod and container log tailing for Kubernetes) for better multi-node log viewing at the CLI:
- What is a `Kubernetes Service`:
- `Kubernetes Service Types`:
- Using a `Kubernetes Service` to Expose Our App:
- `Kubernetes NodePort Service`:
- `CoreDNS` for `Kubernetes`:
- `Kubernetes` DNS Specifications:
+ Installing on Windows 10 (Pro or Enterprise)
- This is the best experience on Windows, but due to OS feature requirements, it only works on the Pro and Enterprise editions of Windows 10 (with latest update rollups). We need to install Docker for Windows from the Docker Store.
- With this Edition we should use PowerShell for the best CLI experience.
- Install Docker Tab Completions For PowerShell Plugin.
- Useful commands:
```
docker version
docker ps
docker info
```
+ Installing on Windows 7, 8, or 10 Home Edition
- Unfortunately, Microsoft's OS features for Docker and Hyper-V don't work in these older versions, and `Windows 10 Home` edition doesn't have Hyper-V, so we'll need to install the Docker Toolbox, which is a slightly different approach to using Docker with a VirtualBox VM. This means Docker will be running in a Virtual Machine that sits behind the IP of our OS, and uses NAT to access the internet.
- NOTE FOR TOOLBOX USERS: all URLs that use `http://localhost` will need to be replaced with `http://192.168.99.100`.
- Useful commands:
```
docker version
docker-machine ls
docker-machine start
docker-machine help
docker-machine env default
```
+ Installing on Mac
- We'll want to install Docker for Mac, which is great. If we're on an older Mac with less than `OSX Yosemite 10.10.3`, we'll need to install the Docker Toolbox instead.
- Useful commands:
```
docker version
docker container
docker container run --
docker
docker pause
```
+ Installing on Linux
- Do not use the built-in default packages like `apt/yum install docker.io`, because those packages are old and not the official Docker-built packages.
- Prefer Docker's automated script to add their repository and install all dependencies: `curl -sSL https://get.docker.com/ | sh`, but we can also install in a more manual method by following specific instructions on the Docker Store for our distribution, like this one for Ubuntu.
- Useful commands:
```
# http://get.docker.com
curl -fsSL get.docker.com -o get-docker.sh
sh get-docker.sh
sudo usermod -aG docker bret
sudo docker version
docker version
sudo docker version
docker-machine version

# http://github.com/docker/compose
# http://github.com/docker/compose/releases
curl -L https://github.com/docker/compose/releases/download/1.15.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
docker-compose version

# http://github.com/docker/machine/releases
docker image
docker image ls --
```
+ Play With Docker (PWD) Online
- The best free online option is to use play-with-docker.com, which will run one or more Docker instances inside our browser, and give us a terminal to use it with.
- We can actually create multiple machines on it, and even use the URL to share the session with others in a sort of collaborative experience.
- Its only real limitation is that it's time-bombed to 4 hours, at which time it'll delete our servers.
+ Install using get.docker.com
- Go to https://get.docker.com and read the instructions.
- Important Points To Remember
- Forget IPs: static IPs and using IPs for talking to Containers is an `anti-pattern`. Always try our best to avoid it!
- Docker daemon has a built-in DNS server that Containers use by default.
- Docker defaults the `hostname` to the Container's name, but we can also set aliases.
- Containers shouldn't rely on IPs for inter-communication.
- Make sure that we are always creating custom networks instead of using the default ones.
- `Alpine` is a distribution of Linux which is very small in size, i.e. less than `5 MB`.
- Difference between Containers and Virtual Machines (VMs)
- Containers:
- Containers aren't Mini-VM's.
- Containers are just processes. They are processes running in our host OS.
- Containers are limited to what resources they can access.
- Containers exit when process stops.
- A VM provides an abstract machine that uses device drivers targeting the abstract machine, while a container provides an abstract OS.
- A para-virtualized VM environment provides an abstract hardware abstraction layer (HAL) that requires HAL-specific device drivers.
- Typically a VM will host multiple applications whose mix may change over time versus a container that will normally have a single application. However, it’s possible to have a fixed set of applications in a single container.
- Containers provide a way to virtualize an OS so that multiple workloads can run on a single OS instance.
- With VMs, the hardware is being virtualized to run multiple OS instances.
- Containers’ speed, agility, and portability make them yet another tool to help streamline software development.
- To see what's going on in containers
- List all processes in one container: `docker container top`
- To see details of a specific container's config: `docker container inspect`
- To see live performance stats for all containers: `docker container stats`
- Docker networks concepts for Private and Public communications
- When we start a Container, in the background we are connecting to a particular Docker network. By default that is the `bridge` network.
- Each Container is connected to a private virtual network `bridge`.
- Each virtual network routes through a `NAT firewall` on the host IP.
- All Containers on a virtual network can talk to each other without `-p`.
- Best practice is to create a new virtual network for each app. For e.g.
    - Network `my_api` for `mongo` and `nodejs` containers.
    - Network `my_web_app` for `mysql` and `php/apache` containers.
- Use different `Docker Network Drivers` to gain new abilities.
- `-p` is always in `HOST:CONTAINER` format. For e.g.
```
# In below command, '-p 80:80' means forward traffic of port 80 of 'host' to port 80 of container.
docker container run -p 80:80 --name webhost -d nginx
```
- To see information about published ports for any Container, i.e. which ports of the Container are listening to which ports of the `host`:
```
# 'webhost' is the name of our already running nginx container.
docker container port webhost
```
- To know the IP address of a running Container using the `inspect` command:
```
# 'webhost' is the name of our already running nginx container.
docker container inspect --format "{{ .NetworkSettings.IPAddress }}" webhost
```
- `--network bridge` is the default Docker virtual network, `NAT'ed` behind the `host` IP.
- `--network host` gains performance by skipping virtual networks, but sacrifices the security of the container model.
- `--network none` removes `eth0` and only leaves us with the `localhost` interface in the Container.
- `Network Drivers` are built-in or 3rd-party extensions that give us `Virtual Network` features.
- Docker networks CLI management of Virtual Networks
- To list/show all networks: `docker network ls`
- To `inspect` a network: `docker network inspect NETWORK_ID`
- To `create` a network: `docker network create --driver`
- To `attach` a network to a Container: `docker network connect`
- To `detach/disconnect` a network from a Container: `docker network disconnect` (fuller examples below)
- Docker networks: Default Security
- While creating apps, we should make `frontend, backend` sit on the same Docker network.
- Make sure that their (frontend, backend) inter-communication never leaves the host.
- All externally exposed ports are closed by default in Containers.
- We must manually expose ports using the `-p` option, which is better default security!
- This gets even better with `Swarm` and `Overlay Networks`.
- What are Images
- `Images` are nothing but application binaries and dependencies for our apps, plus the metadata of `how to run it`.
- `Official Definition`: An image is an ordered collection of root filesystem changes and the corresponding execution parameters for use within a Container runtime.
- Inside an `Image`, there's no complete OS. No kernel and kernel modules (e.g. drivers). It contains just the binaries that our application needs. This is because the `host` provides the `kernel`.
- An Image can be as small as one file (our app binary), like a `golang` static binary.
- Or an Image can be as big as a `Ubuntu distro` with `apt`, `Apache`, `PHP` and more installed.
- Images aren't necessarily named, Images are `tagged`. And a version of an Image can have more than one `tag`.
- To pull a specific version of an Image:
```
docker pull nginx:1.17.9
# Or to pull the latest version of any image
docker pull nginx:latest
```
- In production, always lock the version by specifying the exact version number.
- Image Layers
- This is a fundamental concept about how Docker works.
- Images are made up of file system changes and metadata.
- Each layer is uniquely identified (SHA) and only stored once on a `host`. This saves storage space on the `host` and transfer time on `push/pull`.
- A Container is just a single `read/write layer` on top of an Image.
- Docker uses a `Union File System` to present its series of file system changes as an actual file system.
- A Container runs as an additional layer on top of an Image.
- Images are designed using the `Union File System` concept to make layers about the changes.
- Use the `docker history` command to see the layers of changes made in an image: `docker history IMAGE_NAME`
- Each layer has a unique SHA associated with it.
- Copy on Write: When a change is made to some file in the base image, Docker will copy that file from the base image and put it in the Container layer itself.
- To see the JSON metadata of the Image: `docker image inspect IMAGE_NAME`
- Docker Image tagging and pushing to Docker Hub
- Images don't technically have names. They have `tags`. When we do `docker image ls`, there's no `name column`; instead there is a `tag` column.
- The `latest` tag doesn't always mean the latest version of that Image. It's just the default tag, but Image owners should assign it to the newest stable version.
- We refer to an Image with 3 distinct categories: `<user>/<repo>:<tag>`. `<repo>` is made of either an organisation name or a username.
- Official Repositories live at the `Root Namespace` of the registry, so they don't need an account name in front of the repo name.
- A `tag` is just a pointer to a specific image commit, and really could be anything in that repository.
- To `re-tag` an existing image:
```
# Assuming 'mysql' image already exists in our system.
docker image tag mysql adityahajare/mysql
docker image tag mysql adityahajare/latestmysql
docker image tag mysql adityahajare/additionaltagname
```
- To push our own Image:
```
# Uploads changed layers to an image registry.
docker image push TAG_NAME
# For e.g.
docker image push adityahajare/mysql
```
- If we get an `Access Denied` error, we need to login with our Docker Hub account. To login: `docker login`
- `docker login` defaults to logging into `Docker Hub`, but we can modify that by adding a `server url`. Do the following to see the default: `cat .docker/config.json`
    - NOTE: `Docker For MAC` now stores this auth in the `Keychain` for better security.
- Always logout from shared machines or servers when done, to protect our account.
- To make a `private` repository, login to Docker Hub and create the private repo first, and then push the Image to it.
- Dockerfile
- A `Dockerfile` is a recipe to create an Image.
- A `Dockerfile` is not a shell script or a batch file; it's a totally different language of file that's unique to Docker, and the default name is `Dockerfile` with a capital `D`.
- From the command line, whenever we need to deal with a `Dockerfile` using the `docker` command, we can use the `-f` option (which is common amongst a lot of tools with Docker) to specify a different file than the default `Dockerfile`. For e.g. `docker build -f SOME_DOCKER_FILE`
- Inside Dockerfile
- `FROM` command:
    - It's in every `Dockerfile` and required to be there.
    - It denotes a minimal distribution, for e.g. `debian`, `alpine` etc.
    - One of the main benefits of using these distributions in Containers is to use their `package distribution systems` to install whatever software we need in our packages.
    - `Package Manager`: package managers like `apt` and `yum` are one of the reasons to build Containers from `debian`, `ubuntu`, `fedora` or `centos`.
- `ENV` command:
    - Optional block.
    - It's a way to set environment variables.
    - One reason they were chosen as the preferred way to inject `key/value` pairs is that they work everywhere, on every OS and config.
- `RUN` command:
    - Optional block.
    - Used to execute shell commands inside the Container. It is used when we need to install software with a package repository, or we need to do some `unzipping` or some file edits inside the Container itself.
    - `RUN` commands can also run `shell scripts`, or any commands that we can access from inside the Container.
    - A `Dockerfile` can have multiple `RUN` command blocks.
    - All commands are run as `root`. This is a common problem in Docker. If we are downloading any files using a `RUN` command, and those files require different permissions, then we will have to run another command to change their permissions. For e.g:
```
# -R means recursively.
# Syntax: chown -R USER:GROUP DIRECTORY
chown -R www-data:www-data bootstrap
```
- `EXPOSE` command:
    - Optional block.
    - By default no `TCP` or `UDP` ports are open inside a Container.
    - It doesn't expose anything from the Container to a `virtual network` unless we list it under the `EXPOSE` block.
    - The `EXPOSE` command does not mean those `ports` will be opened automatically on our `host`.
    - We still have to use `-p` with `docker run` to open up these ports.
    - By specifying `ports` under the `EXPOSE` block, we are only allowing Containers to receive packets coming in at these ports.
- `WORKDIR` command:
    - Optional block.
    - Used to change the `Working Directory`.
    - Using `WORKDIR` is preferred over using `RUN cd /some/path`.
- `COPY` command:
    - Optional block.
    - Used to copy files/source code from our local machine, or `build servers`, into our Container Image.
- `CMD` command:
    - It is a required parameter in every `Dockerfile`.
    - It is the final command that will be run every time we launch a new Container from the Image, or every time we restart a stopped Container.
- To build Image from Dockerfile
- To build an Image from a `Dockerfile`:
```
# '-t' to specify tag name.
# '.' says Dockerfile is in current directory location.
docker image build -t SOME_TAG_NAME .
```
- We can use `prune` commands to clean up `images`, `volumes`, `build cache`, and `containers`.
- Useful YouTube video about `prune`: https://youtu.be/_4QzP7uwtvI
+ To remove all containers and images:
```
# Unix
docker rm -vf $(docker ps -a -q)     # delete all containers including their volumes
docker rmi -f $(docker images -a -q) # delete all the images

# Windows (PowerShell)
docker images -a -q | % { docker image rm $_ -f }
```
+ To cleanup all dangling images:
```
# We can use the '-a' option to clean up all images.
docker image prune
```
+ To cleanup everything:
```
docker system prune --all
```
+ To see space usage:
```
docker system df
```
- If we're using `Docker Toolbox`, the `Linux VM` won't auto-shrink. We'll need to delete it and re-create it (make sure anything in Docker containers or volumes is backed up). We can recreate the `toolbox default VM` with the following commands:
```
docker-machine rm
docker-machine create
```
- Containers are usually meant to be `immutable` and `ephemeral`, i.e. Containers are `unchanging`, `temporary`, `disposable` etc.
- Best Practice: Never update applications in Containers; rather replace Containers with a new version of the application.
- The idea of Containers having `Immutable Infrastructure` (only re-deploy Containers, never change them) simply means that we don't change things once they're running. If a `config` change needs to happen, or maybe a `Container Version` upgrade needs to happen, then we `redeploy` a whole new Container.
- Docker provides 2 solutions for `Persistent Data`:
    - `Data Volumes`
    - `Bind Mounts`
+ Data Volumes
- `Docker Volumes` are a special option for Containers which creates a special location outside of that Container's `UFS (Union File System)` to store `unique data`.
- This preserves the data across Container removals and allows us to attach it to whatever Container we want.
- The Container just sees it like a local file path or a directory path.
- Volumes need manual deletion. We can't just clear them out by removing a Container.
- We might want to use the following command to `cleanup` unused volumes and make it easier to see what we're doing there: `docker volume prune`
- A friendly way to assign new volumes to a Container is using `named volumes`.
    - Named Volumes
        - They provide us the ability to specify things on the `docker run` command.
        - `-v` allows us to specify either a `new volume` we want to create, or a named volume by specifying the volume name attached by a colon. For e.g.
```
# Check '-v mysql-db:/var/lib/mysql'
# 'mysql-db' is a volume name.
docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=true -v mysql-db:/var/lib/mysql mysql
```
        - `Named Volumes` allow us to easily identify and attach the same volumes to multiple Containers.
- When would we ever want to use 'docker volume create' command?
- There are only a few cases when we have to create `volumes` before we run Containers.
- When we want to use `custom drivers` and `labels` for `volumes`, we will have to create the `volumes` before we run our Containers (see the sketch below).
- Data Volumes: Important Docker Commands
```
docker pull mysql
docker image inspect mysql
docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=True mysql
docker container ls
docker container inspect mysql
docker volume ls
docker volume inspect TAB COMPLETION
docker container run -d --name mysql2 -e MYSQL_ALLOW_EMPTY_PASSWORD=True mysql
docker volume ls
docker container stop mysql
docker container stop mysql2
docker container ls
docker container ls -a
docker volume ls
docker container rm mysql mysql2
docker volume ls
docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v mysql-db:/var/lib/mysql mysql
docker volume ls
docker volume inspect mysql-db
docker container rm -f mysql
docker container run -d --name mysql3 -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v mysql-db:/var/lib/mysql mysql
docker volume ls
docker container inspect mysql3
docker volume create --help
```
- The `-v` option is not compatible with `docker services`. To use `volumes` with `docker services`, we have to use the `--mount` option and specify the various required options with it. For e.g. creating a `volume` for a `postgres service`:
```
docker service create --name db --network backend -e POSTGRES_HOST_AUTH_METHOD=trust --mount type=volume,source=db-data,target=/var/lib/postgresql/data postgres:9.4
```
+ Bind Mounts
- `Bind Mounts` are simply us sharing or mounting a `host directory`, or `file`, into a Container.
- In other words, `Bind Mounts` map host files or directories to a Container file or directory.
- The Container just sees it like a local file path or a directory path.
- In the background, it's just 2 locations pointing to the same file(s).
- Skips the `UFS (Union File System)`, and `host` files overwrite existing files (if any) in the Container.
- Since `Bind Mounts` are `host` specific, they need specific data to be on the hard drive of the `host` in order to work:
    - We can only specify `Bind Mounts` at the `docker container run` command.
    - We cannot specify `Bind Mounts` in a `Dockerfile`.
- It's similar to creating `Named Volumes` with the `-v` option. The only difference is: instead of a `named volume name`, we specify a `full path` before the colon. For e.g.
```
# Windows:
# Check '-v //c/Users/Aditya/stuff:/path/container/'
# '//c/Users/Aditya/stuff' is a full path
docker container run -v //c/Users/Aditya/stuff:/path/container/ IMAGE_NAME

# Mac/Linux:
# Check '-v /Users/Aditya/stuff:/path/container/'
# '/Users/Aditya/stuff' is a full path
docker container run -v /Users/Aditya/stuff:/path/container/ IMAGE_NAME
```
- NOTE: Docker identifies the difference between `Named Volumes` and `Bind Mounts` since there is a forward slash (in Windows, there are 2 forward slashes) when we set the `-v` option value.
- `Bind Mounts` are great for local development and local testing.
    - Bind Mounts: Important Docker Commands
```
pcat Dockerfile
docker container run -d --name nginx -p 80:80 -v $(pwd):/usr/share/nginx/html nginx
docker container run -d --name nginx2 -p 8080:80 nginx
docker container exec -it nginx bash
```
- `Docker Compose` why's:
    - Helps configure relationships between Containers.
    - Allows us to save our Docker Container `run` settings in an easy-to-read file.
    - With `Docker Compose`, we can create one-liner developer environment startups.
- There are 2 parts to Docker Compose:
    - A `YAML`-formatted file that describes our solution options for:
        - Containers
        - Networks
        - Volumes
        - Environment Variables
        - Images
    - A CLI tool, `docker-compose`:
        - Used for local dev/test automation with those `YAML` files to simplify our Docker commands.
+ docker-compose.yml
- It was originally called `Fig` (years ago).
- The Compose YAML format has its own versions, for e.g. `1, 2, 2.1, 3, 3.1` etc.
- It can be used with the `docker-compose` command for local Docker automation, or can now be used (`v1.13 and above`) directly with the Docker command line in `production` with `swarm`.
- `docker-compose.yml` is the default filename, but any other filename can be used with the `docker-compose -f` option, as long as it's proper `YAML`.
+ docker-compose CLI
- The `docker-compose` CLI tool comes with Docker for `windows` and `mac`, as well as Toolbox, but there's a separate download of the `docker-compose` CLI for `linux`.
- The `docker-compose` CLI is not a `production-grade` tool, but it is ideal for local development and test.
- Two common commands that we use are:
```
docker-compose up   # Setup Volumes, Networks and start all Containers.
docker-compose down # Stop all Containers and remove Containers, Volumes and Networks.

pcat docker-compose.yml
docker-compose up
docker-compose up -d
docker-compose logs
docker-compose --help
docker-compose ps
docker-compose top
docker-compose down
```
- If all our projects had a `Dockerfile` and a `docker-compose.yml`, then `new developer onboarding` would be just running the following 2 commands:
```
git clone github.com/some/project
docker-compose up
```
+ docker-compose to build Images at runtime
- Another thing `docker-compose` can do is build our Images at runtime.
- `docker-compose` can also build our custom Images.
- It will look in the `cache` for Images, and if the file has build options in it, it will build the Image when we use the `docker-compose up` command.
- It won't build the Image every single time. It will build it only if it doesn't find it. We will have to use `docker-compose build` to rebuild Images if we change them, or we can use `docker-compose up --build`.
- This is great for complex builds that have lots of `vars` or `build args`.
- `Build Arguments` are `Environment Variables` that are available only during Image builds.
- Important commands:
```
docker-compose.yml
docker-compose up
docker-compose up --build
docker-compose down
docker image ls
docker-compose down --help
docker image rm nginx-custom
docker image ls
docker-compose up -d
docker image ls
docker-compose down --help
docker-compose down --rmi local
```
- `Swarm Mode` is a `clustering` solution built inside Docker.
- Swarm Mode is not enabled by default in Docker.
- It's a feature launched in 2016 (added in `v1.12` via the `SwarmKit Toolkit`) that brings together years of understanding the needs of Containers and how to actually run them live in production.
- At its core, `Swarm` is a `server clustering` solution that brings together different operating systems or hosts or nodes into a single manageable unit, in which we can then orchestrate the lifecycle of our Containers.
- This is not related to `Swarm Classic` for `pre-1.12` versions.
- `Swarm Mode` answers the following questions:
    - How do we automate the Container lifecycle?
    - How can we easily scale out/in/up/down?
    - How can we ensure our Containers are re-created when they fail?
    - How can we replace Containers without downtime (`blue/green` deployment)?
    - How can we control where Containers get started?
    - How can we track where Containers get started?
    - How can we create `cross-node` virtual networks?
    - How can we ensure only trusted servers run our Containers?
    - How can we store `secrets`, `keys`, `passwords` and get them to the right Container (and only that Container)?
- Once we enable `Swarm Mode`, the following are the sets of new commands we can use:
```
docker swarm
docker node
docker service
docker stack
docker secret
```
- When we're in a `Swarm`, we cannot use an `image` that's only on 1 node. A `Swarm` has to be able to pull `Images` on all nodes from some repository in a `registry` that they can all reach.
+ How to check if swarm mode is activated and how to activate it
- To check if `Swarm` mode is activated or not:
    - Execute `docker info`.
    - Look for `Swarm: inactive/active`. For e.g. consider the following output of the `docker info` command:
```
Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 19.03.8
 Storage Driver: overlay2
  Backing Filesystem: <unknown>
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive # Check for this one.
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.19.76-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 2.924GiB
 Name: docker-desktop
 ID: J2KP:ZPIE:5DLS:SLVA:RC2C:OJVX:7GK6:3T77:WY4G:XCXP:U4RB:2JV2
 Docker Root Dir: /var/lib/docker
 Debug Mode: true
  File Descriptors: 36
  Goroutines: 53
  System Time: 2020-03-21T05:55:58.0263795Z
  EventsListeners: 3
 HTTP Proxy: gateway.docker.internal:3128
 HTTPS Proxy: gateway.docker.internal:3129
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine
```
- To enable `Swarm` mode: `docker swarm init`
+ What happens behind the scenes when we run docker swarm init?
- It does a lot of `PKI` and security automation:
    - A `Root Signing Certificate` is created for our `Swarm`, which it will use to establish `trust` and `sign` certificates for all `nodes` and all `managers`.
    - A special `Certificate` is issued for the first `Manager Node`, because it's a `manager` vs. a `worker`.
    - `Join Tokens` are created, which we can use on other `nodes` to join this `Swarm`.
- A `Raft Consensus Database` is created to store the `root CA`, `configs` and `secrets`.
    - Encrypted by default on disk (1.13+).
    - No need for another `key/value` system to hold `orchestration/secrets`.
    - Replicates logs amongst `Managers` via mutual TLS in the `control plane`.
- `Raft` is a protocol that ensures consistency across multiple nodes, and it's ideal for use in the Cloud where we can't guarantee that any one thing will be available for any moment in time.
- It creates the `Raft` database on disk. Docker stores the configuration of the `Swarm` and that `first Manager` there, and it actually encrypts it.
- Then it will wait for any other nodes before it starts actually replicating the database over to them.
- All of the traffic that it would be doing once we create other nodes is going to be encrypted.
- We don't need an additional key/value storage system or some database architecture to be the backend configuration management of our `Swarm`.
+ Key Concepts
- A `Service` in a `Swarm` replaces `docker run` (see the sketch below).
- There can only be one `Leader` at a time amongst all `managers`.
- To remove all Containers, we have to remove the `Swarm Service`.
+ Creating a 3-node Swarm Cluster
- The following example demonstrates using multiple `hosts/nodes/instances` or `multiple OS's`, where we set up a `3-node Swarm` across all 3 of those nodes.
- How we can try out and implement this setup:
    - http://play-with-docker.com
        - Only needs a browser, but resets after `4 hours`.
    - `docker-machine + VirtualBox`
        - Free and runs locally, but requires a machine with `8gb` memory.
        - Comes by default with `Docker for Win and Mac`.
        - For `Linux`, we will have to download and set it up explicitly first.
    - `Digital Ocean + Docker Install`
        - Most like a `production` setup, but costs `$5 to $10` per node per month.
        - They run everything on `SSD`, so it's nice and fast.
    - `Roll our own`
        - `docker-machine` can provision machines for `Amazon Instances`, `Azure Instances`, `Digital Ocean Droplets`, `Google Compute Nodes` etc.
        - Install Docker anywhere with `get.docker.com`.
        - `docker-machine` is a tool to simply automate dev and test environments. It was never really designed to set up all of the production settings we might need for a `multi-node Swarm`.
- To experiment with setting up a `3-node Swarm Cluster` on http://play-with-docker.com:
    - Go to http://play-with-docker.com.
    - Launch 3 instances.
    - On any 1 instance, execute:
```
# First execute below command; it will give an error and display the public ips available on eth0 and eth1.
docker swarm init
# Copy the eth0 ip and specify it as the --advertise-addr
docker swarm init --advertise-addr 192.168.0.6
```
    - Copy the `docker swarm join` command from there.
    - Go to the other 2 nodes and paste the `docker swarm join` command.
    - Go to the 1st node and execute the following command to list out nodes: `docker node ls`
    - To promote `node2` to `manager`, execute the following command on `node1` (Leader): `docker node update --role manager node2`
    - To make `node3` join as a `manager` by default, go to `node1` and execute the following command to get the `join token`: `docker swarm join-token manager`
    - Copy the join command and execute it on `node3`.
    - On `node1`, execute `docker node ls` to see the status of swarm nodes.
    - Now, to run a Docker `service` with `3 replicas` of `alpine` that pings one of the Google open DNS servers (8.8.8.8), execute the following command on `node1`: `docker service create --replicas 3 alpine ping 8.8.8.8`
    - Execute:
```
docker service ps SERVICE_NAME
# For e.g.
docker service ps busy_hertz
```
- With `Swarm` mode enabled, we get access to a new networking driver called `overlay`.
- To create a `network` using the `overlay` driver:
```
# When we don't specify a driver, the default driver used is 'bridge'.
docker network create --driver overlay NETWORK_NAME
```
+ Overlay Network Driver
- It's like creating a `Swarm`-wide `bridge` network, where the Containers across `hosts` on the same `virtual network` can access each other kind of like they're on a `VLAN`.
- This driver is only for `intra-Swarm communication`, i.e. for `container-to-container` traffic inside a single `Swarm`.
- It acts as if everything is on the same `subnet`.
- The `overlay` network is the only kind of network we can use across a `Swarm`, because `overlay` allows us to span across nodes as if they are all on the `local network`.
- The `overlay` driver doesn't play a huge part in traffic coming in, as it's trying to take a holistic `Swarm` view of the network so that we're not constantly messing around with networking settings on individual nodes.
- We can also optionally enable full network encryption using `IPSec (AES)` encryption on network creation.
    - It will set up `IPSec tunnels` between all the different nodes of our `Swarm`.
    - `IPSec (AES) Encryption` is off by default for performance reasons.
- Each `service` can be connected to multiple `networks`, for e.g. (front-end, back-end).
- When we create our `services`, we can add them to none of the `overlay` networks, or to one or more `overlay` networks.
- A lot of traditional apps would have their back-end on the back-end network and front-end on the front-end network. Then maybe they would have an API between the two that would be on both networks. And we can totally do this in `Swarm`.
+ Example: Drupal with Postgres as Services
- Create an `overlay` network first:
```
docker network create --driver overlay NETWORK_NAME
# For e.g.
docker network create --driver overlay mydrupal
```
- Create a `Postgres` service on the `mydrupal` network: `docker service create --name psql --network mydrupal -e POSTGRES_PASSWORD=adi123 postgres`
    - After running the above command, we don't see the image downloading and all that. That is because `Services` can't be run in the foreground. `Services` have to go through the `orchestrator` and `scheduler`. Execute the following command to list out `services`: `docker service ls`
- To see details:
```
# To see the specific `service` details such as on which `node` it is running.
docker service ps psql
# To see the logs from the Container (tab completion is available):
docker container logs psql.1.gfdsnjkfdjk3kbr3289d
```
- Create a `Drupal` service on the same `mydrupal` network: `docker service create --name drupal --network mydrupal -p 80:80 drupal`
- To see details:
```
# To see the specific `service` details such as on which `node` it is running.
docker service ps drupal
```
- Now we have the database running on `node1` and the website running on `node2`. They can talk to each other using the `Service Name`.
- `Routing Mesh` is a `Stateless Load Balancer`.
- The `Routing Mesh` load balancer acts at the transport layer (`TCP`), not on `DNS` names.
- `Routing Mesh` routes `ingress (incoming)` packets for a `Service` to a proper `Task`.
- The `Routing Mesh` is an `incoming` or `ingress` network that distributes packets for our `service` to the `Tasks` for that `service`, because we can have more than one `Task`.
- Spans all nodes in the `Swarm`.
- It uses `Kernel Primitives` called `IPVS` from the `Linux Kernel`.
- `Routing Mesh` load balances `Swarm Services` across their `Tasks`.
- Two ways `Routing Mesh` works:
    - `Container-to-Container` in an `Overlay Network` (uses a `Virtual IP (VIP)`).
        - A `Virtual IP (VIP)` is something `Swarm` puts in front of all `Services`. It's a private IP inside the `virtual networking` of `Swarm`, and it ensures that the load is distributed amongst all the `Tasks` for a `Service`.
    - External traffic incoming to published ports (all nodes listen).
- The benefit of a `Virtual IP (VIP)` over `DNS Round Robin` is that a lot of the time the `DNS cache` inside our apps prevents us from properly distributing the load.
- To run multiple websites on the same port, we could use:
    - An `Nginx` or `HAProxy` load balancer proxy in front of the swarm; these proxies route at the application layer (hostnames/HTTP).
    - `Docker Enterprise Edition` comes with a built-in web proxy of that kind. It is called `UCP or Docker Data Center`.
- `Web Sockets` don't do well with `Routing Mesh`, because a `socket` needs a persistent connection to a specific Container, and because of load balancing the `Routing Mesh` keeps switching between Containers. We could have a `proxy` in front of it to make it work with `Web Sockets`.
+ Docker service logs to see logs from different nodes
- To see logs from different `docker services`, execute:
```
docker service logs SERVICE_NAME
# For e.g.
docker service logs adipostgres
```
- If `logging` is not available, turn it on by enabling the `experimental features` of Docker:
```
# Open /etc/docker/daemon.json and specify the following:
{"experimental": true}
```
- `Stacks` are another layer of abstraction added to `Swarm`.
- `Swarm` is basically a `Docker Engine`, and it can accept a `Compose File` using the `stack` command.
- `Swarm` reads the `Compose File` without needing `Docker Compose` anywhere on the server.
- Basically it's a `Compose File` for `Swarm` in production.
- `Stacks` accept a `Compose File` as their declarative definition for `services`, `networks` and `volumes`.
- We use the following command to deploy our `Stack`: `docker stack deploy`
- `Stacks` manage all those objects for us, including an `overlay` network per `Stack`. It also adds the `Stack Name` to the start of their names.
- The `deploy:` key is what we use in our `Compose File`. It allows us to specify things that are specific to `Swarm` (see the sketch after this list). For e.g.:
    - How many `replicas` do we want?
    - What do we want to do when we `failover`?
    - How do we want to do `rolling updates`?
    - And all those sorts of things that we wouldn't care about on our local development machine.
- The `Stacks Config File` doesn't allow `Building`. And `Building` should never ever happen in `production`.
- `docker-compose` now ignores the `deploy:` key in the Config File.
- `Swarm` ignores the `build:` key in the Config File.
- The `docker-compose` CLI is not needed on a `Swarm Server`. It's not a `production` tool. It was designed to be a developer and sysadmin helper tool. It's best for local work.
+ How to deploy Swarm stack using compose file?
- To deploy a `Swarm` stack using a Compose File:
```
# '-c' option is for Compose File.
docker stack deploy -c COMPOSE_FILE APP_NAME
# For e.g.
docker stack deploy -c adi-swarm-stack.yml myapp
```
- Easiest `secure` solution for storing `secrets` in `Swarm`.
- Encrypted on disk and encrypted in transit as well.
- There are lots of other options, like `Vault`, available for storing `secrets`.
- Supports generic strings or binary content up to `500kb` in size.
- Doesn't require apps to be rewritten.
- From `Docker v1.13.0`, the `Swarm Raft Database` is encrypted on disk.
- It's only stored on the disk of the `manager` nodes, and they're the only ones that have the keys to unlock/decrypt it.
- The default for the `Managers` and `Workers` `control plane` is `TLS + Mutual Auth`.
- Secrets are first stored in `Swarm` (using the `docker secret` command), then assigned to a `Service(s)`.
- Only Containers in assigned `Service(s)` can see them.
- They look like files in Containers but are actually in an `in-memory` filesystem.
- On disk, they are located at:
```
/run/secrets/<secret_name>
# OR
/run/secrets/<secret_alias>
```
- Local `docker-compose` can use `file-based` secrets, but they are not secure (see the sketch below).
+ What is a Secret?
- Usernames and passwords.
- `TLS` certificates and keys.
- SSH keys.
- Any data we would prefer not to be `on the front page of the news`.
+ How to create a Secret?
- There are 2 ways we can create a `secret` in `Swarm`:
    - Create a text file and store the value in it.
        - Assume we have a file `db_username.txt` with the text content `aditya`:
```
> cat db_username.txt
aditya
```
        - Now, to create a `secret` from the above file:
```
docker secret create SECRET_NAME FILE_PATH
# For e.g.
docker secret create DB_USER db_username.txt
```
        - Running the above command will spit out a key in return.
    - Pass the `secret value` at the command line.
        - To pass a `value` at the command line and create a `secret` out of it:
```
# The trailing '-' tells docker to read the secret value from stdin.
echo "myPasswordAdi123" | docker secret create DB_PASSWORD -
```
        - Running the above command will spit out a key in return.
+ How to decrypt a Secret?
- Only `Containers` and `Services` have access to the decrypted `secrets`.
- For e.g.
```
# Demo
# Create a service first.
docker service create --name adidb --secret DB_USER --secret DB_PASS -e POSTGRES_PASSWORD_FILE=/run/secrets/DB_PASS -e POSTGRES_USER_FILE=/run/secrets/DB_USER postgres

# List containers of 'adidb' service and copy the container name.
docker service ps adidb

# Get a shell inside the Container ('adidb.1.fbhdbj3738dh2' is CONTAINER_NAME).
docker exec -it adidb.1.fbhdbj3738dh2 bash

# Once we have the shell inside the Container, list all secrets:
ls /run/secrets/

# 'cat' contents of any secret file and it will display the decrypted value.
cat DB_USER
```
+ How to remove a Secret?
- Only `Containers` and `Services` have access to the decrypted `secrets`.
- When we remove/add a `secret`, it will stop the Container and redeploy a new one. This is not ideal for database `services` in `Swarm`.
- To remove a `secret` from `Swarm`:
```
# List containers of 'adidb' service and copy the container name.
docker service ps adidb

# Get a shell inside the Container ('adidb.1.fbhdbj3738dh2' is CONTAINER_NAME).
docker exec -it adidb.1.fbhdbj3738dh2 bash

# To remove:
docker service update --secret-rm
```
- Swarm's update functionality is centered around a `rolling update` pattern for our `replicas`.
- Provides `rolling replacement` of `tasks/containers` in a `service`.
- In other words, if we have a `service` with more than one `replica`, it's going to roll through them by default, one at a time, updating each `Container` by replacing it with the new settings that we're putting in the `update command`.
- Limits `downtime` (be careful with "prevents" downtime).
- Will replace `Containers` for most changes.
- There are loads of `CLI options (77+)` available to control the `update`.
- `create` options will usually change, adding `-add` or `-rm` to them.
- Also includes `rollback` and `healthcheck` options.
- Also has `scale` and `rollback` subcommands for quicker access. For e.g.
```
docker service scale web=4
# And
docker service rollback web
```
- If a `stack` already exists and we do a `stack deploy` to the same `stack`, it will issue `service updates`.
- In `Swarm Updates`, we don't have a different `deploy` command. It's the same `docker stack deploy`, with the file that we have edited, and its job is to work with all of the other parts of the `API` to determine if there are any changes needed, and then roll those out with a `service update`.
+ Swarm Update Examples
- Just update the `image` to a newer version that is already being used. We will have to use the `service update` command:
```
docker service update --image myapp:1.2.1 <SERVICE_NAME>
```
- Add an `environment` variable and remove a `port`. We will have to use the `service update` command:
```
docker service update --env-add NODE_ENV=production --publish-rm 8080 <SERVICE_NAME>
```
- Change the number of `replicas` of two `services`. We will have to use the `service scale` command:
```
# Set number of `web` replicas to 8 and number of `api` replicas to 6.
docker service scale web=8 api=6
```
- For a `Swarm Update` via a stack, first edit the `YAML` file and then execute:
```
docker stack deploy -c FILE_NAME.yml <STACK_NAME>
```
- Supported in `Dockerfile`, `Compose YAML`, `docker run` and `Swarm Services`.
- The Docker engine `exec`'s the command in the Container.
    - For e.g. `curl localhost`.
- Docker runs the `Healthcheck` command from inside the Container, not from outside the Container.
- It expects `exit 0 (OK)` or `exit 1 (Error)`.
- `Healthcheck` commands are run every `30 seconds` by default.
- A `Healthcheck` in Docker specifically has only 3 `states`. The following are the `states`:
    - `starting`: the first `30 seconds` by default, where it hasn't run a `healthcheck` command yet.
    - `healthy`
    - `unhealthy`
- This is much better than "is the binary still running?".
- This isn't an external monitoring replacement. 3rd-party monitoring tools provide much better insights, including graphs and more.
+ Where do we see Docker Healthcheck status?
- The `Healthcheck status` shows up in `docker container ls`.
- We can check the `last 5 healthchecks` with `docker container inspect`.
- The `docker run` command does not take action on an `unhealthy` Container. Once the `healthcheck` considers a Container `unhealthy`, `docker run` is just going to indicate that in the `ls` and `inspect` commands.
- `Swarm Services` will replace `tasks/containers` if they fail a `healthcheck`.
- `service update` commands wait for the `healthcheck` before continuing.
+ Healthcheck Docker Run Example
- Adding a `healthcheck` at runtime using the `docker run` command:
```
docker run \
    --health-cmd="curl -f localhost:9200/_cluster/health || false" \
    --health-interval=5s \
    --health-retries=3 \
    --health-timeout=2s \
    --health-start-period=15s \
    elasticsearch:2
```
+ Healthcheck in Dockerfile
- Basic `HEALTHCHECK` command in a `Dockerfile`:
```
HEALTHCHECK CMD curl -f http://localhost/ || false
```
- Custom options with the `HEALTHCHECK` command in a `Dockerfile`:
```
HEALTHCHECK --timeout=2s --interval=3s --retries=3 \
    CMD curl -f http://localhost/ || exit 1
# `exit 1` is equivalent to `false`.
```
- All options for the `HEALTHCHECK` command in a `Dockerfile`:
```
# How often it should run the `healthcheck` command.
--interval=DURATION # Default 30s

# How long a single check may run before it counts as a failure.
--timeout=DURATION # Default 30s

# When it should fire the first `healthcheck` command. This gives us the
# ability to specify a longer wait period than the first 30 seconds.
--start-period=DURATION # Default 0s.

# Max number of consecutive failed checks before
# the container is marked `unhealthy`.
--retries=N # Default 3.
```
- An image registry needs to be part of our `Container Plan`.
- `Docker Store` is different than `Docker Hub`.
- `Docker Cloud` is different than `Docker Hub`.
+ Docker Hub
- It is the most popular public `Image` registry.
- `Docker Hub` is a `Docker Registry` plus `Lightweight Image Building`.
- It provides 1 free `private repository`. We have to pay from there onwards.
- We can make use of `webhooks` to make our `repository` send a `webhook notification` to services like `Jenkins`, `Codeship`, `Travis CI` or something like that, to have automated builds continue down the line.
- `Webhooks` are there to help us automate the process of getting our code all the way from something like `Git` or `Github` to `Docker Hub`, and all the way to our servers where we want to run it.
- `Collaborators` are how we provide permissions to other users on our `Image`.
+ Running Docker Registry
- Using `Docker Registry`, we can run a `private` Image registry for our `network`.
- It's a part of the `docker/distribution` GitHub repo.
- The de facto standard in private container registries.
- Not as fully featured as `Docker Hub` or others; no web UI, basic auth only.
- At its core, it's just a web API and storage system, written in `Go`.
- Storage supports `local`, `S3`, `Azure`, `Alibaba`, `Google Cloud` and `OpenStack Swift`.
- We should secure our registry with `TLS (Transport Layer Security)`.
- Storage cleanup via `Garbage Collection`.
- Enable `Docker Hub Caching` via the `--registry-mirror` option.
+ Running A Private Docker Registry
- Run the registry image on the default port `5000`.
- Re-tag an existing Image and push it to our new `registry`.
- Remove that Image from our local cache and pull it from the new `registry`.
- Re-create the `registry` using a `bind mount` and see how it stores data.
- The following commands demonstrate `How to run a private Docker registry`:
```
docker container run -d -p 5000:5000 --name registry registry
docker container ls
docker image ls
docker pull hello-world
docker run hello-world
docker tag hello-world 127.0.0.1:5000/hello-world
docker image ls
docker push 127.0.0.1:5000/hello-world
docker image remove hello-world
docker image remove 127.0.0.1:5000/hello-world
docker container rm admiring_stallman
docker image remove 127.0.0.1:5000/hello-world
docker image ls
docker pull 127.0.0.1:5000/hello-world:latest
docker container kill registry
docker container rm registry
docker container run -d -p 5000:5000 --name registry -v $(pwd)/registry-data:/var/lib/registry registry
docker image ls
docker push 127.0.0.1:5000/hello-world
```
+ Registry And Proper TLS
- `Secure by Default`: Docker won't talk to a registry without `HTTPS`.
- Except `localhost (127.0.0.0/8)`.
- For a remote `self-signed TLS` certificate, enable the `insecure-registry` option in the engine.
+ Private Docker Registry In Swarm
- Works the same way as localhost.
- The only difference is we don't run the `docker run` command. We have to run the `docker service` command or use a `stack file`.
- Because of the `Routing Mesh`, all nodes can see `127.0.0.1:5000`.
- We don't have to enable `insecure registry`, because that's already enabled for localhost by the `Docker Engine`.
- Remember to decide how to store Images (volume driver).
- When we're in a `Swarm`, we cannot use an `image` that's only on one node. A `Swarm` has to be able to pull `Images` on all nodes from some repository in a `registry` that they can all reach.
- Note: All nodes must be able to access `images`.
- Pro Tip: Use a hosted `SaaS Registry` if possible.
- The following commands demonstrate `How to run a Private Docker Registry in Swarm`:
    - Go to https://labs.play-with-docker.com.
    - Start a session, click on the wrench/spanner icon, and launch the `5 Managers And No Workers` template.
    - Commands:
```
# http://play-with-docker.com
docker node ls
docker service create --name registry --publish 5000:5000 registry
docker service ps registry
docker pull hello-world
docker tag hello-world 127.0.0.1:5000/hello-world
docker push 127.0.0.1:5000/hello-world
docker pull nginx
docker tag nginx 127.0.0.1:5000/nginx
docker push 127.0.0.1:5000/nginx
docker service create --name nginx -p 80:80 --replicas 5 --detach=false 127.0.0.1:5000/nginx
docker service ps nginx
```
+ What is Kubernetes
- `Kubernetes` is a popular Container orchestrator.
- `Container Orchestration` means making many servers act like one.
- `Kubernetes` was released in 2015 by Google, and is now maintained by the open source community, of which Google is also a part.
- `Kubernetes` runs on top of Docker (usually) as a set of APIs in Containers.
- `Kubernetes` provides a set of `APIs` and a `CLI` to manage Containers across servers/nodes.
- Like in Docker where we use the `docker` command a lot, in `Kubernetes` we use the `kubectl (kube control)` command.
- `kubectl` is also referred to as the `Kube Control` tool, `Kube Cuddle` tool, `Koob Control` etc., but the standard name from the official repo is now `Kube Control`.
- Many cloud vendors provide `Kubernetes` as a service to run our Containers.
- Many vendors make a `distribution` of `Kubernetes`. It's similar to the concept of a `linux distribution`: for e.g. the same `linux kernel` runs in different `distributions` of `linux`.
- In short, `Kubernetes` is a series of Containers, CLIs and configurations.
+ Why Kubernetes
- Not every solution needs orchestration.
- A simple formula for whether or not to use orchestration:
    - Take the `number of servers` that we need for a particular environment, and the `change rate` of our applications or the environment itself. The multiplication of those 2 equals the benefit of orchestration.
- If our application is changing only once a month or less, then orchestration and the effort involved in deploying it, managing it, and securing it may be unnecessary at this stage. Especially if we're a solo developer or just a very small team. That's where things like `Elastic Beanstalk`, `Heroku` etc. start to shine as alternatives to doing our own orchestration.
- Carefully decide which orchestration platform we need.
- There are Cloud-specific orchestration platforms like `AWS ECS`, and more traditional offerings that have been around a little longer like `Cloud Foundry`, `Mesos` and `Marathon`.
- If we're concerned about running Containers on premise, and in the Cloud, or potentially multi-Cloud, then we may not want to go with those Cloud-specific offerings like `ECS`, because those were around before `Kubernetes` was. That's a sort of legacy solution that Amazon still supports, and it's still a neat option, but only if we're specific to `AWS` and that's the only place we ever plan to deploy Containers.
- `Swarm` and `Kubernetes` are the most popular Container orchestrators that run on every Cloud, in data centers, and even in small environments, possibly like `IoT`.
- If we decide on `Kubernetes` as our orchestrator, then the next big decision comes down to `which distribution we are going to use?`
    - The first part of this decision is to figure out if we want a Cloud-managed solution, or if we want to roll our own solution with a vendor's product that we would install on the servers ourselves.
    - Some of the common vendor-supported distributions are `Docker Enterprise`, `Rancher`, `OpenShift from RedHat`, `Canonical from Ubuntu Company`, `PKS from VMware` etc. Check out this list of Kubernetes Certified Distributors.
    - We usually don't need the pure upstream version of `GitHub's Kubernetes`.
+ Kubernetes vs. Swarm
- `Kubernetes` and `Swarm` both solve similar problems. They are both Container orchestrators that run on top of a Container runtime. There are different Container runtimes like `Docker`, `Containerd`, `CRI-O`, `frakti` etc. `Docker` is #1.
- `Kubernetes` and `Swarm` are both solid platforms with vendor backing.
- `Swarm` is easier to `deploy/manage`.
- `Kubernetes` has more features and flexibility. It can solve more problems in more ways, and also has wide adoption and support.
- Advantages of `Swarm`:
    - Comes with Docker; a single-vendor Container platform, i.e. the Container runtime and the orchestrator are both built by the same company (Docker).
    - Easiest orchestrator to deploy/manage ourselves.
    - Follows the 80/20 rule, i.e. 20% of the features for 80% of the use cases.
    - Runs anywhere Docker can run:
        - local, cloud, datacenter
        - ARM, Windows, 32-bit
    - Secure by default.
    - Easier to troubleshoot because there are fewer moving parts in it and fewer things to manage.
- Advantages of `Kubernetes`:
    - Clouds will deploy/manage `Kubernetes` for us. Has the widest Cloud and vendor support.
    - Nowadays, even infrastructure vendors like `VMware`, `Red Hat`, `NetApp` etc. are making their own distributions.
    - Widest adoption and community.
    - Flexible: covers the widest set of use cases.
    - `Kubernetes First` vendor support.
    - `No one ever got fired for buying IBM`, i.e. picking a solution isn't 100% rational.
    - Trendy, will benefit our career.
    - CIO/CTO checkbox.
+ Kubernetes Installation
    - Docker Desktop:
        - Best one! It provides many things out of the box.
        - Enable `Kubernetes` in Docker's settings and installation is done!
        - Sets up everything inside Docker's existing `Linux VM`.
        - Runs/Configures the `Kubernetes Master Containers`.
        - Manages the `kubectl` install and `certs`.
        - `Docker Desktop Enterprise (paid)` allows us to swap between different versions of `Kubernetes` on the fly.
    - Docker Toolbox on Windows: `MiniKube`
        - If we're using `Docker Toolbox on Windows` then we should use `MiniKube`.
        - We don't even need `Docker Toolbox` installed. We can run `MiniKube` separately.
        - Download the `MiniKube` windows installer `minikube-installer.exe` from GitHub.
        - Type `minikube start` in a shell after installation.
        - `MiniKube` offers an experience similar to `docker-machine`.
        - Creates a `VirtualBox VM` with a `Kubernetes` master set up.
        - We separately have to install `kubectl` for Windows.
    - Linux or Linux VM in Cloud: `MicroK8s`
        - If we are using a `Linux OS` or any `VM` with `Linux` on it, we should use `MicroK8s`.
        - `MicroK8s` is made by `Ubuntu`.
        - `MicroK8s` installs `Kubernetes` (without Docker Engine) right on the OS, on `localhost` (Linux).
        - Uses `snap` (rather than `apt` or `yum`) for install. So we have to install `snap` first on our Linux OS. `snap` can be installed using `apt-get` or `yum`.
        - Control the `MicroK8s service` via `microk8s.` commands. For example, the `microk8s.enable` command.
        - `kubectl` is accessible via `microk8s.kubectl`. It's better to add an alias for this command in our `bash/zsh` profile: `alias kubectl=microk8s.kubectl`.
    - Kubernetes In A Browser:
        - Easy to get started.
        - Downside is it doesn't keep our environments. They are not saved.
        - Try https://labs.play-with-k8s.com
        - Or try https://www.katacoda.com/courses/kubernetes/playground
+ Kubernetes Architecture Terminology
    - `Kubernetes`:
        - The whole orchestration system.
        - Shortly mentioned as `K8s` or `Kube`.
    - `Kubectl`:
        - `Kubectl` stands for `Kube Control`.
        - It's a CLI to configure `Kubernetes` and manage apps.
    - `Node`:
        - A `Node` is a single server in the `Kubernetes Cluster`.
    - `Kubelet`:
        - `Kubelets` are the `Kubernetes Agent` running on nodes.
    - `Control Plane`:
        - Sometimes called the `master`.
        - The `Control Plane` is the set of Containers that manage the `cluster`.
        - The `Control Plane` includes the `API Server`, `Scheduler`, `Controller Manager`, `etcd`, `coreDNS` and more.
    - `Kube-Proxy`:
        - It's for networking in the `Control Plane`.
+ Kubernetes Container Abstractions
    - Pod: One or more Containers running together on one `Node`.
        - A `Pod` is the basic unit of deployment.
        - Containers are always in a `Pod`.
        - We don't deploy Containers independently. Instead, Containers are inside `Pods` and we deploy `Pods`.
    - Controller: For creating/updating `Pods` and other objects.
        - `Controllers` are on top of `Pods` and we use them for creating/updating `Pods` and other objects.
        - It's a differencing engine that has many types.
        - There are many types of `Controllers` such as:
            - `Deployment Controller`.
            - `ReplicaSet Controller`.
            - `StatefulSet Controller`.
            - `DaemonSet Controller`.
            - `Job Controller`.
            - `CronJob Controller`.
            - And lots more..
    - Service: The `Service` is a little bit different in `Kubernetes` than it is in `Swarm`.
        - A `Service` is specifically the endpoint that we give to a set of `Pods`. For example, when we use a `Controller` like the `deployment controller` to deploy a set of `replica pods`, we would then set a `service` on that.
        - `Service` means a persistent endpoint in the `cluster` so that everything else can access that set of `Pods` at a specific `DNS name` and `port`.
    - Namespace: Filtered group of objects in a `cluster`.
        - It's an optional, advanced feature.
        - It's simply a filter for our views when we are using the `kubectl` command line.
        - For example, when we are using Docker Desktop, it defaults to the `default namespace` and filters out all of the system containers running `Kubernetes` in the background. Because normally, we don't want to see those containers when working with the `kubectl` command line. (See the example at the end of this section.)
    - There are many other things to `Kubernetes` such as:
        - `Secrets`.
        - `ConfigMaps`.
        - And more..
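    - A quick sketch of namespace filtering with `kubectl` (`kube-system` is where those background system containers live):

```
kubectl get namespaces               # List all namespaces.
kubectl get pods                     # Only shows pods in the 'default' namespace.
kubectl get pods -n kube-system      # Shows the system pods that are normally filtered out.
kubectl get pods --all-namespaces    # Shows pods from every namespace.
```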
+ Kubernetes Run, Create and Apply
    - `kubectl run`: This command is changing to be only for `Pod` creation.
    - `kubectl create`: Create some resources via CLI or YAML.
    - `kubectl apply`: Create/update anything via YAML.
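    - A quick sketch contrasting the three (the image and file names are placeholders):

```
kubectl run tmp-nginx --image nginx             # Imperative: on newer kubectl this creates just a Pod.
kubectl create deployment web --image nginx     # Imperative: creates a Deployment from CLI options.
kubectl apply -f web-deployment.yml             # Declarative: creates/updates whatever the YAML describes.
```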
+ Creating First Pods - nginx
    - Create:

```
kubectl version
kubectl run my-nginx --image nginx
kubectl get pods
kubectl get all
```

    - Cleanup:

```
kubectl get pods
kubectl get all
kubectl delete deployment my-nginx
kubectl get all
```
+ Scaling Replica Sets - Apache Httpd
    - Create:

```
kubectl run my-apache --image httpd    # 'run' gives us a single 'pod' or 'replica'
kubectl get all
```

    - Scale:

```
# Use either of the below commands to scale (they are equivalent):
kubectl scale deploy/my-apache --replicas 2
# kubectl scale deployment my-apache --replicas 2
kubectl get all
```
+ Inspecting Kubernetes Objects - Apache Httpd
    - Create:

```
kubectl run my-apache --image httpd            # 'run' gives us a single 'pod' or 'replica'
kubectl scale deploy/my-apache --replicas 2    # Scale it to 2 replicas.
kubectl get all
```

    - Inspect Kubernetes Objects Commands:

```
kubectl get pods

# Get Container logs
kubectl logs deployment/my-apache
kubectl logs deployment/my-apache --follow --tail 1
kubectl logs -l run=my-apache    # '-l' is for label.

# Get a bunch of details about an object, including events!
kubectl get pods
kubectl describe pod/my-apache-<pod id>

# Watch a command (without needing 'watch')
kubectl get pods -w                      # Run this command in one terminal window.
kubectl delete pod/my-apache-<pod id>    # Run this command in another terminal window.
kubectl get pods                         # Run this again to see the replacement pod get created.
```

    - Cleanup:

```
kubectl delete deployment my-apache
```
+ Kubernetes Services
    - A `service` is a stable address for `pod(s)`.
    - If we want to connect to `pod(s)`, we need a `service`.
    - The `kubectl expose` command creates a `service` for existing `pods`.
    - `CoreDNS` allows us to resolve `services` by their `names`.
        - But this only works for services in the same `namespace`. To get all `namespaces`, run `kubectl get namespaces`.
    - `Services` also have an `FQDN (Fully Qualified Domain Name)`.
        - We can curl a service with its `FQDN` as below:

```
curl <hostname>.<namespace>.svc.cluster.local
```

    - There are four different types of `services` in `Kubernetes`:
        - `ClusterIP`
        - `NodePort`
        - `LoadBalancer`
        - `ExternalName`
    - `ClusterIP` and `NodePort` are the `services` which are always available in `Kubernetes`.
    - There's one more way external traffic can get inside our `Kubernetes` cluster - it is called `Ingress`.
    - The following 3 service types are additive; each one creates the ones above it:
        - `ClusterIP`
        - `NodePort`
        - `LoadBalancer`
+ Kubernetes Services - ClusterIP (default)
    - It's only available in the `cluster`.
    - This is about one set of `Kubernetes Pods` talking to another set of `Kubernetes Pods`.
    - It gets its own `DNS` address. That's going to be the `DNS` address in the `coreDNS` control plane.
    - Single, internal virtual IP allocated. In other words, it's going to get an IP address in that virtual IP address space inside the cluster. And that allows our other `pods` running in the cluster to talk to this `service` using the port of the `service`.
    - Only reachable from within the `cluster (nodes and pods)`.
    - `Pods` can reach the service on the app's port number.
    - Following commands are useful for creating a `ClusterIP` service:

```
kubectl get pods -w
kubectl create deployment httpenv --image=bretfisher/httpenv
kubectl scale deployment/httpenv --replicas=5
kubectl expose deployment/httpenv --port 8888
kubectl get service
# Start a throwaway pod with a shell so we can test the service from inside the cluster:
kubectl run --generator run-pod/v1 tmp-shell --rm -it --image bretfisher/netshoot -- bash
curl httpenv:8888
curl [ip of service]:8888
kubectl get service
```
+ Kubernetes Services - NodePort
    - When we create a `NodePort` service, we're going to get a `High Port` on each `node` that's assigned to this `service`.
    - The port is open on every `node`'s IP.
    - Anyone can connect (if they can reach the `node`).
    - A `NodePort` service also creates a `ClusterIP` internally.
+ Kubernetes Services - LoadBalancer
    - This `service` type is mostly used in `Clouds`.
    - It controls a `LB` endpoint external to the cluster.
    - When we create a `LoadBalancer` service, it will automatically create `ClusterIP` and `NodePort` services internally.
    - Only available when the `infra` provider gives us a `LB (e.g. AWS ELB etc.)`.
    - Creates `ClusterIP` and `NodePort` services and then tells the `LB` to send traffic to the `NodePort`.
    - Following commands are useful for creating a `NodePort` and a `LoadBalancer` service:

```
kubectl get all
kubectl expose deployment/httpenv --port 8888 --name httpenv-np --type NodePort
kubectl get services    # Note the assigned high port (32334 in this example).
curl localhost:32334
kubectl expose deployment/httpenv --port 8888 --name httpenv-lb --type LoadBalancer
kubectl get services
curl localhost:8888
kubectl delete service/httpenv service/httpenv-np
kubectl delete service/httpenv-lb deployment/httpenv
```
+ Kubernetes Services - ExternalName
    - This `service` type is used less often.
    - It adds a `CNAME DNS` record to `CoreDNS` only.
    - Not used for `Pods`, but for giving `Pods` a `DNS Name` to use for something outside `Kubernetes`, as in the sketch below.
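    - A minimal sketch of an `ExternalName` service (the names here are made up for illustration):

```
apiVersion: v1
kind: Service
metadata:
  name: external-db              # Pods can now use 'external-db' as a DNS name.
spec:
  type: ExternalName
  externalName: db.example.com   # CoreDNS answers with a CNAME to this host.
```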
+ Run, Create, Expose Generators
    - `Generators` are like templates. They essentially create the `spec`, or specification, to apply to the `Kubernetes Cluster` based on our command line options.
    - Commands like `run`, `create`, `expose` etc. use helper templates called `Generators`.
    - Every resource in `Kubernetes` has a specification or `spec`. For example:

```
kubectl create deployment aditest --image nginx --dry-run -o yaml
```

    - We can output these templates with `--dry-run -o yaml`. We can use these `YAML defaults` as a starting point.
    - Generators are `opinionated defaults`.
+ Generators Example
    - Using dry-run with yaml output we can see the generators.
    - Examples:

```
kubectl create deployment aditest --image nginx --dry-run -o yaml
kubectl create job aditest --image nginx --dry-run -o yaml

# We need the deployment "aditest" to exist before the below command works.
kubectl expose deployment/aditest --port 80 --dry-run -o yaml
```
+ Imperative vs. Declarative
    - Imperative: Focus on how a program operates.
    - Declarative: Focus on what a program should accomplish.
    - For example: Coffee.
        - Imperative: I will boil water, scoop out 42 grams of medium-fine grounds, pour over 700g of water, etc.
        - Declarative: Barista, I would like a cup of coffee.
            - The Barista is an engine that works through the steps, including retrying if needed, and is only finished when I have a cup of coffee.
+ Imperative Kubernetes
    - Examples: `kubectl run`, `kubectl create deployment`, `kubectl update`.
        - We start with a state we know (no deployment exists).
        - We ask `kubectl run` to create a deployment.
    - Different commands are required to change that deployment.
    - Different commands are required per object.
    - Imperative is easier to get started.
    - Imperative is easier for humans at the CLI.
    - Imperative is easier when we know the state.
    - Imperative is not easy to automate.
+ Declarative Kubernetes
    - `Declarative` means we don't know the `state`, we just know the `end result` that we want.
    - Example: `kubectl apply -f my-resources.yml`
        - We don't know the current state.
        - We only know what we want the end result to be (yaml contents).
    - Same command each time (tiny exception for delete).
    - Resources can be in a single file, or multiple files (apply a whole dir).
    - Requires understanding the YAML keys and values.
    - More work than `kubectl run` for just starting a `POD`.
    - The easiest way to automate our orchestration.
    - The eventual path to GitOps happiness.
+ Three Management Approaches
    - Imperative Commands: run, expose, scale, edit, create deployment etc.
        - Best for dev/learning/personal projects.
        - Easy to learn, hardest to manage over time.
    - Imperative Objects: `create -f file.yml`, `replace -f file.yml`, `delete...`
        - Good for `prod` in small environments, single file per command.
        - Store our changes in git-based yaml files.
        - Hard to automate.
    - Declarative Objects: `apply -f file.yml`, `apply -f dir/`, `diff` etc.
        - Best for prod, easier to automate.
        - Harder to understand and predict changes.
    - MOST IMPORTANT RULES:
        - Don't mix the 3 approaches when we have a true production dependency.
        - Store yaml in Git, Git Commit each change before we apply.
+ Using kubectl apply
    - Create/Update resources in a file: `kubectl apply -f myfile.yml`
    - Create/Update a whole directory of yaml: `kubectl apply -f adiYamls/`
    - Create/Update from a URL: `kubectl apply -f https://aditya.io/pod.yml`
+ Kubernetes Configuration YAML
    - Kubernetes Configuration File (YAML or JSON).
    - A full description of a resource in Kubernetes is a `manifest`.
    - Each file contains one or more `manifests`.
    - Each `manifest` describes an `API Object` (for example a deployment, job, or secret).
    - Each `manifest` needs four parts (root/main `key:values` in a file), as in the sketch below. They are:
        - `apiVersion`
        - `kind`
        - `metadata`
        - `spec`
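    - A minimal sketch of a manifest with those four root keys (the names and image are placeholders):

```
apiVersion: apps/v1        # Which API group/version this object type lives in.
kind: Deployment           # The type of API Object this manifest describes.
metadata:
  name: my-nginx           # Only 'name' is required under metadata.
spec:                      # Where all the action is at!
  replicas: 2
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.17
```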
+ How To Build YAML File
    - kind: We can get a list of resources the cluster supports: `kubectl api-resources`
    - apiVersion: We can get a list of api versions the cluster supports: `kubectl api-versions`
    - metadata: Only name is required.
    - spec: Where all the action is at!
        - We can get all the `keys` for `spec` by running the following command: `kubectl explain services.spec`
        - We can get all the `keys` for a specific `key` in `spec` by running the following command: `kubectl explain services.spec.<TYPE>`
        - We can get all the `keys` each `kind` supports: `kubectl explain services --recursive`
    - sub spec: Can have a sub `spec` of other resources.
        - We can get all the `keys` for the sub `spec` of any resource by running the following command: `kubectl explain deployment.spec.template.spec.volumes.nfs.server`
+ Dry Runs With Apply YAML
    - Client Side Only dry run: `kubectl apply -f app.yml --dry-run`
    - Server Side dry run: `kubectl apply -f app.yml --server-dry-run`
    - To See the Diff Visually: `kubectl diff -f app.yml`
+ Labels And Annotations
    - `Labels` go under `metadata` in YAML.
    - They are optional.
    - Simple list of `key:value` pairs for identifying our resource later by `selecting`, `grouping` or `filtering` for it.
    - Common examples (see the sketch below):
        - `env: prod`
        - `tier: frontend`
        - `app: api`
        - `customer: aditya.com`
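    - A small sketch of labels under `metadata` (the values are just examples):

```
metadata:
  name: api-deployment
  labels:            # Labels go under metadata.
    app: api
    env: prod
    tier: frontend
```

    - We can then filter by label on the CLI, for example `kubectl get pods -l env=prod`.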
+ What is Kubernetes
- Kubernetes is a container orchestration tool that is used for automating the tasks of managing, monitoring, scaling and deployment of containerized applications.
- It creates groups of containers that can be logically discovered and managed for easy operations on containers.
+ Difference between Docker Swarm and Kubernetes
    - Docker Swarm is the default container orchestration tool that comes with Docker.
    - Docker Swarm can only orchestrate simple Docker Containers.
    - Kubernetes helps manage much more complex software application containers.
    - Kubernetes offers support for larger-demand production environments.
    - Docker Swarm can't do auto-scaling.
    - Docker Swarm doesn't have a GUI.
    - Docker Swarm can deploy rolling updates but can't deploy automatic rollbacks.
    - Docker Swarm requires third-party tools like the ELK stack for logging and monitoring, while Kubernetes has integrated tools for the same.
    - Docker Swarm can share storage volumes with any containers easily, while Kubernetes can only share storage volumes with containers in the same pod.
+ What is Heapster?
    - Heapster lets us do container cluster monitoring.
    - It lets us do cluster-wide monitoring and event data aggregation.
    - It has native support for Kubernetes.
+ What is a kubelet?
- The kubelet is a service agent that controls and maintains a set of pods by watching for pod specs through the Kubernetes API server.
- It preserves the pod lifecycle by ensuring that a given set of containers are all running as they should.
- The kubelet runs on each node and enables the communication between the master and slave nodes.
+ What is kubectl?
    - Kubectl is a Kubernetes command line tool that is used for deploying and managing applications on Kubernetes.
    - Kubectl is especially useful for inspecting cluster resources, and for creating, updating and deleting components.
+ What are the different types of Kubernetes services?
    - Cluster IP Service:
        - It is the default type, used whenever we create a service and don't specify what type it should be.
        - A Cluster IP type service can only be reached internally within the cluster.
        - It is not exposed to the outside world.
    - Node Port Service:
        - It exposes the service on each node's IP at a static port.
        - A Cluster IP service is created automatically and the Node Port service will route to it.
    - External Name:
        - Maps the service to the contents of the external name field.
        - It does this by returning a CNAME record with that value.
    - Load Balancer Service:
        - It exposes the service externally using the load balancer of our cloud provider.
        - The external load balancer routes to the Node Port and Cluster IP services, which are created automatically.
+ How to set a static IP for Kubernetes Load Balancer?
    - The Kubernetes Master assigns a new IP address whenever the load balancer service is recreated.
    - We can present a stable address for the Kubernetes load balancer by changing the DNS records whenever the Kubernetes Master assigns a new IP address.
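    - Alternatively, many cloud providers let us request a pre-allocated static address via the `loadBalancerIP` field. A sketch only; support and exact behavior vary by provider, and the IP below is a placeholder:

```
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # Pre-allocated static IP reserved with the cloud provider.
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```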
+ What is ETCD?
    - Kubernetes uses ETCD as a distributed key-value store for all of its data, including metadata and configuration data, and it allows nodes in Kubernetes clusters to read and write data.
    - ETCD represents the state of a cluster at a specific moment in time and is the center for state management and cluster coordination of a Kubernetes cluster.
+ Can we use many claims out of a persistent volume?
    - Answer = NO!
    - The mapping between a `persistentVolume` and a `persistentVolumeClaim` is always one to one.
    - Even when we delete the claim, the `persistentVolume` still remains if `persistentVolumeReclaimPolicy` is set to `Retain`, and it will not be reused by any other claim.
+ How do you deploy a feature with zero downtime in Kubernetes?
    - Short answer: We can apply rolling updates.
    - In Kubernetes, we can define an update strategy in the deployment.
    - We should set `RollingUpdate` as the strategy to ensure no downtime, as in the sketch below.
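    - A minimal sketch of the strategy block inside a Deployment's `spec` (the surge/unavailable values are illustrative):

```
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # At most 1 extra pod during the update.
      maxUnavailable: 0    # Never drop below the desired replica count.
```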
+ What is the difference between replication controllers and replica sets?
    - Replication controllers are obsolete now in the latest versions of Kubernetes.
    - The only difference between Replication Controllers and Replica Sets is the Selectors.
    - Replication Controllers only support equality-based selectors, while Replica Sets also support set-based selectors in their specs (see the sketch below).
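    - A sketch of a Replica Set's set-based selector (all names and labels are placeholders):

```
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
    matchExpressions:    # Set-based selection, not available on Replication Controllers.
      - {key: tier, operator: In, values: [frontend, cache]}
  template:
    metadata:
      labels:
        app: web
        tier: frontend
    spec:
      containers:
        - name: nginx
          image: nginx
```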
+ What is a Kube-Proxy?
    - Kube-Proxy runs on each of the nodes.
    - It is responsible for directing traffic to the right container based on the IP and the port number of the incoming request.
+ What is a Headless Service?
    - It is similar to a normal service but it doesn't have a Cluster IP.
    - It enables us to directly reach the pods without the need to access them through a proxy (see the sketch below).
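    - A minimal sketch of a headless service (names and ports are placeholders); DNS lookups for it return the individual pod IPs:

```
apiVersion: v1
kind: Service
metadata:
  name: db-headless
spec:
  clusterIP: None     # 'None' is what makes the service headless.
  selector:
    app: db
  ports:
    - port: 5432
```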
+ What is a PVC (Persistent Volume Claim)?
    - It's a request for storage, made to Kubernetes on behalf of pods.
    - The user is not required to know the underlying provisioning.
    - The claim should be created in the same namespace where the pod is created (see the sketch below).
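    - A minimal sketch of a claim (name, namespace and size are placeholders):

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
  namespace: default        # Must match the namespace of the pod that uses it.
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi          # How much storage we are asking for.
```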
+ What are the different components in Kubernetes architecture?
    - Master Node:
        - API Server:
            - REST API used to manage and manipulate the cluster.
        - Controller Manager:
            - Daemon responsible for regulating the cluster in Kubernetes and managing non-terminating control loops.
        - Scheduler:
            - Responsible for scheduling tasks on worker nodes. It also keeps resource utilization data for each of the slave nodes.
        - ETCD:
            - Distributed Key-Value storage where we have shared configurations. It is also used for service discovery. It stores all the information about the current state of the cluster.
    - Worker Node:
        - Kubelet:
            - Its job is to get the configuration of pods from the API server and ensure everything is running according to that.
        - Kube-Proxy:
            - Behaves like a network proxy as well as the load balancer for services on a worker node. It directs traffic to a particular container based on the IP and port number of the incoming request.
        - Pod:
            - Smallest unit in the Kubernetes eco-system. It can have one or more containers; the containers in a pod always run together on the same node.
        - Container:
            - Runs in a Pod.
+ How to pass sensitive information in a cluster?
    - We can pass sensitive information in Kubernetes using Secrets.
    - Secrets can be created using YAML manifests or from text files and literal values (see the sketch below).
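    - A quick sketch of both approaches (names, values and file paths are placeholders):

```
# From a literal value:
kubectl create secret generic db-pass --from-literal=password=mySecret123

# From a text file:
kubectl create secret generic db-pass-file --from-file=password=./password.txt

# Inspect; values are base64 encoded (not encrypted) by default:
kubectl get secret db-pass -o yaml
```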
+ What is a Sematext Docker Agent?
- It is a log collection agent with events and metrics.
- It runs as a small container in each Docker host.
- These agents gather metrics, events, and logs for all cluster nodes and containers.
+ Can we make sure that a pod gets scheduled on a specific worker node?
    - Pods get scheduled on worker nodes automatically.
    - To pin a pod to a particular worker node, we can use a `nodeSelector` or node affinity (see the sketch below); taints and tolerations can additionally be used to keep other pods away from that node.
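    - A minimal sketch using `nodeSelector` with the well-known hostname label (pod name, image and node name are placeholders):

```
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod
spec:
  nodeSelector:
    kubernetes.io/hostname: node2   # Schedule only on the node carrying this label.
  containers:
    - name: worker
      image: busybox
      command: ["sleep", "3600"]
```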
+ Running 3 Containers: nginx (80:80), mysql (3306:3306), httpd (Apache Server - 8080:80)
    - MySQL:
        - To run `MySQL`:

```
docker container run -d -p 3306:3306 --name db -e MYSQL_RANDOM_ROOT_PASSWORD=yes mysql
```

        - View logs to see the generated random password for the `root` user. To view logs:

```
docker container logs db    # 'db' is the name we have given in the above command.
```

        - Look for something like the below line for the generated random password:

```
2020-03-14 12:20:33+00:00 [Note] [Entrypoint]: GENERATED ROOT PASSWORD: ChooHxafdsasd2dsx1ouovo7aegha
```

    - httpd (Apache Server):
        - To run `httpd`:

```
docker container run -d -p 8080:80 --name webserver httpd
```

    - nginx:
        - To run `nginx`:

```
docker container run -d -p 80:80 --name proxy nginx
```

    - All containers are running by now. To list all running containers:

```
docker container ls
# OR
docker ps    # Old command
```

    - To stop these containers:

```
docker container stop    # Press TAB to get a list of all running containers
# OR
docker container stop proxy webserver db
```

    - To remove these containers:

```
docker container ls -a    # This will give a list of all containers, even stopped ones.
# To remove containers, specify their ids like below:
docker container rm b520f9b00f89 5eaa2a2b09c6 c782914b7c66
```

    - To remove the `Images` as well:

```
# To remove images, specify their ids like below:
docker image rm b520f4389 5eaa22b09c6 c782432b7c66
```
+ To clean up apt-get cache
    - By cleaning up the `apt-get` cache in Containers, we keep our Image size small.
    - It's a best practice to clear the `apt-get` cache once the required packages are installed.
    - The following command cleans up the `apt-get` cache and is used the same way across many popular Images:

```
rm -rf /var/lib/apt/lists/*
```

    - FOR EXAMPLE: After installing `git` in an Image, we should clean up the `cache` to save almost `10mb` of space:

```
# Below command installs 'git' and clears the cache after installation.
RUN apt-get update && apt-get install -y git \
    && rm -rf /var/lib/apt/lists/*
```
+ To get a Shell inside Container
    - To start a new Container interactively: `docker container run -it`
    - To run an additional command in an existing container: `docker container exec -it`
    - For example, to start a `httpd` container interactively and get a `bash` shell inside it:

```
docker container run -it --name webserver httpd bash
```

    - For example, to get a `bash` shell inside an already running `nginx` Container named `proxy`:

```
docker container exec -it proxy bash
```
+ To create a temp POD in cluster and get an interactive shell in it
    - This command will create a temporary `POD` in a running cluster and launch an interactive shell inside it.
    - NOTE: This temporary `POD` will be deleted once we exit out of the shell.

```
kubectl run --generator run-pod/v1 tmp-shell --rm -it --image bretfisher/netshoot -- bash
```
+ Docker Swarm - Create Our First Service and Scale it Locally
    - To create a Docker Swarm service and scale it locally, following are the useful commands:

```
docker info
docker swarm init
docker node ls
docker node --help
docker swarm --help
docker service --help
docker service create alpine ping 8.8.8.8
docker service ls
docker service ps frosty_newton
docker container ls
docker service update TAB COMPLETION --replicas 3
docker service ls
docker service ps frosty_newton
docker update --help
docker service update --help
docker container ls
docker container rm -f frosty_newton.1.TAB COMPLETION
docker service ls
docker service ps frosty_newton
docker service rm frosty_newton
docker service ls
docker container ls
```
+ Creating a 3-Node Swarm Cluster
    - To create a 3-Node Swarm Cluster, following are the useful commands:

```
# http://play-with-docker.com
docker info
docker-machine
docker-machine create node1
docker-machine ssh node1
docker-machine env node1
docker info

# http://get.docker.com
docker swarm init
docker swarm init --advertise-addr TAB COMPLETION
docker node ls
docker node update --role manager node2
docker node ls
docker swarm join-token manager
docker node ls
docker service create --replicas 3 alpine ping 8.8.8.8
docker service ls
docker node ps
docker node ps node2
docker service ps sleepy_brown
```
+ Scaling Out with Overlay Networking
    - Following set of commands orchestrate scaling out with overlay networking:

```
docker network create --driver overlay mydrupal
docker network ls
docker service create --name psql --network mydrupal -e POSTGRES_PASSWORD=mypass postgres
docker service ls
docker service ps psql
docker container logs psql TAB COMPLETION
docker service create --name drupal --network mydrupal -p 80:80 drupal
docker service ls
watch docker service ls
docker service ps drupal
docker service inspect drupal
```
+ Scaling Out with Routing Mesh
    - Following set of commands orchestrate scaling out with Routing Mesh:

```
docker service create --name search --replicas 3 -p 9200:9200 elasticsearch:2
docker service ps search
```
+ Create a Multi-Service Multi-Node Web App
    - Following set of commands orchestrate creation of a Multi-Service Multi-Node Web App:

```
docker node ls
docker service ls
docker network create -d overlay backend
docker network create -d overlay frontend
docker service create --name vote -p 80:80 --network frontend --replicas 2 COPY IMAGE
docker service create --name redis --network frontend --replicas 1 redis:3.2
docker service create --name worker --network frontend --network backend COPY IMAGE
docker service create --name db --network backend COPY MOUNT INFO
docker service create --name result --network backend -p 5001:80 COPY INFO
docker service ls
docker service ps result
docker service ps redis
docker service ps db
docker service ps vote
docker service ps worker
cat /etc/docker/
docker service logs worker
docker service ps worker
```
+ Swarm Stacks and Production Grade Compose
    - Following set of commands demonstrate Swarm Stacks and Production Grade Compose:

```
docker stack deploy -c example-voting-app-stack.yml voteapp
docker stack
docker stack ls
docker stack ps voteapp
docker container ls
docker stack services voteapp
docker stack ps voteapp
docker network ls
docker stack deploy -c example-voting-app-stack.yml voteapp
```
+ Using Secrets in Swarm Services
    - Useful commands:

```
docker secret create psql_user psql_user.txt
echo "myDBpassWORD" | docker secret create psql_pass - TAB COMPLETION
docker secret ls
docker secret inspect psql_user
docker service create --name psql --secret psql_user --secret psql_pass -e POSTGRES_PASSWORD_FILE=/run/secrets/psql_pass -e POSTGRES_USER_FILE=/run/secrets/psql_user postgres
docker service ps psql
docker exec -it psql.1.CONTAINER NAME bash
docker logs TAB COMPLETION
docker service ps psql
docker service update --secret-rm
```
+ Using Secrets with Swarm Stacks
    - Useful commands:

```
vim docker-compose.yml
docker stack deploy -c docker-compose.yml mydb
docker secret ls
docker stack rm mydb
```
+ Create A Stack with Secrets and Deploy
    - Useful commands:

```
vim docker-compose.yml
docker stack deploy -c docker-compose.yml drupal
echo STRING | docker secret create psql-ps -
docker stack deploy -c docker-compose.yml drupal
docker stack ps drupal
```
+ Service Updates: Changing Things In Flight
    - Useful commands:

```
docker service create -p 8088:80 --name web nginx:1.13.7
docker service ls
docker service scale web=5
docker service update --image nginx:1.13.6 web
docker service update --publish-rm 8088 --publish-add 9090:80 web
docker service update --force web
```
+ Healthchecks in Dockerfile
    - Useful commands (see the Dockerfile sketch after this block for the HEALTHCHECK instruction itself):

```
docker container run --name p1 -d postgres
docker container ls
docker container run --name p2 -d --health-cmd="pg_isready -U postgres || exit 1" postgres
docker container ls
docker container inspect p2
docker service create --name p1 postgres
docker service create --name p2 --health-cmd="pg_isready -U postgres || exit 1" postgres
```
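    - A minimal sketch of baking the same check into an image via the Dockerfile HEALTHCHECK instruction (the interval/timeout/retries values are illustrative):

```
FROM postgres

# Mark the container unhealthy when postgres stops accepting connections.
HEALTHCHECK --interval=5s --timeout=3s --retries=3 \
    CMD pg_isready -U postgres || exit 1
```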