If you are using a released version of Kubernetes, you should refer to the docs that go with that version.
The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/getting-started-guides/docker-multinode/worker.md). Documentation for other releases can be found at releases.k8s.io.
These instructions are very similar to the master set-up above, but they are duplicated for clarity.
You need to repeat these instructions for each node you want to join the cluster.

We will assume that the IP address of this node is `${NODE_IP}` and that you have the IP address of the master in `${MASTER_IP}`, which you created in the master instructions. We'll need to run several versioned Kubernetes components, so we'll assume that the version we want to run is `${K8S_VERSION}`, which should hold a value such as "1.0.7".
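For example, these variables can be exported in your shell before running the commands below. The addresses here are placeholders, not real hosts; substitute your own:

```sh
# Placeholder values -- substitute the real addresses from your setup.
export MASTER_IP=10.240.0.2   # IP of the master node you set up earlier
export NODE_IP=10.240.0.3     # IP of this worker node
export K8S_VERSION=1.0.7      # released Kubernetes version to run
```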
For each worker node, there are three steps:

- Set up `flanneld` on the worker node
- Start Kubernetes on the worker node
- Add the worker to the cluster
As before, the Flannel daemon is going to provide network connectivity.
Note: There is a bug in Docker 1.7.0 that prevents this from working correctly. Please install Docker 1.6.2, 1.7.1, or 1.8.3 instead.
As previously, we need a second instance of the Docker daemon running to bootstrap the flannel networking.
Run:
```sh
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```
Important Note: If you are running this on a long-running system, rather than experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart, or systemd so that it is restarted across reboots and failures.
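As a sketch, a minimal systemd unit for the bootstrap daemon might look like the following. The unit name and the `/usr/bin/docker` path are assumptions; adjust them for your distribution:

```
# /etc/systemd/system/docker-bootstrap.service (sketch; adjust paths)
[Unit]
Description=Bootstrap Docker daemon for flannel
After=network.target

[Service]
ExecStart=/usr/bin/docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap
Restart=always

[Install]
WantedBy=multi-user.target
```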
To reconfigure Docker to use flannel, we need to take Docker down, run flannel, and then restart Docker.

Turning down Docker is system dependent; it may be:

```sh
sudo /etc/init.d/docker stop
```

or

```sh
sudo systemctl stop docker
```

or it may be something else.
Now run flanneld itself. This call is slightly different from the one above, since we point it at the etcd instance on the master.

```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privileged -v /dev/net:/dev/net quay.io/coreos/flannel:0.5.5 /opt/bin/flanneld --ip-masq --etcd-endpoints=http://${MASTER_IP}:4001
```
The previous command should have printed a really long hash; copy this hash.

Now get the subnet settings from flannel:

```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
```
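The `subnet.env` file holds plain shell variable assignments, so you can load its values into your shell rather than copying them by hand. A sketch of what it contains and how to source it; the addresses below are illustrative examples, not values flannel will actually assign:

```sh
# Write an example subnet.env with the shape flannel produces
# (values here are illustrative only).
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.42.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF

# Source it so FLANNEL_SUBNET and FLANNEL_MTU are available for the
# Docker flags in the next step.
. /tmp/subnet.env
echo "--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
```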
You now need to edit the Docker configuration to activate the new flags. Again, this is system specific. This may be in `/etc/default/docker` or `/etc/systemd/service/docker.service`, or it may be elsewhere.

Regardless, you need to add the following to the Docker command line:

```sh
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
```
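On Debian-style systems, for example, this might mean a line like the following in `/etc/default/docker` (a sketch; the two variables come from the `subnet.env` file above, with their actual values substituted in):

```
DOCKER_OPTS="--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
```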
Docker creates a bridge named `docker0` by default. You need to remove this:

```sh
sudo /sbin/ifconfig docker0 down
sudo brctl delbr docker0
```

You may need to install the `bridge-utils` package for the `brctl` binary.
You can now restart Docker. Again, this is system dependent; it may be:

```sh
sudo /etc/init.d/docker start
```

or it may be:

```sh
systemctl start docker
```
Again, this is similar to the above, but the `--api-servers` flag now points to the master we set up in the beginning.

```sh
sudo docker run \
    --volume=/:/rootfs:ro \
    --volume=/sys:/sys:ro \
    --volume=/dev:/dev \
    --volume=/var/lib/docker/:/var/lib/docker:rw \
    --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
    --volume=/var/run:/var/run:rw \
    --net=host \
    --privileged=true \
    --pid=host \
    -d \
    gcr.io/google_containers/hyperkube:v${K8S_VERSION} /hyperkube kubelet --api-servers=http://${MASTER_IP}:8080 --v=2 --address=0.0.0.0 --enable-server --hostname-override=$(hostname -i) --cluster-dns=10.0.0.10 --cluster-domain=cluster.local
```
The service proxy provides load-balancing between groups of containers defined by Kubernetes Services.

```sh
sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v${K8S_VERSION} /hyperkube proxy --master=http://${MASTER_IP}:8080 --v=2
```
Move on to testing your cluster, or add another node.