This project implements a load balancer controller that provides highly available, load-balanced access to HTTP and TCP Kubernetes applications. It also provides SSL support for HTTP apps.
Our goal is for this controller to generate its configuration rules from Ingress events rather than ConfigMaps. Currently, the controller watches ConfigMap resources to create and configure backends; eventually it will watch Ingress resources instead. This change is blocked on Kubernetes itself, since the current version of Ingress does not support layer 4 routing.
This controller is designed to make it easy to integrate and create different load balancing backends, from software and hardware to cloud load balancers. The initial backends are software load balancing (with keepalived and nginx), hardware load balancing with F5, and cloud load balancing with OpenStack LBaaS v2 (Octavia).
In the case of the software load balancer, this controller works with loadbalancer-controller daemons that are deployed across nodes and serve as highly available load balancers. The loadbalancer controller communicates with the daemons via a ConfigMap resource. The daemon controllers use keepalived and nginx to provide highly available load balancing through VIPs. A VIP is allocated to every service being load balanced, which allows multiple services that bind to the same port to work side by side. For F5 and OpenStack LBaaS, the loadbalancer controller talks to the appropriate servers via their APIs, so loadbalancer-controller daemons are not needed.
Note: The daemon needs to run in privileged mode and with hostNetwork: true so that it has access to the underlying node network. This is required so that the VIPs can be assigned to the node's interfaces and be reachable externally.
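A minimal sketch of the relevant parts of the daemon's pod spec; field values here are illustrative and the image name is an assumption, not the project's actual manifest:

spec:
  hostNetwork: true              # share the node's network namespace so VIPs land on node interfaces
  containers:
  - name: loadbalancer-daemon
    image: loadbalancer-daemon   # placeholder image name
    securityContext:
      privileged: true           # needed to manipulate node interfaces for keepalived VIPs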
How does this differ from service-loadbalancer or the nginx ingress controller?
Service-loadbalancer is a great option, but it is tailored only to software load balancing with HAProxy and is not designed to be easily decoupled. The nginx ingress controller is nginx-only and works only for layer 7 applications. This project is intended to support many different backends and to work with all Kubernetes applications (layer 7 and layer 4).
Service-loadbalancer's support for L4 is very limited: the binding port needs to be open and specified as a hostPort when the controller is created. This forces users to specify and open ports up front, and it also prevents two different services from load balancing on the same port (e.g., running two MySQL services). This project uses VIPs to remove that limitation.
Getting started: software load balancer (keepalived and nginx)
- First we need to create the loadbalancer controller. Make sure you provide the VIP allocation range; a hypothetical sketch of how this might look follows the command. The VIPs must be reachable from outside the cluster (usually they are on the same subnet as the node network).
$ kubectl create -f examples/kube-loadbalancer-rc.yaml
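The exact argument name is defined in the manifest; as a purely hypothetical illustration, the controller spec inside kube-loadbalancer-rc.yaml might wire the range in like this (both the image and the --vip-allocation-range flag are invented placeholders; check the example file for the real ones):

containers:
- name: loadbalancer-controller
  image: loadbalancer-controller   # placeholder image name
  args:
  # placeholder flag name; use whatever the example manifest actually defines
  - --vip-allocation-range=10.0.0.10-10.0.0.50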
- The loadbalancer daemon pod will only start on nodes labeled type: loadbalancer. Label the nodes you want the daemon to run on (you can verify the label afterwards, as shown below):
$ kubectl label node my-node1 type=loadbalancer
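To confirm which nodes carry the label, use a label selector:

$ kubectl get nodes -l type=loadbalancer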
- Create our sample app, which consists of a service and a replication controller resource (a sketch of what such a file might contain follows the command):
$ kubectl create -f examples/coffee-app.yaml
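For reference, a service-plus-replication-controller app in the spirit of coffee-app.yaml might look like the sketch below. The names, ports, and image are assumptions for illustration, not the actual file contents:

apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
spec:
  selector:
    app: coffee
  ports:
  - port: 80
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: coffee-rc
spec:
  replicas: 2
  selector:
    app: coffee
  template:
    metadata:
      labels:
        app: coffee
    spec:
      containers:
      - name: coffee
        image: nginxdemos/hello   # assumption; any HTTP server image would do
        ports:
        - containerPort: 80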
- Create a ConfigMap for the sample app service; the controller uses it to configure the loadbalancer backend (a sketch of the file's likely shape follows the command):
$ kubectl create -f coffee-configmap.yaml
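Judging from the generated ConfigMap shown in the next step, coffee-configmap.yaml likely supplies at least the target service's name and namespace; the keys below are inferred from that output:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-coffee-svc
  labels:
    app: loadbalancer
data:
  namespace: default
  target-service-name: coffee-svc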
- Get the bind IP generated by the loadbalancer controller from the configmap.
$ kubectl get configmap configmap-coffee-svc -o yaml
apiVersion: v1
data:
  bind-ip: "10.0.0.10"
  namespace: default
  target-service-name: coffee-svc
kind: ConfigMap
metadata:
  creationTimestamp: 2016-06-17T22:30:03Z
  labels:
    app: loadbalancer
  name: configmap-coffee-svc
  namespace: default
  resourceVersion: "157728"
  selfLink: /api/v1/namespaces/default/configmaps/configmap-coffee-svc
  uid: 08e12303-34db-11e6-87da-fa163eefe713
- To get coffee:
$ curl http://10.0.0.10
<!DOCTYPE html>
<html>
<head>
<title>Hello from NGINX!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Hello!</h1>
<h2>URI = /coffee</h2>
<h2>My hostname is coffee-rc-mu9ns</h2>
<h2>My address is 10.244.0.3:80</h2>
</body>
</html>
Getting started: F5 hardware load balancer
- First we need to create the loadbalancer controller. You can specify the backend type for the loadbalancer via an argument; see example/kube-loadbalancer-rc-f5.yaml. The credentials for authenticating against the F5 server are provided as environment variables, with the password supplied via a Secret resource (an illustrative sketch of this wiring follows the command).
$ kubectl create -f example/kube-loadbalancer-rc-f5.yaml
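A sketch of how the credentials might be wired into the controller's pod spec. The variable and secret names here are assumptions; the real ones are in example/kube-loadbalancer-rc-f5.yaml:

env:
- name: F5_USER              # assumed variable name
  value: admin
- name: F5_PASSWORD          # assumed variable name
  valueFrom:
    secretKeyRef:            # standard Kubernetes way to pull a value from a Secret
      name: f5-password      # assumed secret name
      key: password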
- Create our sample app, which consists of a service and replication controller resource. Since F5 needs to access your apps, make sure your application's service is deployed with type: NodePort (see the sketch after the command):
$ kubectl create -f examples/coffee-app.yaml
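For example, the service portion of the app would declare the NodePort type; a sketch, reusing the service name from this walkthrough:

apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
spec:
  type: NodePort    # exposes the service on a node port in the 30000-32767 range on every node
  selector:
    app: coffee
  ports:
  - port: 80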
- Create a ConfigMap for the sample app service; the controller uses it to configure the loadbalancer backend:
$ kubectl create -f coffee-configmap.yaml
- Get the bind IP generated by the loadbalancer controller from the ConfigMap. The bind IP should be the VIP allocated by the controller:
$ kubectl get configmap configmap-coffee-svc -o yaml
apiVersion: v1
data:
  bind-ip: "10.0.0.60"
  namespace: default
  target-service-name: coffee-svc
kind: ConfigMap
metadata:
  creationTimestamp: 2016-06-17T22:30:03Z
  labels:
    app: loadbalancer
  name: configmap-coffee-svc
  namespace: default
  resourceVersion: "157728"
  selfLink: /api/v1/namespaces/default/configmaps/configmap-coffee-svc
  uid: 08e12303-34db-11e6-87da-fa163eefe713
- Curl the VIP to access the coffee app:
$ curl http://10.0.0.60
<!DOCTYPE html>
<html>
<head>
<title>Hello from NGINX!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Hello!</h1>
<h2>URI = /coffee</h2>
<h2>My hostname is coffee-rc-auqj8</h2>
<h2>My address is 172.18.99.3:80</h2>
</body>
</html>
Getting started: OpenStack LBaaS
- First we need to create the loadbalancer controller. You can specify the backend type for the loadbalancer via an argument. Provide your OpenStack information as environment variables, with the password supplied via a Secret resource (an illustrative sketch of this wiring follows the command).
$ kubectl create -f example/kube-loadbalancer-rc-openstack.yaml
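A sketch of how the OpenStack credentials might be wired in. The variable and secret names below are assumptions (conventional OpenStack variable names are shown); check example/kube-loadbalancer-rc-openstack.yaml for the real ones:

env:
- name: OS_AUTH_URL                # assumed; conventional OpenStack auth endpoint variable
  value: http://openstack.example.com:5000/v2.0
- name: OS_USERNAME                # assumed variable name
  value: admin
- name: OS_PASSWORD                # assumed variable name
  valueFrom:
    secretKeyRef:
      name: openstack-password     # assumed secret name
      key: password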
- Create our sample app, which consists of a service and replication controller resource. Since OpenStack LBaaS needs to access your apps, make sure your application's service is deployed with type: NodePort:
$ kubectl create -f examples/coffee-app.yaml
- Create a ConfigMap for the sample app service; the controller uses it to configure the loadbalancer backend:
$ kubectl create -f coffee-configmap.yaml
- Get the bind IP generated by the loadbalancer controller from the ConfigMap. The bind IP should be the VIP created by OpenStack LBaaS:
$ kubectl get configmap configmap-coffee-svc -o yaml
apiVersion: v1
data:
  bind-ip: "10.0.0.81"
  namespace: default
  target-service-name: coffee-svc
kind: ConfigMap
metadata:
  creationTimestamp: 2016-06-17T22:30:03Z
  labels:
    app: loadbalancer
  name: configmap-coffee-svc
  namespace: default
  resourceVersion: "157728"
  selfLink: /api/v1/namespaces/default/configmaps/configmap-coffee-svc
  uid: 08e12303-34db-11e6-87da-fa163eefe713
- Curl the VIP to access the coffee app:
$ curl http://10.0.0.81
<!DOCTYPE html>
<html>
<head>
<title>Hello from NGINX!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Hello!</h1>
<h2>URI = /coffee</h2>
<h2>My hostname is coffee-rc-auqj8</h2>
<h2>My address is 172.18.99.3:80</h2>
</body>
</html>
- The apps are accessed via a NodePort on the Kubernetes nodes, which falls in the range 30000-32767. Make sure these ports are open on the nodes. Also open any ports that the load balancer binds to, such as port 80 in this case.
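For example, on nodes that use firewalld, the ranges could be opened like this (adjust to whatever firewall tooling your distribution uses):

$ sudo firewall-cmd --permanent --add-port=30000-32767/tcp
$ sudo firewall-cmd --permanent --add-port=80/tcp
$ sudo firewall-cmd --reload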
To build the loadbalancer controller and daemon container images:
$ cd loadbalancer
$ make container
$ cd loadbalancer-daemon
$ make container
Note: These implementations are experimental and not suitable for use in production. This project is still at an early stage, and much of it is still a work in progress.