Services FAQ
How should I think of Services and the kube-proxy binary?
Think of Services and kube-proxy as a distributed, multi-tenant load balancer. Each node load balances traffic from clients on that node. It is implemented with iptables and a userspace proxy (for now). The portal IPs are virtual and should never hit a physical network, EVER.
From what I read, the kube-proxy service is used mostly (only?) for the Services piece of things. Is that accurate? If so, does this service act like (provide) the iptables service on the host?
The kube-proxy generates iptables rules that map the portal IPs such that the traffic gets to the local kube-proxy daemon. The kube-proxy then does the equivalent of a NAT to the actual pod address.
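For illustration, the generated rules look roughly like this (a sketch only: the KUBE-PORTALS-* chain names match the userspace proxy of this era, and the portal IP, node IP, and proxy port are all made up):

```
# Redirect container traffic for portal 10.0.0.10:80 to the local
# kube-proxy, which listens on a randomly chosen port (here 36789):
iptables -t nat -A KUBE-PORTALS-CONTAINER -d 10.0.0.10/32 -p tcp --dport 80 \
  -j REDIRECT --to-ports 36789

# Traffic originating on the host itself is DNATed the same way
# (10.240.0.5 being this node's own IP):
iptables -t nat -A KUBE-PORTALS-HOST -d 10.0.0.10/32 -p tcp --dport 80 \
  -j DNAT --to-destination 10.240.0.5:36789
```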
The Docker network config seems a little hit and miss. My Docker install was through yum, so it was a base install. Should I tell Docker not to install the default masquerade rule that hides the bridge space behind the host IP? If I tell Docker not to run iptables, won't that break some of the port mapping required by Kubernetes?
The rule should be off, but in most cases it doesn't matter. Connecting via services means the pod IP gets (the equivalent of) SNAT to the IP of the minion, so you never hit the Docker rule. If you were actually connecting directly from a pod to a pod, that Docker/iptables translation might be bad... If you are trying to talk from a pod to the outside of the cluster, you may or may not want the rule...
We turn Docker iptables networking off with `--iptables=false --ip-masq=false` on the Docker daemon, but it should not trigger anyway.
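For example, on a yum-based install you would typically set those flags in the distro's daemon config; the exact path and variable name depend on your packaging, so treat this as a sketch:

```
# /etc/sysconfig/docker (path and variable vary by distro)
OPTIONS='--iptables=false --ip-masq=false'
```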
To summarize some of this: is iptables required? If so, where do the rules come from? Maybe I'm missing something, but I can't find the iptables service on the host, so I'm not sure where it even comes from.
Yes, it is required. The rules come from kube-proxy.
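There is no separate iptables "service" to look for: iptables is a kernel facility, and kube-proxy programs it directly. You can inspect what it has written with something like the following (the chain names come from the userspace proxy and vary by release):

```
# List the NAT rules kube-proxy maintains on a node:
sudo iptables -t nat -L -n | grep -i kube
```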
How should the service (portal) routing work?
Do not configure any portal routing; it is handled entirely on each node by iptables and kube-proxy.
Is the kube-proxy service only used for services?
Yes.
Am I correct in saying that any of the minions running the proxy service can answer for portal IPs?
Yes.
I think part of what I'm missing is the distinction between portal and public IPs. If you configure a publicIP in a service, that address must be assigned to a real machine, and traffic must be routed to that machine, which will respond on that address. The kube-proxy must also be running on that machine.
Are portal IPs meant to be used for inter-container access?
Yes
AKA, I shouldn't be hitting them from outside the Kubernetes cluster.
Yes
If that's the case, then I don't need to route them anywhere on the physical switch layer.
You don't even need them ON the switch layer; the portal IPs never leave a single host.
A container running on a minion can try to hit that pod through the portal IP (via an ENV variable too? Does this mean I need the service up before the container?).
In kube itself, yes. There are also some DNS integration options (SkyDNS), which should allow for non-ENV, and thus non-startup-order-dependent, access to services.
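As a sketch of the ENV route: for a service named redis-master, the injected variables follow the {SVCNAME}_SERVICE_HOST / {SVCNAME}_SERVICE_PORT convention (the service name and values here are made up):

```
# Inside a container started after the service exists:
echo $REDIS_MASTER_SERVICE_HOST   # the portal IP, e.g. 10.0.0.10
echo $REDIS_MASTER_SERVICE_PORT   # the service port, e.g. 6379
```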
By default, the container will hit its default gateway (the host) when looking for the portal IP. The portal IP is referenced in the iptables rule which sends that traffic to the host IP on a particular port. The kube-proxy (running locally) is listening on that port, picks up that traffic, and then load balances it round robin to a pod IP address. That pod can be running on any other host across the kubernetes cluster. Is that right? Am I close?
Sounds good.
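Traced end to end for a hypothetical service (every IP and port below is made up for illustration):

```
# 1. A container connects to the portal:
curl http://10.0.0.10:80/

# 2. The node's iptables rule redirects the packets to the local proxy
#    port, so the portal IP never leaves the host:
#      -d 10.0.0.10/32 --dport 80 -j REDIRECT --to-ports 36789

# 3. kube-proxy, listening on :36789, picks a backend round robin and
#    proxies the connection to it, e.g. pod 10.244.1.5:8080, which may
#    be running on any host in the cluster.
```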
Is the random port used by iptables and the kube-proxy just a means to track different portals? AKA, I'm redirecting traffic with iptables to the same destination, so I need a means to know which backend pods I should be load balancing to?
Pretty much, yes, you got it. The kube proxy opens one random port for each portal. Traffic on that port will only go to the associated backends.
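You can see the per-portal ports on a node by matching the REDIRECT targets in iptables against kube-proxy's listening sockets (chain name as in the userspace proxy; adjust for your version):

```
sudo iptables -t nat -S KUBE-PORTALS-CONTAINER
sudo netstat -lntp | grep kube-proxy
```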
I'm now looking to use external IPs. AKA, I'd like to provide an IP accessible from outside the Kubernetes cluster that users can hit to access a service. It looks like there's a 'publicIP' variable in the service definition, but I don't think it works the way I assumed it does. I assumed I could set the public IP I wanted to use for the service (AKA, users use 10.20.30.119). It would then be my job to make sure that 10.20.30.119 got routed to a minion. That doesn't seem to be how it works. It sounds like you specify the minion IP address in the publicIP field, but I'm not sure what that buys me...
You can specify any IP you want, but it is up to you to make sure that IP routes to a machine running kube-proxy and that the IP is assigned to an interface on that machine. Using a minion IP means these requirements are obviously met, but you can also add a second IP to a minion, or to some other machine running the proxy...
If you have a "real" load balancer that can forward packets to the machine, you can use that as your PublicIP. If your minions have a public IP.
Basically, every networking install is a special snowflake, so we have to defer some of it to users. We will happily receive traffic on PublicIPs, but you need to run the plumbing from "outside" to that IP.
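To make that concrete, here is a minimal sketch assuming the v1beta1 API, which exposes a publicIPs list (field names differ in other API versions), along with the external plumbing that is left to you:

```
# Hypothetical service definition using v1beta1's publicIPs list:
cat <<'EOF' > my-service.json
{
  "id": "my-service",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "port": 80,
  "selector": { "name": "my-app" },
  "publicIPs": ["10.20.30.119"]
}
EOF
kubectl create -f my-service.json

# Your job: assign the IP to an interface on a machine running
# kube-proxy, and make sure your network routes it there.
sudo ip addr add 10.20.30.119/32 dev eth0
```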