- Deploying the Webhook server
- Defining a LoadBalancer
- Checking LoadBalancer status
- Defining a BackendGroup
- Viewing BackendRecords
- Force-deleting a BackendRecord
- Force-updating immutable fields
In LBCF, the Webhook server is the component that calls the load balancer's API. Since every kind of load balancer has its own Webhook server, the first step in using LBCF is telling LBCF where that server is deployed, which is done by submitting a LoadBalancerDriver object to K8S.
As the LoadBalancerDriver definition shows, its most important field is the Webhook server address, i.e. spec.url.
LBCF places no restriction on where the webhook server runs: it can be deployed as containers inside the cluster or on nodes outside it.
clb-driver is a webhook server I developed for integrating with public-cloud CLB. The project uses containerized deployment: the Webhook server runs as a Deployment, a Service is created for the Deployment, and the Service address is what gets registered with LBCF.
The following YAML is the Service used to deploy clb-driver; port 80 of the Service is the port on which the webhook server serves requests:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: lbcf-clb-driver
  namespace: kube-system
spec:
  ports:
  - name: insecure
    port: 80
    targetPort: 80
  selector:
    lbcf.tkestack.io/component: lbcf-clb-driver
  sessionAffinity: None
  type: ClusterIP
```
As the LoadBalancerDriver below shows, the Webhook server address is the Service address, which can be resolved by the K8S cluster's internal DNS (kube-dns or CoreDNS):

```yaml
apiVersion: lbcf.tkestack.io/v1beta1
kind: LoadBalancerDriver
metadata:
  name: lbcf-clb-driver
  namespace: kube-system
spec:
  driverType: Webhook
  url: "http://lbcf-clb-driver.kube-system.svc"
```
A LoadBalancer describes the load balancer being operated on. It mainly answers three questions:
1. Which Webhook server should perform the operations (spec.lbDriver)
2. What uniquely identifies the load balancer in the external system (spec.lbSpec)
3. Which attributes of the load balancer are unrelated to its identity (spec.attributes)
Once the LoadBalancer object is created, items 1 and 2 become immutable; only item 3 can be modified at any time.
As an open framework, LBCF places no constraints on the contents of 2 and 3; that information is entirely defined and parsed by the Webhook server. The YAML below shows some of the parameters defined by clb-driver:
```yaml
apiVersion: lbcf.tkestack.io/v1beta1
kind: LoadBalancer
metadata:
  name: test-clb-load-balancer
  namespace: kube-system
spec:
  lbDriver: lbcf-clb-driver
  lbSpec:
    vpcID: vpc-b5hcoxj4
    loadBalancerType: "OPEN"
    listenerPort: "9999"
    listenerProtocol: "HTTP"
    domain: "mytest.com"
    url: "/index.html"
  ensurePolicy:
    policy: Always
```
After the LoadBalancer object is submitted to K8S, LBCF performs basic local validation and then calls the Webhook server's validateLoadBalancer and createLoadBalancer webhooks, in that order.
In validateLoadBalancer, the Webhook server can validate the object and reject its creation.
For example, clb-driver supports using an existing CLB instance; if the CLB specified in the LoadBalancer does not exist, clb-driver rejects the creation.
```yaml
apiVersion: lbcf.tkestack.io/v1beta1
kind: LoadBalancer
metadata:
  name: a-lb-that-not-exist
  namespace: kube-system
spec:
  lbDriver: lbcf-clb-driver
  lbSpec:
    loadBalancerID: "lb-notexist"
    listenerPort: "9999"
    listenerProtocol: "HTTP"
    domain: "mytest.com"
    url: "/index.html"
```
Given the YAML above, clb-driver calls the cloud API to check whether lb-notexist (from loadBalancerID) exists, and rejects the creation if it does not:

```
kubectl apply -f lb-not-exist.yaml
Error from server: error when creating "lb-not-exist.yaml": admission webhook "lb.lbcf.tkestack.io" denied the request: invalid LoadBalancer: clb instance lb-notexist not found
```
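The rejection logic can be sketched as follows. The in-memory lookup stands in for the real cloud API call clb-driver makes, and the struct fields are illustrative, not clb-driver's actual types:

```go
package main

import "fmt"

// existingCLBs stands in for a cloud API lookup; in clb-driver the check
// would be a real "describe load balancers" API call.
var existingCLBs = map[string]bool{"lb-6xm34m0z": true}

// ValidateResponse carries the webhook's verdict back to lbcf-controller.
type ValidateResponse struct {
	Succ bool
	Msg  string
}

// validateLoadBalancer mirrors the behavior shown above: when lbSpec names
// a CLB instance by ID, reject the create if the instance does not exist.
func validateLoadBalancer(lbSpec map[string]string) ValidateResponse {
	if id, ok := lbSpec["loadBalancerID"]; ok && !existingCLBs[id] {
		return ValidateResponse{
			Succ: false,
			Msg:  fmt.Sprintf("invalid LoadBalancer: clb instance %s not found", id),
		}
	}
	return ValidateResponse{Succ: true}
}

func main() {
	fmt.Println(validateLoadBalancer(map[string]string{"loadBalancerID": "lb-notexist"}))
}
```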
If validateLoadBalancer succeeds, the LoadBalancer object is written into the K8S cluster, and LBCF then calls createLoadBalancer to create the load balancer (a webhook server may return success immediately to skip this step).
Staying with clb-driver, this time we use the following YAML to create a brand-new layer-7 CLB:
```yaml
apiVersion: lbcf.tkestack.io/v1beta1
kind: LoadBalancer
metadata:
  name: test-clb-load-balancer
  namespace: kube-system
spec:
  lbDriver: lbcf-clb-driver
  lbSpec:
    vpcID: vpc-b5hcoxj4
    loadBalancerType: "OPEN"
    listenerPort: "9999"
    listenerProtocol: "HTTP"
    domain: "mytest.com"
    url: "/index.html"
```
Checking the LoadBalancer's status with kubectl describe gives the following result:
```
[root@10-0-3-16 clb-driver]# kubectl describe loadbalancer -n kube-system test-clb-load-balancer
Name:         test-clb-load-balancer
Namespace:    kube-system
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"lbcf.tkestack.io/v1beta1","kind":"LoadBalancer","metadata":{"annotations":{},"name":"test-clb-load-balancer","nam...
API Version:  lbcf.tkestack.io/v1beta1
Kind:         LoadBalancer
Metadata:
  Creation Timestamp:  2019-06-13T12:48:44Z
  Finalizers:
    lbcf.tkestack.io/delete-load-loadbalancer
  Generation:        1
  Resource Version:  8574359
  Self Link:         /apis/lbcf.tkestack.io/v1beta1/namespaces/kube-system/loadbalancers/test-clb-load-balancer
  UID:               94518f90-8dd9-11e9-b3e1-525400d96a00
Spec:
  Ensure Policy:
    Policy:   Always
  Lb Driver:  lbcf-clb-driver
  Lb Spec:
    Domain:              mytest.com
    Listener Port:       9999
    Listener Protocol:   HTTP
    Load Balancer Type:  OPEN
    URL:                 /index.html
    Vpc ID:              vpc-b5hcoxj4
Status:
  Conditions:
    Last Transition Time:  2019-06-13T12:49:19Z
    Status:                True
    Type:                  Created
    Last Transition Time:  2019-06-13T12:49:19Z
    Status:                True
    Type:                  AttributesSynced
  Lb Info:
    Domain:             mytest.com
    Listener Port:      9999
    Listener Protocol:  HTTP
    Load Balancer ID:   lb-6xm34m0z
    URL:                /index.html
Events:
  Type    Reason                     Age  From             Message
  ----    ------                     ---  ----             -------
  Normal  RunningCreateLoadBalancer  33s  lbcf-controller  msg: creating CLB instance
  Normal  RunningCreateLoadBalancer  23s  lbcf-controller  msg: creating listener
  Normal  RunningCreateLoadBalancer  12s  lbcf-controller  msg: creating forward rule
  Normal  SuccCreateLoadBalancer     2s   lbcf-controller  Successfully created load balancer
```
In the Events section of the output, we can see that clb-driver created the CLB instance, the listener, and the layer-7 forwarding rule in turn, and finally reported success.
Note: there are 4 events here because clb-driver implements createLoadBalancer as an asynchronous operation; LBCF called createLoadBalancer a total of 4 times.
Also note that in Lb Info under Status, the original vpcID: vpc-b5hcoxj4 and loadBalancerType: "OPEN" from lbSpec have been replaced with Load Balancer ID lb-6xm34m0z. The reason is that in the cloud API, the Load Balancer ID is the unique identifier of a load balancer instance, but it cannot be known before creation completes. lbSpec therefore holds the parameters needed to create the instance, while the ID in lbInfo is written by clb-driver only after creation finishes.
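The lbSpec-to-lbInfo replacement can be sketched as a small helper: keep the listener-related parameters, drop the creation-only inputs, and add the generated ID. The field list and function name are illustrative assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildLBInfo keeps the listener-related parameters from lbSpec but
// replaces the creation-only inputs (vpcID, loadBalancerType) with the
// loadBalancerID that only exists once creation has finished.
func buildLBInfo(lbSpec map[string]string, generatedID string) map[string]string {
	info := map[string]string{"loadBalancerID": generatedID}
	for _, k := range []string{"listenerPort", "listenerProtocol", "domain", "url"} {
		if v, ok := lbSpec[k]; ok {
			info[k] = v
		}
	}
	return info
}

func main() {
	spec := map[string]string{
		"vpcID": "vpc-b5hcoxj4", "loadBalancerType": "OPEN",
		"listenerPort": "9999", "listenerProtocol": "HTTP",
		"domain": "mytest.com", "url": "/index.html",
	}
	b, _ := json.Marshal(buildLBInfo(spec, "lb-6xm34m0z"))
	fmt.Println(string(b))
}
```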
When a LoadBalancer is deleted, every backend bound under it is deregistered.
A BackendGroup describes the backends to be bound. It mainly contains:
1. Which LoadBalancer the backends should be bound to (spec.lbName)
2. Which backends need to be bound (spec.service, spec.pods, spec.static)
3. Which parameters to use when binding them (spec.parameters)
As with LoadBalancer, the contents of item 3 are entirely webhook-server-defined. The YAML below shows clb-driver's support for weights:
```yaml
apiVersion: lbcf.tkestack.io/v1beta1
kind: BackendGroup
metadata:
  name: web-svc-backend-group
  namespace: kube-system
spec:
  lbName: test-clb-load-balancer
  service:
    name: svc-test
    port:
      portNumber: 80
  parameters:
    weight: "36"
```
When the user changes the value of spec.parameters.weight, the weight of the corresponding backends in CLB changes accordingly.
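Since parameters arrive as strings, the driver has to parse and validate them itself. A sketch of that parsing, assuming the 0-100 weight range CLB uses and a hypothetical default of 10 when no weight is set (the helper is illustrative, not clb-driver's actual code):

```go
package main

import (
	"fmt"
	"strconv"
)

// parseWeight reads the driver-defined weight parameter from
// spec.parameters. Values are strings because LBCF passes parameters
// through opaquely; interpretation is entirely up to the driver.
func parseWeight(params map[string]string) (int, error) {
	raw, ok := params["weight"]
	if !ok {
		return 10, nil // assumed default when the user sets no weight
	}
	w, err := strconv.Atoi(raw)
	if err != nil || w < 0 || w > 100 {
		return 0, fmt.Errorf("invalid weight %q: must be an integer in [0, 100]", raw)
	}
	return w, nil
}

func main() {
	w, err := parseWeight(map[string]string{"weight": "36"})
	fmt.Println(w, err)
}
```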
BackendGroup currently supports 3 backend types: besides the service type used in the YAML above, there are also pods and static. With the pods type, LBCF binds Pods to the CLB directly (the data plane is left to the network layer). The following YAML is a pods-type BackendGroup supported by the clb-driver project:
```yaml
apiVersion: lbcf.tkestack.io/v1beta1
kind: BackendGroup
metadata:
  name: web-pod-backend-group
  namespace: kube-system
  labels:
    lbcf.tkestack.io/lb-name: test-clb-load-balancer
spec:
  lbName: test-clb-load-balancer
  pods:
    port:
      portNumber: 80
    byLabel:
      selector:
        app: nginx
  parameters:
    weight: "18"
```
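The byLabel.selector above selects Pods with subset semantics, the same rule Kubernetes uses for equality-based label selectors. A minimal sketch of that matching:

```go
package main

import "fmt"

// selectorMatches reports whether a Pod's labels satisfy byLabel.selector:
// every selector key/value pair must be present on the Pod. The Pod may
// carry extra labels beyond the selector (subset semantics).
func selectorMatches(selector, podLabels map[string]string) bool {
	for k, v := range selector {
		if podLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	selector := map[string]string{"app": "nginx"}
	fmt.Println(selectorMatches(selector, map[string]string{"app": "nginx", "tier": "web"})) // matches
	fmt.Println(selectorMatches(selector, map[string]string{"app": "redis"}))               // does not match
}
```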
BackendRecords are created and managed automatically by LBCF. Each one records the information of a bound backend (one backend corresponds to one BackendRecord); users should avoid manipulating these objects by hand.
A BackendRecord's Status records the backend's current state, including the backend address, whether binding has completed, and the result of each webhook invocation.
```
[root@10-0-3-16 clb-driver]# kubectl describe backendrecord -n kube-system
Name:         dea99df137c5b3d94d5e858a7c3ca778
Namespace:    kube-system
Labels:       lbcf.tkestack.io/backend-group=web-svc-backend-group
              lbcf.tkestack.io/backend-service=svc-test
              lbcf.tkestack.io/lb-driver=lbcf-clb-driver
              lbcf.tkestack.io/lb-name=test-clb-load-balancer
Annotations:  <none>
API Version:  lbcf.tkestack.io/v1beta1
Kind:         BackendRecord
Metadata:
  Creation Timestamp:  2019-06-13T13:23:05Z
  Finalizers:
    lbcf.tkestack.io/deregister-backend
  Generation:  1
  Owner References:
    API Version:           lbcf.tkestack.io/v1beta1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  BackendGroup
    Name:                  web-svc-backend-group
    UID:                   46f7f7b5-8daf-11e9-b3e1-525400d96a00
  Resource Version:  8580045
  Self Link:         /apis/lbcf.tkestack.io/v1beta1/namespaces/kube-system/backendrecords/dea99df137c5b3d94d5e858a7c3ca778
  UID:               60aee0ff-8dde-11e9-b409-525400b94ff4
Spec:
  Lb Attributes:  <nil>
  Lb Driver:      lbcf-clb-driver
  Lb Info:
    Domain:             mytest.com
    Listener Port:      9999
    Listener Protocol:  HTTP
    Load Balancer ID:   lb-7wf394rv
    URL:                /index.html
  Lb Name:  test-clb-load-balancer
  Parameters:
    Weight:  36
  Service Backend:
    Name:       svc-test
    Node Name:  10.0.3.3
    Node Port:  30200
    Port:
      Port Number:  80
      Protocol:     TCP
Status:
  Backend Addr:  {"instanceID":"ins-ddyckir3","eIP":"","port":30200}
  Conditions:
    Last Transition Time:  2019-06-13T13:23:24Z
    Message:
    Status:                True
    Type:                  Registered
  Injected Info:  <nil>
Events:
  Type    Reason                Age  From             Message
  ----    ------                ---  ----             -------
  Normal  SuccGenerateAddr      64s  lbcf-controller  addr: {"instanceID":"ins-ddyckir3","eIP":"","port":30200}
  Normal  RunningEnsureBackend  59s  lbcf-controller  msg: requestID: f5291b17-122d-406d-917a-6f48bfc8b9b4
  Normal  SuccEnsureBackend     46s  lbcf-controller  Successfully ensured backend
```
Backend Addr in Status is the address of the bound backend, returned by generateBackendAddr. In this example we bind Service NodePorts, but the cloud API only accepts an instanceID, so clb-driver queries the API to translate the node IP into an instanceID and uses that as the backend address.
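That translation can be sketched as follows. The lookup table stands in for the real cloud API query, and the field names in BackendAddr follow the Status output above; everything else is illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeInstanceID stands in for the cloud API query clb-driver performs
// to map a node's IP onto its cloud instance ID.
var nodeInstanceID = map[string]string{"10.0.3.3": "ins-ddyckir3"}

// BackendAddr follows the Backend Addr JSON shown in the Status above.
type BackendAddr struct {
	InstanceID string `json:"instanceID"`
	EIP        string `json:"eIP"`
	Port       int32  `json:"port"`
}

// generateBackendAddr converts a (nodeName, nodePort) pair into the
// instanceID-based address the cloud API requires.
func generateBackendAddr(nodeName string, nodePort int32) (string, error) {
	id, ok := nodeInstanceID[nodeName]
	if !ok {
		return "", fmt.Errorf("no instance found for node %s", nodeName)
	}
	b, err := json.Marshal(BackendAddr{InstanceID: id, Port: nodePort})
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	addr, _ := generateBackendAddr("10.0.3.3", 30200)
	fmt.Println(addr) // {"instanceID":"ins-ddyckir3","eIP":"","port":30200}
}
```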
As with LoadBalancer, clb-driver implements ensureBackend asynchronously as well, which is why the Events list shows the results of 2 ensureBackend calls.
When a LoadBalancer or BackendGroup is deleted, its BackendRecords are automatically deregistered and deleted.
Normally, deleting a BackendRecord triggers deregistration of the backend, but in some situations an operator may need to delete BackendRecords without deregistering the backends.
To force-delete BackendRecords, proceed as follows:
- Remove the finalizer lbcf.tkestack.io/deregister-backend from every BackendRecord (for example with `kubectl edit` or `kubectl patch`)
- Delete the BackendGroup or LoadBalancer
LBCF designates certain fields of each CRD as immutable, for example spec.url in LoadBalancerDriver. To force-update such a field, you must temporarily suspend K8S's validating webhook for that resource type, as follows:
- Edit LBCF's validating webhook configuration with `kubectl edit`:

```
kubectl edit ValidatingWebhookConfiguration lbcf-validate
```
- In the validating webhook configuration, find the resource you want to modify. Here that is loadbalancerdriver, so locate the following section in the yaml:

```yaml
- admissionReviewVersions:
  - v1beta1
  clientConfig:
    caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURORENDQWh3Q0NRQ0grMkVFYnFlL09UQU5CZ2txaGtpRzl3MEJBUXNGQURCY01Rc3dDUVlEVlFRR0V3SkQKVGpFTE1Ba0dBMVVFQ0F3Q1Frb3hGakFVQmdOVkJBb01EWFJsYm1ObGJuUXNJRWx1WXk0eEtEQW1CZ05WQkFNTQpIMnhpWTJZdFkyOXVkSEp2Ykd4bGNpNXJkV0psTFhONWMzUmxiUzV6ZG1Nd0hoY05NVGt3TlRFMU1EWXdNVFE1CldoY05Nakl3TXpBME1EWXdNVFE1V2pCY01Rc3dDUVlEVlFRR0V3SkRUakVMTUFrR0ExVUVDQXdDUWtveEZqQVUKQmdOVkJBb01EWFJsYm1ObGJuUXNJRWx1WXk0eEtEQW1CZ05WQkFNTUgyeGlZMll0WTI5dWRISnZiR3hsY2k1cgpkV0psTFhONWMzUmxiUzV6ZG1Nd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUURuCnJoZFVqRHJGQ2ZaVFI3QkxNOHNpcTNaSDFraGNiSmpGMnIxaWtoNUtrOERaTTRndWxQSFhyZkNZbTFPUUIwb3cKOXluSTNSRXEwY2trUVAzSGZnck1hWHhLVEtjYWs0dlBHdGlROVhWSC8wR2E4ODhhbTdQQVBvYklzS3hTc1g5UQowTi9GdlJtWXZSK2tZRUNwS2VVNWhON0l1QUZlZ3JCOHd3eDBjbzVSN085cklZU0MvVHFpSytibW1SaDRBcHlGClc2QWlvVTFJWmNsUDZYQlUxbkRrRVVPYk5LTUdDbDhsYUV0NHc3eC9uVlB4eUFYZUJpNmNpYk0zdXFETzB1MjIKMFZDUXNJRjBpTUlWWWk1eVR4NTNCMWNjS0xOeUlaYXRmOHhvRmNLdHJqN1FISlBtYWhPcnVIbjkzYlV4MzduZAptYm9EbExqclZpejhWY0Y4TklwOUFnTUJBQUV3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUJtckE2Q3IrQ1cyCldxeHZXNDVFcEx2WnByY3lVbGNGTGFBdGo0Qit0QkVCemdMb2FmWlZUd0ZlK25TOWhCRTEwUUlCZFhVNnFkT1YKKzZMT1VibTZoU0tEb1hXUThya3llZEZPQmNoWUkzZDhUOW1Kek91NlM5aFBCYk1RdkJxSE9HOW4rUnlNOUU2NQoxeEQweVYwZzRvaXo0QUFuaWF3VHZhUlZrNWNteHlzZlhLQkFRbDJPOEFLTit2VnRBR3BaYnJYVkNzR3NMWTdyCml1RHhqNjBhTnVSNjZGTjcrWXcyMWVZUDFhd2NuUkZGRHkvbStWUE9VV0pBc3lQb0gwR2QwYXBZWUxwaTQzODMKVTlHU0NrZHNNczFNOHhLM0Zhb0QrYTJFUm9Ed1A5a2REaTI3c002bXVtbE05S2JaN3dWaWxMVXNJSU41VDYxbwpEU3dYd0Nmak01OD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQ==
    service:
      name: lbcf-controller
      namespace: kube-system
      path: /validate-load-balancer-driver
      port: 443
  failurePolicy: Fail
  matchPolicy: Exact
  name: driver.lbcf.tkestack.io
  namespaceSelector: {}
  objectSelector: {}
  rules:
  - apiGroups:
    - lbcf.tkestack.io
    apiVersions:
    - v1beta1
    operations:
    - CREATE
    - UPDATE
    - DELETE
    resources:
    - loadbalancerdrivers
    scope: '*'
  sideEffects: Unknown
  timeoutSeconds: 30
```
- The yaml above contains an operations array, which defines which actions on the resource trigger LBCF's validation. We want to suspend LBCF's validation of UPDATE, so we delete UPDATE from operations. The modified yaml looks like this:
```yaml
- admissionReviewVersions:
  - v1beta1
  clientConfig:
    caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURORENDQWh3Q0NRQ0grMkVFYnFlL09UQU5CZ2txaGtpRzl3MEJBUXNGQURCY01Rc3dDUVlEVlFRR0V3SkQKVGpFTE1Ba0dBMVVFQ0F3Q1Frb3hGakFVQmdOVkJBb01EWFJsYm1ObGJuUXNJRWx1WXk0eEtEQW1CZ05WQkFNTQpIMnhpWTJZdFkyOXVkSEp2Ykd4bGNpNXJkV0psTFhONWMzUmxiUzV6ZG1Nd0hoY05NVGt3TlRFMU1EWXdNVFE1CldoY05Nakl3TXpBME1EWXdNVFE1V2pCY01Rc3dDUVlEVlFRR0V3SkRUakVMTUFrR0ExVUVDQXdDUWtveEZqQVUKQmdOVkJBb01EWFJsYm1ObGJuUXNJRWx1WXk0eEtEQW1CZ05WQkFNTUgyeGlZMll0WTI5dWRISnZiR3hsY2k1cgpkV0psTFhONWMzUmxiUzV6ZG1Nd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUURuCnJoZFVqRHJGQ2ZaVFI3QkxNOHNpcTNaSDFraGNiSmpGMnIxaWtoNUtrOERaTTRndWxQSFhyZkNZbTFPUUIwb3cKOXluSTNSRXEwY2trUVAzSGZnck1hWHhLVEtjYWs0dlBHdGlROVhWSC8wR2E4ODhhbTdQQVBvYklzS3hTc1g5UQowTi9GdlJtWXZSK2tZRUNwS2VVNWhON0l1QUZlZ3JCOHd3eDBjbzVSN085cklZU0MvVHFpSytibW1SaDRBcHlGClc2QWlvVTFJWmNsUDZYQlUxbkRrRVVPYk5LTUdDbDhsYUV0NHc3eC9uVlB4eUFYZUJpNmNpYk0zdXFETzB1MjIKMFZDUXNJRjBpTUlWWWk1eVR4NTNCMWNjS0xOeUlaYXRmOHhvRmNLdHJqN1FISlBtYWhPcnVIbjkzYlV4MzduZAptYm9EbExqclZpejhWY0Y4TklwOUFnTUJBQUV3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUJtckE2Q3IrQ1cyCldxeHZXNDVFcEx2WnByY3lVbGNGTGFBdGo0Qit0QkVCemdMb2FmWlZUd0ZlK25TOWhCRTEwUUlCZFhVNnFkT1YKKzZMT1VibTZoU0tEb1hXUThya3llZEZPQmNoWUkzZDhUOW1Kek91NlM5aFBCYk1RdkJxSE9HOW4rUnlNOUU2NQoxeEQweVYwZzRvaXo0QUFuaWF3VHZhUlZrNWNteHlzZlhLQkFRbDJPOEFLTit2VnRBR3BaYnJYVkNzR3NMWTdyCml1RHhqNjBhTnVSNjZGTjcrWXcyMWVZUDFhd2NuUkZGRHkvbStWUE9VV0pBc3lQb0gwR2QwYXBZWUxwaTQzODMKVTlHU0NrZHNNczFNOHhLM0Zhb0QrYTJFUm9Ed1A5a2REaTI3c002bXVtbE05S2JaN3dWaWxMVXNJSU41VDYxbwpEU3dYd0Nmak01OD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQ==
    service:
      name: lbcf-controller
      namespace: kube-system
      path: /validate-load-balancer-driver
      port: 443
  failurePolicy: Fail
  matchPolicy: Exact
  name: driver.lbcf.tkestack.io
  namespaceSelector: {}
  objectSelector: {}
  rules:
  - apiGroups:
    - lbcf.tkestack.io
    apiVersions:
    - v1beta1
    operations:
    - CREATE
    - DELETE
    resources:
    - loadbalancerdrivers
    scope: '*'
  sideEffects: Unknown
  timeoutSeconds: 30
```
- You can now modify the url in the LoadBalancerDriver at will. Since LBCF's validation is suspended, the operator is responsible for ensuring the url is valid.
- Once the manual modification is done, use `kubectl edit` again to add UPDATE back to operations.