# aranya

A `Kubernetes` operator for edge devices.

(This project also includes `arhat`'s source, the agent that runs on edge devices to communicate with `aranya`.)
- Deploy and manage edge devices with ease.
- Remove the boundary between `Edge` and `Cloud`.
- Integrate every device with a container runtime into your `Kubernetes` cluster.
- Help everyone share `Kubernetes` clusters with edge devices (see docs/Multi-tenancy.md).

Simplify Kubernetes.
**EXPERIMENTAL, USE AT YOUR OWN RISK**
## Features

- Pod-modeled container management on edge devices
  - Support `Pod` creation with `Env` and `Volume`
    - Sources: plain text, `Secret`, `ConfigMap`
- Remote device management with `kubectl` (both container and host)
  - `log`, `exec`, `attach`, `port-forward`
  - NOTE: for details of host management, please refer to docs/Maintenance.md (Host Management)
- `Kubernetes` cluster networking does not work for edge devices yet, see ROADMAP.md (Networking)
## Build

See docs/Build.md.
## Requirements

- A `Kubernetes` cluster with `RBAC` enabled
- Minimum cluster requirements: 1 master (must have) with 1 node (to deploy `aranya`)

## Deployment Workflow

- Deploy `aranya` to your `Kubernetes` cluster for evaluation with the following commands (see docs/Maintenance.md for more deployment tips):

  ```bash
  # set the namespace for edge devices, aranya will be deployed to this namespace
  $ export NS=edge
  # create the namespace
  $ kubectl create namespace ${NS}
  # create custom resource definitions (CRDs) used by aranya
  $ kubectl apply -f https://raw.githubusercontent.com/arhat-dev/aranya/master/cicd/k8s/crds/aranya_v1alpha1_edgedevice_crd.yaml
  # create a service account for aranya (we will bind both a cluster role and a namespace role to it)
  $ kubectl -n ${NS} create serviceaccount aranya
  # create the cluster role and namespace role for aranya
  $ kubectl apply -f https://raw.githubusercontent.com/arhat-dev/aranya/master/cicd/k8s/aranya-roles.yaml
  # configure role bindings for aranya
  $ kubectl -n ${NS} create rolebinding aranya --role=aranya --serviceaccount=${NS}:aranya
  $ kubectl create clusterrolebinding aranya --clusterrole=aranya --serviceaccount=${NS}:aranya
  # deploy aranya to your cluster
  $ kubectl -n ${NS} apply -f https://raw.githubusercontent.com/arhat-dev/aranya/master/cicd/k8s/aranya-deploy.yaml
  ```
- Create `EdgeDevice` resource objects, one for each of your edge devices (see sample-edge-devices.yaml for an example)
  - `aranya` will create a node object with the same name for every `EdgeDevice` in your cluster
- Configure the connectivity between `aranya` and your edge devices, depending on the connectivity method set in the spec (`spec.connectivity.method`):
  - `grpc`
    - A gRPC server will be created and served by `aranya` according to `spec.connectivity.grpcConfig`; `aranya` also maintains a corresponding service object for that server.
    - If you want to access the newly created gRPC service for your edge device from outside the cluster, you first need to set up `Kubernetes Ingress` with an application such as `ingress-nginx` or `traefik`, then create an `Ingress` object (see sample-ingress-traefik.yaml for an example) for the gRPC service.
    - Configure your edge device's `arhat` to connect to the gRPC server according to your `Ingress`'s host.
  - `mqtt` (WIP, see ROADMAP.md (Connectivity))
    - `aranya` will try to talk to your mqtt broker according to `spec.connectivity.mqttConfig`.
    - You need to configure your edge device's `arhat` to talk to the same mqtt broker, or to a broker in the same mqtt broker cluster, depending on your use case; the config option `messageNamespace` must match for `arhat` to be able to communicate with `aranya`.
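Putting those connectivity settings together, a minimal `EdgeDevice` object could look like the sketch below. Only `spec.connectivity.method`, `grpcConfig`, `mqttConfig`, and `messageNamespace` are named in this document; the `apiVersion` group and all other fields are assumptions, so treat sample-edge-devices.yaml as the authoritative reference:

```yaml
# Hypothetical sketch -- field names other than spec.connectivity.* are assumptions.
apiVersion: aranya.arhat.dev/v1alpha1   # group/version guessed from the CRD file name
kind: EdgeDevice
metadata:
  name: edge-device-01
  namespace: edge                       # the namespace aranya was deployed to
spec:
  connectivity:
    method: grpc                        # or mqtt (WIP)
    grpcConfig: {}                      # server options as defined by the CRD
    # when method is mqtt, mqttConfig goes here instead; remember that its
    # messageNamespace option must match the one configured for arhat
```

`aranya` would then create a node object (and, for `grpc`, a service object) for this device.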
- Deploy `arhat` with its configuration to your edge devices, start it, and wait for it to connect to `aranya`
  - You can get `arhat` by downloading the latest release or by building your own easily (see docs/Build.md).
  - For configuration references, please refer to config/arhat for configuration samples.
  - Run `/path/to/arhat -c /path/to/arhat-config.yaml`
- `aranya` will create a virtual pod with the name of the `EdgeDevice` in the same namespace; `kubectl log/exec/attach/port-forward` to the virtual pod will work on the edge device host if allowed (see the design reasons in docs/Maintenance.md (Host Management))
- Create workloads with tolerations (for the edge device taints) and use label selectors or node affinity to assign them to specific edge devices (see sample-workload.yaml for an example)
- Common node taints

  | Taint Key | Value |
  | --- | --- |
  | `arhat.dev/namespace` | Name of the namespace the edge device was deployed to |

- Common node labels

  | Label Name | Value |
  | --- | --- |
  | `arhat.dev/role` | `EdgeDevice` |
  | `arhat.dev/name` | The edge device name |
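To schedule a workload onto one of these tainted edge nodes, a pod needs a toleration matching the taint plus a node selector on the labels above. The following is a sketch under assumptions: the taint effect (`NoSchedule`) and the container details are illustrative, so prefer sample-workload.yaml for the real shape:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: edge-workload
  namespace: edge
spec:
  nodeSelector:
    arhat.dev/role: EdgeDevice       # only edge device nodes
    arhat.dev/name: edge-device-01   # pin to one specific device
  tolerations:
    - key: arhat.dev/namespace       # taint key from the table above
      operator: Equal
      value: edge                    # namespace the device was deployed to
      effect: NoSchedule             # assumed effect; check the actual taint on your node
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
```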
## Performance

Every `EdgeDevice` object needs a `kubelet` server set up to serve `kubectl` commands that execute into its pods, so we need to provision node certificates for each `EdgeDevice`'s virtual node in the cluster, which takes a lot of time for a large-scale deployment. The performance test below was run on my own `Kubernetes` cluster (described in my homelab) after all the required node certificates had been provisioned.
- Test workload
  - 1000 `EdgeDevice`s using `gRPC` (generated with `./scripts/gen-deploy-script.sh 1000`)
    - each requires a `gRPC` server and a `kubelet` server
    - each requires a `Node` and a `Service` object
- Results
  - Deployment speed: ~5 devices/s
  - Memory usage: ~280 MB
  - CPU usage: ~3 GHz
  - Delete speed: ~6 devices/s

However, after 1000 devices and node objects were deployed and serving, my cluster shut me out because the `kube-apiserver` was unable to handle any more requests; still, that's a fairly good result for my 4-core virtual machine serving both `etcd` and `kube-apiserver`.
## Roadmap

See ROADMAP.md.
## FAQ

- Why not `k3s`?
  - `k3s` is really awesome for certain kinds of edge devices, but a lot of work is still required to serve all kinds of edge devices well. One of the most significant problems is networks split by NAT or firewalls (such as a homelab), and we don't think problems like that should be resolved in the `k3s` project, as that would totally change the way `k3s` works.
- Why not use `virtual-kubelet`?
  - `virtual-kubelet` is designed for cloud providers such as `Azure`, `GCP`, and `AWS` to run containers at the network edge. However, most edge device users aren't able to, or don't want to, set up that kind of network infrastructure.
  - A `virtual-kubelet` is deployed as a pod on behalf of a container runtime; if this model were applied to edge devices, a large-scale edge device cluster would claim a lot of pod resources, which in turn would require a lot of nodes to serve. That's inefficient.
- Why `arhat` and `aranya` (why not `kubelet`)?
  - `kubelet` depends heavily on HTTP, which may not be a good fit for edge devices with poor networks.
  - `aranya` is the watcher part of `kubelet`: much of `kubelet`'s work, such as fetching and updating cluster resources, is done by `aranya`. `aranya` resolves everything in the cluster for `arhat` before any command is delivered to `arhat`.
  - `arhat` is the worker part of `kubelet`: it's an event-driven agent that only tends to command execution.
  - Thanks to the design decisions above, we only need to transfer essential messages between `aranya` and `arhat`, such as pod creation commands, container status updates, and node status updates, keeping the edge device's data usage for management as low as possible.
## Special Thanks

- `Kubernetes`
  - Really eased my life with my homelab.
- `virtual-kubelet`
  - This project was inspired by its idea, which introduced a cloud agent to run containers at the network edge.
- Buddhism
  - The origin of the names `aranya` and `arhat`.

## Authors

- Jeffrey Stoke
  - I'm seeking career opportunities (associate to junior level) in Deutschland.
## License

Copyright 2019 The arhat.dev Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.