```bash
$ helm repo add es-operator https://raw.githubusercontent.com/upmc-enterprises/elasticsearch-operator/master/charts/
$ helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com
$ helm repo update
$ helm install --name elasticsearch-operator es-operator/elasticsearch-operator --set rbac.enabled=true
$ helm install --name elasticsearch es-operator/elasticsearch --set kibana.enabled=true --set cerebro.enabled=true
$ kubectl port-forward svc/cerebro-elasticsearch-cluster 9001:80
$ kubectl port-forward svc/kibana-elasticsearch-cluster 5601:80
```
Create a namespace for logging:

```bash
kubectl create ns logging-system
```

You can install the `Logging` resource via a Helm chart with built-in TLS generation.

Create the `Logging` resource:
```bash
cat <<EOF | kubectl apply -f -
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging-simple
spec:
  fluentd: {}
  fluentbit: {}
  controlNamespace: logging-system
EOF
```
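The empty `fluentd: {}` and `fluentbit: {}` blocks accept the operator's defaults. If those defaults are too small for your log volume, the same manifest can tune the Fluentd deployment. A hedged sketch, assuming your operator version's `FluentdSpec` accepts standard Kubernetes `resources` requirements (the sizes below are illustrative, not recommendations):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging-simple
spec:
  fluentd:
    # Illustrative requests/limits; tune for your actual log volume
    resources:
      requests:
        cpu: 500m
        memory: 512Mi
      limits:
        cpu: "1"
        memory: 1Gi
  fluentbit: {}
  controlNamespace: logging-system
```

Check the CRD shipped with your operator version before relying on any optional field.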
Note: `ClusterOutput` and `ClusterFlow` resources are only accepted in the `controlNamespace`.
Create an Elasticsearch output definition:
```bash
cat <<EOF | kubectl apply -f -
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: es-output
  namespace: logging-system
spec:
  elasticsearch:
    host: elasticsearch-elasticsearch-cluster.default.svc.cluster.local
    port: 9200
    scheme: https
    ssl_verify: false
    ssl_version: TLSv1_2
    buffer:
      path: /tmp/buffer
      timekey: 1m
      timekey_wait: 30s
      timekey_use_utc: true
EOF
```
Note: For production setups we recommend using a longer `timekey` interval to avoid generating too many objects.
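As a sketch of that recommendation, a production-leaning buffer section might raise `timekey` to ten minutes (the values below are illustrative, not benchmarks):

```yaml
buffer:
  path: /tmp/buffer
  timekey: 10m     # flush a chunk every 10 minutes instead of every minute
  timekey_wait: 1m # wait for late-arriving events before flushing
  timekey_use_utc: true
```

A larger `timekey` means fewer, bigger flushes, so Elasticsearch receives fewer objects per hour at the cost of slightly delayed log visibility.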
The following snippet uses the `tag_normaliser` filter to re-tag logs and then pushes them to Elasticsearch.
```bash
cat <<EOF | kubectl apply -f -
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: es-flow
  namespace: logging-system
spec:
  filters:
    - tag_normaliser: {}
  selectors: {}
  outputRefs:
    - es-output
EOF
```
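The empty `selectors: {}` above matches every pod in the cluster. To limit a flow to specific workloads, pod labels can be used as selectors; a hedged sketch assuming pods labeled `app: nginx` (the label key and value are illustrative):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: nginx-flow
  namespace: logging-system
spec:
  filters:
    - tag_normaliser: {}
  # Only logs from pods carrying this label reach the output
  selectors:
    app: nginx
  outputRefs:
    - es-output
```

Verify the selector syntax against the CRD of the logging-operator version you run, as the flow-matching API has changed between releases.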