This is a complete example of a microservices-based architecture, available for study. The source code was forked and adapted from the original course repository.
For more details about the application, please see this link.
In this forked version, you will find the following additions. I created the deployment code covering:
- Deployment to a local Kubernetes instance (Minikube), using Helm charts.
- Installation of Istio as a service mesh solution.
- Using Lens for cluster management.
- Using an Ingress controller to expose the application outside the Kubernetes cluster.
The following tools are available using this deployment code:
- Elasticsearch and Kibana: Kibana is a data visualization and exploration tool used for log and time-series analytics and application monitoring. It uses Elasticsearch as its search engine.
- Health checks implemented in each microservice using the ASP.NET Core health checks feature.
- Kiali: an observability console for Istio with service mesh configuration and validation capabilities. It helps you understand the structure and health of your service mesh by monitoring traffic flow to infer the topology and report errors.
- Jaeger: open-source software for tracing transactions between distributed services. It is used for monitoring and troubleshooting complex microservices environments.
- Prometheus and Grafana: Prometheus is a free, open-source event monitoring tool for containers and microservices. Grafana is multi-platform visualization software, available since 2014.
- HPA: the Horizontal Pod Autoscaler automatically scales the number of Pods based on observed CPU utilization or on other application-provided metrics.
- KEDA: (to be done)
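As an illustration of how the ASP.NET Core health checks above are typically consumed in Kubernetes, a deployment template can point liveness and readiness probes at the health endpoint. This is a sketch only: the /hc path and port 80 are assumptions, not values taken from this repository's charts.

```yaml
# Illustrative probe configuration for a microservice deployment.
# The endpoint path (/hc) and port (80) are assumptions; check each
# chart's deployment template for the actual values.
livenessProbe:
  httpGet:
    path: /hc
    port: 80
  initialDelaySeconds: 30
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /hc
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 10
```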
In order to run the application on the local machine, follow the original repository documentation.
Minikube is local Kubernetes, focusing on making it easy to learn and develop for Kubernetes. Follow the installation documentation below:
After the installation finishes successfully, you should see the Minikube pods running like this:
A container registry is a repository, or collection of repositories, used to store container images for Kubernetes, DevOps, and container-based application development.
I decided to create my own local container registry, but you can use whatever you want to host your container images.
I followed this documentation to create my Registry using Minikube.
Once you have the addon enabled, you should be able to connect to it. When enabled, the registry addon exposes its port 5000 on the minikube’s virtual machine.
On your local machine, you should now be able to reach the Minikube registry by doing a port forward:
kubectl port-forward --namespace kube-system service/registry 5000:80
and run the curl below:
curl http://localhost:5000/v2/_catalog
You will need to install Helm locally to be able to run the deployment script available in this repo.
Please see the official documentation here.
Lens is a Kubernetes IDE and open source project. Available for Linux, Mac, and Windows, Lens gives you a powerful interface and toolkit for managing, visualizing, and interacting with multiple Kubernetes clusters.
It will make your life easier, but you will also start forgetting all the kubectl commands you used to use.
The installation link is here.
Assuming that you have already built your container images, you should now push them to the registry. To do that (for example), first port-forward the registry service:
kubectl port-forward --namespace kube-system service/registry 5000:80
Tag the image:
docker tag ocelotapigw localhost:5000/ocelotapigw
Push to the registry:
docker push localhost:5000/ocelotapigw
Verify that the images are available in the registry:
After that, you are ready to start the application deployment into the Kubernetes cluster. The next step is to run the script below, which will create the pods, services, and other Kubernetes resources needed to run the application.
You will need PowerShell installed. If you are using Linux like me, take a look at this link.
Go to the folder /run-aspnetcore-microservices/deployment/k8s/helm and run:
pwsh
Run the script below:
./deploy-all.ps1
You should see the pods running after a few seconds:
At this point, the application should be available. You can access using one of the following options:
Through the node port 8089 configured in the file deployment/k8s/helm/aspnetrunbasics/values.yaml.
You can port-forward to the web application service exposed on port 8089:
kubectl port-forward --namespace default service/aspnetrun-aspnetrunbasics [YOUR_LOCAL_PORT]:8089
If you are using Lens, go to Pods, click the aspnetrunbasics pod, and click the Ports link:
You can access using the [cluster IP]:[service port] exposed by the web application service. To identify the cluster IP, you can use:
minikube ip
or kubectl cluster-info
In my case, my cluster IP is 192.168.49.2:
To identify the web application service port, you can use:
kubectl get svc | grep aspnetrunbasics
In my case, my service port is 31293:
And, using the browser:
You can follow the same options (1 and 2) explained above, but accessing the webstatus pod. Option (3) is not available, because this pod is not exposed outside the cluster.
The microservices APIs are only available within the cluster. You can also use port-forward or access via Lens (/swagger/).
- catalog
- basket
- discount
- ordering
Kibana is only accessible within the cluster. You can also use port-forward or access via Lens. On first access, you will need to configure an Elasticsearch index to be able to see the application logs. That configuration is beyond the scope of this documentation, but all the microservices are configured to send logs to the Elasticsearch container also running in the cluster.
Istio manages traffic flows between services, enforces access policies, and aggregates telemetry data, all without requiring changes to application code.
The configuration files below will generate the resources (pods, services, service accounts, CRD, etc) needed to install Istio on your Minikube cluster:
kubectl apply -f 1-istio-init.yaml
kubectl apply -f 2-istio-minikube.yaml
kubectl apply -f 3-kiali-secret.yaml
It will also install Kiali, Prometheus, Grafana and Jaeger.
After you run it, you should see the containers running in the istio-system namespace:
kubectl get po -n istio-system
Once you have Istio installed, you can enable it. To do that, we will add a label to the namespace used by the application (default in this case):
kubectl label namespace default istio-injection=enabled
To confirm the label creation:
kubectl describe ns default
This label is used to determine whether Istio should be injected into the desired containers. By default, all the containers will use Istio, except the ones explicitly configured not to. This configuration is done through the deployment.yaml. For example, we will not inject Istio into the MongoDB container, as shown below:
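A sketch of what such an exclusion typically looks like in a deployment template follows; the sidecar.istio.io/inject annotation is the standard Istio mechanism, while the surrounding fields are illustrative, not copied from this repository:

```yaml
# Illustrative fragment of a deployment.yaml: setting this pod
# annotation to "false" prevents Istio from injecting a sidecar,
# even when the namespace has istio-injection=enabled.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb        # illustrative name
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
```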
In order to inject Istio in the application, you need to redeploy it.
Go to the folder /run-aspnetcore-microservices/deployment/k8s/helm and run the Powershell script:
./deploy-all.ps1
Now you should see the sidecar containers injected into some of the pods:
These tools are only accessible within the cluster. You can either use port-forward or access via Lens:
This tool is running on [cluster IP], port 31001:
So far we have exposed the web application using a node port defined in the aspnetrunbasics Helm chart (values.yaml). With this configuration, Kubernetes allocates a specific port on each node to the web application service, and any request to your cluster on that port is forwarded to the service.
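The NodePort setup described above typically corresponds to a service section like the one below in the chart's values.yaml. This is a sketch: the key names are assumptions, and the ports are taken from the numbers mentioned elsewhere in this guide, not copied from the repository.

```yaml
# Illustrative values.yaml fragment (key names assumed):
# a NodePort service exposes the web application on every node.
service:
  type: NodePort
  port: 8089        # service port referenced in this guide
# Kubernetes allocates the external node port from the default
# 30000-32767 range (31293 in the example earlier in this guide).
```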
It works, but there is a more decoupled and better way to implement it. See the documentation here.
Run the following command on Minikube:
minikube addons enable ingress
You should see the Ingress controller pods running:
Go to the folder: deployment/k8s/ingress
and run the command:
kubectl apply -f ingress.yaml
Now you should see the Ingress created (*):
kubectl get ingress -n default
Note that I am using my-minikube as the host name. You should configure your /etc/hosts file to resolve this name to your Minikube cluster IP, e.g.:
192.168.49.2 my-minikube
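The applied ingress.yaml is not reproduced here, but an Ingress for this setup typically looks like the sketch below. The backend service name and port are assumptions based on the chart names and ports used elsewhere in this guide:

```yaml
# Illustrative Ingress: routes requests for my-minikube to the
# web application service. Service name and port are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: aspnetrun-ingress
  namespace: default
spec:
  rules:
    - host: my-minikube
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: aspnetrun-aspnetrunbasics
                port:
                  number: 8089
```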
In my case, now I can access the web application using:
The official Ingress documentation is here.
(*) If you receive this message when creating the Ingress: Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io", run the following workaround (for study purposes only):
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
HPA (Horizontal Pod Autoscaler) automatically scales the number of Pods based on observed CPU utilization or on some other application-provided metrics.
In this demo, the Helm charts already implement HPA for some of the microservices. For example, for the Basket microservice, take a look at the Helm files:
- /deployment/k8s/helm/basket/values.yaml
The parameters defined above are used in the file /deployment/k8s/helm/basket/templates/hpa.yaml. You should change the parameters according to your context (CPU/memory).
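For illustration, an autoscaling section in values.yaml usually looks like the sketch below. The exact key names and thresholds here are assumptions to show the shape; check the repository's own values.yaml and hpa.yaml for the real ones.

```yaml
# Illustrative autoscaling section consumed by an hpa.yaml template;
# key names and thresholds are assumptions, adjust for your context.
autoscaling:
  enabled: true
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70
  targetMemoryUtilizationPercentage: 80
```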
If you take a look at the HPA section of Lens, you will see the autoscalers configured:
You can add load to the application to test the HPA configuration using the application in the folder /deployment/hpa-load-test:
kubectl apply -f stress-basket.yaml
To increase the number of requests, create more instances of this container:
kubectl scale deployment/stress-basket --replicas 5
More information about HPA can be found here.
- Minikube: 1.19.0
- Helm: 3.7.0
- Istio: 1.10.3