The master node in Kubernetes is responsible for managing and coordinating the cluster's resources, scheduling applications, and monitoring the overall health of the cluster. Here is an overview of the components that make up the master node (the control plane):
- API Server: The API server provides the interface for managing the Kubernetes cluster. It exposes the Kubernetes API, which can be used by users and applications to interact with the cluster.
- etcd: etcd is a distributed key-value store that is used to store the configuration data for the Kubernetes cluster.
- Controller Manager: The controller manager is responsible for managing and controlling the state of the Kubernetes cluster. It watches the state of the cluster stored in etcd and takes action to ensure that the desired state of the cluster matches the actual state of the cluster.
- Scheduler: The scheduler is responsible for scheduling the application workloads to the worker nodes based on the available resources and application requirements. When a new pod is created, the scheduler selects a suitable worker node based on resource requirements, node affinity, and other policies.
- Node Controller: The node controller is responsible for monitoring the state of the worker nodes. It detects when a node becomes unavailable and takes action to ensure that the pods running on the node are rescheduled to other nodes.
- Cloud Controller Manager: The cloud controller manager is a collection of controllers that interact with the cloud provider's APIs to manage the underlying infrastructure.
Overall, the master node in Kubernetes performs a wide range of tasks to ensure that the cluster is running smoothly, and the applications are deployed and managed efficiently.
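On a kubeadm-provisioned cluster (an assumption; managed clusters often hide the control plane from you), most of these components run as pods in the kube-system namespace, so you can inspect them directly:
kubectl get pods -n kube-system          # kube-apiserver, etcd, controller-manager, scheduler, ...
kubectl get componentstatuses            # deprecated but still handy for a quick health check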
A node in Kubernetes is a worker machine that runs containerized applications. Each node is managed by the control plane and can run multiple containers on a container runtime, such as Docker, which is installed on each node. The control plane continuously monitors the health of each node and can reschedule containers from a failed node to a healthy one.
The process of a node in Kubernetes involves:
- The Kubelet, which runs on each node, communicates with the control plane to receive the PodSpecs and ensures the containers described in the PodSpecs are running and healthy.
- Each node runs a container runtime, such as Docker, to manage the containers.
- The kube-proxy runs on each node and is responsible for managing network communication between Pods and services within the cluster.
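You can see the nodes and what each one reports with a quick illustration like this:
kubectl get nodes -o wide                # node status, internal IP, OS image, container runtime
kubectl describe node <node-name>        # capacity, conditions, and the pods scheduled on it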
A Kubernetes configuration file, also known as a manifest file, is a YAML or JSON file that defines the desired state of a Kubernetes object. The configuration file specifies the object's properties, such as the container image to use, the number of replicas to create, and the desired state of the object.
Here are the major parts of a Kubernetes configuration file:
- `apiVersion`: Specifies the version of the Kubernetes API that the object uses. For example, `apiVersion: v1` is used for most core Kubernetes objects, while `apiVersion: apps/v1` is used for higher-level objects such as Deployments and StatefulSets.
- `kind`: Specifies the type of Kubernetes object being defined. For example, `kind: Pod` defines a Pod object, while `kind: Deployment` defines a Deployment object.
- `metadata`: Specifies metadata for the object, such as the object's name and labels. Labels are used to group related objects together and can be used to filter and select objects.
- `spec`: Specifies the desired state of the object, such as the number of replicas, container image, and resource requirements. The `spec` section is specific to each object type and contains different properties depending on the object.
- `status`: Specifies the current status of the object. The `status` section is generated by Kubernetes and is read-only, so it is not typically written in the configuration file. Kubernetes' self-healing works by comparing the desired state in `spec` with the current state recorded in `status` (persisted in etcd); whenever the two differ, the controllers act to bring the actual state back in line with the desired state.
Here's an example of a Kubernetes configuration file for a Deployment object with annotations and comments:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment        # name of the Deployment object
  labels:
    app: my-app              # label to identify the Deployment
  annotations:
    owner: john              # annotation to provide additional information
spec:
  replicas: 3                # number of replicas to create
  selector:
    matchLabels:
      app: my-app            # label to match the Pods created by the Deployment
  template:
    metadata:
      labels:
        app: my-app          # label for the Pod template
    spec:
      containers:
        - name: my-container       # name of the container
          image: my-image:latest   # container image to use
          ports:
            - containerPort: 8080  # port to expose on the container
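Assuming the manifest above is saved as my-deployment.yaml (an illustrative filename), it can be applied and inspected like this:
kubectl apply -f my-deployment.yaml
kubectl get deployment my-deployment
kubectl describe deployment my-deployment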
Communication between containers in a pod in Kubernetes goes through the pod's local network stack, as if the containers were running on the same host. This means that containers in the same pod can communicate with each other using `localhost` or the pod's IP address.
Kubernetes creates a virtual network interface for each pod, which is shared by all the containers in the pod. All containers in a pod share the same network namespace and see the same virtual network interface, so they can communicate with each other using standard networking protocols like TCP/IP.
Here's an example of how containers in the same pod can communicate with each other using `localhost`:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: container-1
      image: my-image:latest
      ports:
        - containerPort: 8080
    - name: container-2
      image: my-image:latest
      ports:
        - containerPort: 8081
In this example, the pod has two containers, each running on a separate port. The containers can communicate with each other using `localhost` and the appropriate port number. For example, `container-1` can communicate with `container-2` using `localhost:8081`.
Note that communication between containers in different pods is done through the Kubernetes service discovery mechanism, which allows pods to discover and communicate with each other using service names and DNS.
In a Kubernetes cluster, pods can communicate with each other through the network. Kubernetes provides a virtual network called a "pod network," which allows pods to communicate with each other as if they were on the same host, even if they are running on different nodes in the cluster.
There are different ways that pods can communicate with each other in a Kubernetes cluster:
- Pod-to-pod communication: Kubernetes assigns each pod a unique IP address, and pods can communicate with other pods directly over the network using those IP addresses, regardless of which node they run on.
- Service-to-pod communication: Kubernetes Services provide a stable IP address and DNS name for a set of pods, allowing them to communicate with each other using the Service's IP address or DNS name. When a pod sends a request to a Service, the request is load-balanced across all the pods that are part of the Service.
- Pod-to-Service communication: Pods can communicate with a Service by using its DNS name or IP address. When a pod sends a request to a Service, Kubernetes uses the Service's IP address to load-balance the request across all the pods that are part of the Service.
In addition to these methods, Kubernetes also provides other networking features that enable secure and efficient communication between pods, such as network policies and ingress controllers.
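As an illustration of the Service-based patterns above, here is a minimal ClusterIP Service sketch that load-balances across Pods labelled `app: my-app` (all names and ports are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app          # forward traffic to Pods carrying this label
  ports:
    - protocol: TCP
      port: 80           # port exposed by the Service
      targetPort: 8080   # port the container listens on
Other pods in the same namespace can then reach these backends simply at `my-service:80`.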
- Namespace:
In Kubernetes, a namespace is a logical boundary that separates and isolates different objects and resources within a cluster.
Namespaces help to avoid naming conflicts and ensure that resources are properly isolated, making it easier to manage and secure the cluster. For example, a team might create a separate namespace for each project or environment, such as development, staging, or production.
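A namespace itself can be created declaratively from a YAML file; a minimal sketch (the name `my-namespace` is illustrative):
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace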
You can also specify the namespace for other resources using the `metadata.namespace` field in their YAML definition. For example:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: my-namespace
spec:
  containers:
    - name: nginx
      image: nginx:latest
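The same namespace can also be created and queried imperatively; for example:
kubectl create namespace my-namespace    # imperative alternative to the YAML above
kubectl get pods -n my-namespace         # list only the pods in that namespace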
- Labels:
In Kubernetes, labels are key-value pairs that you can attach to objects such as pods, services, and deployments. Labels provide a way to organize and select objects based on their characteristics, such as their role, version, or environment.
Here's an example of a Pod with labels:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app
    tier: frontend
spec:
  containers:
    - name: nginx
      image: nginx:latest
In this example, the Pod has two labels, `app: my-app` and `tier: frontend`. These labels can be used to select and manipulate the Pod in various ways. For example, you could use the `kubectl label` command to add a new label to the Pod:
kubectl label pod my-pod version=1.0
Labels are also commonly used to select objects for deployment or service creation. For example, you could use labels to select all Pods with a certain label and deploy a new version of the application:
kubectl apply -f my-app-v2.yaml -l app=my-app
This would apply the new version of the application defined in my-app-v2.yaml, restricted to the resources that carry the `app: my-app` label.
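Labels are also the basis for querying objects; for example, to list only the Pods carrying the labels used above:
kubectl get pods -l app=my-app,tier=frontend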
- ReplicaSet
In Kubernetes, a ReplicaSet is an object that is used to ensure that a specified number of replicas (identical copies) of a Pod are running at all times.
A ReplicaSet is responsible for managing and maintaining the desired number of replicas of a Pod. It monitors the state of each replica and if any of the replicas go down, it automatically replaces it with a new one to ensure that the desired number of replicas is always maintained.
ReplicaSets can be defined using a declarative YAML file or a JSON file, which specifies the desired state of the replicas. The YAML or JSON file includes details such as the number of replicas, the Pod template to use, and any other required settings.
Here's an example of a ReplicaSet definition in Kubernetes YAML code:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-container
          image: my-image
          ports:
            - containerPort: 80
This YAML code creates a ReplicaSet named `my-replicaset` with a desired state of three replicas. The `selector` field specifies that the ReplicaSet should manage all Pods with the `app=my-app` label. The `template` field specifies the Pod template that should be used for each replica managed by the ReplicaSet. In this example, the template contains a single container named `my-container` that uses the `my-image` image and exposes port 80.
When this YAML code is applied to a Kubernetes cluster, the ReplicaSet will create and manage three replicas of the specified Pod template. If any of the replicas go down, the ReplicaSet will automatically replace them to maintain the desired state of three replicas.
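You can watch this replacement behaviour directly; a quick sketch (assuming the manifest above is saved as my-replicaset.yaml, and <pod-name> is one of the pod names from the listing):
kubectl apply -f my-replicaset.yaml
kubectl get pods -l app=my-app           # three replicas running
kubectl delete pod <pod-name>            # simulate a failure
kubectl get pods -l app=my-app           # the ReplicaSet has already started a replacement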
- Deployment
In Kubernetes, a Deployment is a resource object that provides declarative updates for Pods and ReplicaSets. It manages the creation, scaling, and updating of a set of identical Pods, ensuring that the desired number of replicas is always available and that they are replaced or updated when necessary.
A Deployment is defined using a YAML or JSON configuration file, which specifies the desired state of the deployment. This includes the number of replicas to create, the container images to use, and any other relevant information about the deployment.
When a Deployment is created, it creates a ReplicaSet, which is responsible for managing the Pods. The ReplicaSet ensures that the desired number of replicas are running and replaces them if they fail or are terminated. The Deployment then monitors the ReplicaSet and makes updates to it as necessary, such as scaling it up or down, rolling out a new version of the application, or rolling back to a previous version.
One of the key benefits of using a Deployment is that it allows for seamless updates and rollbacks. When updating an application, a new ReplicaSet is created with the updated configuration, and the old ReplicaSet is gradually scaled down as the new one is scaled up. If there are any issues with the new configuration, the Deployment can quickly roll back to the previous version. Here is an example YAML configuration file for a basic Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-container
          image: my-image:latest
          ports:
            - containerPort: 80
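Rollouts and rollbacks for this Deployment can then be driven with `kubectl rollout`; a sketch (the `my-image:v2` tag is an illustrative assumption):
kubectl set image deployment/my-deployment my-container=my-image:v2
kubectl rollout status deployment/my-deployment   # watch the rolling update progress
kubectl rollout undo deployment/my-deployment     # roll back if the new version misbehaves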
There are two main update strategies available in Kubernetes: RollingUpdate and Recreate.
- RollingUpdate strategy: This strategy is the default for Deployments. When a new version of the container image or configuration is deployed, the RollingUpdate strategy updates the replicas gradually, one at a time. This ensures that there is always a certain number of available replicas during the update process, reducing the risk of downtime. The RollingUpdate strategy also allows for a configurable percentage of maxUnavailable and maxSurge pods, which control how many pods can be unavailable during the update process and how many additional pods can be created during the update process.
- Recreate strategy: This strategy updates all the replicas at once, creating new replicas and terminating the old ones simultaneously. This can result in downtime during the update process, but it can be faster and simpler than the RollingUpdate strategy, especially when there are only a few replicas.
Slowing down rollouts in Kubernetes is a good practice to ensure the health and stability of your service. Here are some tips for slowing down rollouts:
- Configure your deployment strategy: Kubernetes offers various deployment strategies, such as rolling updates and blue-green deployments, that allow you to control how fast and how safely your updates are rolled out. Choose a deployment strategy that fits your needs and configure it accordingly.
- Use readiness and liveness probes: Kubernetes provides readiness and liveness probes that allow you to define health checks for your containers. These probes can be used to ensure that your service is healthy before the next set of containers is rolled out. By configuring these probes, you can avoid rolling out updates to containers that are not ready to serve traffic (see the probe sketch below this list).
- Set maxUnavailable and maxSurge values: When configuring your deployment, you can set the maxUnavailable and maxSurge values to control the number of pods that are unavailable and the number of new pods that are created during a rollout. By setting these values appropriately, you can control the rate at which the new containers are rolled out.
- Monitor your service: Set up monitoring for your service to detect any issues that might arise during a rollout. Use tools like Prometheus or Grafana to monitor the health and performance of your service. This can help you catch any issues early and prevent them from affecting your users.
- Use canary deployments: Canary deployments allow you to roll out updates to a small percentage of users before rolling out to everyone. This allows you to test your updates in a safe environment before rolling out to everyone. If any issues are detected during the canary deployment, you can roll back the update before it affects everyone.
By following these best practices, you can ensure that your Kubernetes deployments are safe and stable, and that your users are not affected by any issues that might arise during a rollout.
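As a concrete illustration of the probes tip above, here is a minimal Pod spec with HTTP readiness and liveness probes (the `/healthz` path, port, and timings are illustrative assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: probed-pod
spec:
  containers:
    - name: my-container
      image: my-image:latest
      readinessProbe:              # gate traffic until the app answers
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
      livenessProbe:               # restart the container if it stops answering
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 15
        periodSeconds: 20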
https://www.youtube.com/watch?v=T4Z7visMM4E&ab_channel=TechWorldwithNana
https://www.youtube.com/watch?v=EF6c7MkOhkw&ab_channel=PavanElthepu
CNI (Container Network Interface) plugins are used in Kubernetes to enable communication between containers running on different nodes within the cluster. CNI plugins provide a standardized interface for network providers to integrate their network solutions with Kubernetes. To use a CNI plugin in Kubernetes, you need to install the plugin binary on each node in the cluster and configure Kubernetes to use it. This is done by setting the `--network-plugin` flag to `cni` when starting the kubelet, and setting the `--cni-conf-dir` and `--cni-bin-dir` flags to specify the location of the CNI configuration and binary files.
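CNI configuration files are JSON documents placed in that configuration directory. A minimal sketch using the standard `bridge` and `host-local` reference plugins (the network name, bridge name, and subnet are illustrative):
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16"
  }
}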
Kindnet is the default CNI (Container Network Interface) plugin used by KIND (Kubernetes IN Docker) to provide networking for the containers running in a Kubernetes cluster. Here is how kindnet works:
- Kindnet creates a virtual network interface on each node in the cluster. This interface is called "kindnet0" and is used to provide connectivity between the containers on the node.
- Each container in the cluster is assigned a unique IP address from a private IP address range (10.244.0.0/16 by default) by the kindnet plugin.
- Kindnet creates a virtual bridge device called "kind" on each node in the cluster. This bridge device is used to connect the "kindnet0" interface on each node to the other nodes in the cluster.
- Alongside kindnet, a "kube-proxy" pod runs in the "kube-system" namespace on each node. This pod is responsible for configuring the iptables rules on each node in the cluster to enable inter-pod communication.
- When a pod is created in the cluster, Kubernetes assigns it an IP address from the "kindnet" network range. Kindnet then configures the network interface of the containers in the pod and sets up the necessary routes and iptables rules to enable communication between the containers within the pod and with other pods in the cluster.
In summary, kindnet creates a virtual network that connects the containers running in the Kubernetes cluster and provides network connectivity for the containers to communicate with each other. It also ensures that the networking configuration is consistent across all nodes in the cluster.
In Kubernetes, a NetworkPolicy is a specification that allows you to define rules for how pods can communicate with each other and with other network endpoints. There are several types of NetworkPolicy resources available in Kubernetes:
- `PodSelector`: This is the most basic type of NetworkPolicy. It allows you to define rules based on the labels assigned to pods. You can specify which pods are allowed to communicate with each other based on their labels.
- `NamespaceSelector`: This type of NetworkPolicy allows you to define rules based on the namespace in which the pods are located. You can specify which pods in a particular namespace are allowed to communicate with each other.
- `Ingress`: This type of NetworkPolicy allows you to define rules for incoming traffic to a pod. You can specify which sources are allowed to connect to the pod, and what ports they are allowed to use.
- `Egress`: This type of NetworkPolicy allows you to define rules for outgoing traffic from a pod. You can specify which destinations the pod is allowed to connect to, and what ports it is allowed to use.
- `Peer`: This type of NetworkPolicy allows you to define a specific pod as a peer to another pod. Peers can communicate with each other regardless of their label selectors or namespaces.
It's important to note that not all Kubernetes network plugins support all types of NetworkPolicy resources, so you should check the documentation for your particular network plugin to see which types are supported.
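To make this concrete, here is a sketch of an ingress policy that combines a pod selector with a port rule (the labels and port are illustrative assumptions):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: my-app          # policy applies to these pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: frontend   # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080           # and only on this port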
In Kubernetes, DNS (Domain Name System) is used to provide service discovery between different components within a cluster. Each pod in a Kubernetes cluster is assigned a unique IP address, but these addresses may change frequently due to scaling, upgrades, or other changes in the cluster. Using DNS names instead of IP addresses allows for more flexible communication between different components within the cluster.
Kubernetes has a built-in DNS service that provides name resolution for all resources within the cluster. This service is called CoreDNS and is deployed as a Kubernetes deployment. CoreDNS can be used to resolve service names, pod names, and other resources within the cluster.
To use DNS in Kubernetes, pods can simply use the DNS name of the service they want to communicate with instead of the IP address. For example, if a pod wants to communicate with a service named "my-service" in the same namespace, it can simply use the DNS name "my-service" to access it.
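Service names also have fully qualified forms following the pattern `<service>.<namespace>.svc.<cluster-domain>`; a quick sketch from inside a pod (assuming the default `cluster.local` domain and a namespace named `my-namespace`):
nslookup my-service                                     # short name resolves within the pod's own namespace
nslookup my-service.my-namespace.svc.cluster.local      # fully qualified name works from any namespace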
Additionally, Kubernetes allows for custom DNS configurations to be used in the cluster. This can be useful in scenarios where a specific DNS server or configuration is required for certain applications or services within the cluster. Custom DNS configurations can be specified in the Kubernetes configuration file or through the use of annotations on specific resources within the cluster.
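One way to attach a custom DNS configuration is per pod, via the `dnsPolicy` and `dnsConfig` fields; a minimal sketch (the resolver address is an illustrative assumption):
apiVersion: v1
kind: Pod
metadata:
  name: custom-dns-pod
spec:
  dnsPolicy: "None"            # ignore the cluster DNS settings entirely
  dnsConfig:
    nameservers:
      - 10.0.0.10              # illustrative custom resolver
    searches:
      - my-namespace.svc.cluster.local
  containers:
    - name: app
      image: nginx:latest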
A DNS resolver is a software component that is responsible for resolving domain names to IP addresses. In the context of Kubernetes, a DNS resolver is used by pods to resolve the DNS names of other pods or services within the cluster.
When a pod sends a request to another pod or service within the cluster, it typically uses the DNS name of the destination rather than the IP address. The DNS resolver is responsible for translating this DNS name to the appropriate IP address so that the pod can establish a connection with the destination.
https://www.youtube.com/watch?v=eth7osiCryc&ab_channel=SrinathChalla
https://www.youtube.com/watch?v=gPAP6JtBSR4&ab_channel=SudheerDevOps