Kubernetes Architecture, Installation and Configuration

Table of contents

  1. What is Kubernetes? Why do we call it k8s?
  2. What are the benefits of using k8s?
  3. Explain the architecture of Kubernetes.
  4. What is the Control Plane?
  5. Write the difference between kubectl and kubelet.
  6. Explain the role of the API server.
  7. Installation and Configuration with kubeadm
  8. Creating first NGINX pod

  1. What is Kubernetes? Why do we call it k8s?

Kubernetes is an open-source container orchestration platform used to automate the deployment, scaling, and management of containerized applications. It was initially developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF).

With Kubernetes, you can manage containerized applications across multiple hosts, scale applications up or down as needed, and automate rollouts and rollbacks, among other things. Kubernetes is highly flexible and can work with various container runtimes, such as Docker, containerd, and CRI-O.

The name "Kubernetes" comes from the Greek word for "helmsman" or "pilot," which reflects its role in guiding and managing containerized applications. The term "k8s" is a shorthand way of writing "Kubernetes," where "8" represents the number of letters between "K" and "s." It's a common abbreviation used in the Kubernetes community and in command-line tools.

  2. What are the benefits of using k8s?

There are several benefits to using Kubernetes (k8s) for container orchestration:

  1. Scalability: Kubernetes allows you to easily scale your containerized applications up or down based on demand. It can automatically add or remove containers as needed to maintain the desired level of performance and availability.

  2. Portability: Kubernetes provides a consistent platform for deploying and managing containerized applications, regardless of the underlying infrastructure. This makes it easier to move applications between different environments, such as development, testing, and production.

  3. Resilience: Kubernetes includes features such as self-healing and automatic rollbacks, which help ensure that your applications are always available and running correctly.

  4. Flexibility: Kubernetes supports a wide range of container runtimes, storage systems, and networking plugins, allowing you to choose the tools that work best for your specific needs.

  5. Automation: Kubernetes automates many aspects of container deployment and management, reducing the need for manual intervention and allowing you to focus on more important tasks.

  6. Community: Kubernetes has a large and active community of contributors and users, providing a wealth of resources and support for those using the platform.

Overall, Kubernetes provides a powerful and flexible platform for managing containerized applications at scale, making it a popular choice for organizations of all sizes.

  3. Explain the architecture of Kubernetes.

The architecture of Kubernetes is designed to provide a highly scalable and resilient platform for managing containerized applications. At a high level, the Kubernetes architecture can be broken down into two main components: the control plane and the worker nodes.

The control plane is responsible for managing the overall state of the Kubernetes cluster, including scheduling and orchestrating container deployments, monitoring and maintaining application health, and managing networking and storage resources. The control plane consists of the following components:

  1. API Server: The API server is the central control point for the Kubernetes cluster. It exposes the Kubernetes API, which is used by other components to communicate with the cluster and manage its resources.

  2. etcd: etcd is a distributed key-value store used by Kubernetes to store configuration data, such as cluster state and configuration settings.

  3. Controller Manager: The controller manager is responsible for monitoring the state of the cluster and making changes to bring the actual state in line with the desired state. It includes various controllers that manage resources such as nodes, pods, and services.

  4. Scheduler: The scheduler is responsible for assigning containers to nodes based on resource requirements and availability.

The worker nodes are responsible for running containerized applications and providing the resources they need to operate. Each worker node typically runs one or more containers, and communicates with the control plane to receive instructions and report status. The worker node consists of the following components:

  1. Kubelet: The kubelet is responsible for communicating with the control plane and ensuring that containers are running and healthy on the node.

  2. Container Runtime: The container runtime is the software responsible for running containers, such as Docker or CRI-O.

  3. Kube-proxy: The kube-proxy is responsible for managing network connectivity between containers and services within the cluster.

Overall, the Kubernetes architecture provides a powerful and flexible platform for managing containerized applications at scale, allowing organizations to easily deploy, scale, and manage their applications with minimal manual intervention.
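On a running cluster you can see most of these components for yourself. A quick sketch, assuming a kubeadm-based cluster, where the control-plane components run as static pods in the kube-system namespace:

```shell
# Control-plane components (API server, etcd, controller-manager,
# scheduler) and kube-proxy appear as pods in kube-system:
kubectl get pods -n kube-system -o wide

# The kubelet is a host-level service, not a pod:
sudo systemctl status kubelet
```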

  4. What is the Control Plane?

In Kubernetes, the control plane is the set of components that are responsible for managing the overall state of the cluster, including scheduling and orchestrating container deployments, monitoring and maintaining application health, and managing networking and storage resources.

The control plane consists of several key components, including the API server, etcd, the controller manager, and the scheduler. These components work together to ensure that the cluster is running as desired and that applications are deployed and managed according to their desired state.

The API server is the central control point for the Kubernetes cluster. It exposes the Kubernetes API, which is used by other components to communicate with the cluster and manage its resources. etcd is a distributed key-value store used by Kubernetes to store configuration data, such as cluster state and configuration settings. The controller manager is responsible for monitoring the state of the cluster and making changes to bring the actual state in line with the desired state, and includes various controllers that manage resources such as nodes, pods, and services. Finally, the scheduler is responsible for assigning containers to nodes based on resource requirements and availability.

Overall, the control plane is a critical component of the Kubernetes architecture, providing a powerful and flexible platform for managing containerized applications at scale.
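A quick way to see where the control plane is running, assuming kubectl is already configured for the cluster:

```shell
# Prints the API server endpoint the current kubeconfig points at
kubectl cluster-info

# Dumps much more detail about control-plane and cluster state (verbose)
kubectl cluster-info dump | head
```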

  5. Write the difference between kubectl and kubelet.

Kubectl and kubelet are two different components of the Kubernetes architecture, each with a specific role to play.

  1. Kubectl: Kubectl is a command-line tool used to interact with the Kubernetes API server, allowing users to deploy, manage, and monitor applications and resources within the Kubernetes cluster. Kubectl is typically used by administrators and developers to manage the cluster, create and modify Kubernetes objects such as pods, services, and deployments, and troubleshoot issues within the cluster.

  2. Kubelet: Kubelet is an agent that runs on each worker node in the Kubernetes cluster, responsible for managing the state of individual nodes and ensuring that containers are running as desired. Kubelet communicates with the Kubernetes API server to receive instructions and report status, and is responsible for starting, stopping, and monitoring containers on the node, as well as managing local storage resources.

In summary, while kubectl is used to manage the overall state of the Kubernetes cluster and interact with the API server, kubelet is responsible for managing the state of individual worker nodes and ensuring that containers are running as desired on each node. Both tools are essential components of the Kubernetes architecture and work together to provide a powerful and flexible platform for managing containerized applications at scale.
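The difference shows up in how you interact with each one: kubectl is a client binary you run from any machine with a kubeconfig, while the kubelet is a node-local service, typically managed by systemd. A small illustration (the service name assumes a systemd-based install such as kubeadm):

```shell
# kubectl: cluster-wide view, talks to the API server
kubectl get nodes
kubectl get pods --all-namespaces

# kubelet: per-node agent, inspected on the node itself
sudo systemctl status kubelet
sudo journalctl -u kubelet -n 20 --no-pager   # recent kubelet logs
```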

  6. Explain the role of the API server.

The API server is a critical component of the Kubernetes architecture that serves as the central control point for the Kubernetes cluster. Its main role is to expose the Kubernetes API, which is used by other components to communicate with the cluster and manage its resources.

Specifically, the API server provides a RESTful interface that allows users and automated processes to create, modify, and delete Kubernetes resources such as pods, services, and deployments. It acts as a gateway to the cluster, receiving requests from kubectl, kubelet, and other components, and coordinating responses to those requests.

In addition to exposing the Kubernetes API, the API server is also responsible for several other important functions, including:

  • Authenticating and authorizing requests: The API server ensures that requests to the Kubernetes API are valid and authorized, based on user or system credentials.

  • Storing cluster state: The API server uses etcd, a distributed key-value store, to store configuration data and cluster state, such as resource definitions, replication controllers, and other metadata.

  • Enforcing resource quotas: The API server enforces resource quotas and limits, ensuring that the cluster is not overwhelmed by excessive resource usage.

Overall, the API server plays a critical role in the Kubernetes architecture, providing a unified interface for managing the cluster and ensuring that all components can communicate with each other and with the cluster as a whole.
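Because the API server exposes a RESTful interface, you can call it with plain HTTP once authentication is handled. One common sketch is to let kubectl proxy take care of credentials locally:

```shell
# Start an authenticated local proxy to the API server
kubectl proxy --port=8001 &

# Plain REST calls against the Kubernetes API now work:
curl http://127.0.0.1:8001/version        # API server build info
curl http://127.0.0.1:8001/api/v1/namespaces/default/pods
```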

Installation and Configuration with kubeadm

  1. Launch two EC2 instances, a master and a worker. In this guide we will install the cluster with kubeadm, so the master needs at least a t2.medium instance, while the worker node can be a t2.micro.

  2. Install Docker on both the master and worker nodes.

sudo apt install docker.io -y

sudo systemctl start docker

sudo systemctl enable docker
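To confirm Docker is installed and running before moving on, a quick check on each node:

```shell
docker --version                  # client version
sudo systemctl is-active docker   # should print "active"
```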

  3. To install kubeadm, add the Kubernetes apt repository and update the package index on both the master and worker nodes.

        sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

        echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

        sudo apt update -y
      


  4. Install kubeadm, kubectl, and kubelet on both the master and worker nodes.

        sudo apt install kubeadm=1.20.0-00 kubectl=1.20.0-00 kubelet=1.20.0-00 -y
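Optionally (not part of the original steps, but recommended in the kubeadm documentation), hold the pinned packages so a routine apt upgrade cannot move them, and confirm the installed versions:

```shell
sudo apt-mark hold kubeadm kubectl kubelet   # freeze at the pinned version
kubeadm version
kubectl version --client
kubelet --version
```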
      


  5. To connect the master and worker, run the following commands on the master only.

      1.   sudo su
        
          kubeadm init
        

      2. To start using your cluster, run the following on the master as a regular user (i.e., without root access).

          mkdir -p $HOME/.kube
          sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
          sudo chown $(id -u):$(id -g) $HOME/.kube/config
        

        Alternatively, if you have root access, you can run the following.

          export KUBECONFIG=/etc/kubernetes/admin.conf
        

      3. Finish the master setup by installing the Weave network add-on with the following command.

          kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
        

      4. Use the following command on the master to generate the join command (with a fresh token) for connecting the worker node.

          kubeadm token create --print-join-command
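This prints a complete join command. Its shape looks like the hypothetical example below; the IP, token, and hash are placeholders, so use the values your master actually prints:

```shell
# Placeholder values - copy the real command from your master's output
kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```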
        

      5. On the worker node, reset any previous kubeadm state before joining (kubeadm reset runs its own pre-flight checks).

          sudo kubeadm reset
        

      6. Before joining, edit the inbound rules of the instance's security group to allow port 6443, so the worker can reach the master's API server.

      7. Copy the join command generated on the master and run it on the worker node, appending --v=5 at the end for verbose output.

      8. Verify the connection by running kubectl get nodes on the master.

         kubectl get nodes
        

      9. If you get the error "The connection to the server localhost:8080 was refused - did you specify the right host or port?", use the commands below to fix it (on the master).

         cp /etc/kubernetes/admin.conf $HOME/
         chown $(id -u):$(id -g) $HOME/admin.conf
         export KUBECONFIG=$HOME/admin.conf
        

Creating first NGINX pod

  • Use the command below on the master node to create the nginx pod.

        kubectl run nginx --image=nginx
    

  • Verify that the container has been created on the worker node using the docker ps command.
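You can also confirm scheduling from the master itself; the NODE column shows which worker received the pod:

```shell
kubectl get pods -o wide     # NODE column shows pod placement
kubectl describe pod nginx   # events show image pull and container start
```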

  • To delete this pod, use the following command on the master node.

        kubectl delete pod nginx
    

  • To verify the deletion, run docker ps again on the worker node.