Containerization is one of the most effective ways to deploy and operate software efficiently at scale. It provides a way to make software run more reliably when it moves from one computing environment to another.
However, containers need oversight and control, and that’s why container management systems exist. Kubernetes is one of the most well-known and widely used container management systems, so let’s take a closer look at it. In this Kubernetes cheat sheet, we will define Kubernetes, give an overview of its architecture, walk through its components, and round things out with a cheat sheet of common kubectl commands.
First, let’s take a quick look at containerization itself and get an idea of why it matters in this Kubernetes cheat sheet.
Containerization Explained
Containers allow software to run reliably when it is moved from one computing environment to another: for example, from a developer's laptop to a test environment, or from a physical machine in a data center to a virtual machine in the cloud.
Different environments have different supporting software environments, network topologies, storage policies, and security protocols. But containers are made up of an entire runtime environment bundled into one package — an application and its dependencies, libraries and other binaries, and configuration files required to run it.
So, containers are an excellent way to bundle and run applications.
Here’s how containerization compares to traditional systems and virtualization, as shown by Kubernetes.io.
What is Kubernetes?
Now that we know what containerization is, let’s see how Kubernetes makes it easier to implement.
When you have containers in a production environment, you need to manage them so that applications run smoothly and without downtime. For instance, if one container goes down, another needs to start up in its place. Container management is exactly the sort of task that is better handled automatically by the system.
That's where Kubernetes comes in. Kubernetes is an open-source container orchestration platform that gives organizations a framework for running distributed systems resiliently. It handles application scaling and failover, provides deployment patterns, and offers other benefits.
Kubernetes gives you:
- Automated rollouts and rollbacks. When you use Kubernetes, you describe the desired state for your deployed containers, and the framework changes the actual state to the desired state at a controlled rate. For example, you can let Kubernetes create the new containers needed for a deployment, remove the existing containers, and adopt all their resources into the new containers (see the manifest sketch after this list).
- Automatic bin packing. You provide Kubernetes with a cluster of nodes it can use to run containerized tasks and tell it how much CPU and RAM each container needs; Kubernetes then fits containers onto your nodes to make the best use of those resources.
- Service discovery and load balancing. Kubernetes can expose a container using its IP address or a DNS name. If traffic to a container runs high, Kubernetes can load balance and distribute network traffic to keep the deployment stable (a Service sketch follows this list).
- Secret and configuration management. You can store and manage sensitive information like OAuth tokens, passwords, and SSH keys. Kubernetes lets you deploy and update secrets and application configuration without rebuilding your container images or exposing secrets in your stack configuration.
- Storage orchestration. You can automatically mount your choice of storage systems such as local storage, public or private cloud providers, and others.
- Self-healing. Kubernetes restarts containers that fail, replaces containers, and kills containers that don't respond to your user-defined health checks. It also doesn't advertise them to clients until they are ready to serve.
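To make a few of these features concrete, here is a minimal, hedged sketch of a Deployment manifest; the name web, the label app: web, and the nginx image tags are placeholders rather than anything prescribed by Kubernetes. The replica count declares the desired state behind automated rollouts, the resource requests are what automatic bin packing uses to place pods, and the liveness probe drives self-healing.

```bash
# Illustrative only: applies a small Deployment whose names and image are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # desired state: keep three pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
        resources:
          requests:            # used for automatic bin packing onto nodes
            cpu: "250m"
            memory: "128Mi"
          limits:
            cpu: "500m"
            memory: "256Mi"
        livenessProbe:         # self-healing: restart the container if this check fails
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
EOF

# A rolling update (automated rollout) is just a new desired state:
kubectl set image deployment/web web=nginx:1.26
kubectl rollout status deployment/web
kubectl rollout undo deployment/web    # automated rollback
```

Kubernetes replaces the old pods with new ones at a controlled rate, which is the "actual state to desired state" behavior described above.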
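In the same spirit, here is a hedged sketch of service discovery and secret management for that hypothetical web Deployment: the Service gives the pods a stable DNS name and load-balances across them, and the Secret is created out-of-band rather than baked into the image. The resource names are again placeholders.

```bash
# Expose the hypothetical web Deployment behind a stable name and virtual IP.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web            # load-balances across every pod carrying this label
  ports:
  - port: 80
    targetPort: 80
EOF

# Store a credential outside the image and the pod spec.
kubectl create secret generic db-credentials --from-literal=password='s3cr3t'

# In a container spec, that secret could then be consumed as an environment variable:
#   env:
#   - name: DB_PASSWORD
#     valueFrom:
#       secretKeyRef:
#         name: db-credentials
#         key: password

# Inside the cluster (assuming the default namespace), other pods can now reach the
# application at http://web.default.svc.cluster.local.
```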
Kubernetes Architecture
The Kubernetes environment consists of a control plane (the master), a distributed storage system that keeps the cluster state consistent (etcd), and any number of cluster nodes, each running a kubelet agent. Kubernetes follows a client-server architecture, with the master installed on one machine and the nodes distributed across separate machines.
While it's possible to run multiple masters for high availability, the default setup uses a single master server that acts as the point of contact and controlling node.
Kubernetes architecture must follow these three principles:
- It's secure, following current security best practices at multiple levels (application, cluster, and network)
- It’s easy to use, operable with only a few simple commands
- It's highly portable, running on any mainstream Linux distribution, on bare metal or virtual machines, and on the major cloud providers (AWS, Azure, Google Cloud). It also accommodates new container runtimes and supports workloads across multi-cloud and hybrid environments.
Kubernetes’ architecture consists of various components, which we will devote more time to in the next section.
Here is an architectural illustration of Kubernetes, shown courtesy of Platform9.
Here’s a review of the fundamental concepts of Kubernetes architecture.
- Pod. A group of one or more containers that are deployed together and share storage and network resources
- Labels. Key-value pairs used to identify and select pods (see the selector example after this list)
- Kubelet. The node agent responsible for making sure the containers in its assigned pods are running
- Proxy. A network proxy and load balancer (kube-proxy) that helps distribute traffic to pods
- Etcd. A consistent key-value store that holds the cluster's state and metadata
- CAdvisor. Monitors container resource usage and performance
- Replication controller. Manages pod replication
- Scheduler. Assigns pods to worker nodes
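As a quick illustration of how labels identify pods, here is a hedged example that reuses the hypothetical app: web label from the earlier sketch; the commands are standard kubectl, but the label and pod names are placeholders.

```bash
# Show every pod together with its labels, then filter with a label selector.
kubectl get pods --show-labels
kubectl get pods -l app=web            # only pods labelled app=web
kubectl get pods -l app=web -o wide    # adds the node and pod IP columns

# The scheduler's placement decision and the kubelet's view of a pod:
kubectl describe pod <pod-name>        # shows the assigned node, events, and restarts
```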
Kubernetes Components
A working Kubernetes deployment is known as a cluster. The cluster consists of at least one worker machine (or nodes) that runs containerized applications. The nodes host the pods — components of the application workload.
The control plane manages the worker nodes and pods in the cluster. In production environments, the control plane typically runs across multiple computers and the cluster runs multiple nodes, providing high availability and fault tolerance.
Here is a Kubernetes cluster diagram provided by Kubernetes.io.
Let’s look at the individual components.
Control Plane Components
These components make global decisions about the cluster and detect and respond to cluster events, such as starting a new pod (a quick way to list them on a running cluster follows this list).
- Kube-apiserver. This component is the control plane’s front end.
- Etcd. This component is a highly available and consistent key-value store. Etcd acts as the backing store for all Kubernetes cluster data.
- Kube-scheduler. This component watches out for newly created pods that lack assigned nodes and chooses nodes for them to run on.
- Kube-controller-manager. This component runs controller processes, including the node controller, endpoints controller, replication controller, and the service account and token controllers.
- Cloud-controller-manager. The manager links your cluster into your cloud provider's API. It separates the components that interact with the chosen cloud platform from the components that only interact with your cluster.
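On a typical kubeadm-style cluster, you can see these components as pods in the kube-system namespace; treat this as a hedged sketch, since names vary by distribution and managed cloud offerings hide some control plane pods entirely.

```bash
# Control plane components usually run as static pods in the kube-system namespace.
kubectl get pods -n kube-system -o wide

# Typical entries include kube-apiserver-<node>, etcd-<node>,
# kube-scheduler-<node>, and kube-controller-manager-<node>.
kubectl -n kube-system logs kube-scheduler-<node-name>    # logs from one component
```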
Node Components
Node components, unsurprisingly, run on every node, where they maintain running pods and provide the Kubernetes runtime environment (a short inspection example follows this list).
- Kubelet. This agent ensures that containers are running in a pod.
- Kube-proxy. This proxy maintains network rules on nodes. These rules allow network communication from sessions inside or outside of your cluster to your pods.
- Container runtime. This software runs containers. Kubernetes supports various container runtimes like Containerd, CRI-O, Docker, or any Kubernetes Container Runtime Interface (CRI) implementation.
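To see these node components in practice, the following hedged commands work on most clusters: the wide node listing reports each node's kubelet version and container runtime, and kube-proxy typically runs as a DaemonSet so that one copy lands on every node.

```bash
# The VERSION column reports the kubelet version; CONTAINER-RUNTIME reports the runtime,
# e.g. containerd://1.7.x or cri-o://1.28.x.
kubectl get nodes -o wide

# kube-proxy is commonly deployed as a DaemonSet in kube-system (this may differ by distribution).
kubectl get daemonset -n kube-system kube-proxy
```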
Addons
This list covers just some of the addons that use Kubernetes resources to initiate cluster features.
- Cluster DNS. Cluster DNS serves DNS records for Kubernetes services. Although addons aren't strictly mandatory, every Kubernetes cluster should have cluster DNS because many features depend on it (a quick check appears after this list).
- Web UI. The Web UI is a general-purpose dashboard for Kubernetes clusters. It lets users manage and troubleshoot the cluster plus any applications running in it.
- Container Resource Monitoring. This addon records generic time-series metrics about containers in a central database and provides a UI for browsing that data.
- Cluster-Level Logging. This mechanism saves container logs to a central log store with a searching and browsing interface.
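For example, here is a hedged way to confirm that cluster DNS is working; it assumes the conventional kube-dns Service name in kube-system and uses a throwaway busybox pod (the pod name and image tag are arbitrary).

```bash
# Cluster DNS usually sits behind a Service named kube-dns in kube-system,
# even when the implementation is CoreDNS.
kubectl get svc -n kube-system kube-dns

# Resolve a built-in service name from a throwaway pod to confirm DNS-based discovery.
kubectl run dns-test --image=busybox:1.36 --rm -it --restart=Never -- \
  nslookup kubernetes.default
```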
Kubectl Commands
Here is a chart with the most common kubectl commands. If you're interviewing for a Kubernetes-related position, you should familiarize yourself with all of them.
Pods and Container Introspection Commands
| Function | Command |
| --- | --- |
| Lists all current pods | kubectl get pods |
| Describes a pod | kubectl describe pod <name> |
| Lists all replication controllers | kubectl get rc |
| Lists replication controllers in a namespace | kubectl get rc --namespace="namespace" |
| Describes a replication controller | kubectl describe rc <name> |
| Lists services | kubectl get svc |
| Describes a service | kubectl describe svc <name> |
| Deletes a pod | kubectl delete pod <name> |
| Watches nodes continuously | kubectl get nodes -w |
Debugging Commands
| Function | Command |
| --- | --- |
| Executes a command in a container of a pod | kubectl exec <pod> [-c <container>] -- <command> |
| Streams logs from a pod (optionally a single container) | kubectl logs -f <pod> [-c <container>] |
| Shows metrics for nodes | kubectl top node |
| Shows metrics for pods | kubectl top pod |
Cluster Introspection Commands
| Function | Command |
| --- | --- |
| Gets version information | kubectl version |
| Gets cluster information | kubectl cluster-info |
| Views the current configuration | kubectl config view |
| Gets information about a node | kubectl describe node <node> |
Quick Commands
| Function | Command |
| --- | --- |
| Launches a pod with a name and image | kubectl run <name> --image=<image-name> |
| Creates the resources defined in <manifest.yaml> | kubectl create -f <manifest.yaml> |
| Scales a replication controller to <count> instances | kubectl scale rc <name> --replicas=<count> |
| Maps an external port to an internal replication-controller port | kubectl expose rc <name> --port=<external> --target-port=<internal> |
| Drains a node, stopping all pods on it | kubectl drain <node> --delete-local-data --force --ignore-daemonsets |
| Creates a namespace | kubectl create namespace <namespace> |
| Allows the master node to run pods | kubectl taint nodes --all node-role.kubernetes.io/master- |
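Putting a few of these quick commands together, here is a hedged end-to-end sketch; it uses a Deployment rather than a replication controller (the modern equivalent), and the namespace, deployment name, and image are all placeholders.

```bash
# Create a namespace, run a workload in it, expose and scale it, then clean up.
kubectl create namespace demo
kubectl create deployment hello --image=nginx:1.25 -n demo
kubectl expose deployment hello --port=80 --target-port=80 -n demo
kubectl scale deployment hello --replicas=3 -n demo
kubectl get pods -n demo -w            # watch the replicas come up (Ctrl-C to stop)
kubectl delete namespace demo          # removes everything created above
```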
Do You Want a Career in DevOps?
DevOps Engineers are in high demand, and Simplilearn has the means to train you in all necessary aspects of this fascinating method of software development.
The Professional Certificate Program in Cloud Computing and DevOps prepares you for a DevOps career, the fast-growing field that bridges the gap between software developers and operations. You’ll learn the principles of continuous development and deployment, automation of configuration management, inter-team collaboration, and IT service agility, using DevOps tools such as Git, Docker, Jenkins, and more.
Don’t delay! Check out Simplilearn today and start a new career in either DevOps or Kubernetes administration!