Docker and Kubernetes for Java Developers

Kubernetes' role

While Docker provides the lifecycle management of containers, Kubernetes takes it to the next level by providing orchestration and managing clusters of containers. As you know, an application built using the microservice architecture will consist of a number of separate, independent services. How do we orchestrate and manage them? Kubernetes is an open-source tool that is perfect for this scenario. It defines a set of building blocks that provide mechanisms for deploying, maintaining, and scaling applications. The basic scheduling unit in Kubernetes is called a pod. A pod adds another level of abstraction to containerized components: it consists of one or more containers that are guaranteed to be co-located on the same host machine and can share resources. Containers in a pod share the same IP address and find each other via localhost; they can also communicate with each other using standard inter-process communication mechanisms, such as shared memory or semaphores. You can think of a pod as a logical collection of containers that together belong to an application.
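To make this concrete, here is a minimal sketch of a pod definition; the pod name, image, and port are hypothetical, chosen only for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: rest-example
  labels:
    app: rest-example
spec:
  containers:
  - name: rest-example
    image: rest-example:1.0    # hypothetical image name
    ports:
    - containerPort: 8080

If the pod contained a second container, it would simply be another entry under containers, and both would be scheduled onto the same node.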

For traditional services, such as a REST endpoint together with its corresponding database (our complete microservice, in fact), Kubernetes provides the concept of a service. A service defines a logical group of pods and enforces the rules for accessing such a group from the outside world. Kubernetes uses the concept of labels for pods and other resources (services, deployments, and so on). These are simply key-value pairs that can be attached to resources at creation time and then added or modified at any time. We will be using labels later on to organize and select subsets of resources (pods, for example) so that they can be managed as one entity.
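As a sketch of how services and labels work together (again, all names here are assumptions for illustration, not taken from a real deployment):

apiVersion: v1
kind: Service
metadata:
  name: rest-example
spec:
  selector:
    app: rest-example    # label query: any pod carrying this label is a backend
  ports:
  - port: 80
    targetPort: 8080
  type: NodePort         # one way to expose the service outside the cluster

The selector is what turns a loose set of pods into a logical group: every pod labeled app: rest-example automatically becomes a backend of this service.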

Kubernetes can place your container, or a group of containers, on a specific host automatically. To find a suitable host (the one with the smallest workload), the scheduler analyzes the current workload of each host along with various colocation and availability constraints. Of course, you can also specify the host manually, but relying on this automatic feature makes the best use of the processing power and resources available. Kubernetes can also monitor resource usage (CPU and RAM) at the container, pod, and cluster level. A resource usage and performance analysis agent runs on each node, auto-discovers the containers running there, and collects CPU, memory, filesystem, and network usage statistics.
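A minimal sketch of what manual placement and resource requests might look like in a pod spec; the disktype: ssd node label and the resource figures are assumptions for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: rest-example
spec:
  nodeSelector:
    disktype: ssd        # only schedule onto nodes labeled disktype=ssd
  containers:
  - name: rest-example
    image: rest-example:1.0
    resources:
      requests:
        cpu: "250m"      # the scheduler reserves a quarter of a CPU core
        memory: "128Mi"  # and 128 MiB of RAM when choosing a host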

Kubernetes also manages the lifecycle of your container instances. If there are too many of them, some will be stopped; if the workload increases, new containers will be started automatically. This feature is called auto-scaling. It automatically changes the number of running containers based on memory usage, CPU utilization, or other metrics you define for your services, such as the number of queries per second.
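Such a scaling rule can be expressed with a HorizontalPodAutoscaler resource; the deployment name and the thresholds below are hypothetical:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: rest-example
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: rest-example             # hypothetical deployment to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # add pods when average CPU exceeds 80%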

As you remember from Chapter 2, Networking and Persistent Storage, Docker uses volumes to persist your application data. Kubernetes also supports two kinds of volume: a regular volume, which has the same lifecycle as the pod that encloses it, and a persistent volume, whose lifecycle is independent of any pod. As in Docker, volume types are implemented in the form of plugins. This extensible design enables you to have almost any type of volume you need; the available storage plugins currently include the Google Cloud Platform persistent disk volume, the AWS Elastic Block Store volume, and others.
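The following sketch contrasts the two kinds of volume in one hypothetical pod spec; the claim name and mount paths are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: rest-example
spec:
  containers:
  - name: rest-example
    image: rest-example:1.0
    volumeMounts:
    - name: cache
      mountPath: /tmp/cache
    - name: data
      mountPath: /var/lib/data
  volumes:
  - name: cache
    emptyDir: {}                      # regular volume: lives and dies with the pod
  - name: data
    persistentVolumeClaim:
      claimName: rest-example-data    # persistent volume: outlives the pod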

Kubernetes can monitor the health of your services. It can do this by executing a specified HTTP method (such as GET, for example) against a specified URL and analyzing the HTTP status code that comes back in the response. Alternatively, a TCP probe can check whether a specified port is open, which can also be used to monitor the health of your service. Last, but not least, you can specify a command to be executed inside the container, along with actions to be taken based on that command's exit status. If the specified probe signals that something is wrong with the container, the container can be restarted automatically.

When you need to update your software, Kubernetes supports rolling updates. This feature allows you to update a deployed, containerized application with minimal downtime: the rolling update feature lets you specify the number of old replicas that may be down while they are being updated. Upgrading containerized software with Docker is especially easy; as you already know, it is just a matter of a new image version for the container. Deployments can be updated, rolled out, or rolled back.

I guess you are now getting the complete picture. Load balancing, service discovery, and all the other features you would probably need when orchestrating and managing your herd of microservices running in Docker containers are available in Kubernetes. Initially created by Google for large scale, Kubernetes is nowadays widely used by organizations of various sizes to run containers in production.
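Putting the probe and rolling-update ideas together, a deployment manifest along these lines could carry both; the /health endpoint, image, and replica counts are assumptions for illustration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rest-example
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # at most one old replica down during an update
      maxSurge: 1            # at most one extra replica started during an update
  selector:
    matchLabels:
      app: rest-example
  template:
    metadata:
      labels:
        app: rest-example
    spec:
      containers:
      - name: rest-example
        image: rest-example:1.0
        livenessProbe:
          httpGet:
            path: /health    # hypothetical health endpoint
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10  # probe every 10 seconds; failures trigger a restart

Rolling out a new version is then just a matter of changing the image tag in this manifest, and Kubernetes replaces the old replicas gradually, within the limits set by the update strategy.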