The DevOps 2.1 Toolkit: Docker Swarm

Scaling services

We should always run at least two instances of any given service. That way they can share the load and, if one of them fails, there will be no downtime. We'll explore Swarm's failover capability soon and leave load balancing for the next chapter.
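As a side note, the replica count does not have to be an afterthought. When a service is created, the --replicas flag sets the desired number of instances from the start. The command below is only a sketch; it assumes go-demo would be created with the same network and DB environment variable we used when setting it up earlier:

docker service create --name go-demo \
    -e DB=go-demo-db \
    --network go-demo \
    --replicas 2 \
    vfarcic/go-demo:1.0

Since our go-demo service is already running, we'll scale it instead.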

We can, for example, tell Swarm that we want to run five replicas of the go-demo service:

docker service scale go-demo=5

With the service scale command, we scheduled five replicas. Swarm will make sure that five instances of go-demo are running somewhere inside the cluster.
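The same result can be accomplished with the service update command and its --replicas flag; which of the two you use is mostly a matter of preference:

docker service update --replicas 5 go-demo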

We can confirm that five replicas are indeed running through the already familiar service ls command:

docker service ls

The output is as follows (IDs are removed for brevity):

NAME       MODE       REPLICAS IMAGE
go-demo    replicated 5/5      vfarcic/go-demo:1.0
go-demo-db replicated 1/1      mongo:3.2.10

As we can see, five out of five REPLICAS of the go-demo service are running.
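If we want just the desired replica count, without the rest of the table, service inspect accepts a Go template through the --format flag. The query below is a small sketch that works for replicated services:

docker service inspect go-demo \
    --format "{{.Spec.Mode.Replicated.Replicas}}"

The output should be 5.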

The service ps command provides more detailed information about a single service:

docker service ps go-demo

The output is as follows (the ID, ERROR, and PORTS columns are removed for brevity):

NAME      IMAGE               NODE   DESIRED STATE CURRENT STATE
go-demo.1 vfarcic/go-demo:1.0 node-3 Running       Running 1 minute ago
go-demo.2 vfarcic/go-demo:1.0 node-2 Running       Running 51 seconds ago
go-demo.3 vfarcic/go-demo:1.0 node-2 Running       Running 51 seconds ago
go-demo.4 vfarcic/go-demo:1.0 node-1 Running       Running 53 seconds ago
go-demo.5 vfarcic/go-demo:1.0 node-3 Running       Running 1 minute ago
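Later on, when tasks get rescheduled, service ps will also list entries with the Shutdown desired state. If that ever becomes noisy, the output can be narrowed down with the desired-state filter:

docker service ps -f "desired-state=running" go-demo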

The service ps output shows the go-demo service running five instances distributed across all three nodes. Since they all belong to the same go-demo software-defined network (SDN), they can communicate with each other no matter where they run inside the cluster. At the same time, none of them is accessible from outside:

Figure 2-9: Docker Swarm cluster with go-demo service scaled to five replicas
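If you want to see the internal communication first-hand, one way is to run a throwaway utility service attached to the same network and resolve the other services by name through Swarm's internal DNS. The commands below are a sketch rather than a recipe; the util name and the alpine image are illustrative choices, not something the cluster requires:

docker service create --name util \
    --network go-demo --mode global \
    alpine sleep 1000000

docker exec -it $(docker ps -q -f "name=util" | head -n 1) \
    ping -c 1 go-demo

docker service rm util

A successful ping confirms that the name go-demo resolves to a reachable address from anywhere inside the network.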

What happens if one of the containers is stopped or if the entire node fails? After all, processes and nodes do fail sooner or later. Nothing is perfect, and we need to be prepared for such situations.
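As a quick preview, one way to simulate such a failure is to forcefully remove one of the go-demo containers on a node and watch Swarm reconcile the state. This is only a sketch; the trailing dot in the filter keeps go-demo-db containers out of the match, and the detailed walk-through follows shortly:

docker rm -f $(docker ps -q -f "name=go-demo." | head -n 1)

docker service ps go-demo

The second command should show a new task scheduled to replace the removed one, bringing the service back to five replicas.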