The DevOps 2.1 Toolkit: Docker Swarm

Requirements for secure, fault-tolerant services running with high availability

Let us quickly go over the internals of the go-demo application. It consists of two services. Data is stored in MongoDB. The database is consumed by a backend service called go-demo. No other service should access the database directly. If another service needs the data, it should send a request to the go-demo service. That way, we have clear boundaries. Data is owned and managed by the go-demo service, which exposes an API that is the only access point to the data.

The system should be able to host multiple applications, each with a unique base URL. For example, the go-demo path starts with /demo. The other applications will have different paths (for example, /users, /products, and so on). The system will be accessible only through port 80 for HTTP and port 443 for HTTPS. Please note that no two processes can listen on the same port. In other words, only a single service can be configured to listen on port 80.
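As an illustration, this kind of base-URL routing could be expressed with a reverse proxy such as HAProxy. The sketch below is an assumption for illustration only; the backend names and internal port 8080 are not part of the go-demo setup described so far:

```
# Sketch of path-based routing in HAProxy; backend names and ports are hypothetical.
frontend services
    bind *:80
    # Route requests whose path begins with /demo to the go-demo instances.
    acl is_demo path_beg /demo
    use_backend go-demo_be if is_demo

backend go-demo_be
    # Docker's internal DNS would resolve the service name to its instances.
    server go-demo go-demo:8080
```

The key point is that the proxy, not the services themselves, owns ports 80 and 443, and decides where each request goes based on its base path.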

To handle load fluctuations and use resources effectively, we must be able to scale (or de-scale) each service individually and independently from the others. Any request to any of the services should pass through a load balancer that will distribute the load across all instances. At least two instances of any service should be running at any given moment. That way, we can maintain high availability even if one of the instances stops working. We should aim even higher than that and make sure that even the failure of a whole node does not interrupt the system as a whole.

To meet performance and failover needs, services should be distributed across the cluster.

We'll make a temporary exception to the rule that each service should run multiple instances. Mongo volumes do not work with Docker Machine on OS X and Windows. Later on, when we reach the chapters that provide guidance towards a production setup with major hosting providers (for example, AWS), we'll remove this exception and make sure that the database is also configured to run with multiple instances.

Taking all this into account, we can define the following requirements:

  1. A load balancer will distribute requests evenly (round-robin) across all instances of any given service (proxy included). It should be fault tolerant and not depend on any single node.
  2. A reverse proxy will be in charge of routing requests based on their base URLs.
  3. The go-demo service will be able to communicate freely with the go-demo-db service and will be accessible only through the reverse proxy.
  4. The database will be isolated from everything except the service it belongs to: go-demo.
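Requirements 3 and 4 map naturally onto Docker overlay networks: a service can only reach other services that share a network with it. The stack file below is a minimal sketch under that assumption; the image names, replica count, and network names are illustrative, not a definitive setup:

```yaml
# Illustrative stack file; images and network names are assumptions.
version: "3"

services:
  go-demo:
    image: vfarcic/go-demo
    networks:
      - proxy      # shared with the reverse proxy, so it can route /demo requests
      - go-demo    # private network shared only with the database
    deploy:
      replicas: 2  # at least two instances for high availability

  go-demo-db:
    image: mongo
    networks:
      - go-demo    # attached ONLY to the private network; unreachable from outside

networks:
  proxy:
    external: true
  go-demo:
    driver: overlay
```

Because go-demo-db is attached only to the go-demo network, no other service can reach it, while go-demo sits on both networks and acts as the sole gateway to the data.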

A logical architecture of what we're trying to accomplish can be presented with the diagram that follows:

Figure 3-1: A logical architecture of the go-demo service

How can we accomplish those requirements?

Let us solve each of the four requirements one by one. We'll start from the bottom and move towards the top.

The first problem to tackle is how to run a database that is isolated from everything except the service it belongs to.