
This is the sixth chapter of the book on building applications with microservices. Chapter 1 introduces the microservices architecture pattern and discusses the advantages and drawbacks of using microservices. Subsequent chapters discuss different aspects of the microservices architecture: using an API gateway, inter-process communication, service discovery, and event-driven data management. In this chapter, we look at strategies for deploying microservices.

6.1. Motivation

Deploying a monolithic application means running one or more identical copies of a single, usually large, application. You typically provision N servers (physical or virtual) and run M instances of the application on each one. Deploying a monolithic application is not always straightforward, but it is much simpler than deploying a microservices application.

A microservices application consists of dozens or even hundreds of services. Services are written in a variety of languages and frameworks. Each one is a mini-application with its own specific deployment, resource, scaling, and monitoring requirements. For example, you need to run a certain number of instances of each service based on the demand for that service. In addition, each service instance must be provided with CPU, memory, and I/O resources. What is even more challenging is that despite this complexity, deployment must be fast, reliable, and cost-effective.

There are several different microservice deployment patterns. Let’s first look at the Multiple Service Instances per Host pattern.

6.2. Multiple Service Instances per Host Pattern

One way to deploy your microservices is to use the Multiple Service Instances per Host pattern. When using this pattern, you provision one or more physical or virtual hosts and run multiple service instances on each one. In many ways, this is the traditional approach to application deployment. Each service instance runs on a well-known port on one or more hosts. The hosts are typically treated like pets.

Figure 6-1 shows the structure of this pattern:

Figure 6-1. A host can support multiple service instances

There are several variants of this pattern. One variant is for each service instance to be a process or a group of processes. For example, you might deploy a Java service instance as a web application on an Apache Tomcat server. A Node.js service instance might consist of a parent process and one or more child processes.

Another variant of this pattern is to run multiple service instances in the same process or process group. For example, you might deploy multiple Java web applications on the same Apache Tomcat server, or run multiple OSGI bundles in the same OSGI container.

The Multiple Service Instances per Host pattern has both advantages and disadvantages. One major advantage is its relatively efficient resource usage: multiple service instances share a server and its operating system. Resource usage is even more efficient if multiple service instances run within the same process or process group, for example, multiple web applications sharing the same Apache Tomcat server and JVM.

Another advantage of this pattern is that service instances can be deployed relatively quickly. You simply copy the service to the host and start it. If the service is written in Java, you can copy JAR or WAR files. For other languages, such as Node.js or Ruby, you can copy the source code directly. In either case, the number of bytes copied across the network is relatively small.

Also, because of the lack of overhead, starting a service is usually very fast. If the service is its own process, you simply start it. If the service is one of several instances running in the same container process or process group, you either deploy it dynamically into the container or restart the container.

Despite its appeal, the Multiple Service Instances per Host pattern has some significant drawbacks. One major drawback is that there is little or no isolation of the service instances, unless each service instance is a separate process. While you can accurately monitor each service instance’s resource utilization, you cannot limit the resources each instance uses. A misbehaving service instance can consume all of the memory or CPU of the host.

There is no isolation at all if multiple service instances run in the same process. All instances might, for example, share the same JVM heap. A misbehaving service instance can easily disrupt the other services running in the same process. Moreover, you have no way to monitor the resources used by each individual service instance.

Another significant problem with this approach is that the operations team that deploys a service has to know the specific details of how to do it. Services can be written in a variety of languages and frameworks, so there are many details that the development team must share with operations. This complexity undoubtedly increases the risk of errors during deployment.

As you can see, despite its familiarity, the Multiple Service Instances per Host pattern has some significant drawbacks. Let’s now look at other ways of deploying microservices that avoid these problems.

6.3. Service Instance per Host Pattern

Another way to deploy your microservices is to use the Service Instance per Host pattern. When you use this pattern, you run each service instance in isolation on its own host. There are two different forms of this pattern: one service instance per virtual machine and one service instance per container.

6.3.1. Service Instance per Virtual Machine Pattern

When you use the Service Instance per Virtual Machine pattern, you package each service as a virtual machine (VM) image, such as an Amazon EC2 AMI. Each service instance is a VM (for example, an EC2 instance) launched from that VM image.

Figure 6-2 shows the structure of this pattern:

Figure 6-2. Each service can run on its own VM

This is the main way Netflix deploys its video streaming service. Netflix uses Aminator to package each service as an EC2 AMI. Each running service instance is an EC2 instance.

There are a variety of tools you can use to build your own VM images. You can configure your continuous integration (CI) server, such as Jenkins, to invoke Aminator to package your services as EC2 AMIs. Packer is another option for automated VM image creation. Unlike Aminator, it supports a variety of virtualization technologies, including EC2, DigitalOcean, VirtualBox, and VMware.
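As a rough illustration of what these tools automate, the following minimal sketch uses the boto3 Python SDK to bake an AMI from an already-configured EC2 instance. The instance ID and names are hypothetical placeholders; real pipelines built on Aminator or Packer do considerably more (provisioning, cleanup, validation).

```python
import boto3

ec2 = boto3.client("ec2")

# Create an AMI from an EC2 instance that already has the service installed.
# The instance ID and names below are hypothetical.
response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="payments-service-1.0",
    Description="AMI with the payments service preinstalled",
    NoReboot=False,  # reboot the instance for a consistent filesystem snapshot
)

print(response["ImageId"])  # ID of the new AMI
```

A CI server such as Jenkins could run a script like this as a post-test step, so that every successful build produces a deployable VM image.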

Boxfuse has a compelling way of building VM images that overcomes the drawbacks of VMs I describe below. Boxfuse packages your Java application as a minimal VM image. These images are fast to build, boot quickly, and are more secure because they expose a limited attack surface.

CloudNative has the Bakery, a SaaS offering for creating EC2 AMIs. You can configure your CI server to invoke the Bakery after the tests for your microservice pass. The Bakery then packages your service as an AMI. Using a SaaS offering such as the Bakery means you don’t have to waste valuable time setting up your own AMI-creation infrastructure.

The Service Instance per Virtual Machine pattern has a number of benefits. A major benefit of VMs is that each service instance runs in complete isolation. It has a fixed amount of CPU and memory and cannot steal resources from other services.

Another advantage of deploying microservices as virtual machines is the ability to leverage a mature cloud infrastructure. Clouds such as AWS provide useful features such as load balancing and automatic scaling.

Another benefit of deploying a service as a VM is that it encapsulates the service’s implementation technology. Once a service has been packaged as a VM, it becomes a black box. The VM’s management API becomes the API for deploying the service. Deployment becomes much simpler and more reliable.

However, the Service Instance per Virtual Machine pattern also has some drawbacks. One drawback is less efficient resource utilization. Each service instance has the overhead of an entire VM, including the operating system. Moreover, in a typical public IaaS, VMs come in fixed sizes and may be underutilized.

In addition, a public IaaS typically charges for VMs regardless of whether they are busy or idle. An IaaS such as AWS provides autoscaling, but it is difficult to react quickly to changes in demand. Consequently, you often have to overprovision VMs, which increases the cost of deployment.

Another drawback of this approach is that deploying a new version of a service is usually slow. VM images are typically slow to build because of their size. VMs are also slow to instantiate, again because of their size, and an operating system takes some time to boot. Note, however, that this is not universally true, since lightweight VMs such as those built by Boxfuse already exist.

Another drawback of the Service Instance per Virtual Machine pattern is that you (or someone else in your organization) are usually responsible for a lot of undifferentiated heavy lifting. Unless you use a tool such as Boxfuse that handles the overhead of building and managing the VMs, it is your responsibility. This necessary but time-consuming activity distracts from your core business.

Let’s now look at an alternative way to deploy microservices that is more lightweight, yet still has many of the benefits of VMs.

6.3.2. Service Instance per Container Pattern

When you use the Service Instance per Container pattern, each service instance runs in its own container. Containers are an operating-system-level virtualization mechanism. A container consists of one or more processes running in a sandbox. From the perspective of those processes, the container has its own port namespace and root filesystem. You can limit a container’s memory and CPU resources, and some container implementations also offer I/O rate limiting. Examples of container technologies include Docker and Solaris Zones.
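To make the resource limits concrete, here is a minimal sketch using Docker’s Python SDK (the `docker` package); the image name and limit values are hypothetical.

```python
import docker

client = docker.from_env()

# Start a service instance in its own container, capping its memory and CPU
# so that a misbehaving instance cannot starve other containers on the host.
container = client.containers.run(
    "payments-service:1.0",       # hypothetical image name
    detach=True,
    mem_limit="512m",             # hard memory cap
    nano_cpus=1_000_000_000,      # 1 CPU, expressed in units of 1e-9 CPUs
    ports={"8080/tcp": 8080},     # map the service port onto the host
)

print(container.id)
```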

Figure 6-3 shows the structure of this pattern:

Figure 6-3. Each service can run in its own container

To use this pattern, you package your service as a container image. A container image is a filesystem image consisting of the application and the libraries required to run the service. Some container images consist of a complete Linux root filesystem; others are more lightweight. For example, to deploy a Java service, you build a container image containing the Java runtime, perhaps an Apache Tomcat server, and your compiled Java application.
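For example, here is a minimal sketch, again using Docker’s Python SDK, of building such an image from a directory that contains a Dockerfile and the compiled application. The directory layout and tag are hypothetical.

```python
import docker

client = docker.from_env()

# Build a container image from a build context directory whose Dockerfile
# might layer a JRE and the service's compiled JAR on top of a base image.
image, build_logs = client.images.build(
    path="payments-service/",     # hypothetical build context directory
    tag="payments-service:1.0",
)

for entry in build_logs:          # stream the build output
    print(entry.get("stream", ""), end="")
```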

Once you have packaged your service as a container image, you launch one or more containers. Multiple containers are usually run on each physical or virtual host. You can use a cluster management tool such as Kubernetes or Marathon to manage your containers. A cluster management tool treats the hosts as a pool of resources and decides where to place each container based on the resources the container requires and the resources available on each host.

The Service Instance per Container pattern has both advantages and disadvantages. The advantages of containers are similar to those of VMs. They isolate service instances from one another, and you can easily monitor the resources consumed by each container. Also, like VMs, containers encapsulate the technology used to implement your services. The container management API likewise serves as the API for managing your services.

Unlike VMs, however, containers are a lightweight technology. Container images are typically very fast to build. For example, on my laptop it takes as little as five seconds to package a Spring Boot application as a Docker container. Containers also start quickly, since there is no lengthy OS boot mechanism: when a container starts, it runs the service.

There are some drawbacks to using containers. While container infrastructure is rapidly maturing, it is not as mature as the infrastructure for VMs. Also, containers are not as secure as VMs, since the containers share the kernel of the host OS with one another.

Another drawback of containers is that you are responsible for the undifferentiated heavy lifting of administering the container images. Also, unless you are using a hosted container solution such as Google Container Engine or Amazon EC2 Container Service (ECS), you must administer the container infrastructure, and possibly the VM infrastructure it runs on, yourself.

In addition, containers are often deployed on an infrastructure that has per-VM pricing. Consequently, as described earlier, you will likely incur the extra cost of overprovisioning VMs in order to handle spikes in load.

Interestingly, the distinction between containers and VMs is likely to blur. As mentioned earlier, Boxfuse VMs are fast to build and start. The Clear Containers project aims to create lightweight VMs. There is also growing interest in unikernels; Docker acquired Unikernel Systems in early 2016.

There is also the increasingly popular concept of serverless deployment, an approach that sidesteps the question of whether to deploy services in containers or in VMs. Let’s look at that next.

6.4. Serverless Deployment

AWS Lambda is an example of serverless deployment technology. It supports Java, Node.js, and Python services. To deploy a microservice, you package it as a ZIP file and upload it to AWS Lambda. You also supply metadata, which among other things specifies the name of the function that is invoked to handle a request (also known as an event). AWS Lambda automatically runs enough instances of your microservice to handle requests. You are billed for each request based on the time taken and the memory consumed. Of course, the devil is in the details, and you will quickly notice the limitations of AWS Lambda. But the fact that neither you as a developer, nor anyone else in your organization, needs to worry about any aspect of servers, virtual machines, or containers is incredibly attractive.

A Lambda function is a stateless service. It typically handles requests by invoking AWS services. For example, a Lambda function that is invoked when an image is uploaded to an S3 bucket might insert a record into a DynamoDB images table and publish a message to a Kinesis stream to trigger image processing. A Lambda function can also invoke third-party web services.
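To make the programming model concrete, here is a minimal sketch of a Python Lambda handler for the S3 upload scenario above. The event parsing follows the standard S3 notification structure; the processing step is a hypothetical placeholder.

```python
import json

def handler(event, context):
    """Entry point that AWS Lambda invokes for each event. The handler name
    (module.function) is part of the metadata supplied with the ZIP file."""
    # An S3 notification event carries one or more records, each describing
    # an uploaded object.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Hypothetical processing step, e.g. record the image and trigger resizing.
        print(f"New image uploaded: s3://{bucket}/{key}")

    return {"statusCode": 200, "body": json.dumps({"status": "ok"})}
```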

There are four ways to call a Lambda function:

  • Directly, using a web service request (see the sketch after this list)
  • Automatically, in response to an event generated by an AWS service such as S3, DynamoDB, Kinesis, or Simple Email Service
  • Automatically, via an AWS API Gateway, to handle HTTP requests from application clients
  • Periodically, according to a cron-like schedule
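As a sketch of the first option, a client can invoke a function synchronously using the AWS SDK, shown here in Python with boto3. The function name and payload are hypothetical.

```python
import json

import boto3

lambda_client = boto3.client("lambda")

# Invoke a Lambda function directly, as a synchronous web service request.
response = lambda_client.invoke(
    FunctionName="process-image",       # hypothetical function name
    InvocationType="RequestResponse",   # wait for the result
    Payload=json.dumps({"imageUrl": "https://example.com/photo.jpg"}),
)

result = json.loads(response["Payload"].read())
print(result)
```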

As you can see, AWS Lambda is a convenient way to deploy microservices. Request-based pricing means that you only pay for the work that the service actually performs. In addition, because you don’t have any responsibility for the IT infrastructure, you can focus on developing applications.

However, there are some significant limitations. Lambda functions are not intended for deploying long-running services, such as a service that consumes messages from a third-party message broker. Requests must complete within 300 seconds. Services must be stateless, since in theory AWS Lambda may run a separate instance for each request. They must be written in one of the supported languages. Services must also start quickly; otherwise, they may be terminated for timing out.

6.5. Summary

Deploying a microservices application is challenging. You may have tens or even hundreds of services written in a variety of languages and frameworks. Each service is a mini-application with its own specific deployment, resource, scaling, and monitoring requirements. There are several microservice deployment patterns, including Service Instance per Virtual Machine and Service Instance per Container. Another interesting option for deploying microservices is AWS Lambda, a serverless approach. In the next and final chapter of the book, we will look at how to migrate a monolithic application to a microservices architecture.

Microservices in Action: Deploying microservices across different hosts with NGINX

by Floyd Smith

NGINX has many advantages for all types of deployments, whether monolithic applications, microservices applications, or hybrid applications (described in the next chapter). With NGINX, you can abstract the intelligence out of the individual deployment environments and into NGINX itself. Many application functions work differently if you use separate tools for different deployment environments, but they work the same way across all environments if you use NGINX.

This capability also gives NGINX and NGINX Plus a second advantage: the ability to scale an application by running it across multiple deployment environments at the same time. Suppose you own and manage on-premises servers, but your application usage is growing and is expected to exceed the peak those servers can handle. If you’re already using NGINX, you have a powerful option: scale into the cloud, for example into AWS, rather than buying, configuring, and keeping additional servers on hand just in case. That is, when traffic on your on-premises servers reaches capacity, you can spin up additional microservice instances in the cloud as needed to handle it.

This is just one example of the flexibility that NGINX makes possible. Maintaining separate testing and deployment environments, switching infrastructure, and managing application portfolios across environments all become more realistic and achievable.

The NGINX Microservices Reference Architecture is explicitly designed to support this kind of flexible deployment, with the assumption that container technology is used during development and deployment. If you haven’t already, consider moving to containers, along with NGINX or NGINX Plus, to ease your move to microservices and to keep your applications flexible in future development and deployment.
