background

As we all know, Docker has profoundly changed the IT industry in recent years, from development to testing to operations. It also goes hand in hand with the microservices architecture.

In the latest version of Docker (CE 17.03), Swarm mode has matured. For simpler scenarios it removes the need for specialized infrastructure management, service orchestration, service discovery, health checks, load balancing, and so on.

But an API gateway is still needed; add log collection and your microservices architecture is basically complete. Nginx Plus can serve as an API gateway, but it is commercial software. As for open-source Nginx, leaving aside authentication, rate limiting, statistics, and other gateway features, even the most basic task of request forwarding has a catch.

As we know, Docker uses DNS to balance requests for the same service name across different nodes. However, Nginx caches DNS results during reverse proxying for speed, and that cache cannot be refreshed. So while Docker dynamically updates DNS as containers are scaled up or down, Nginx is unimpressed and keeps sending requests to a fixed IP address, which may even be invalid by then, let alone balanced.

There is a small hack in the configuration file that makes Nginx query DNS on every request. I was going to write an article about it, but then I found a more elegant API gateway, Caddy. I wrote a brief introduction to it in my last article.
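
For the curious, the workaround looks roughly like this (a sketch only, not part of this article's setup): declaring the upstream in a variable forces Nginx to re-resolve the name through the resolver directive instead of reusing the address cached at startup; 127.0.0.11 is Docker's embedded DNS server, and "app" is the service name used later in this post.

server {
    listen 80;
    # Docker's embedded DNS; re-resolve at most every 10 seconds
    resolver 127.0.0.11 valid=10s;
    location / {
        # a variable in proxy_pass forces a DNS lookup per request
        # instead of pinning the IP resolved at startup
        set $backend "app";
        proxy_pass http://$backend:12345;
    }
}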

All the code for the rest of this article is in this demo repository, so you can clone it and experiment with it yourself.

application

Let's start with the application: the simplest HTTP API, written here in Go, though you could write it in any language you know. For GET requests it returns Hello World along with its own hostname.

package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

// HelloServer is the web server handler: it returns Hello World together
// with the container's own hostname.
func HelloServer(w http.ResponseWriter, req *http.Request) {
	hostname, _ := os.Hostname()
	log.Println(hostname)
	io.WriteString(w, "Hello, world! I am "+hostname+" :)\n")
}

func main() {
	http.HandleFunc("/", HelloServer)
	log.Fatal(http.ListenAndServe(":12345", nil))
}

Docker

We need to build the application above into a Docker image that exposes port 12345; then we can start a cluster with Docker Swarm. That alone would have been pretty easy, but so that you can pull a small image directly, I built it in two steps: compiling the app first and then adding the binary to a small Alpine-based image. You don't have to worry about the details. Let's look at the final docker-compose.yml orchestration file first.

version: '3'
services:
    app:
        image: muninn/caddy-microservice:app
        deploy:
            replicas: 3
    gateway:
        image: muninn/caddy-microservice:gateway
        ports:
            - 2015:2015
        depends_on:
            - app
        deploy:
            replicas: 1
            placement:
                constraints: [node.role == manager]

This uses the latest version of the compose file format, which is no longer started with the docker-compose command but with docker stack deploy. Admittedly the current tooling is not fully unified yet, which is a little confusing, but it works. You can see that the file references two images:

  • muninn/caddy-microservice:app — the app image from the previous section; we launch three replicas so we can test load balancing across them.
  • muninn/caddy-microservice:gateway — the gateway covered in the next section; it listens on port 2015 and forwards requests to app.

Use Caddy as the gateway

To use Caddy as the gateway, let's focus on the Caddyfile:

:2015 {
    proxy / app:12345
}

Well, it's almost too easy. Caddy listens on port 2015 and forwards all requests to app:12345. Here app is actually a domain name; inside Docker Swarm's network it resolves to a random instance of the service with that name.

Later, when there are many apps, you can forward different request prefixes to different apps. So when writing your API specification, try to keep all endpoints of one app under the same prefix.
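
For example, a Caddyfile that routes by prefix might look like this (a sketch only; the service names users and orders are invented for illustration):

:2015 {
    proxy /users users:12345
    proxy /orders orders:12345
}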

Then Caddy itself also needs to be containerized. For those interested, see dockerfile.gateway.

Running the server

With that in place, you're ready to run the server; just use the images I have already pushed. The three images used in this article total about 26 MB to download, which is not much. Clone the repository mentioned in the background section and enter the project directory, or simply copy the compose file above and save it as docker-compose.yml, then execute the following commands.

docker-compose pull
docker stack deploy -c docker-compose.yml caddy

The second command, docker stack deploy, requires Docker to be in Swarm mode; if it is not, you can switch as prompted. Once the deploy succeeds, let's check the status:

docker stack ps caddy

If all is well, you will see that three app replicas and one gateway have been launched. Next we test whether the gateway distributes requests across the three back ends.

test

We can test the service by visiting http://{your-host-ip}:2015 with a browser or curl. You will then notice that no matter how often you refresh, the response always comes from the same back end rather than a random one as you might expect.

Don't worry, this is not because Caddy caches DNS the way Nginx does; the reason is different. Caddy keeps a connection pool to the back end to speed up reverse proxying, and with only a single client the same connection is reused every time. To prove this, we need to hit the service concurrently and see whether the load is spread as expected.

Similarly, I have prepared a test client image that you can run directly with Docker.

docker run --rm -it muninn/caddy-microservice:client

For those interested, the code in the client folder makes 30 concurrent requests and prints how many hits each of the three back ends received.
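
If you would rather not open the repository, here is a minimal sketch of what such a client could look like in Go (the actual code in the client folder may differ; the gateway address below is an assumption):

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"sync"
)

func main() {
	const gateway = "http://localhost:2015/" // assumed gateway address

	hits := make(map[string]int) // response body -> number of hits
	var mu sync.Mutex
	var wg sync.WaitGroup

	for i := 0; i < 30; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			resp, err := http.Get(gateway)
			if err != nil {
				return
			}
			defer resp.Body.Close()
			body, _ := ioutil.ReadAll(resp.Body)
			mu.Lock()
			hits[string(body)]++
			mu.Unlock()
		}()
	}
	wg.Wait()

	// With three replicas behind the gateway we expect three distinct
	// response bodies, one per hostname.
	for body, count := range hits {
		fmt.Printf("%d hits: %s", count, body)
	}
}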

In addition, I also made a shell version; just run sh test.sh. It only prints the raw output, though, and does not check the results automatically.

Well, now we know that Caddy works well as the API gateway in a microservices architecture.

API Gateway

What, you didn't realize this already is an API gateway? We have just solved the combination of an API gateway and DNS-based service discovery in a containerized project. Now we can write any number of apps, each one a microservice, and route different URLs to different apps in the gateway.

Going further

Caddy can also easily serve as the authentication center, by the way. For microservices I suggest using JWT for authentication, carrying the permissions inside the token; Caddy can do this with a little configuration, and I'll publish tutorials and demos later. I don't think OAuth 2.0 is a good fit for a microservices architecture, although there are (more complex) architectures built around it; that's a topic for another day.
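
Just as a teaser, a rough sketch of what this could look like, assuming Caddy is built with the third-party http.jwt plugin (consult that plugin's documentation for the exact directives and for how the signing secret is supplied, typically via an environment variable):

:2015 {
    jwt {
        path /api
        allow role admin
    }
    proxy / app:12345
}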

Caddy can also handle API status monitoring, caching, rate limiting, and other API gateway duties, though these require some development work on your part. Do you have more ideas? Feel free to leave a message.