Preface

Service discovery has always been a thorny issue in microservices architecture. Service discovery components are hard to operate well and tend to become a point of centralization. At the same time, exposing microservices this way inevitably requires some load-balancing scheme to achieve high availability and scalability, which adds considerable complexity.

In my opinion, an asynchronous, message-based approach may be more suitable for microservices architecture.

In a message-based microservices architecture, the deployment requirements for every microservice are very simple: it only needs access to the messaging service. At the same time, removing or adding microservice nodes does not interrupt service provision. Compared to a service-discovery architecture, simplicity is beauty.

This practice uses Seneca, a Node.js microservices framework. With the seneca-amqp-transport plugin, Seneca makes it easy to build message-based microservices.

Here is the architecture diagram:

www.processon.com/view/link/5…

In this architecture we use the standard command specification defined by Seneca, which is probably the one specification all the microservices need to follow. Other languages can join by encapsulating a small library that follows the Seneca command specification; even without official support, this should not be difficult to develop.

The interface layer is flexible: it can decide how to encapsulate the transport protocol according to the characteristics of the upper-layer application, and finally converts requests into standard commands sent to the message service. Letting upper-layer applications access the messaging service directly is not recommended; they should remain flexible.
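As an illustration of this idea, an interface-layer adapter might translate an incoming request into a standard command object before handing it to the transport. The `toCommand` and `handleRequest` helpers below are hypothetical, not part of the practice code; they only sketch the parse-convert-forward shape under those assumptions:

```javascript
// Hypothetical interface-layer helper: map an upper-layer request
// (e.g. an HTTP route plus its JSON body) onto a Seneca-style
// command pattern such as { cmd: 'salute', name: 'World' }.
function toCommand(route, payload) {
  // The route becomes the command name; the payload becomes its arguments.
  return Object.assign({ cmd: route }, payload);
}

// The adapter itself stays thin: convert, then forward.
// `send` stands in for whatever publishes to the message service.
function handleRequest(route, payload, send) {
  const command = toCommand(route, payload);
  return send(command);
}
```

Because the conversion is isolated in one place, the upper layer can switch from HTTP to WebSocket (or anything else) without the microservices noticing.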

Full practice code: github.com/luaxlou/mic…

1 Preparation

Create virtual machines using docker-machine.

For basic Docker usage, read the previous article, "Docker + Consul's Minimalist Web Architecture Practices Based on Service Discovery"; it won't be covered here.

Create three VMs in sequence:

$ dm create -d "virtualbox" node1
$ dm create -d "virtualbox" node2
$ dm create -d "virtualbox" node3

2 Starting Construction

Set up the RabbitMQ message service

A message queue service has become a necessary basic service for high-concurrency applications. We use RabbitMQ here, but you can swap in anything you like as long as it follows the AMQP protocol.

Installing with Docker is convenient, but not recommended for production. For production it is better to use a cloud service to ensure high availability and scalability; it costs a little more, but it is money well spent.

Install directly on the host:

$ docker search rabbitmq
NAME                          DESCRIPTION                                     STARS  OFFICIAL  AUTOMATED
rabbitmq                      RabbitMQ is an open source multi-protocol ...   1466   [OK]
tutum/rabbitmq                Base docker image to run a RabbitMQ server      11
frodenas/rabbitmq             A Docker Image for RabbitMQ                     11               [OK]
sysrun/rpi-rabbitmq           RabbitMQ Container for the Raspberry Pi 2 ...   6
aweber/rabbitmq-autocluster   RabbitMQ with the Autocluster Plugin            5
gonkulatorlabs/rabbitmq       DEPRECATED: See maryville/rabbitmq              5                [OK]
letsxo/rabbitmq               RabbitMQ with Management and MQTT plugins.      4                [OK]
bitnami/rabbitmq              Bitnami Docker Image for RabbitMQ               3                [OK]
$ docker run -d --name rabbit -p   5672:5672  rabbitmq

This starts a message queue service and exposes port 5672.

Install Jenkins

Jenkins is used for automated integration; otherwise every build would be a hassle.

The following practice came after a lot of trial and error: Jenkins presents plenty of trouble during installation, and installing it on a Mac has its own share of problems.

Install Jenkins on node1:

$ dm ssh node1
 
$ mkdir /mnt/sda1/var/jenkins_home
$ sudo chown 1000 /mnt/sda1/var/jenkins_home
$ sudo chown 1000 /var/run/docker.sock

$ docker run -d -v /var/run/docker.sock:/var/run/docker.sock \
                -v /mnt/sda1/var/jenkins_home:/var/jenkins_home \
                -v $(which docker):/usr/bin/docker -p 8080:8080 jenkins

Check the initial password:

$ cat /mnt/sda1/var/jenkins_home/secrets/initialAdminPassword

Install the private Registry

Install it on a Mac:

$ docker run -d -p 5000:5000 registry

The document reference: docs.docker.com/registry/sp…

Ready to code

The code uses the official Seneca example and the complete Dockerfile has been written.

FROM node:alpine

RUN npm install pm2 -g
WORKDIR /usr/src/app

COPY package.json ./
RUN npm install
COPY . .

CMD ["pm2-docker","process.yml"]

To let Node.js use multi-core CPUs, the Dockerfile integrates PM2 to manage the Node processes.
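The `process.yml` that the Dockerfile's `CMD` points at is not shown above. A minimal sketch of such a PM2 process file, assuming `index.js` is the entry point and `seneca-listener` is the app name (both are assumptions here), could look like this; `instances: max` is what lets PM2 fork one worker per CPU core:

```yaml
# Hypothetical process.yml for pm2-docker
apps:
  - script: index.js       # assumed entry point
    name: seneca-listener  # assumed app name
    exec_mode: cluster     # cluster mode shares one port across workers
    instances: max         # one worker per available CPU core
```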

Full code: github.com/luaxlou/mic…

Configuring automatic integration

The latest version of Jenkins is used here; new versions of Jenkins use Pipeline, a new way to define builds using Groovy syntax.

It's elegant to write but expensive to learn: the documentation is incomplete and some of it is outdated, so I had to decompile the Pipeline plug-in to get things working.

Use the following pipeline script:

node {
    stage('Preparation') {
        def r = git('https://github.com/luaxlou/micro-service-practice.git')
    }
    stage('Build') {
        dir('seneca-listener') {
            withEnv(["DOCKER_REGISTRY_URL=http://192.168.99.1:5000"]) {
                docker.build("seneca-listener").push("latest")
            }
        }
    }
}

Start building and, with luck, you should see something like this:


This is a feature of Pipeline: you can visualize the execution of each stage, which is no small improvement.

Access the private registry API to see the generated tags:

$ curl http://192.168.99.1:5000/v2/seneca-listener/tags/list

Finally, try our program

Publishing messages on the host:

$ git clone https://github.com/luaxlou/micro-service-practice.git

The seneca-client code illustrates the interface layer and can be wrapped however you like. For testing, it simply sends the command code directly.

Enter the seneca-client directory:

$ AMQP_URL=192.168.99.1:5672 node index.js

The program will send a command every two seconds:

#!/usr/bin/env node
'use strict';

const client = require('seneca')()
  .use('seneca-amqp-transport')
  .client({
    type: 'amqp',
    pin: 'cmd:salute',
    url: process.env.AMQP_URL
  });

setInterval(function() {
  client.act('cmd:salute', {
    name: 'World',
    max: 100,
    min: 25
  }, (err, res) => {
    if (err) {
      throw err;
    }
    console.log(res);
  });
}, 2000);

Although commands keep being issued, you will soon find that they all time out. This is because there are no consumers yet. The commands are not lost, of course; the interface layer just does not receive a reply in time. If the application layer supports an asynchronous mode, each command has an independent ID: you can keep the ID and fetch the result later. This leaves enough flexibility to encapsulate the interface layer as needed.
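For context, the consumer that eventually answers these commands registers a handler for the `cmd:salute` pattern. The handler below is a sketch inferred from the reply shape printed later in this article (`{ id, message, from, now }`); the random id range and field values are assumptions, not the exact official example:

```javascript
// Hypothetical handler for the cmd:salute pattern, producing replies
// shaped like { id, message: 'Hello World!', from: { pid, file }, now }.
// min/max from the client's arguments bound the random id.
function salute(message, done) {
  const min = message.min || 0;
  const max = message.max || 100;
  done(null, {
    id: min + Math.floor(Math.random() * (max - min)),
    message: `Hello ${message.name}!`,
    from: { pid: process.pid, file: 'index.js' },
    now: Date.now()
  });
}

// In the real listener this handler would be wired up roughly as:
//   require('seneca')()
//     .use('seneca-amqp-transport')
//     .add('cmd:salute', salute)
//     .listen({ type: 'amqp', pin: 'cmd:salute', url: process.env.AMQP_URL });
```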

Enter node2 and start the listener:

$ docker run 192.168.99.1:5000/seneca-listener:latest
0|seneca-l | {"kind":"notice","notice":"hello seneca fwunhukrcmzn/1507605332382/16/3.4.2/-","level":"info","seneca":"fwunhukrcmzn/1507605332382/16/3.4.2/-","when":1507605332661}

After it starts, go back to seneca-client and you will find that the previously timed-out commands have been answered:

{ id: 86, message: 'Hello World!', from: { pid: 16, file: 'index.js' }, now: 1507605332699 }
{ id: 44, message: 'Hello World!', from: { pid: 16, file: 'index.js' }, now: 1507605332701 }
{ id: 56, message: 'Hello World!', from: { pid: 16, file: 'index.js' }, now: 1507605332703 }
{ id: 57, message: 'Hello World!', from: { pid: 16, file: 'index.js' }, now: 1507605332706 }
{ id: 58, message: 'Hello World!', from: { pid: 16, file: 'index.js' }, now: 1507605332707 }

At this point, the complete architecture is complete.

Some unfinished business

1. Automated integration: you only need to configure a webhook.

2. Automated deployment: because of the way Docker works, the Docker process must be restarted when the service is upgraded. There are many ways to do this, the rougher ones being direct control of the host, or tools like Salt. So far I have not found a good open source solution. Personally I lean toward developing an agent: expose a limited API for routine deployment and other tasks, and periodically collect server information for monitoring. This could be my next open source project.

Conclusion

This article is a new milestone, and the results of this practice will be used in later architectures. Docker took me out of the traditional architectural mold, and it gave me a hard time, but it was worth it.

It’s also a fresh start, finally getting out of the old company. There are many unknowns about the future, but I believe they are all beautiful.

This may be the charm of life.

Hello World!!!!!