preface

I believe many people find deploying with Docker a headache; so do I.

I recently noticed a very interesting phenomenon: when a person sets out to learn a technology, finishes it, and then hits a problem that requires learning yet another technology, 99% of the time they give up, no matter how hard they studied before. I like to call this phenomenon the “learning window”.

Building a website after learning Vue.js is such a “learning window” for many people. Once they step outside the “learning window”, they no longer want to learn: I have already learned so much, and now I still have to learn deployment?

So in this article, I'd like to share a little about Docker deployment.

Requirements

In accordance with international practice, let's start with a very simple requirement that does only two things:

  • Show to-do list + Add a to-do
  • Record the number of visits to your website

The above is a Todo List app that couldn't be more classic.

Analyzing the requirements: the to-do list needs a database, and recording site visits needs a cache.

Technology selection

My current front-end stack is React, so the front end uses React.

Since Express has its own scaffolding, the back end uses Express.

As for the database: since I'm on an M1 Mac and the mysql image can't be pulled there, I'll use mariadb instead for now.

We’re all familiar with caching, so just use Redis.

The front-end implementation

The implementation of the front end is very simple, making requests using Axios.

import axios from 'axios';
import { useEffect, useState } from 'react';

interface Todo {
  id: number;
  title: string;
  status: 'todo' | 'done';
}

const http = axios.create({
  baseURL: 'http://localhost:4200',
});

const App = () => {
  const [newTodoTitle, setNewTodoTitle] = useState<string>('');
  const [count, setCount] = useState(0);
  const [todoList, setTodoList] = useState<Todo[]>([]);

  // Add a todo
  const addTodo = async () => {
    await http.post('/todo', {
      title: newTodoTitle,
      status: 'todo',
    });
    await fetchTodoList();
  };

  // Increment the page-view count, then fetch it
  const fetchCount = async () => {
    await http.post('/count');
    const { data } = await http.get('/count');
    setCount(data.myCount);
  };

  // Fetch the todo list
  const fetchTodoList = async () => {
    const { data } = await http.get('/todo');
    setTodoList(data.todoList);
  };

  useEffect(() => {
    fetchCount();
    fetchTodoList();
  }, []);

  return (
    <div className="App">
      <header>Website visits: {count}</header>

      <ul>
        {todoList.map(todo => (
          <li key={todo.id}>{todo.title} - {todo.status}</li>
        ))}
      </ul>

      <div>
        <input value={newTodoTitle} onChange={e => setNewTodoTitle(e.target.value)} type="text"/>
        <button onClick={addTodo}>submit</button>
      </div>
    </div>
  );
};

export default App;

The backend implementation

The back end is a little more troublesome; the problems to solve are:

  • Cross-origin requests (CORS)
  • Database connection
  • Redis connection

Configure the route in main.ts:

var express = require('express');
var cors = require('cors');

var indexRouter = require('./routes/index');
var countRouter = require('./routes/count');
var todoRouter = require('./routes/todo');

var app = express();

// Allow cross-origin requests
app.use(cors());

// Parse JSON request bodies (POST /todo reads req.body)
app.use(express.json());

// Business routes
app.use('/', indexRouter);
app.use('/count', countRouter);
app.use('/todo', todoRouter);

module.exports = app;

The count route uses Redis for high-speed reads and writes:

const express = require('express');
const Redis = require('ioredis');

const router = express.Router();

// Connect to Redis
const redis = new Redis({
  port: 6379,
  host: '127.0.0.1',
});

// Get the current page-view count
router.get('/', async (req, res, next) => {
  const count = Number(await redis.get('myCount')) || 0;

  res.json({ myCount: count });
});

// Increment the page-view count
router.post('/', async (req, res) => {
  const count = Number(await redis.get('myCount'));
  await redis.set('myCount', count + 1);
  res.json({ myCount: count + 1 });
});

module.exports = router;
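A small aside on the `Number(...)` coercion above: when the `myCount` key does not exist yet, ioredis returns `null`, and JavaScript happens to coerce `null` to `0`, while `undefined` coerces to `NaN` — which is what the `|| 0` guard protects against. This can be checked with plain Node, no Redis required:

```javascript
// How Number() treats the values a Redis GET can produce:
// a numeric string, or null when the key does not exist.
console.log(Number('42'));           // 42
console.log(Number(null));           // 0   (a missing key reads as zero)
console.log(Number(undefined));      // NaN
console.log(Number(undefined) || 0); // 0   (what the `|| 0` guard gives us)
```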

The todo route uses the sequelize library to connect to and initialize the database:

const { Sequelize, DataTypes } = require('sequelize');
const express = require('express');

const router = express.Router();

// Connect to the database
const sequelize = new Sequelize({
  host: 'localhost',
  database: 'docker_todo',
  username: 'root',
  password: '123456',
  dialect: 'mariadb',
});

// Define the Todo model
const Todo = sequelize.define('Todo', {
  id: {
    type: DataTypes.INTEGER,
    autoIncrement: true,
    primaryKey: true
  },
  title: { type: DataTypes.STRING },
  status: { type: DataTypes.STRING }
}, {});

// Synchronize the database structure
// (note: force: true drops and recreates the tables on every start)
sequelize.sync({ force: true }).then(() => {
  console.log('Synchronized');
});

// Get the todo list
router.get('/', async (req, res) => {
  const todoList = await Todo.findAll();
  res.json({ todoList });
});

// Create a todo
router.post('/', async (req, res, next) => {
  const { title, status } = req.body;

  const newTodo = await Todo.create({
    title,
    status: status || 'todo',
  });
  res.json({ todo: newTodo });
});

module.exports = router;

Run locally

In theory, the local application could be run with the following commands:

# the front-end
cd client && npm run start

# the back-end
cd server && npm run start

However, we don’t have mariadb and Redis locally, which makes it a little harder.

Start the container

In the old days, I would have installed mariadb and redis on my Mac with the following command:

brew install mariadb

brew install redis

Then I'd do a round of configuration on my own machine (username, password…) and finally get the project running locally. Very troublesome. And once the configuration went wrong, damn, it all had to be reinstalled…

One of Docker's roles is to split things like mariadb and Redis into separate images, managed centrally on DockerHub. With Docker, you can configure a service very quickly.

I used to be able to have only one MySQL per computer, but now I can run 8 MySQL containers at the same time (on different ports), deleting whichever I want and installing whichever I want. If one acts up, first restart the container; if restarting doesn't help, rebuild a container from the image; if that doesn't help, pull the latest image and build again. Very exciting.

Without further ado, let’s get Redis started:

docker run --name docker-todo-redis -p 6379:6379 -d redis

Then start mariadb again:

docker run -p 127.0.0.1:3306:3306 --name docker-todo-mariadb -e MARIADB_ROOT_PASSWORD=123456 -e MARIADB_DATABASE=docker_todo -d mariadb

The -p parameter maps ports (host:container), -e sets environment variables, and -d runs the container in the background.

Run again:

# the front-end
cd client && npm run start

# the back-end
cd server && npm run start

You can see the page at http://localhost:3000:

Everything seems to be fine

docker-compose

Imagine being handed a fresh machine right now: how would you deploy this? You'd have to run the two docker commands above first, and then the two npm commands. Seriously?

Can the mariadb and Redis containers be pulled up with one click? That's where docker-compose.yml comes in. Create a file called dev-docker-compose.yml:

version: '3'
services:
  mariadb:
    image: mariadb
    container_name: 'docker-todo-mariadb'
    environment:
      MARIADB_ROOT_PASSWORD: '123456'
      MARIADB_DATABASE: 'docker_todo'
    ports:
      - '3306:3306'
    restart: always
  redis:
    image: redis
    container_name: 'docker-todo-redis'
    ports:
      - '6379:6379'
    restart: always

The contents of this YML file are equivalent to the two docker commands above, and bring several benefits:

  • You no longer have to hand-type those long, long docker commands
  • The deployment commands are written down in the docker-compose.yml file itself. Q: How do I deploy this? A: Go read docker-compose.yml
  • All related services can be pulled up with one command

From now on, mariadb and Redis can be started with one click when running local services:

docker-compose -f dev-docker-compose.yml up -d

Dockerfile

MariaDB and Redis have ready-made images on DockerHub, but what about our own front-end and back-end apps? Can docker-compose pull those up with one click as well?

Note: in a real production setup you would run npm run build and then serve the built JS; here, to simplify the process, npm run start is used as the example.

It can, but first we have to build our apps into images of their own — and that is exactly what docker-compose will then pull up.

Building an image is essentially the “CI/CD” build pipeline we often talk about, except that the key step of this “pipeline” is a single npm run start. The file that describes the “pipeline” is called a Dockerfile (note: not camelCase).

Note: in a real project, image build and startup are just one part of the overall CI/CD; the metaphor here is loose. Besides running commands and building the application, a project's CI/CD also includes code checks, sensitive-data scanning, releasing, and pushing notifications, which is a more complex process.

React Dockerfile:

# Use the Node image
FROM node

# Prepare the working directory
RUN mkdir -p /app/client
WORKDIR /app/client

# Copy package.json
COPY package*.json /app/client/

# Install dependencies
RUN npm install

# Copy the source files
COPY . /app/client/

# Start the dev server
CMD ["npm", "run", "start"]

It is very simple, but it is important to note that the container can also be viewed as a computer within a computer, so it is necessary to copy the files from your computer into the “container computer”.
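As the earlier note says, a real production image would run npm run build and serve the built output. Purely as a reference sketch (not used in this article's simplified flow; serving the build with nginx is my own choice, not part of the project), a multi-stage build might look like:

```
# Build stage: install dependencies and build the React app
FROM node AS build
WORKDIR /app/client
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Serve stage: serve the static build with nginx
FROM nginx
COPY --from=build /app/client/build /usr/share/nginx/html
```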

The Express App Dockerfile looks pretty much the same:

# Use the Node image
FROM node

# Initialize the working directory
RUN mkdir -p /app/server
WORKDIR /app/server

# Copy package.json
COPY package*.json /app/server/

# Install dependencies
RUN npm install

# Copy the source files
COPY . /app/server/

# Start the dev server
CMD ["npm", "run", "start"]
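One practical detail (an addition of mine, not part of the original setup): since COPY . copies the whole directory into the image, a .dockerignore file next to each Dockerfile keeps the host's node_modules and other noise out of the build context:

```
node_modules
npm-debug.log
.git
```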

Now let's make a prod-docker-compose.yml file:

version: '3'
services:
  client:
    build:
      context: ./client
      dockerfile: Dockerfile
    container_name: 'docker-todo-client'
    # Expose port
    expose:
      - 3000
    # Port mapping
    ports:
      - '3000:3000'
    depends_on:
      - server
    restart: always
  server:
    # Build directory
    build:
      context: ./server
      dockerfile: Dockerfile
    # container name
    container_name: 'docker-todo-server'
    # Expose port
    expose:
      - 4200
    # Port mapping
    ports:
      - '4200:4200'
    restart: always
    depends_on:
      - mariadb
      - redis
  mariadb:
    image: mariadb
    container_name: 'docker-todo-mariadb'
    environment:
      MARIADB_ROOT_PASSWORD: '123456'
      MARIADB_DATABASE: 'docker_todo'
    ports:
      - '3306:3306'
    restart: always
  redis:
    image: redis
    container_name: 'docker-todo-redis'
    ports:
      - '6379:6379'
    restart: always

None of the configuration above should be hard to understand; however, a few details deserve attention:

  • Ports must be both exposed (expose) and mapped (ports); otherwise ports 3000 and 4200 cannot be reached from the host
  • depends_on ensures the mariadb and redis containers are started along with (and before) the server

Then run the following command to start with one click:

docker-compose -f prod-docker-compose.yml up -d --build

The trailing --build flag tells docker-compose to rebuild the images on every run.

However, Boom:

ConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:3306...

Why can't it connect?

Fixing the connection failure

The reason for the failure is that we used localhost and 127.0.0.1.

Although every container lives on our host's 127.0.0.1, containers talk to each other over Docker's internal network, where, according to the official documentation, the container name is how one container addresses another's IP.

So inside the Express app, the hosts cannot be 127.0.0.1; they have to be docker-todo-redis and docker-todo-mariadb. We use the NODE_ENV environment variable to distinguish whether the app was started with Docker or not.

Modify mariadb connection:

// Connect to the database
const sequelize = new Sequelize({
  host: process.env.NODE_ENV === 'docker' ? 'docker-todo-mariadb' : '127.0.0.1',
  database: 'docker_todo',
  username: 'root',
  password: '123456',
  dialect: 'mariadb',
});

Change the connection to Redis:

const redis = new Redis({
  port: 6379,
  host: process.env.NODE_ENV === 'docker' ? 'docker-todo-redis' : '127.0.0.1',
});
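The same branch appears in both connections, so it can be factored into a tiny helper — a sketch (the resolveHost name is my own, not from the project):

```javascript
// Pick a service host depending on whether the app runs inside Docker.
// On the Docker network the container name resolves to the peer container;
// outside Docker we fall back to the local loopback address.
function resolveHost(containerName, env = process.env) {
  return env.NODE_ENV === 'docker' ? containerName : '127.0.0.1';
}

console.log(resolveHost('docker-todo-mariadb', { NODE_ENV: 'docker' })); // docker-todo-mariadb
console.log(resolveHost('docker-todo-redis', {}));                       // 127.0.0.1
```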

Add NODE_ENV=docker to /server/Dockerfile:

# Use the Node image
FROM node

# Initialize the working directory
RUN mkdir -p /app/server
WORKDIR /app/server

# Copy package.json
COPY package*.json /app/server/

# Tell the app it is running inside Docker
ENV NODE_ENV=docker

# Install dependencies
RUN npm install

# Copy the source files
COPY . /app/server/

# Start the dev server
CMD ["npm", "run", "start"]

Now run our one-click “start” command again to bring up the production environment:

docker-compose -f prod-docker-compose.yml up -d --build

conclusion

In a word: a Dockerfile is used to build a Docker image, much like the CI/CD pipelines we usually encounter, while docker-compose is used to “pull up” N containers at once.

The whole example is on GitHub, so you can clone it and play around with it.