1 Development Stage

1.1 Microservice project architecture design

The user initiates a request from the client, and the Zuul gateway dispatches it. The Zuul gateway looks up the App service in the Eureka registry and forwards the request to it; the App service returns the result to the Zuul gateway, which sends it back to the client. Zipkin monitors the status of the Zuul gateway and the App services.

1.2 Creating a SpringBoot Project

Initialize the project scaffolding: https://start.spring.io/

Configure project information and required dependencies: Web, JPA, H2 Database, Thymeleaf, Eureka Discovery Client.

1.3 Creating a Service Registry Service

Initialize the project scaffolding: https://start.spring.io/

Configure project information and required dependencies: Eureka Server.

Configure Eureka in the application.yml file

server:
    port: 8761
eureka:
    numberRegistrySyncRetries: 1
    instance:
        preferIpAddress: false
        hostname: eureka-server

Annotate the startup class with @EnableEurekaServer to declare it a Eureka server.
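A minimal sketch of the registry's startup class, assuming the standard Spring Cloud Netflix packages (the class name is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer // turns this Spring Boot application into a Eureka registry
public class EurekaServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(EurekaServerApplication.class, args);
    }
}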

1.4 Creating a Gateway Service

A gateway service is an essential part of a microservice architecture. The gateway maps paths to routes for the microservices, providing a unified entry point for calling them. Its main functions include service routing, service authentication, load-balanced scheduling, and security management.

Use of the Spring Cloud Zuul gateway

  1. Route by service ID: services register with Eureka, and Zuul obtains the serviceId from Eureka (load balancing is handled by the Ribbon) and proxies requests to the service.
zuul:
    routes:
        guestbook:
            path: /**
            serviceId: guestbook-service
  2. Route by URI: Zuul forwards HTTP requests directly to the configured URI.
zuul:
    routes:
        guestbook:
            path: /**
            uri: http://service:8080/guestbook/
  3. Annotate the startup class with @EnableZuulProxy (a sketch of the startup class follows this list).
  4. After the gateway is configured, requests that pass through it are forwarded to the service registered under the serviceId in the service registry.
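A minimal sketch of the gateway's startup class, assuming the Spring Cloud Netflix Zuul dependency (the class name is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

@SpringBootApplication
@EnableZuulProxy // enables proxying by serviceId or uri as configured above
public class GatewayServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(GatewayServiceApplication.class, args);
    }
}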

1.5 Microservice link tracing

Microservices have long invocation chains, typically from the gateway to service A, then to service B, and then to the database. If a service fails, it is difficult to locate the fault. Common open-source components for service link tracing include Zipkin, Pinpoint, and SkyWalking.

1.5.1 Principles of Zipkin

Spring Cloud Sleuth is injected into each service. It is a component for delivering microservice events: it pushes service logs, requests, and messages to the Zipkin server over HTTP. Zipkin's Collector receives the messages sent by Spring Cloud Sleuth and stores them as spans. The Query Server searches the stored spans and presents them to the client through the UI.

1.5.2 Using Zipkin

  1. Start the Zipkin service with Docker: docker run --name zipkin -d -p 9411:9411 openzipkin/zipkin
  2. Map the zipkin-server address locally to localhost
  3. Configure Zipkin's base-url in the App service
spring:
    application:
        name: app-service
    zipkin:
        base-url: http://zipkin-server:9411

2 Packaging Stage

2.1 Apache Maven

Maven is used for building, dependency management, and packaging and distribution of Java projects. It removes the need to keep dependencies in a libs directory, which greatly reduces project size. Compared with Ant, Maven declares dependencies in a POM file and downloads them from the central Maven repository, ensuring consistency of dependencies.

2.1.1 Maven private servers

The developer declares dependency versions in the POM file. The Maven private server proxies Maven Central, where the official dependency packages are stored. When the POM needs a dependency, the private server pulls it from the central repository and keeps it as a local cache; later requests for the same package are served directly from the cache without contacting the central repository. Build artifacts can also be uploaded to the private server and pulled into production with wget.

2.1.2 Life Cycle and Commands

  • mvn clean: removes the build output (the target directory)
  • mvn compile: compiles the source code
  • mvn package: packages the compiled code (for example, into a JAR)
  • mvn test: runs the tests
  • mvn install: installs the artifact into the local repository
  • mvn deploy: uploads the artifact to a remote repository (the private server)

The Maven build creates a target directory: classes holds the .class bytecode produced by the build, surefire-reports holds the test results, and the generated JAR packages are placed alongside them.

2.1.3 Structure of POM.xml

  • project: top-level element
    • groupId: the group ID of the project
    • artifactId: the artifact ID of the project
    • modules: the submodules
    • dependencies: the declared dependencies
      • dependency
        • groupId
        • artifactId

2.2 Artifacts

2.2.1 Snapshot

By default, a Snapshot version number is made unique with a timestamp. Packages with the same Snapshot version number can be deployed to the Maven private server repeatedly.

2.2.2 Release

If a Maven private server already has a Release, an error will be reported if you try to deploy a package with the same version number. You need to upgrade the version number. If you rely on third-party JAR packages, try to use their Release version.

2.3 Setting up Maven Private Server

A Maven private server acts as a proxy for remote Maven repositories, speeding up dependency downloads, and serves as a local cache for other developers. Mainstream open-source options include JFrog Artifactory OSS and Nexus.

2.3.1 Setting up JFrog Artifactory OSS

You can use Docker:

docker search artifactory-oss
docker pull goodrainapps/artifactory-oss
docker run --name artifactory-oss-6.18.1 -p 8083:8081 docker.bintray.io/jfrog/artifactory-oss:6.18.1

The service is mapped to port 8083 on the host.

You can use installation packages

https://jfrog.com/open-source/

Local access: localhost:8083. Default user name and password: admin/password

2.3.2 Maven Repository Structure

Artifactory provides a virtual repository that aggregates the local repository and the remote repository. The remote repository proxies dependencies from the public Maven source. If a dependency already exists locally it is served from the local cache; otherwise it is fetched through the remote repository. Locally built JAR packages are uploaded to the local repository, and the production environment can pull them from there.

Creating a remote repository

  1. Remote repositories are used to broker remote dependencies
  2. The default address is jcenter.bintray.com
  3. Maven builds will use this remote repository as a proxy

Creating a local repository

Create a local repository for storing locally constructed JAR packages

Create a virtual repository

The virtual repository is used to aggregate the previous local and remote repositories, and only the virtual repository needs to be accessed later

Download dependencies through the Artifactory proxy

  1. Place the configuration file generated by Artifactory in ~/.m2/settings.xml (remember to change the user name and password).
  2. Run mvn package; the build will download its dependencies from Artifactory.

Generate the upload configuration for the Artifactory repository

  1. Select the repository and click Set Me Up
  2. View the deployment settings
  3. Copy the settings into the POM file
  4. Run mvn deploy

Uploading artifacts

<distributionManagement>
    <repository>
        <id>central</id>
        <name>1692dcdbda59-releases</name>
        <url>http://192.168.3.180:8083/artifactory/maven-local</url>
    </repository>
    <snapshotRepository>
        <id>snapshots</id>
        <name>1692dcdbda59-snapshots</name>
        <url>http://192.168.3.180:8083/artifactory/maven-local</url>
    </snapshotRepository>
</distributionManagement>

3 Continuous Integration

Jenkins is open-source continuous integration software used for unified building of software projects, integrating code scanning, automated testing, and automated deployment, and avoiding repetitive manual build and deployment work. Jenkins has strong integration capabilities, with hundreds of plug-ins, pipeline as code, and support for multi-platform deployment (Windows, Linux, Docker, etc.).

Jenkins core concepts

  • Project: You can choose from a variety of Project types, including free-style projects, Maven projects, pipelines, and external task calls.
  • Build
  • Workspace: The actual working directory for the build task, where code and intermediate files can be stored.
  • Credentials: Used to manage sensitive user information, including user names, passwords, SSH keys, and certificates.

You can install the necessary plug-ins: Artifactory, SonarQube Scanner for Jenkins, Ansible, Kubernetes.

3.1 Creating a pipeline instance

The advantage of a pipeline is that the pipeline is code: it can be stored in a Git repository, is independent of the environment that runs the pipeline task (not bound to a particular Jenkins slave node), can be triggered through an API, and is easy to integrate with third parties.

Pipelining supports two syntax types:

  • Scripted
    • Functionality is implemented with Groovy scripts, which is very flexible
    • Low learning cost and easy to popularize
  • Declarative
    • Structured writing through predefined sections
    • More limited functionality, but more standardized

Write pipeline script

Scripted

// Run on a work node
node {
    def mvnHome
    // Pull the source code
    stage('Pull the source code') {
        git 'https://github.com/xx/xxxxx.git'
        mvnHome = tool 'maven'
    }
    // Enter the project directory and build
    dir('notebook') {
        stage('Maven build') {
            // Run the mvn command
            sh ''' mvn deploy '''
        }
    }
}

Declarative

pipeline {
    // Specify the agent
    agent { docker 'maven:3-alpine' }
    stages {
        stage('Example Build') {
            steps {
                // Execute the mvn command
                sh 'mvn -B clean verify'
            }
        }
    }
}

3.2 Jenkins Pipeline integrates Artifactory

Install the Jenkins Artifactory plugin

The plug-in (.hpi) can be installed online or offline. If the online installation times out, download the .hpi file yourself and install it offline.

Configure the Artifactory plug-in

After Jenkins restarts, open http://192.168.3.155:8888/configure to complete the Artifactory configuration and set up the Artifactory credentials.

Use the Artifactory in the pipeline

3.3 Jenkins Pipeline integrates Jira

  1. Install the Jenkins Jira plugin
  2. Configure the Jira credentials
  3. Configure the Jira plug-in
  4. When committing code, include the Jira task ID: git commit -a -m "TASK-3 XXXX". TASK-3 is the corresponding Jira task; clicking it jumps to Jira for confirmation.
  5. View the task ID in the build result

3.4 Jenkins integration with Sonarqube

Sonarqube is a source code scanning tool for software projects that identifies code quality problems, vulnerabilities, and code duplication in order to improve code quality. It works by scanning the source code against a local rule library; whenever a rule is hit, an issue is created to prompt the developer to fix it.

  1. Deploy Sonarqube server with Docker, default username and password is admin/admin
docker pull library/sonarqube:lts
docker run -d -p 9000:9000 sonarqube
  2. Create a project in the Sonarqube server to generate a token that will be used in subsequent calls
  3. Call Sonarqube in the pipeline
stage('Code Scanning') {
    sh """ mvn sonar:sonar -Dsonar.host.url=http://127.0.0.1:9000 -Dsonar.login=${token} """
}

3.5 Jenkins integrates YAPI

YAPI is an efficient, easy-to-use, and powerful API management platform with automated interface testing.

YAPI requires Node.js >= 7.6 and MongoDB >= 2.6. Clone the code from GitHub, copy the config_example.json file and modify the relevant configuration, then install the npm dependencies: npm install --production --registry https://registry.npm.taobao.org. Run the installation service: npm run install-server; the installer initializes the database indexes and the administrator account, which can be configured in config.json. Start the server with node server/app.js and access 127.0.0.1:${port}.

Declare the command executed in pipeline

stage('API Testing') {
    sh ''' curl "https://localhost:3000/api/open/run_test?id=1&token=" '''
}

3.6 Jenkins integrates with Selenium

Introduce Selenium dependencies in the SpringBoot project and invoke Selenium WebDriver in unit tests.

4 Continuous testing

4.1 Creating Selenium Test cases

@Test
public void listPageChromeUItest() throws InterruptedException {
    WebDriver driver = new ChromeDriver(); // Create the ChromeDriver
    WebDriverWait wait = new WebDriverWait(driver, 10);
    try {
        String serverUrl = HTTP_LOCALHOST + port + PATH_LIST;
        driver.get(serverUrl);
        // Simulate the click event
        driver.findElement(By.id(SIGNUP_BTN_ID)).sendKeys(Keys.ENTER);
        WebElement firstResult = wait.until(
            presenceOfElementLocated(
                By.cssSelector(ADD_NOTE_BTN_ID))); // Get the return result
        String actualBtnValue = firstResult.getAttribute(VALUE_BTN_KEY);
        System.out.println(actualBtnValue);
        Assert.assertEquals(ADD_NOTE_BTN_VALUE, actualBtnValue); // Verify the result
    } finally {
        driver.quit();
    }
}

5 Continuous Deployment

5.1 Ansible

Ansible is an automated operations tool developed in Python for configuration management and application deployment. It solves the problem of deploying applications and configuration to servers in batches. Ansible is a lightweight automation tool with consistency, high reliability, and security. It provides convenient host group management and variable injection for per-host customization, and Playbooks provide task orchestration that greatly improves script reuse. Ansible Galaxy gives access to playbooks shared by the open-source community, avoiding reinventing the wheel.

5.1.1 Core Concepts

  1. Control node: the machine with Ansible installed, from which commands and playbooks are run. Almost any machine with Python can run Ansible (laptops, shared desktops, servers), but a Windows machine cannot be used as a control node.
  2. Managed nodes: the devices and machines managed by Ansible, also called hosts. Ansible is not installed on managed nodes.
  3. Inventory: a list of managed nodes, also known as the hostfile. An inventory file can specify the IP addresses of managed machines, organize nodes into groups, and make it easy to operate on many machines at once.
  4. Modules: Ansible executes specific commands through modules. Each module has a specific purpose. You can call a single module in a task or several modules in a playbook.
  5. Tasks: the unit of work in Ansible; an ad-hoc command executes a single task.
  6. Playbooks: task orchestration scripts for regularly deploying a series of tasks in batches. Playbooks can contain variables as well as tasks; they are written in YAML and are easy to write and share.

5.1.2 Ansible installation

  1. Installing Python
  2. pip3 install --user ansible

5.1.3 Ansible Configuration

Create a basic inventory table (in INI format) using the vi /etc/ansible/hosts command

xxx.xxx.com
[webservers]
xxx.xxx.com
[dbservers]
xxx.xxx.com

Ansible has two default groups:

  • All: contains All hosts
  • Ungrouped: hosts that were not included in any group

Advanced naming of Hosts

[webservers]
www[01:50].example.com

5.1.4 Passwordless Login Configuration for Remote Hosts

Once the Python environment and Ansible are installed on the local host, prepare a remote host [email protected] (using master1 as the remote host in the experiment)

  1. Generate an SSH key on the management node: ssh-keygen creates the key files id_rsa and id_rsa.pub under ~/.ssh/
  2. Copy the current host's SSH public key to the remote host (replace the IP address with the remote one): ssh-copy-id [email protected]
  3. Save the remote host's key to the current host's known_hosts: ssh-keyscan prod.server >> ~/.ssh/known_hosts
  4. ssh [email protected] now logs in to the remote host without a password

5.1.5 Writing Ansible Commands

Add the remote server address to the Ansible inventory (cat /etc/ansible/hosts):

[prod]  # production environment group
prod.server

ansible all -m ping -u root: all means the command is sent to every host in the inventory, pinging them as the root user.

ansible prod -m copy -a "src=note-service-1.0.jar dest=/tmp/" -u root: this command copies the file to every host in the prod group, with src as the source file and dest as the destination path. Batch deployment can be implemented this way; all target hosts simply need to be added to the prod group.

Ansible ad-hoc commands

The Ansible ad-hoc command uses the /usr/bin/ansible command line tool to automate a single task to one or more target machines. Ad-hoc tasks can be used to restart machines, copy files, manage packages and users, and more. The Ansible module can be executed in ad-hoc commands.

  • Manage users and groups: ansible all -m user -a "name=foo state=absent"
  • Manage services: ansible webservers -m service -a "name=httpd state=started"
  • Gather system facts: ansible all -m setup
  • Restart servers: ansible beijing -a "/sbin/reboot"
  • Manage files: ansible beijing -m copy -a "src=/etc/hosts dest=/tmp/hosts"
  • Manage packages: ansible beijing -m yum -a "name=acme state=latest"

5.2 the Playbook

Playbooks are another way to use Ansible, for orchestrating and executing repetitive tasks, and they support complex deployment scenarios with variables, loops, and conditions. A playbook can be used to declare configuration. In a playbook you can orchestrate execution, run specific steps back and forth between groups of machines, and launch tasks synchronously or asynchronously.

A playbook uses YAML syntax and is a declarative script; it is deliberately designed so that playbooks do not become a programming language. In a play, a set of machines is mapped to defined roles; the units of work inside a play are called tasks, and a task is a call to an Ansible module.

5.2.1 Playbook Basics

  • Hosts and Users
    • The content of the hosts line is the patterns for one or more hosts
    • Each task can define its own remote users
- hosts: webservers
  remote_user: root
  tasks:
    - name: test connection
      ping:
      remote_user: root

5.2.2 Running the Playbook

ansible-playbook ping-playbook.yml

- hosts: prod
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
    - name: ping
      ping:

5.2.3 Creating reusable PlayBooks

When your PlayBook application becomes very large, you need to consider modular splitting. There are three ways to split: Import, Include, and Roles. Import and Include are used to reuse playBook modules. Roles allows tasks, variables, handlers, modules, and other plug-ins to be combined. Roles can be uploaded to Ansible Galaxy for sharing on the public network.

Import Playbook

- name: Include a play before this play
  import_playbook: loop-playbook.yml
  
- hosts: prod
  remote_user: root
  tasks:
  - debug:
      msg: "Main entrance of import"
    loop: "{{ groups['all'] }}"

loop-playbook.yml: loops over and prints information about all hosts

- hosts: prod
  remote_user: root
  tasks: 
    - debug:
        msg: "{{ item }}"
      loop: "{{ groups['all'] }}"

5.2.4 Using variables

Define variables

- hosts: webservers
  vars:
    http_port: 80

Reference a variable: template: src=foo.cfg.j2 dest={{ remote_install_path }}/foo.cfg

Use system facts: debug: var=ansible_facts, and reference individual facts such as {{ ansible_facts['devices']['xvda']['model'] }}

5.2.5 Conditional syntax

tasks:
  - name: 'close all debian os machine'
    command: /sbin/shutdown -t now
    # condition
    when: ansible_facts['os_family'] == "Debian"

5.2.6 Roles

Roles are used to automatically load default variables, tasks, and so on. A role can include tasks, handlers, defaults, vars, files, templates, and meta.

If a role directory contains main.yml, its tasks are loaded into the current play.

5.2.7 Best practices for PlayBook

Using a standard directory structure:

  • production: inventory file for the production environment
  • stage: inventory file for the staging environment
  • group_vars/
    • group1.yml: variables for a particular group
  • host_vars/
    • hostname1.yml: variables for a particular host
  • library/: shared custom modules can go here
  • filter_plugins/: filter plugins can go here
  • site.yml: the main playbook entry point
  • webservers.yml: playbook for the web servers
  • dbservers.yml: playbook for the database servers
  • roles/

Use different files to store the host information of different environments. Host information for the production environment:

# file: production
[bj_webservers]
www-bj-1.example.com
www-bj-2.example.com
[sh_webservers]
www-sh-1.example.com
www-sh-2.example.com

ansible-playbook -i production webservers.yml

Limit the run to the Beijing web servers: ansible-playbook -i production webservers.yml --limit beijing

Roll out the configuration to the Beijing web servers in batches: ansible-playbook -i production webservers.yml --limit beijing[0:9], then ansible-playbook -i production webservers.yml --limit beijing[10:19]

Use roles for isolation in the top-level playbook

# file: site.yml
- import_playbook: webservers.yml
- import_playbook: dbservers.yml

Load the default roles for the webservers group

# file: webservers.yml
- hosts: webservers
  roles:
    - common
    - webtier

Defining roles variables

mysqlservice: mysqld
mysql_port: 3306
dbuser: root
dbname: user
upassword: password

Using the roles variable

- name: create app db
  mysql_db:
    name: "{{ dbname }}"
    state: present
- name: create app db user
  mysql_user:
    name: "{{ dbuser }}"
    password: "{{ upassword }}"
    priv: "*.*:ALL"
    host: '%'
    state: present

5.3 Writing the Playbook for microservices

Copy the JAR package, start it with java -jar, and stop the service 20 seconds later.

notebook-playbook.yml

- hosts: prod
  remote_user: root
  tasks:
    - name: Copy the JAR package to the prod group hosts
      copy: src=notebook-service-1.0.jar dest=/tmp owner=root group=root mode=0644
    - name: Start the jar
      shell: nohup java -jar /tmp/notebook-service-1.0.jar &
    - name: Sleep 20s
      shell: sleep 20
    - name: Stop the service
      shell: kill -9 $(lsof -t -i:1111)

discovery-playbook.yml

- hosts: prod
  remote_user: root
  tasks:
    - name: Copy the JAR package to the prod group hosts
      copy: src=discovery-service-1.0.jar dest=/tmp owner=root group=root mode=0644
    - name: Start the jar
      shell: nohup java -jar /tmp/discovery-service-1.0.jar &
    - name: Sleep 20s
      shell: sleep 20
    - name: Stop the service
      shell: kill -9 $(lsof -t -i:1111)

all-playbook.yml

- name: Import discovery playbook
  import_playbook: discovery-playbook.yml
- name: Import notebook playbook
  import_playbook: notebook-playbook.yml

5.4 Integrating Ansible into Pipeline

Add a stage

stage('Ansible Deploy to remote server') {
    sh 'cp ../Final/notebook.service/target/notebook-service-1.0.jar ./'
    sh 'ansible-playbook notebook-playbook.yml'
}

6 Containerization

Docker is an application container engine. Developers can package an application and its dependencies into a portable Docker image, which can then be distributed at scale and run on any popular Linux or Windows machine.

Docker is more suitable for microservice architecture, with faster startup speed, convenient horizontal expansion, less system resource occupation, quick destruction, and on-demand use.

Compared with virtual machines, containers start in seconds while virtual machines take minutes. Container performance is close to native, while virtual machine performance is weaker. A container occupies only hundreds of MB, while a VM occupies several GB. A physical machine can support hundreds of containers but only dozens of VMs.

A Docker image carries all the dependencies required to run the application: build once, run anywhere. Docker image storage deduplicates layers based on checksums, greatly reducing storage space.

6.1 Docker principle

6.1.1 Docker underlying Linux technology: CGroups

The limits on CPU, memory, and network between Docker containers, and the isolation of system resources, are implemented with CGroups. If a container's CPU is not limited, it can consume 100% of the host's computing resources, leaving other containers with no CPU and causing resource contention. CGroups allocates CPU quotas to each process, guaranteeing the maximum amount of resources available to each container.

Limit a container's CPU with Docker: docker run -it --cpus=".5" XXXX /bin/bash

6.1.2 Docker Process Resource Isolation: Namespace

A host may run multiple containers. If the processes of different containers were not isolated, files could be tampered with by other containers, causing security problems; concurrent writes to shared resources could lead to inconsistency; and resource contention could affect other containers. Container operation therefore requires resource isolation.

A Linux namespace abstracts global operating system resources: processes inside the namespace get their own instances of those resources, which are visible to processes within the namespace but invisible to processes outside it. This resource isolation is widely used in container technology.

The Docker engine uses the following Linux namespaces:

  • pid namespace: process isolation
  • net namespace: network isolation
  • ipc namespace: inter-process communication isolation
  • mnt namespace: file system mount point isolation
  • uts namespace: hostname and domain name isolation (Unix Timesharing System)

On Linux, the clone() system call creates a new process. Called with the appropriate namespace flags, the new process gets an independent process space in which it sees itself as pid=1 and cannot see the other processes on the host.

6.1.3 Docker File System Isolation: Unionfs

Similarly, if file systems are not isolated, files can be tampered with by other containers, causing security problems. Concurrent writes to files cause inconsistencies.

Docker uses Unionfs to mount multiple file directories to a container process so that the container can have its own file system space.

A union mount is created with the mount command: mount -t unionfs -o dirs=/home/fd1,/tmp/fd2 none /mnt/merged-folder. Here none means no device is being mounted. The target directory /mnt/merged-folder ends up containing the contents of both fd1 and fd2; that is, fd1 and fd2 are merged into merged-folder.

In /var/lib/docker/aufs:

  • diff: holds the content of each layer of the Docker images
  • layers: holds the metadata of the Docker images
  • mnt: holds the mount points; each corresponds to an image or layer and describes the layering of images

6.2 Building a Docker image warehouse

6.2.1 Remote Docker repository

Remote Docker image center: hub.docker.com

For example, the MySQL image can be pulled from the image center with docker pull mysql:8.0

6.2.2 Local Docker repository

JCR — JFrog Container Registry

Start the JCR service: docker run --name artifactory-jcr -d -p 8081:8081 -p 8082:8082 docker.bintray.io/jfrog/artifactory-jcr:latest

Access 192.168.3.168:8081, which redirects to 192.168.3.168:8082/ui

Mount a volume with -v to prevent data loss when the container is restarted

Add the JCR registry to the Docker client's list of insecure registries:

Insecure Registry:
    art.local:8081

Configure local domain name resolution by adding 127.0.0.1 art.local to /etc/hosts, then log in: docker login art.local:8081 -uadmin -ppassword

Uploading an image:

  • docker build -t art.local:8081/docker-local/xxxx/yyyy:latest .
  • docker push art.local:8081/docker-local/xxxx/yyyy:latest

Download image:

  • docker pull art.local:8081/docker-local/xxxx/yyyy:latest

6.3 Dockerfile

A Docker image is built from a Dockerfile. The Dockerfile places the executables required to run the software into the image, and they are started through the ENTRYPOINT.

Related instructions:

  • RUN: Installs applications and software packages and builds an image
  • CMD: Runs a command to execute a task
  • ADD: Adds some files inside the image
  • ENTRYPOINT: the ENTRYPOINT for container startup

Dockerfile instances:

FROM azul/zulu-openjdk-alpine
MAINTAINER XX <[email protected]>
ADD target/xxxx.jar xxxx.jar
ENTRYPOINT ["java", "-jar", "/xxxx.jar"]
EXPOSE 1111


Best practices:

  1. Avoid installing unnecessary packages.
  2. Each container should address only one concern: front end, back end, or database.
  3. Minimize the number of layers.
  4. Obtain dependency libraries from the artifact repository instead of building them inside the container, to speed up image builds.
  5. Use official images whenever possible, and use a Debian image as the base image, since it is tightly controlled and kept minimal while still being a full distribution.
  6. Instead of ADDing a remote file, retrieve it with curl or wget.
  7. Use ENV to inject variables into the container.
  8. Use WORKDIR instead of RUN cd ... && ... when switching directories in a container.

6.4 Docker runs multiple microservices

Use Docker Link to manage services:

docker run --name notebook-service -d -p 1111:1111 \
    --link discovery-service:eureka-server \
    --link zipkin-service:zipkin-server \
    art.local:8081/docker/notebook-k8s/notebook-service:latest

Run the script to start multiple microservice containers: ./runDocker.sh

docker run --name discovery-service -d -p 8761:8761 art.local:8081/docker/notebook-k8s/discovery-service:latest
docker run --name zipkin-service -d -p 9411:9411 art.local:8081/docker/notebook-k8s/zipkin-service:latest
sleep 10
docker run --name notebook-service -d -p 1111:1111 --link discovery-service:eureka-server --link zipkin-service:zipkin-server art.local:8081/docker/notebook-k8s/notebook-service:latest
sleep 10
docker run --name gateway-service -d -p 8765:8765 --link discovery-service:eureka-server --link zipkin-service:zipkin-server art.local:8081/docker/notebook-k8s/gateway-service:latest

Stop the services: ./stopDocker.sh

docker stop zipkin-service
docker stop gateway-service
docker stop notebook-service
docker stop discovery-service

sleep 10

docker rm zipkin-service
docker rm gateway-service
docker rm notebook-service
docker rm discovery-service

6.5 Docker image upgrade

You can promote an image from dev to release by selecting it in the image repository.

You can also promote a Docker image with curl:

HTTP POST: http://192.168.3.180:8082/ui/api/v1/ui/artifactions/copy
{
    "repoKey": "docker-dev-local",
    "path": "notebook-k8s/notebook-service/latest",
    "targetRepoKey": "docker-release-local",
    "targetPath": "notebook-k8s/notebook-service/latest"
}

7 Kubernetes

7.1 Setting up Minikube

Building a full Kubernetes cluster is complex and resource-intensive; it is difficult for developers to prepare the required resources, and they often lack the expertise to set one up.

Minikube is officially maintained Kubernetes practice environment, with most of Kubernetes native functions, small resource occupation.

Download: curl -Lo minikube http://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/releases/v1.2.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

Startup: minikube start --cpus 4 --memory 8192

Start the JCR Docker registry and configure it as an insecure private registry by adding it to the Docker engine's insecure whitelist in Minikube: in ~/.minikube/machines/minikube/config.json, add art.local:8081 (the private registry address) to EngineOptions.InsecureRegistry.

Add domain name resolution for JCR inside Minikube:

minikube ssh
su
echo "192.168.3.180 art.local" >> /etc/hosts
docker login art.local:8081 -uadmin -ppassword

Common kubectl commands:

  • kubectl create -f filename: create one or more resources
  • kubectl apply -f filename: apply changes to a resource
  • kubectl exec POD command: execute a command in a pod
  • kubectl get type: list resources
  • kubectl logs POD: print the logs of the containers in a pod
  • kubectl top: show resource consumption in the cluster
  • kubectl describe type: show detailed information about a resource
  • kubectl delete -f pod.yaml: delete Kubernetes objects

7.2 Related Concepts

7.2.1 Namespaces

Kubernetes supports virtual isolation of a cluster through namespaces, making it possible to create multiple non-interfering Kubernetes environments on a single physical cluster, for example separate environments for testing and production. Namespaces can also be used to apply per-user resource quotas.

K8S default namespaces:

  • default: the default namespace
  • kube-system: the namespace used by the Kubernetes system itself
  • kube-public: an automatically created namespace, visible to all users, for resources that need to be shared globally

Create a namespace: kubectl create namespace test. List namespaces: kubectl get namespace. Run a pod in a namespace: kubectl run nginx --image=nginx --namespace=dev

7.2.2 Pod

A Pod consists of one or more containers that share storage and network and follow a consistent operating model. The containers in a Pod are always created together, run together, and run in the same environment. One or more tightly coupled applications can live inside a single Pod.

Pod lifecycle:

  • Pending: the Pod has been accepted by K8S, but at least one container has not been created yet
  • Running: the Pod has been assigned to a node and is running normally
  • Succeeded: all containers terminated successfully
  • Failed: all containers have terminated, and at least one terminated in failure
  • Unknown: the state of the Pod cannot be obtained (communication lost)

Create a pod: kubectl run nginx-app --image=nginx --port=80. Enter a pod: kubectl exec -it podname /bin/sh

Pod and Container:

  • One-container-per-pod: Each Pod has only one container
  • One Pod with multiple containers: all containers share the network and storage, which suits scenarios where tightly coupled containers need to share resources.

There are three types of Pod controllers:

  • Deployment: Maintains pod Deployment and expansion within the cluster
  • StatefulSet: Maintains the deployment and expansion of stateful PODS in a cluster
  • DaemonSet: Ensure that copies of pod are running on each node in the cluster

7.2.3 Service

A Service in K8S is a REST object. Like a Pod, it is a logical abstraction that describes a policy for accessing a set of Pods, associating Pod objects through a selector.

The definition below creates a Service: through the my-service object, K8S forwards requests arriving on port 80 to port 9376 of all Pods selected by app: MyApp, according to its scheduling policy.

apiVersion: v1
kind: Service
metadata:
    name: my-service
spec:
    selector:
        app: MyApp
    ports:
        - protocol: TCP
          port: 80
          targetPort: 9376

A Service can be exposed in the following ways:

  • ClusterIP: accessed through the cluster-internal IP address
  • NodePort: exposed through a port opened on each node
  • LoadBalancer: exposed through a load balancer
  • ExternalName: mapped to an external domain name

A Pod discovers the services it needs inside the cluster in two ways (a usage sketch follows the list):

  • Environment variables (env vars): when a Pod runs on a node, kubelet injects the active Services into the container as environment variables.
  • DNS: a cluster-aware DNS server, CoreDNS.
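A hedged sketch of how application code inside the cluster could locate my-service (defined above); the environment variable names follow the SERVICE_HOST/SERVICE_PORT convention kubelet uses, and the default namespace is assumed:

// In-cluster service discovery from application code; names are illustrative.
public class ServiceDiscoveryExample {
    public static void main(String[] args) {
        // Option 1: environment variables injected by kubelet
        // (the Service must exist before the Pod starts)
        String host = System.getenv("MY_SERVICE_SERVICE_HOST");
        String port = System.getenv("MY_SERVICE_SERVICE_PORT");
        System.out.println("Via env vars: http://" + host + ":" + port);

        // Option 2: cluster DNS (CoreDNS): <service>.<namespace>.svc.cluster.local
        System.out.println("Via DNS: http://my-service.default.svc.cluster.local:80");
    }
}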

7.2.4 Scheduling between Service and Pod

User Space Proxy mode:

  • SessionAffinity
  • Random selection

Iptables Proxy mode:

  • ClusterIP and port
  • According to the Readiness Probe

IPVS proxy mode

  • IPVS rules are periodically synchronized with the K8S Service
  • Redirect directly to the optimal server
  • Better performance than Iptables Proxy

7.2.5 Volumes

Volumes provide storage for Pods in K8S. Common volume types include emptyDir, local, nfs, persistentVolumeClaim, cephfs, and glusterfs.

emptyDir: when a Pod is assigned to a node, an emptyDir volume is created with the same life cycle as the Pod; they are created and destroyed together. An emptyDir can serve as checkpoint storage so that a long-running computation is not lost if the program crashes, or hold files that a content-manager container fetches while a web-server container serves them temporarily.

Local volume: a statically created PersistentVolume; the nodeAffinity attribute must be configured. If the node becomes unreachable, the volume cannot be accessed.

NFS volume: You can use the existing NFS shared storage and mount it to a POD. After the POD disappears, NFS content will remain. Therefore, you can preload some data before starting a POD and share data between pods.

Persistent Volume: a persistent volume is created by a cluster administrator or provisioned dynamically through a storage class. A PVC is a storage request made by a user and consumes the cluster's PV resources; the claim is matched to a PV by size and access mode. For example, the following declares a 5Gi NFS persistent volume served from 172.17.0.2.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce  # can be mounted by a single node as a read-write volume
  storageClassName: slow
  nfs:
    path: /tmp
    server: 172.17.0.2

Access mode for persistent volumes:

  • ReadWriteOnce: Can be mounted by a single node as a read/write volume
  • ReadOnlyMany: can be mounted by multiple nodes as read-only volumes
  • ReadWriteMany: Can be mounted by multiple nodes as a read-write volume

A PVC is matched to PV resources through a set of rules: storageClassName matches the physical storage type, and selector matches a logical classification of the storage.

Create A PV for Deployment: declare the PV object, storage type, and hostPath directory to mount

kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/jenkins-home"

Declare the volume mount in the Deployment, declare the PVC to match the PV, and the POD restart will not lose data.

spec:
  containers:
    - name: jenkins
      image: jenkins/jenkins:lts
      ports:
        - containerPort: 8080
      volumeMounts:
        - name: task-pv-storage
          mountPath: "/var/jenkins_home"
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim

7.2.6 Deployment

Deployment provides Pod and ReplicaSets update management, and the Deployment controller adjusts the running state of the resource to the desired state in real time, such as the number of Pod copies.

For example, the nginx deployment: kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

View deployments: kubectl get deployments. Upgrade the deployment: kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1. Inspect what the Deployment manages underneath: kubectl get rs and kubectl get pods.

Causes of deployment failure:

  • Insufficient quota
  • Readiness probe failures
  • Image pull failures
  • Insufficient permissions
  • Resource limits exceeded
  • Incorrect application runtime configuration

Example: Deployment of app-Service

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-service
  labels:
    app: app-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app-service
  template:
    metadata:
      labels:
        app: app-service
    spec:
      containers:
      - name: app-service
        image: art.local:8081/docker/notebook-k8s/app-service:latest
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: regcred-local

7.2.7 Probes

A probe indicates whether an application is running properly. A probe can check the application over HTTP, through a TCP socket (for servers), or by executing a command (exec, for processes) to describe the application's status.

Probe:

  • livenessProbe: whether the container is running
  • readinessProbe: whether the application in the container is ready to serve requests
  • startupProbe: whether the application inside the container has started. If this probe is set, no other probe takes effect until it succeeds

Probe type:

  • ExecAction: Executes a specific command inside the container
  • TCPSocketAction: Invokes a TCP request to verify that the container is working properly
  • HTTPGetAction: Initiates an HTTPGet request to access an application port and path
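As an illustration of HTTPGetAction, a hedged sketch of a health endpoint such a probe could call; the /health path and the controller class are illustrative and not part of the original project:

// A simple endpoint an HTTPGetAction probe could point at (path and class are illustrative).
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HealthController {
    @GetMapping("/health")
    public ResponseEntity<String> health() {
        // Any 2xx/3xx response is treated as success by the probe
        return ResponseEntity.ok("UP");
    }
}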

Specify the probe and probe type in the container spec:

livenessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
  initialDelaySeconds: 5
  periodSeconds: 5

Check probe status: kubectl describe pod XXXXX

7.2.8 ConfigMap

A ConfigMap stores Kubernetes configuration as key-value pairs in etcd. It decouples container images from environment-specific variables, which become ConfigMap key-value pairs, keeping container images highly portable. For example: db.url=${DATABASE_HOST}:${DATABASE_PORT}
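A small sketch of how application code could assemble db.url from environment variables supplied by a ConfigMap; the variable names follow the example above and the fallback values are illustrative:

// Reads database settings injected as container environment variables from a ConfigMap.
public class DbConfig {
    public static void main(String[] args) {
        // DATABASE_HOST and DATABASE_PORT come from the ConfigMap; the defaults are illustrative
        String host = System.getenv().getOrDefault("DATABASE_HOST", "localhost");
        String port = System.getenv().getOrDefault("DATABASE_PORT", "3306");
        String dbUrl = host + ":" + port; // db.url=${DATABASE_HOST}:${DATABASE_PORT}
        System.out.println("db.url=" + dbUrl);
    }
}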

Ways to consume a ConfigMap:

  • Injection into the container entryPoint as a command line argument
  • Container environment variables
  • Add a read-only file to the volume for all applications to read
  • K8S API is called in the application code to read ConfigMap

Create a ConfigMap from literal values: kubectl create configmap configMapName --from-literal=key=value

Create a ConfigMap from YAML: kubectl create -f configmapTest.yaml. Print the environment variables at the container entry point: command: ["/bin/sh", "-c", "env"]

A ConfigMap stores data only in plain text. Sensitive data (passwords, tokens, and keys) is stored in a Kubernetes Secret, which a Pod must declare and reference in order to use. A Secret can be mounted as a file into one or more containers, exposed as container environment variables, or used by kubelet to pull images from the registry when a container starts.

kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt

7.3 Deploying microservices on K8S

kubectl create -f app.yaml
kubectl create -f discovery.yaml
kubectl create -f gateway.yaml
kubectl get po   # check the pods
kubectl get svc  # check the services
minikube ip      # get the cluster IP used to access the services

7.4 Deploying to K8S with Helm

Helm is a package management tool that can be used to manage complex multi-container deployments with easier version updates, easier version sharing, and one-click rollback of multiple container versions.

7.4.1 Installing and Configuring the Helm

  1. Download and unpack the installation package: tar -zxvf helm-v3.0.0-linux-amd64.tar.gz
  2. mv linux-amd64/helm /usr/local/bin/helm
  3. Create the Helm remote repository in the JCR
  4. Configure the remote repository locally: helm repo add stable http://localhost:8081/artifactory/helm

7.4.2 Helm Operations

  1. Create a Helm chart: helm create foo
  2. Deploy helm Chart to K8S:
helm install --set name=prod myredis ./redis
helm install -f myvalues.yaml -f override.yaml myredis ./redis
  3. Helm Chart supports five installation sources:
  • Install from a Helm repository
  • Install from a Helm chart package
  • Install from a chart directory
  • Install from an absolute chart path
  • Install from a specified repository path

7.4.3 Creating the Helm Chart for the Application

Create helm-local, helm-remote, and a Helm virtual repository in JCR. The virtual repository aggregates helm-local and helm-remote and provides a single repository address.

Deploy chart to JCR, go to the artifacts page and click SET ME UP to get the command.

An HTTP 503 error indicates that the JCR EULA has not been accepted; accept it with: curl -XPOST -vu username:password http://${ArtifactoryURL}/artifactory/ui/jcr/eula/accept

helm install -f values.yaml notebook ./