Instructions

From previous study and practice, CI is honestly not something non-ops engineers need to run in their day-to-day work. Still, for the sake of learning it is worth playing with it at least once, so that you know how the whole process works, which is already a lot!

The actual operation

First of all, note that what is recorded here is only the result of personal learning and hands-on practice. A production environment has many more things to consider, such as host mounts, security, and so on. So if anyone is willing to show me how this is applied in a production environment, dinner is on me.

Since all of the related services below run on Docker, the prerequisite is a working Docker environment. For the completeness and readability of these notes, all the steps are summarized below.

1: Docker environment construction (Linux)

1.1 Basic Environment

Docker requires a kernel version of 3.10 or higher. Check the system version:

[root@localhost ~]# uname -r
3.10.0-514.el7.x86_64
[root@localhost ~]#

Refresh the YUM metadata cache:

[root@localhost ~]# yum makecache fast

Change the YUM source settings:

# Official repo (replaced with the Aliyun mirror below):
[root@localhost ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Aliyun mirror:
[root@localhost ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

PS: Because I am learning on a locally built virtual machine, remember to set the system time correctly before installing Docker, to avoid error prompts during installation.

[root@localhost ~]# date
Wed Jul 29 11:25:29 CST 2020
[root@localhost ~]# date -s "2020-10-19 13:35:30"
Mon Oct 19 13:35:30 CST 2020
[root@localhost ~]# date
Mon Oct 19 13:35:31 CST 2020

1.2 Detailed Procedure

1) Remove old versions (nothing was installed here, so nothing matches):

[root@localhost ~]# yum remove docker \
>                   docker-client \
>                   docker-client-latest \
>                   docker-common \
>                   docker-latest \
>                   docker-latest-logrotate \
>                   docker-logrotate \
>                   docker-selinux \
>                   docker-engine-selinux \
>                   docker-engine
Loaded plugins: fastestmirror
No Match for argument: docker
No Match for argument: docker-client
No Match for argument: docker-client-latest
No Match for argument: docker-common
No Match for argument: docker-latest
No Match for argument: docker-latest-logrotate
No Match for argument: docker-logrotate
No Match for argument: docker-selinux
No Match for argument: docker-engine-selinux
No Match for argument: docker-engine
No Packages marked for removal

2) Install the dependencies:

[root@localhost ~]# yum install -y yum-utils device-mapper-persistent-data lvm2

3) Configure the YUM source:

[root@localhost ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo

If the output only shows:

Loaded plugins: fastestmirror

Run:

yum makecache fast    # or: yum clean metadata

Additionally, to list the available Docker versions:

 yum list docker-ce --showduplicates | sort -r

4) Run the official installation script (installs the Community Edition):

curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

or

```
yum install docker-ce docker-ce-cli containerd.io
```

5) Verify the installation


[root@localhost ~]# docker --version
Docker version 20.10.7, build f0df350
[root@localhost ~]#


6) Configure Docker to start on boot

[root@localhost ~]# systemctl enable docker

7) Start Docker

systemctl start docker

8) Run a simple example container: docker run hello-world

[root@localhost ~]# docker run hello-world

Error message:

It still fails after updating the YUM source, just with a different kind of error.

This situation is most likely network related: the image does not exist locally, so it has to be pulled from hub.docker.com, and cross-border network access is unreliable. In this case you can configure a domestic registry mirror so that images are not pulled from hub.docker.com directly!

One solution offered by others is the following (I did not actually apply it; retrying a few times worked for me). Alternatively, see the Aliyun registry mirror configuration below:

systemctl stop docker

echo "DOCKER_OPTS="$DOCKER_OPTS --registry-mirror=http://f2d6cb40.m.daocloud.io"" | sudo tee -a /etc/default/docker

service docker restart

9) View all containers:

[root@localhost ~]# docker ps -a
CONTAINER ID   IMAGE         COMMAND    CREATED         STATUS                     PORTS   NAMES
d6087f654d9e   hello-world   "/hello"   4 minutes ago   Exited (0) 4 minutes ago           practical_wozniak
[root@localhost ~]#

10) List all running containers

docker ps

11) Summary of what docker run hello-world does

  • 1. Running docker run hello-world first searches for the hello-world image locally

    • If the image exists locally, it is used as a template to create and run a container instance
    • If it does not exist, the remote registry hub.docker.com is queried

  • 2. If the remote registry has the image, it is downloaded to the local machine and run; if not, an image-not-found error is reported (see the commands sketched below).
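As a rough command-line sketch of that lookup order (not one of the original steps; docker run performs the pull automatically when the image is missing):

```
# List the local copy of the image (empty output means it is not cached locally)
docker images hello-world
# Pull it explicitly from the configured registry/mirror
docker pull hello-world
# Create and run a container from the image; docker run would pull it automatically if it were missing
docker run hello-world
```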

2.1 Configuring Aliyun Image acceleration

2.1.1 Reason

The difference between crawling at a snail's pace and pulling images at full speed.

2.1.2 Detailed steps

1) Log in to the Aliyun Container Registry console (with your own account)

Address: cr.console.aliyun.com/cn-shenzhen…

2) Obtain the accelerator address:

3) Modify the OS Docker configuration file as required

Find the daemon.json file in /etc/docker/ (create it if it does not exist) and write the following content:


[root@localhost ~]# nano /etc/docker/daemon.json
[root@localhost ~]#


Write the content (note: use the accelerator address you applied for; replace xxxxxxxx with your own ID):

{
  "registry-mirrors": ["https://xxxxxxxx.mirror.aliyuncs.com"]
}

4) Restart the daemon

[root@localhost ~]# systemctl daemon-reload

5) Restart docker

[root@localhost ~]# systemctl restart docker

6) View the modified Docker information

[root@localhost ~]# docker info
Client:
 Debug Mode: false

Server:
 Containers: 2
  Running: 0
  Paused: 0
  Stopped: 2
 Images: 1
 Server Version: 19.03.13
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 8fba4e9a7d01810a393d5d25a3621dc101981175
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 3.10.0-514.el7.x86_64
 Operating System: CentOS Linux 7 (Core)
 OSType: linux
 Architecture: x86_64
 CPUs: 1
 Total Memory: 976.5MiB
 Name: localhost.localdomain
 ID: TT6R:HUR2:3AQG:JLBU:CGAO:BG5T:LMC4:FYTX:PRGV:4D56:4TAN:BGFE
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Registry Mirrors:
  https://aiyf7r3a.mirror.aliyuncs.com/
 Live Restore Enabled: false

WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
[root@localhost ~]#

7) Test pulling the ubuntu:15.10 image

[root@localhost ~]# docker pull ubuntu:15.10
15.10: Pulling from library/ubuntu
7dcf5a444392: Pull complete
759aa75f3cee: Pull complete
3fa871dc8a2b: Pull complete
224c42ae46e7: Pull complete
Digest: sha256:02521a2d079595241c6793b2044f02eecf294034f31d6e235ac4b2b54ffc41f3
Status: Downloaded newer image for ubuntu:15.10
docker.io/library/ubuntu:15.10
[root@localhost ~]#

Additional tests of basic image operations:

8) View local images

[root@localhost ~]# docker images
REPOSITORY    TAG      IMAGE ID       CREATED        SIZE
hello-world   latest   bf756fb1ae65   9 months ago   13.3KB
ubuntu        15.10    9b9cb95443b5   4 years ago    13.3MB
[root@localhost ~]#

9) Run an example container in foreground mode from the image

/bin/echo "Hello world": [root@localhost ~]# docker run Ubuntu :15.10 /bin/echo "Hello world" Hello world  root@0a22083fa318:~# cd ll bash: cd: ll: No such file or directory root@0a22083fa318:~# cd ~ root@0a22083fa318:~# ll total 8 drwx------. 2 root root 37 Jul 6 2016 ./ drwxr-xr-x. 1 root root 6 Oct 19 07:57 .. / -rw-r--r--. 1 root root 3106 Feb 20 2014 .bashrc -rw-r--r--. 1 root root 140 Feb 20 2014 .profile root@0a22083fa318:~#  cd .. root@0a22083fa318:/# ls bin dev home lib64 mnt proc run srv tmp var boot etc lib media opt root sbin sys usr root@0a22083fa318:/# eixt bash: eixt: Command Not found root@0a22083fa318:/# exit exit [root@localhost ~]# docker run -i -t Ubuntu :15.10 /bin/bash root@a2cc1c83603a:/# exitCopy the code

10) After the foreground runs, check the container status

[root@localhost ~]# docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS                        PORTS   NAMES
a2cc1c83603a   ubuntu:15.10   "/bin/bash"              26 seconds ago       Exited (0) 24 seconds ago             elated_cartwright
0a22083fa318   ubuntu:15.10   "/bin/bash"              About a minute ago   Exited (127) 29 seconds ago           epic_galileo
e0069b97f043   ubuntu:15.10   "/bin/echo 'Hello wo…"   3 minutes ago        Exited (0) 3 minutes ago              intelligent_lamport
209fc9dd3324   hello-world    "/hello"                 56 minutes ago       Exited (0) 56 minutes ago             flamboyant_knuth
d6087f654d9e   hello-world    "/hello"                 2 hours ago          Exited (0) 2 hours ago                practical_wozniak

11) Start a container instance in background (detached) mode

[root@localhost ~]# docker run -d ubuntu:15.10 /bin/sh -c "while true; do echo hello world; sleep 1; done"
f1b40b91bb48a2cc832412b9e71860a44cddcd4fa83a82db7b64fcc9c68ad82f
[root@localhost ~]#

12) Check the container again:

[root@localhost ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS   NAMES
f1b40b91bb48   ubuntu:15.10   "/bin/sh -c 'while t…"   3 minutes ago   Up 3 minutes           great_tereshkova
[root@localhost ~]#

13) Stop container:

[root@localhost ~]# docker stop f1b40b91bb48
f1b40b91bb48
[root@localhost ~]#
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
[root@localhost ~]#

3.1 Enabling Remote Access (Optional, pay attention to security issues)

With remote access enabled, tools such as PyCharm can connect directly to our Docker daemon and build images straight from the tool!

PS: Never do this in a production environment, otherwise your machine is 100% guaranteed to end up mining cryptocurrency for someone else!!!

3.1.1 Steps

  • 1) Edit the Docker unit file /lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
  • 2) Modify the ExecStart line to:
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
  • 3) Reload the systemd configuration
[root@localhost web_statistics]# systemctl daemon-reload
  • 4) Restart the Docker service
[root@localhost web_statistics]# systemctl restart docker
  • 5) Test access
[root@localhost web_statistics]# curl http://localhost:2375/version
{"Platform":{"Name":"Docker Engine - Community"},"Components":[{"Name":"Engine","Version":"19.03.13","Details":{"ApiVersion":"1.40","Arch":"amd64","BuildTime":"2020-09-16T17:02:21.000000000+00:00","Experimental":"false","GitCommit":"4484c46d9d","GoVersion":"go1.13.15","KernelVersion":"3.10.0-1127.19.1.el7.x86_64","MinAPIVersion":"1.12","Os":"linux"}},{"Name":"containerd","Version":"1.3.7","Details":{"GitCommit":"8fba4e9a7d01810a393d5d25a3621dc101981175"}},{"Name":"runc","Version":"1.0.0-rc10","Details":{"GitCommit":"dc9208a3303feef5b3839f4323d9beb36df0a9dd"}},{"Name":"docker-init","Version":"0.18.0","Details":{"GitCommit":"fec3683"}}],"Version":"19.03.13","ApiVersion":"1.40","MinAPIVersion":"1.12","GitCommit":"4484c46d9d","GoVersion":"go1.13.15","Os":"linux","Arch":"amd64","KernelVersion":"3.10.0-1127.19.1.el7.x86_64","BuildTime":"2020-09-16T17:02:21.000000000+00:00"}
[root@localhost web_statistics]#
  • 6) Access from the external network (a quick check is sketched below)
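As a quick sanity check from another machine (a sketch; 192.168.219.131 is the host IP used elsewhere in this article):

```
# Talk to the remote daemon over TCP instead of the local socket
docker -H tcp://192.168.219.131:2375 version
docker -H tcp://192.168.219.131:2375 images
```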

2: Docker-compose installation (Linux)

yum install -y git
pip3 install docker-compose

3: Build the Gogs + Drone environment based on docker-compose

3.1 Writing the compose file:

version: '3'
services:
  gogs:
    image: gogs/gogs:latest
    container_name: gogs
    ports:
      - "3000:3000"
      - "10022:22"
    volumes:
      - ./data/fastapi_drone/gogs:/data
  gogspgdb:
    image: "postgres:9.4"
    restart: always
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: 123456
      POSTGRES_DB: gogs
    ports:
      - '5566:5432'
    volumes:
      - ./data/fastapi_drone/pgdata:/var/lib/postgresql/data
  drone-server:
    image: drone/drone:latest
    container_name: drone-server
    ports:
      - "8080:80"
      - "8443:443"
    volumes:
      - ./data/fastapi_drone/drone:/var/lib/drone/
      - /var/run/docker.sock:/var/run/docker.sock
    restart: always
    environment:
      - DRONE_LOGS_TRACE=true
      - DRONE_LOGS_DEBUG=true
      - DRONE_OPEN=true
      # Set the drone-server host name; it can also be an IP address plus port
      - DRONE_SERVER_HOST=drone-server
      - DRONE_GIT_ALWAYS_AUTH=false
      # Enable gogs
      - DRONE_GOGS=true
      - DRONE_GOGS_SKIP_VERIFY=false
      # Gogs service address, using the container name + port
      - DRONE_GOGS_SERVER=http://gogs:3000
      # Drone provider
      - DRONE_PROVIDER=gogs
      # Drone database driver
      - DRONE_DATABASE_DRIVER=sqlite3
      # Drone database file
      - DRONE_DATABASE_DATASOURCE=/var/lib/drone/drone.sqlite
      # Protocol, http or https
      - DRONE_SERVER_PROTO=http
      # Secret key shared by drone-server and drone-agent for direct RPC requests
      - DRONE_RPC_SECRET=xiaozhong
      - DRONE_SECRET=xiaozhong
      # This line is very critical: after adding it, you can log in to Drone with this username as the administrator.
      # Without it you will not see the Trusted button (you could also modify the database instead)!
      - DRONE_USER_CREATE=username:zyx308711822,admin:true
  drone-agent:
    image: drone/agent:latest
    container_name: drone-agent
    depends_on:
      - drone-server
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: always
    environment:
      - DRONE_LOGS_TRACE=true
      - DRONE_LOGS_DEBUG=true
      # Address of drone-server
      - DRONE_RPC_SERVER=http://drone-server
      - DRONE_RPC_SECRET=xiaozhong
      - DRONE_SERVER=drone-server:9000
      # Secret key, must match the server
      - DRONE_SECRET=xiaozhong
      - DRONE_MAX_PROCS=5
      - DOCKER_HOST=tcp://127.0.0.1:2375

3.2 Starting container services

[root@localhost fd]#  docker-compose -f drone_docker_compose_linux.yml up

3.3 Configuring GOGS Information

3.3.1 First-run installation (error resolution: database host address problem)

Consult the official documentation:

The IP address of the container is displayed as follows:

Change the database host address:

After that the installation completes properly and you can log in:

Now you can enter our repository:

PS: Some screenshots in this article were taken earlier, so the ports shown in some of them may differ! Pay attention to the access instructions below!

The repository address we finally access: http://192.168.219.131:3000/

3.3 Creating a Gogs repository

3.4 Drone synchronizes the repository project information

According to the compose file, the address to visit is: http://192.168.219.131:8080

After resynchronizing, there are a few points to note:

If we need to mount a host directory during the pipeline build, we must enable the Trusted switch for the repository with an administrator account:

PS: If you cannot see this switch, the account you are using is not an administrator. You can fix this either by modifying the database or by setting the admin user when bringing up the services:

Such as:

Specific code:

# This line is very critical: after adding it you can log in to Drone with this username as the administrator.
# Without it you will not see the Trusted button.
- DRONE_USER_CREATE=username:zyx308711822,admin:true

3.5 Activating the repository after Drone synchronizes it

PS: After activation, the webhook is configured in Gogs automatically

Once activation succeeds, you can push a test commit to verify (see the sketch below):
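For example, a push like the following will fire the webhook and trigger the pipeline (a sketch; the repository name fastapi_test is an assumption, use your own):

```
cd fastapi_test
echo "# trigger drone build" >> README.md
git add .
git commit -m "test: trigger drone build"
git push origin master
```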

4: Put the project under Git and customize the container startup and configuration files for it

4.1 Define our test app main.py:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
-------------------------------------------------
File name: main
Description: test application
Author: Xiaozhong
Created: 2021/8/2
Modified: 2021/8/2
-------------------------------------------------
"""
from fastapi import FastAPI
from aioredis import create_redis_pool, Redis
import psycopg2
import uvicorn

app = FastAPI()


@app.on_event('startup')
async def startup_event():
    """
    Get a database connection on startup
    :return:
    """
    conn = psycopg2.connect(database="ceshi", user="postgres", password="123456", host="container-db", port="5432")
    cursor = conn.cursor()
    # SQL statement
    sql = "SELECT VERSION()"
    # Execute the statement
    cursor.execute(sql)
    # Fetch a single row
    data = cursor.fetchone()
    print("database version: %s" % data)
    conn.close()


@app.on_event('shutdown')
async def shutdown_event():
    """
    Clean up on shutdown
    :return:
    """


@app.get("/")
def read_root():
    return {"you are my love 222222222222!!!!!"}


if __name__ == '__main__':
    uvicorn.run('main:app', host='0.0.0.0', port=8081, debug=True, reload=True,
                access_log=False, workers=1, use_colors=True)

4.2 Customizing the FastAPI service Dockerfile

FROM python:3.7

WORKDIR /app

COPY ./app ./app
COPY requirements.txt .

RUN pip install --upgrade pip -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com \
    && pip install setuptools==33.1.1 -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com \
    && pip install -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host pypi.douban.com -r requirements.txt

4.3 Customizing the docker-compose.yml that starts the FastAPI service:

version: '3.7'

services:
  api:
    build: .
    container_name: "fastapitest-api-container"
    command: uvicorn app.main:app --host 0.0.0.0 --port 8080 --reload
    restart: always
    ports:
      - "8980:8080"
    volumes:
      - /data/pythonfastapitest:/usr/src/fastapi
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    container_name: "container-db"
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=123456
      - POSTGRES_DB=ceshi
    expose:
      - 5432

volumes:
  postgres_data:

4.4 Defining the .drone.yml pipeline:

Pipeline execution flow (my own ordering; it may not suit you):

  • 1: Git Clone will be automatically done in the workspace by default (it will be deleted after completion).

  • 2: copy the latest repository code to a directory on the host machine

  • 3: Connect to the host through the drone-ssh plug-in and execute the related commands

type: docker        # runner type; kubernetes, exec, ssh, etc. are also available

workspace:
  path: /drone/src

steps:
  - name: code-scp
    image: appleboy/drone-scp
    settings:
      host: 192.168.219.131        # remote connection address
      username: root               # remote connection account
      password: 123456             # remote connection password
      port: 22
      target: /data/fatest         # target directory on the host
      source: .                    # copy everything in the current workspace (the project files pulled by git)

  - name: code-ssh
    image: appleboy/drone-ssh
    settings:
      host: 192.168.219.131        # remote connection address
      username: root
      password: 123456
      port: 22
      script:
        - cd /data/fatest
        # Stop and rebuild the services; our code has already been copied into the image!
        #- docker-compose stop && echo y | docker-compose rm && docker rmi fatest_api:latest
        - docker-compose stop && docker-compose up -d --build
        # Do not write it like this!
        #- docker-compose up --build && docker-compose up -d

  - name: notify
    image: drillster/drone-email
    settings:
      host: smtp.qq.com            # e.g. smtp.qq.com
      port: 465                    # e.g. QQ mailbox port 465
      username: [email protected]    # email username
      password: enter your own!    # email password (SMTP authorization code)
      subject: "Drone build: [{{ build.status }}] {{ repo.name }} ({{ repo.branch }}) #{{ build.number }}"
      from: [email protected]
      skip_verify: true
      recipients_only: true        # only send to the recipients listed below, not to the committer by default
      recipients: [ [email protected] ]
    when:
      status: [ changed, failure, success ]

Points for attention:

  • 1: docker-compose up -d --build must start the services in the background; without -d the pipeline cannot continue past this step
  • 2: Note the difference between the port exposed inside the FastAPI container and the mapped host port!
  • 3: In fact, hard-coding the account information as above is unreasonable; normally we should put it into Drone Secrets and fetch it from there:
  • Drone Secrets management process:

1) Create the corresponding key-value pairs in the repository's Secrets settings:

2) Modify the corresponding pipeline configuration to take the values from the secrets (a minimal sketch follows):
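For reference, a minimal sketch of how a secret created in the repository's Secrets settings can be referenced in the pipeline through from_secret (the secret names email_username and email_password are assumptions; use whatever names you created):

```
- name: notify
  image: drillster/drone-email
  settings:
    host: smtp.qq.com
    port: 465
    username:
      from_secret: email_username   # value is read from Drone Secrets instead of being hard-coded
    password:
      from_secret: email_password
```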

5: Commit and push our code to the repository

6: Check the pipeline build:

7: Access our FastAPI service interface once the build is complete

Pay attention mainly to the port mapping defined in the docker-compose file used at startup.

The address to access is: http://192.168.219.131:8980/
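A quick check from the command line (a sketch, using the host IP and the mapped port above):

```
curl http://192.168.219.131:8980/
```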

To verify, modify our code and commit again!

Access after execution:

8: Things to optimize

  • 1: Avoid repeated dependency downloads during image builds
  • 2: The way code is transferred during pipeline execution
  • 3: Newly built images should be pushed to an image registry for management

9: Some problems to deal with

Problem 1: Couldn't connect to the Docker daemon

The error:

ERROR: Couldn't connect to Docker daemon at http://0.0.0.0:2375 - is it running?

Either the service is not running, or, if it is running and the error still appears, it is usually a problem with how the Docker daemon was configured. It needs to be checked carefully!

Generally, if it has not been restarted, simply restarting the Docker service is enough:

 systemctl daemon-reload && systemctl restart docker

Problem 2: Configuring webhooks for Gogs

Some small notes on using Drone:

1: After the Drone services are started, there is no need to configure the Gogs webhook manually; you only need to synchronize and activate the repository in Drone:

1) Drone before activation:

2) Drone synchronization:

3) Refresh to view the hook configuration (configured automatically):

4) Test and verify with a push:

Problem 3: Some basic principles of Drone

  • After a repository is activated in Drone, a webhook is automatically configured on that repository; there is no need to configure it manually (unless you want to define the webhook yourself). Only after activation will repository updates notify Drone to run the related pipeline tasks.
  • The .drone.yml file is mainly used to describe the build and deployment process
  • Each Drone step is a container, and each plug-in is also a container; they are combined in various ways to execute the relevant scripts. Compilation, pushing to the image registry, deployment, notification and other functions are all provided by images
  • Drone also offers different types of runners (executors) for different scenarios; use whichever type you need to process the related logic
  • The pipeline is only responsible for starting containers; the system does not care what each container does

Problem 4: Mounting the temporary workspace onto a host directory

This operation is not reasonable!! It cannot be executed!

Final advice:

Read the official documentation!! If your versions differ, a lot of things will be different!!


The above are just my personal study and practice notes, written around my own actual needs! If there are any clerical errors, criticism and corrections are welcome! Thank you!

At the end

END

Jianshu: www.jianshu.com/u/d6960089b…

Juejin: juejin.cn/user/296393…

Official account: search WeChat for [children to a pot of wolfberry wine tea]

[Xiaozhong | article | original] Welcome to learn and exchange together! QQ: 308711822