➤ Webinars: Modernize traditional applications with Docker







IT organizations typically spend about 80% of their budget maintaining existing applications and only 20% on innovation. That ratio has barely changed over the past decade, yet the pressure to innovate remains. Whether it comes directly from a customer’s new needs or from your management chain, the message is the same: you need to do more with less.


Fortunately, Docker offers a way to modernize traditional applications. You can take the existing applications that run your business and make them up to 70% more efficient, more secure, and able to run on any infrastructure.


➤ Extend Kubernetes 1.7 with custom resources





Suppose you want to build a clustered application or offer software as a service. Before you can write a single line of application code, you must address a number of architectural issues, including security, multi-tenancy, API gateways, CLI, configuration management, and logging.


So how can you get all of that infrastructure from Kubernetes itself, saving development time and letting you focus on implementing your own service?


The latest Kubernetes 1.7 release adds an important feature called CustomResourceDefinitions (CRDs), which lets you plug your own managed objects and application logic into Kubernetes as if they were native Kubernetes components. This way, you can leverage the Kubernetes CLI, API services, security, and cluster management framework without modifying Kubernetes or understanding its internals. At Iguazio, we use CRDs to seamlessly integrate Kubernetes with our new real-time “serverless” project and with our data platform objects.
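
As a minimal sketch of how a CRD is registered (the group and kind below are hypothetical examples, not Iguazio's actual objects), a Kubernetes 1.7 manifest looks like this:

# Registers a new resource type with the Kubernetes API server (the apiextensions API was beta in 1.7).
# The group and kind names are illustrative only.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # the name must be of the form <plural>.<group>
  name: datastreams.example.com
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: datastreams
    singular: datastream
    kind: DataStream
    shortNames:
      - ds

Once this file is applied with kubectl create -f, kubectl get datastreams works just like kubectl get pods, and a custom controller can watch these objects through the standard API machinery and reconcile them.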


➤ CI/CD in the Docker environment





One of Docker’s strengths is its seamless CI/CD flow: a container is just a runnable instance of a read-only Docker image. Updating a container is as simple as updating the image, after which we can redeploy the container based on the updated image. There are even free tools that continuously monitor an image repository and redeploy containers the moment an updated image is detected. However, while this simplifies running containers, building and updating the image itself is still a manual step.


Take the CI/CD concept to the next level by creating a pipeline that automates every step of the software delivery process. A typical pipeline consists of the following basic steps (a command-line sketch follows the list):


  • Build – Starts the build process in which source code is converted into a compiled artifact and packaged in a Docker image.

  • Testing – Run unit tests inside the Docker container using any testing tool that supports your framework.

  • Push – Push the tested Docker image to a registry service, such as Docker Hub.

  • Deployment – Download the Docker image from the registry service to the appropriate staging/production Docker environment.

  • Run – Instantiates containers or services from one or more images.
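
As a rough sketch of those steps expressed as plain Docker CLI commands (the image name, tag variable, and test command are placeholders, not part of any specific tool):

# Build: compile the source and package it into an image tagged with the commit ID
docker build -t myorg/myapp:$GIT_COMMIT .

# Test: run the unit tests inside a throwaway container
docker run --rm myorg/myapp:$GIT_COMMIT ./run-tests.sh

# Push: publish the tested image to a registry such as Docker Hub
docker push myorg/myapp:$GIT_COMMIT

# Deploy and Run: pull the image in the staging/production environment and start a container from it
docker pull myorg/myapp:$GIT_COMMIT
docker run -d --name myapp myorg/myapp:$GIT_COMMIT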


The next logical extension of CI/CD is to fully integrate the code repository into the pipeline via webhooks. In simple terms, this lets an automated build-and-deploy pipeline run whenever an event occurs in the code repository, such as a commit or a merge. With this integration, every time a developer pushes code to the repository, a Docker image containing the developer’s changes is built a few seconds later and can then be used for real-time integration testing.


ETP (Emerging Technology Partners) has researched and validated one of the most comprehensive container-centric CI/CD solutions on the market. It lets you spin up the entire environment required for integration testing, UI testing, or performance testing as part of your build, so you can test any commit or pull request. With it, developers and testers can quickly find regressions and fix them before staging, speeding up the development cycle and saving time and effort.



➤ Creating SQL Server containers using Docker


So far, my articles on containers have dealt with only a single container. In the real world, that is rarely the case: there are many containers running on a host at any one time (I have more than 30).

I needed a way to easily spin up and run multiple containers. There are two possible approaches:


  • Application server driven

  • Container host driven


In the application-server-driven approach, the application server talks to the container host, builds and runs the container, and retrieves its details (such as port numbers).
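
As a rough sketch of what the application server would effectively issue on demand (the container name is a placeholder; the image and environment variables mirror the ones used later in this article, and -P publishes the image's exposed SQL port to a random host port that the server then reads back):

# start a SQL Server container on demand, publishing its exposed port to a random host port
docker run -d -P -e sa_password="Testing11@@" -e ACCEPT_EULA="Y" --name devdb-ondemand microsoft/mssql-server-windows

# ask Docker which host port was mapped to 1433 so the application knows where to connect
docker port devdb-ondemand 1433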


This on-demand pattern works well because containers are started only when they are needed, saving resources on the host. However, it also means the application must wait for the container to start. Granted, starting a container is quick, but what I’m trying to achieve here is a shorter deployment time.


What if we already know how many containers we need? What if we want the application to connect directly to containers that are already deployed?


I’m going to walk you through all the steps of building multiple containers at once using Docker Compose.


Compose is a tool for defining and running multi-container Docker applications.


As SQL Server developers we may not be building multi-container applications, but that doesn’t mean we can’t take advantage of Compose.


What I want to do is start five containers running SQL Server in one step, each listening on a different port and each with a different SA password.


Before running any commands, I’m going to create a few folders on my C:\ drive to hold the compose file and the dockerfile:

mkdir C:\docker  
mkdir C:\docker\builds\dev1  
mkdir C:\docker\compose

In C:\docker\builds\dev1 I will put the database files and the dockerfile.



Note: take note of the dockerfile’s name (dockerfile.dev1); it is referenced in the compose file later.


Here is my dockerfile code:

# building our new image from the microsoft SQL 2017 image
FROM microsoft/mssql-server-windows
 
 
# creating a directory within the container
RUN powershell -Command (mkdir C:\\SQLServer)
 
 
# copying the database files into the container
# no file path for the files so they need to be in the same location as the dockerfile
COPY DevDB1.mdf C:\\SQLServer
COPY DevDB1_log.ldf C:\\SQLServer
 
COPY DevDB2.mdf C:\\SQLServer
COPY DevDB2_log.ldf C:\\SQLServer
 
COPY DevDB3.mdf C:\\SQLServer
COPY DevDB3_log.ldf C:\\SQLServer
 
COPY DevDB4.mdf C:\\SQLServer
COPY DevDB4_log.ldf C:\\SQLServer
 
COPY DevDB5.mdf C:\\SQLServer
COPY DevDB5_log.ldf C:\\SQLServer
 
 
# attach the databases into the SQL instance within the container
ENV attach_dbs="[{'dbName':'DevDB1','dbFiles':['C:\\SQLServer\\DevDB1.mdf','C:\\SQLServer\\DevDB1_log.ldf']}, \
{'dbName':'DevDB2','dbFiles':['C:\\SQLServer\\DevDB2.mdf','C:\\SQLServer\\DevDB2_log.ldf']}, \
{'dbName':'DevDB3','dbFiles':['C:\\SQLServer\\DevDB3.mdf','C:\\SQLServer\\DevDB3_log.ldf']}, \
{'dbName':'DevDB4','dbFiles':['C:\\SQLServer\\DevDB4.mdf','C:\\SQLServer\\DevDB4_log.ldf']}, \
{'dbName':'DevDB5','dbFiles':['C:\\SQLServer\\DevDB5.mdf','C:\\SQLServer\\DevDB5_log.ldf']}]"

In the C:\docker\compose directory, I create a docker-compose.yml file to define the services I want to run in containers.


The file looks like this:

# specify the compose file format
# this depends on what version of docker is running
version: '3'
 
 
# define our services, all database containers
# each section specifies a container... 
# the dockerfile name and location...
# port number & sa password
services:
  db1:
    build:
        context: C:\docker\builds\dev1
        dockerfile: dockerfile.dev1
    environment:
      sa_password: "Testing11@@"
      ACCEPT_EULA: "Y"
    ports:
      - "15785:1433"
  db2:
    build:
        context: C:\docker\builds\dev1
        dockerfile: dockerfile.dev1
    environment:
      sa_password: "Testing22@@"
      ACCEPT_EULA: "Y"
    ports:
      - "15786:1433"
  db3:
    build:
        context: C:\docker\builds\dev1
        dockerfile: dockerfile.dev1
    environment:
      sa_password: "Testing33@@"
      ACCEPT_EULA: "Y"
    ports:
      - "15787:1433"
  db4:
    build:
        context: C:\docker\builds\dev1
        dockerfile: dockerfile.dev1
    environment:
      sa_password: "Testing44@@"
      ACCEPT_EULA: "Y"
    ports:
      - "15788:1433"
  db5:
    build:
        context: C:\docker\builds\dev1
        dockerfile: dockerfile.dev1
    environment:
      sa_password: "Testing55@@"
      ACCEPT_EULA: "Y"
    ports:
      - "15789:1433"Copy the code

Note: to check which compose file format versions are compatible with which Docker releases, see the compatibility table at https://docs.docker.com/compose/compose-file/.


Now that we have the file created, let’s run our first compose command.

docker-compose


Note: this is just a test command; if Compose is installed you should see its help output (and you can skip the next step).



If it isn’t installed, run:

Invoke-WebRequest "https://github.com/docker/compose/releases/download/1.14.0/docker-compose-Windows-x86_64.exe" -UseBasicParsing -OutFile $Env:ProgramFiles\docker\docker-compose.exe

The 1.14.0 in the command above was the latest version at the time of writing; check https://github.com/docker/compose/releases for the current release.


After installation, verify the version:

docker-compose version



Before running the compose command, change to the C:\docker\compose directory:

cd C:\docker\compose

Now we can run the compose command. The command to build and start the containers with Compose is very simple:

docker-compose up -d



Compose reads docker-compose.yml and builds five containers, each referencing dockerfile.dev1.


I can confirm this by running the following command:

docker ps



Done: five containers up and running! With Docker Compose you can build multiple containers running SQL Server with a single command. This is a great help when building development environments; as soon as we deploy our applications, they can connect to SQL Server in the containers.
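
For example (assuming sqlcmd is available wherever you run it and that the published port is reachable from there), you can connect to the first container using the port and SA password defined in the compose file:

sqlcmd -S localhost,15785 -U sa -P "Testing11@@" -Q "SELECT name FROM sys.databases"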


That’s the end of this issue of the Logbook; we’ll see you next time!




About the authors

Yan Xia: DaoCloud technical consultant, O&M engineer, and language enthusiast.

Yang Xueying (Misha): DaoCloud technical consultant, an all-rounder, and mascot of the Labs team.


Previous issue:

Container technology standardization: OCI v1.0 officially released! | Logbook Vol.21