This is the 9th day of my participation in the August More Text Challenge. For details, see: August More Text Challenge

One, foreword

We sometimes hear the term “montage” without knowing what it means. Montage was originally an architectural term meaning to assemble or compose. It was later extended into a theory of editing: when different shots are spliced together, they often produce a specific meaning that no shot carries on its own. That is the montage we usually hear about. In the movie Up, Pixar used montage to show more than half of the protagonist’s life in less than five minutes, which touched countless audiences. Let’s see how today’s content relates to montage.

Two, the effect display

Talk is cheap, so let’s first look at the effect we want to achieve and what exactly a montage mosaic picture is. Here I use a photo of Komatsu Naai as the test: the far left is the montage reduced in size, the second is the full-size montage, the third is the original image, and the fourth is a captured detail area. As you can easily see from Figure 4, our montage is a patchwork of many different images.

Three, code implementation

The implementation is divided into several steps. First we need some preparation: one item is our base image, which is figure 3 above; the other is a photo collection. The photo collection should meet a few specifications: first, it cannot contain GIF or PNG images; second, the more colorful the pictures and the more of them there are, the better the effect. It also works better to choose images with an aspect ratio close to 1. Then comes our code part:

  1. Preprocess the images
  2. Get the dominant-color list of the photo collection
  3. Traverse each pixel block of the base image
  4. Find the image in the color list that most closely matches the current block
  5. Paste the resized image into the block currently traversed
  6. Save the picture

If you have any questions about the above steps, they will be discussed in detail in the implementation. Let’s take a look at the modules we’ll use:

import os
import cv2
import math
import numpy as np

OpenCV can be installed as follows:

pip install opencv-python

3.1 Image preprocessing

Picking pictures manually is troublesome, so we just let people gather some pictures casually, and then delete the ones that don’t meet the specifications:

def renameImages(path):
    # Get a list of images
    filelist = [path + i for i in os.listdir(path)]
    # Name the images with numeric numbers
    img_num = str(len(filelist))
    name = int(math.pow(10, len(img_num)))
    # Iterate through the list
    for file in filelist:
        # Delete GIF and PNG images
        if file.endswith('.gif') or file.endswith('.GIF') or file.endswith('.png') or file.endswith('.PNG'):
            os.remove(file)
            continue
        # Rename the image with a numeric number
        os.rename(file, path + str(name) + '.jpg')
        name += 1

After executing the above method, we will filter out the appropriate images.
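As a quick sanity check of the filtering idea, here is a minimal stand-alone sketch (not the author’s code; the directory and file names are made up) that applies the same extension filter and numeric renaming to a temporary directory:

```python
import math
import os
import tempfile

def filter_and_rename(path):
    """Sketch of the same filter: drop GIF/PNG files, rename the rest numerically."""
    filelist = [os.path.join(path, i) for i in os.listdir(path)]
    # Start numbering at a power of ten so the names sort nicely
    name = int(math.pow(10, len(str(len(filelist)))))
    for file in sorted(filelist):
        if file.lower().endswith(('.gif', '.png')):
            os.remove(file)
            continue
        os.rename(file, os.path.join(path, str(name) + '.jpg'))
        name += 1

with tempfile.TemporaryDirectory() as d:
    # Create four hypothetical files: two to be deleted, two to be renamed
    for fname in ('a.gif', 'b.PNG', 'c.jpg', 'd.jpeg'):
        open(os.path.join(d, fname), 'w').close()
    filter_and_rename(d)
    remaining = sorted(os.listdir(d))

print(remaining)  # ['10.jpg', '11.jpg']
```

The GIF and PNG files are removed, and the two remaining images are renamed to sequential numbers.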

3.2 Get the dominant color list

Before getting the dominant color list, we need to get the dominant color of a single image. Here we simply use the mean of the BGR channels as the dominant color:

def getDominant(im):
    """ Get the dominant color """
    b = int(round(np.mean(im[:, :, 0])))
    g = int(round(np.mean(im[:, :, 1])))
    r = int(round(np.mean(im[:, :, 2])))
    return (b, g, r)
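For intuition, here is the same per-channel averaging written in plain Python without NumPy; the pixel values are made up:

```python
def dominant_py(pixels):
    """Mean of each channel over a list of (b, g, r) tuples."""
    n = len(pixels)
    return tuple(round(sum(p[c] for p in pixels) / n) for c in range(3))

# Two pure-blue and two pure-red pixels (BGR order) average to a purple tone
print(dominant_py([(255, 0, 0), (255, 0, 0), (0, 0, 255), (0, 0, 255)]))
# (128, 0, 128)
```

`np.mean` does the same thing per channel, just vectorized over the whole pixel block.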

In OpenCV, images are read in BGR mode; the channels are the same as in RGB, just in reverse order. Next we get the list of dominant colors:

def getColors(path):
    """ Get the dominant color table of the image list """
    colors = []

    # Get a list of images
    filelist = [path + i for i in os.listdir(path)]
    # Traverse the list
    for file in filelist:
        # Read the image
        im = cv2.imdecode(np.fromfile(file, dtype=np.uint8), -1)
        try:
            # Get the dominant color of the image
            dominant = getDominant(im)
        except Exception:
            # Skip images that cannot be read
            continue
        # Add the dominant color to the color list
        colors.append(dominant)
    return colors

With the dominant color list, we can compare colors against it directly.

3.3 Look for the picture with the closest dominant color

I compare the BGR values of the dominant colors of two images and add up the absolute values of the differences to obtain the color difference:

def fitColor(color1, color2):
    """ Return the size of the difference between the two colors """
    # Find the difference between the B channels
    b = color1[0] - color2[0]
    # Find the difference between the G channels
    g = color1[1] - color2[1]
    # Find the difference between the R channels
    r = color1[2] - color2[2]
    # Return the sum of the absolute values
    return abs(b) + abs(g) + abs(r)
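To see how this difference gets used, here is a tiny hypothetical example that picks the closest of three candidate colors, the same way the search loop in the next section does:

```python
def fit_color(c1, c2):
    """Sum of absolute per-channel differences, as above."""
    return sum(abs(a - b) for a, b in zip(c1, c2))

# Hypothetical dominant colors of three candidate images (BGR order)
colors = [(200, 30, 30), (30, 200, 30), (40, 40, 220)]
target = (50, 45, 210)  # a reddish pixel block

# Pick the index with the smallest difference
best = min(range(len(colors)), key=lambda k: fit_color(target, colors[k]))
print(best, fit_color(target, colors[best]))  # 2 25
```

The reddish candidate wins with a difference of 10 + 5 + 10 = 25, far below the other two.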

3.4 Iterate, find and paste

Here is the main body of our method. There is quite a lot of content, so let’s look at the code first:

def generate(im_path, imgs_path, box_size, multiple=1):
    """ Generate the mosaic picture """

    # Read the list of images
    img_list = [imgs_path + i for i in os.listdir(imgs_path)]

    # Read the base image and scale it
    im = cv2.imread(im_path)
    im = cv2.resize(im, (im.shape[1] * multiple, im.shape[0] * multiple))

    # Get the image width and height
    width, height = im.shape[1], im.shape[0]

    # Traverse the pixel blocks of the image
    for i in range(height // box_size + 1):
        for j in range(width // box_size + 1):
            # Starting coordinates of the current block
            start_x, start_y = j * box_size, i * box_size

            # Initialize the width and height of the block
            box_w, box_h = box_size, box_size

            # Intercept the block currently traversed
            box_im = im[start_y:, start_x:]
            if i == height // box_size:
                box_h = box_im.shape[0]
            if j == width // box_size:
                box_w = box_im.shape[1]

            if box_h == 0 or box_w == 0:
                continue

            # Get the dominant color of the block
            dominant = getDominant(im[start_y:start_y + box_h, start_x:start_x + box_w])

            img_loc = 0
            # The biggest possible difference from the dominant color is 255 * 3
            dif = 255 * 3

            # Traverse the color table (from getColors) to find the image with the smallest difference
            for index in range(len(colors)):
                if fitColor(dominant, colors[index]) < dif:
                    dif = fitColor(dominant, colors[index])
                    # The color list shares positions with the image list, so the color index is enough
                    img_loc = index

            # img_list[img_loc] is the image with the smallest difference
            box_im = cv2.imdecode(np.fromfile(img_list[img_loc], dtype=np.uint8), -1)

            # Convert to the proper size
            box_im = cv2.resize(box_im, (box_w, box_h))

            # Paste the block
            im[start_y:start_y + box_h, start_x:start_x + box_w] = box_im

    # Return the result image
    return im

First let’s look at what the parameters are:

im_path: the path of the base image.
imgs_path: the root directory of the image list.
box_size: the size of each pixel block.
multiple: the zoom factor of the image. Default is 1.
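To make the box_size and multiple arithmetic concrete, here is a small hypothetical helper (not part of the original code) that computes the output size and the block grid that generate() traverses:

```python
def mosaic_dims(width, height, box_size, multiple=1):
    """Output size and block-grid shape, mirroring generate()'s loop bounds."""
    w, h = width * multiple, height * multiple
    blocks_x = w // box_size + 1  # the loops run one extra step for edge blocks
    blocks_y = h // box_size + 1
    return (w, h), (blocks_y, blocks_x)

# A 50x50 base image with multiple=2 yields a 100x100 result
print(mosaic_dims(50, 50, box_size=50, multiple=2))
# ((100, 100), (3, 3))
```

Note that the grid has one extra row and column of (possibly empty) edge blocks, which is why generate() checks for zero-size blocks.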

The first two parameters are easy to understand. The box_size parameter is the size of each photo in the effect shown in section Two; because every photo is processed into a square, there is only one size. The multiple parameter is the zoom factor: when our base image is 50 × 50 and no scaling is set, the result is also 50 × 50; when we set the zoom to 2, the result is 100 × 100. Because the photos inside the pixel blocks are hard to see in a small image, zooming makes the effect better, but setting the zoom too large makes the image take up much more memory. The rest of the explanation is in the code. Finally, let me show you a rendering: because the effect is not very impressive, I will only show a hazy rendering. Interested readers can follow: New folder. If you think the article is helpful, please give it a “Like”.