This is the 11th day of my participation in the August More Text Challenge.


We often see character-art videos in Bilibili meme compilations: the trick is to convert a video into characters for display. It looks very high-end, but it is actually very simple to implement. With just a basic grasp of the OpenCV module, we can quickly build our own character-art video. But before we do that, let's look at what we will achieve:

The above shows part of the result; now let's get into the topic.

One, OpenCV installation and image reading

In Python we only need to install it with pip. We execute the following statement in the console:

pip install opencv-python

Once the installation is complete, OpenCV is ready to use. Let's first read an image:

import cv2
im = cv2.imread('jljt.jpg')	# Read the image
cv2.imshow('im', im)	# Display the image
cv2.waitKey(0)	# Wait for keyboard input
cv2.destroyAllWindows()	# Destroy all windows

First we read the image using the cv2.imread method, which returns an ndarray object. We then call the imshow method to display the image. The window it brings up would only appear for a split second, so we call waitKey to wait for keyboard input, passing in 0 to indicate an infinite wait. Finally, since OpenCV's windows are implemented in C++, we call destroyAllWindows to release them.

Two, Some basic operations in OpenCV

The idea of converting a video into characters is to first split the video into individual frames, then convert each frame image into characters, and finally display the character frames in sequence to produce the character-video effect. Before we can generate character art, we need to look at a few OpenCV operations.

(1) Grayscale conversion

Grayscale processing is a very common operation. Our original image has three layers, B, G, and R (OpenCV reads images in BGR order). Visually, grayscale conversion turns the image black and white; what actually happens is that the three layers are collapsed into a single layer by a weighted calculation. We don't need to do this calculation ourselves; we just call the OpenCV function:

import cv2
# Read the image
im = cv2.imread('jljt.jpg')
# Grayscale conversion
grey = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)

A comparison of the result with the original image is as follows:

The left is the original image, and the right is the grayscale-converted image.
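Under the hood, the conversion is just a weighted sum of the three layers (the standard luma weights are 0.114, 0.587, and 0.299 for B, G, and R). A small sketch of the calculation OpenCV performs for us, using a hypothetical 2x2 BGR image:

```python
import numpy as np

# A hypothetical 2x2 BGR image (pure blue, green, red, and white pixels)
im = np.array([[[255, 0, 0], [0, 255, 0]],
               [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

# Weighted sum collapses the three layers into one
b = im[:, :, 0].astype(float)
g = im[:, :, 1].astype(float)
r = im[:, :, 2].astype(float)
grey = (0.114 * b + 0.587 * g + 0.299 * r).round().astype(np.uint8)
print(grey)  # blue ends up darkest, green brightest, white stays 255
```

This is why a pure green pixel looks much brighter in grayscale than a pure blue one: the weights follow how sensitive the human eye is to each channel.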

(2) Change the image size

Since the character version of an image takes up much more space than its pixels do, we need to shrink the image first. We can call cv2.resize to change the size of the image:

import cv2
# Read the image
im = cv2.imread('jljt.png')
# Change the image size to 100x40 (width, height)
re = cv2.resize(im, (100, 40))
cv2.imshow('11', re)
cv2.waitKey(0)

(3) Read the video frame by frame

We can open the video through VideoCapture and then call its read method to get each frame.

import cv2
# Open the video
video = cv2.VideoCapture('jljt.mp4')
# The read method returns two values: whether a frame was read, and the frame's ndarray
ret, frame = video.read()
while ret:
    # Loop to read the next frame
    ret, frame = video.read()

With that in mind, we can begin our next step.

Three, Converting an image to characters

An image with only one channel can be thought of as a rectangle whose smallest unit is one pixel. Characterization is the process of replacing each pixel with a character. So we are going to go through every pixel of the image, but which character should each pixel be replaced with?

A grayscale image gives us a reference table of colors: OpenCV represents each pixel as one of 256 brightness levels. We can build a reference table of our own in the same spirit, but whose contents are not colors but characters.

We map the 256 grayscale levels onto our character table. Suppose the character table contains n characters; then a pixel whose grayscale value is pixel corresponds to the character at:

index = int(pixel / 256 * n)

Dark pixels map to the front of the table and bright pixels to the end. It's okay if you don't fully understand this formula; you just need to know how to use it. Here is our complete code:

import cv2
chars = 'mqpka89045321@#$%^&*()_=||||}'	# character table
im = cv2.imread('jljt.jpg')	# Read the image
grey = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)	# Grayscale conversion
grey = cv2.resize(grey, (50, 18))	# Shrink the image
str_img = ''	# holds the character drawing
for i in grey:	# Traverse every row
    for j in i:	# Traverse every pixel in the row
        index = int(j / 256 * len(chars))	# Map the grayscale value to a table index
        str_img += chars[index]	# Add the character to the drawing
    str_img += '\n'	# New line at the end of each row
print(str_img)	# Output the character drawing

Running it generates the following character drawing:

Because the size we chose is quite small, the effect is not great; increasing the size improves it.
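As a quick sanity check of the index formula used above, index = int(pixel / 256 * len(table)): with the 29-character table from the code, a grayscale value of 0 always maps to the first character and 255 to the last, and the result is always a valid index because pixel / 256 is strictly less than 1.

```python
table = 'mqpka89045321@#$%^&*()_=||||}'  # the character table from the article

def to_char(pixel):
    # pixel is a grayscale value in [0, 255]; pixel/256 < 1, so the index is valid
    return table[int(pixel / 256 * len(table))]

for pixel in (0, 100, 200, 255):
    print(pixel, '->', to_char(pixel))
# 0 maps to 'm' (table start), 255 maps to '}' (table end)
```

Dividing by 256 rather than 255 is what guarantees the index never reaches len(table), so no bounds check is needed.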

Four, Converting a video to characters

Now that we know how to convert pictures to characters, video is naturally no problem. We just need to read the video frame by frame and apply the image characterization to each frame.

import os
import cv2
chars = 'mqpka89045321@#$%^&*()_=||||}'	# character table
video = cv2.VideoCapture('jljt.mp4') 	# Open the video
ret, frame = video.read()	# Read the first frame
while ret:	# Read frame by frame
    str_img = ''	# character drawing for this frame
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)	# Grayscale conversion
    grey = cv2.resize(grey, (100, 40))	# Shrink the frame
    for i in grey:	# Traverse every row
        for j in i:	# Traverse every pixel in the row
            index = int(j / 256 * len(chars))	# Map the grayscale value to a table index
            str_img += chars[index]	# Add the character to the drawing
        str_img += '\n'	# New line at the end of each row
    os.system('cls')	# Clear the previous frame's output (Windows; use 'clear' elsewhere)
    print(str_img)	# Output the character drawing
    ret, frame = video.read()	# Read the next frame

This plays the frames one after another. When we run it in PyCharm, however, we will notice that the screen is not actually being cleared, so we need to run it from the command line. The end result is our character video.

When selecting the character table, we need to pay attention to the color of the subject. If the subject is light, the end of the character table should hold visually dense characters such as # $ % @ &, and the start of the table should hold simple characters such as - | /. If the subject is dark and the background is light, the opposite applies. Of course, there is no single standard; you can tune it gradually.

Interested readers can follow my personal public account: ZackSock. The one picking his nose in the avatar is me.
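For the dark-subject-on-light-background case described above, the simplest adjustment is not to write a new table but to reverse the existing one, which swaps which end maps to dark pixels. A minimal sketch using the table from the article:

```python
table = 'mqpka89045321@#$%^&*()_=||||}'  # the character table from the article
reversed_table = table[::-1]  # dense characters now map to dark pixels
print(reversed_table)
```

Now a dark pixel (index near 0) picks a dense character like } or |, making a dark subject stand out on a light background.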