Internet live-video software was an unfamiliar concept to us ten years ago, but live streaming has since spread nationwide: hundreds of thousands or even millions of viewers can watch a broadcast at the same time and post real-time interactive comments. Internet technology has developed faster than we imagined, and its continuous improvement keeps opening new opportunities for Internet-related industries. In recent years, driven by capital and the market, the live-video industry has grown explosively, and more and more people have seen the business opportunity and joined it. The annual output value of the Internet live-streaming industry was estimated to exceed 100 billion yuan in 2020. As the industry keeps broadening its sources of live content, integrating live-streaming features into other industries, and spawning new formats, "Internet celebrities + live streaming + commerce" has become the development trend of live-streaming platforms.

Although live streaming now takes many forms (short-video live broadcasts, one-on-one video chat, live shopping, video dating, and so on), the essentials have not changed: live video still breaks down into capture, preprocessing, encoding, transmission, decoding, and rendering, and pushing and pulling the stream remains one of the key links. Let's look at the push and pull processes in turn.

1. The push-stream process:

1. Capture video through the camera or the screen-recording function used for the live room.
2. Capture audio through the microphone; the raw audio data format is PCM.
3. Encode the video, converting it from YUV (or RGB) to H.264 (or H.265).
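Before continuing to the audio-encoding step, a rough back-of-the-envelope calculation shows why the raw YUV and PCM data produced by capture must be compressed before transmission. The resolutions and sample rates below are illustrative assumptions, not figures from this article.

```python
# Rough data-rate math showing why raw capture output (YUV/PCM) must be
# encoded (H.264/H.265, AAC) before it can be pushed over the network.

def yuv420_bitrate_bps(width, height, fps):
    """Raw YUV 4:2:0 uses 1.5 bytes per pixel per frame."""
    bytes_per_frame = width * height * 3 // 2
    return bytes_per_frame * fps * 8

def pcm_bitrate_bps(sample_rate, channels, bits_per_sample):
    """Raw PCM rate: samples/sec * channels * bits per sample."""
    return sample_rate * channels * bits_per_sample

if __name__ == "__main__":
    video_bps = yuv420_bitrate_bps(1920, 1080, 30)   # 1080p at 30 fps
    audio_bps = pcm_bitrate_bps(44100, 2, 16)        # CD-quality stereo PCM
    print(f"raw video: {video_bps / 1e6:.1f} Mbit/s")   # ~746.5 Mbit/s
    print(f"raw audio: {audio_bps / 1e6:.2f} Mbit/s")   # ~1.41 Mbit/s
```

At roughly 746 Mbit/s of raw 1080p video, an uncompressed push stream is hopeless over consumer uplinks; H.264 typically brings this down by two orders of magnitude, which is the point of steps 3 and 4.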
4. Encode the audio, converting it from PCM to AAC.
5. Mux the encoded video and audio (H.264 and AAC) into a "streaming" multimedia container format, combining them into FLV, TS, or RTMP packets, depending on the transport protocol.
6. Choose a protocol to push the stream (a file in a multimedia container format with streaming properties) to the server. Application-layer protocols: HLS, RTSP, RTMP; transport-layer protocols: RTP and RTCP; network-layer protocol: RSVP.
7. Pass in the stream address (URL) to locate the target of the stream (that is, whom to send it to), then start pushing. For example, if you work as a host on Douyu, the stream address in OBS is set to Douyu's; if you work as a programmer at Six Rooms (6.cn), the default stream address in the program is Six Rooms'.

2. The pull-stream process:

1. Once the live room is set up, obtain the pull URL (playback URL) through some channel, choose a protocol, and start pulling the stream from the server. Application-layer protocols: HLS, RTSP, RTMP; transport-layer protocols: RTP and RTCP; network-layer protocol: RSVP.
2. Demultiplex the stream in the multimedia container format into video data in its encoded format (e.g. H.264) and audio data in its encoded format (e.g. AAC).
3. Use hard decoding (GPU decoding with CPU assistance) or soft decoding (CPU decoding) to decode the video data into YUV or RGB and the audio data into PCM.
4. Synchronize the audio and the picture.
5. Send the synchronized audio (PCM) to the audio output device for playback, and the synchronized video (YUV or RGB) to the video output device for playback.

That is the detailed push- and pull-stream process for live-video source code.
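Step 4 of the pull-stream process, audio/video synchronization, is commonly implemented by treating the audio clock as the master and deciding per decoded video frame whether to show, delay, or drop it. The sketch below illustrates that decision; the thresholds and function name are illustrative assumptions, not from any particular player.

```python
# A minimal sketch of audio/video sync with audio as the master clock.
# Thresholds are assumed values for illustration only.

DROP_THRESHOLD_S = 0.10   # video this far behind audio gets dropped
WAIT_THRESHOLD_S = 0.01   # video this far ahead of audio waits

def sync_action(video_pts, audio_clock):
    """Return 'drop', 'wait', or 'show' for a decoded video frame,
    given its presentation timestamp and the current audio clock (seconds)."""
    diff = video_pts - audio_clock   # positive: video is ahead of audio
    if diff < -DROP_THRESHOLD_S:
        return "drop"                # too late: discard frame to catch up
    if diff > WAIT_THRESHOLD_S:
        return "wait"                # too early: delay presentation
    return "show"                    # close enough: render now

if __name__ == "__main__":
    print(sync_action(1.00, 1.20))   # video lags badly -> 'drop'
    print(sync_action(1.50, 1.20))   # video is ahead  -> 'wait'
    print(sync_action(1.205, 1.20))  # in sync         -> 'show'
```

Audio is chosen as the master because the ear is far more sensitive to audio glitches than the eye is to a dropped or briefly held video frame, so real players adjust the picture to the sound rather than the reverse.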
Competition in the live-video industry will only grow fiercer, which requires the software developers involved to keep improving their own skills and adapting to the needs of the market.