Following on from the previous article on device control, this article uses Flutter's external Texture together with software decoding to render the screen stream.

Previous related articles

Flutter: implementing video playback with an external Texture

Scrcpy screen-casting principle analysis – the device control part

Reference article

A simple attempt at Screen projection for Android PC — Final Chapter 1

Scrcpy server

The scrcpy server has a main function, so it can be launched directly by app_process. Let's look at that main function first:

    public static void main(String... args) throws Exception {
        System.out.println("service starting...");
        Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
            @Override
            public void uncaughtException(Thread t, Throwable e) {
                Ln.e("Exception on thread "+ t, e); suggestFix(e); }});// unlinkSelf();
        Options options = createOptions(args);
        scrcpy(options);
    }

unlinkSelf is the method scrcpy uses to delete its own jar file; I commented it out in main to make debugging easier. Options is the object scrcpy uses to wrap its parameters: simply put, the command-line startup arguments are encapsulated in this object.

CLASSPATH=/data/local/tmp/scrcpy-server.jar app_process / com.genymobile.scrcpy.Server 1.12.1 0 8000000 0 true - true true

That is

1.12.1 0 8000000 0 true - true true

createOptions parses these arguments into the Options object, which is then passed on to scrcpy():

    private static void scrcpy(Options options) throws IOException {
        final Device device = new Device(options);
        boolean tunnelForward = options.isTunnelForward();
        try (DesktopConnection connection = DesktopConnection.open(device, tunnelForward)) {
            ScreenEncoder screenEncoder = new ScreenEncoder(options.getSendFrameMeta(), options.getBitRate(), options.getMaxFps());

            if (options.getControl()) {
                Controller controller = new Controller(device, connection);

                // asynchronous
                startController(controller);
                startDeviceMessageSender(controller.getSender());
            }

            try {
                // synchronous
                screenEncoder.streamScreen(device, connection.getVideoFd());
            } catch (IOException e) {
                // this is expected on close
                Ln.d("Screen streaming stopped"); }}}Copy the code

Here is the function called inside the try-with-resources statement:

    public static DesktopConnection open(Device device, boolean tunnelForward) throws IOException {
        LocalSocket videoSocket;
        LocalSocket controlSocket;
        if (tunnelForward) {
            LocalServerSocket localServerSocket = new LocalServerSocket(SOCKET_NAME);
            try {
                System.out.println("Waiting for video socket connection...");
                videoSocket = localServerSocket.accept();
                System.out.println("video socket is connected.");
                // send one byte so the client may read() to detect a connection error
                videoSocket.getOutputStream().write(0);
                try {

                    System.out.println("Waiting for input socket connection...");
                    controlSocket = localServerSocket.accept();

                    System.out.println("input socket is connected.");
                } catch (IOException | RuntimeException e) {
                    videoSocket.close();
                    throw e;
                }
            } finally {
                localServerSocket.close();
            }
        } else {
            videoSocket = connect(SOCKET_NAME);
            try {
                controlSocket = connect(SOCKET_NAME);
            } catch (IOException | RuntimeException e) {
                videoSocket.close();
                throw e;
            }
        }

        DesktopConnection connection = new DesktopConnection(videoSocket, controlSocket);
        Size videoSize = device.getScreenInfo().getVideoSize();
        connection.send(Device.getDeviceName(), videoSize.getWidth(), videoSize.getHeight());
        return connection;
    }


The open function creates the two sockets and blocks on accept until both are connected, then sends the device name plus the width and height (the resolution of the video stream) through the video socket. The send function looks like this:

    private void send(String deviceName, int width, int height) throws IOException {
        byte[] buffer = new byte[DEVICE_NAME_FIELD_LENGTH + 4];

        byte[] deviceNameBytes = deviceName.getBytes(StandardCharsets.UTF_8);
        int len = StringUtils.getUtf8TruncationIndex(deviceNameBytes, DEVICE_NAME_FIELD_LENGTH - 1);
        System.arraycopy(deviceNameBytes, 0, buffer, 0, len);
        // byte[] are always 0-initialized in java, no need to set '\0' explicitly

        buffer[DEVICE_NAME_FIELD_LENGTH] = (byte) (width >> 8);
        buffer[DEVICE_NAME_FIELD_LENGTH + 1] = (byte) width;
        buffer[DEVICE_NAME_FIELD_LENGTH + 2] = (byte) (height >> 8);
        buffer[DEVICE_NAME_FIELD_LENGTH + 3] = (byte) height;
        IO.writeFully(videoFd, buffer, 0, buffer.length);
    }

DEVICE_NAME_FIELD_LENGTH is a constant equal to 64. So once the video socket is connected, the server first sends a single 0 byte, followed by 68 bytes: 64 bytes for the device name and 4 bytes for the width and height. Each dimension is larger than 255, so it is split across two bytes with shift operations, and the client has to reassemble the values on its side.
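
For example, the client can reassemble each two-byte field like this (a minimal sketch; the native code further down does exactly the same thing inline):

    #include <cstdint>

    // Reassemble one big-endian 16-bit field from the 68-byte header
    // (bytes 64-65 hold the width, bytes 66-67 the height).
    static uint16_t read_be16(const uint8_t *p) {
        return static_cast<uint16_t>((p[0] << 8) | p[1]);
    }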

To clarify the general idea on the client side: the first socket carries the video stream and the second carries device control. Flutter cannot decode the video stream directly, which is the point I stressed in the previous article. So:

  • Flutter –> Plugin –> Android native: the native code connects to the first socket and decodes the stream with FFmpeg

  • Dart connects to the second socket for device control

C++

So in native code we connect to the first socket and immediately start decoding the video stream coming from the server.

Create a custom c++ socket class.

SocketConnection.cpp

//
// Created by Cry on 2018-12-20.
//

#include "SocketConnection.h"

#include <string.h>
#include <unistd.h>

bool SocketConnection::connect_server() {
    // create a socket
    client_conn = socket(PF_INET, SOCK_STREAM, 0);
    if (client_conn < 0) {
        // socket() returns -1 on failure
        perror("can not create socket!!");
        return false;
    }

    struct sockaddr_in in_addr;
    memset(&in_addr, 0, sizeof(sockaddr_in));

    in_addr.sin_port = htons(5005);
    in_addr.sin_family = AF_INET;
    in_addr.sin_addr.s_addr = inet_addr("127.0.0.1");
    int ret = connect(client_conn, (struct sockaddr *) &in_addr, sizeof(struct sockaddr));
    if (ret < 0) {
        perror("socket connect error!!\n");
        return false;
    }
    return true;
}

void SocketConnection::close_client() {
    if (client_conn >= 0) {
        shutdown(client_conn, SHUT_RDWR);
        close(client_conn);
        client_conn = -1;
    }
}

int SocketConnection::send_to_(uint8_t *buf, int len) {
    if (client_conn < 0) {
        return 0;
    }
    return send(client_conn, buf, len, 0);
}

int SocketConnection::recv_from_(uint8_t *buf, int len) {
    if (client_conn < 0) {
        return 0;
    }
    // the difference between recv and read: https://blog.csdn.net/superbfly/article/details/72782264
    return recv(client_conn, buf, len, 0);
}


The header file

//
// Created by Cry on 2018-12-20.
//

#ifndef ASREMOTE_SOCKETCONNECTION_H
#define ASREMOTE_SOCKETCONNECTION_H

#include <sys/socket.h>
#include <arpa/inet.h>
#include <zconf.h>
#include <cstdio>

/**
 * All methods here block.
 */
class SocketConnection {
public:
    int client_conn;

    /**
     * Connect the socket.
     */
    bool connect_server();

    /**
     * Close the socket.
     */
    void close_client();

    /**
     * Send over the socket.
     */
    int send_to_(uint8_t *buf, int len);

    /**
     * Receive from the socket.
     */
    int recv_from_(uint8_t *buf, int len);
};


#endif //ASREMOTE_SOCKETCONNECTION_H

This makes it easy to connect the socket from the C++ code. In the corresponding connect-and-decode function:

SocketConnection *socketConnection;
LOGD("Connecting");
socketConnection = new SocketConnection();
if (!socketConnection->connect_server()) {
    return;
}
LOGD("Connection successful");

LOGD here ends up calling Java's Log.d.
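
If you do not want to bounce through JNI into Java, an equivalent macro can be defined on top of the NDK logging API instead. This is an assumption for illustration, not the plugin's actual definition:

    #include <android/log.h>

    // Assumed alternative: send LOGD straight to logcat via the NDK instead of calling Java's Log.d.
    #define LOG_TAG "ScrcpyClient"
    #define LOGD(...) __android_log_print(ANDROID_LOG_DEBUG, LOG_TAG, __VA_ARGS__)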

After the connection succeeds, we read off the single 0 byte sent by the server:

uint8_t zeroChar[1];
    // This is an empty byte sent by the server
socketConnection->recv_from_(reinterpret_cast<uint8_t *>(zeroChar), 1);

Then the device information is received

uint8_t deviceInfo[68];
socketConnection->recv_from_(reinterpret_cast<uint8_t *>(deviceInfo), 68);
LOGD("Device name ===========>%s", deviceInfo);
int width = deviceInfo[64] << 8 | deviceInfo[65];
int height = deviceInfo[66] << 8 | deviceInfo[67];
LOGD("Width of device is %d", width);
LOGD("Height of the device is %d", height);

Now look at the attempt as a whole:

    SocketConnection *socketConnection;
    LOGD("Connecting");
    socketConnection = new SocketConnection();
    if (!socketConnection->connect_server()) {
        return;
    }
    LOGD("Connection successful");
    uint8_t zeroChar[1];
    // This is an empty byte sent by the server
    socketConnection->recv_from_(reinterpret_cast<uint8_t *>(zeroChar), 1);
    uint8_t deviceInfo[68];
    socketConnection->recv_from_(reinterpret_cast<uint8_t *>(deviceInfo), 68);
    LOGD("Device name ===========>%s", deviceInfo);
    int width = deviceInfo[64] << 8 | deviceInfo[65];
    int height = deviceInfo[66] << 8 | deviceInfo[67];
    LOGD("Width of device is %d", width);
    LOGD("Height of the device is %d", height);
// std::cout<<
    // Initialize the FFmpeg network module
    avformat_network_init();

    AVFormatContext *formatContext = avformat_alloc_context();
    unsigned char *buffer = static_cast<unsigned char *>(av_malloc(BUF_SIZE));
    AVIOContext *avio_ctx = avio_alloc_context(buffer, BUF_SIZE,
                                               0, socketConnection,
                                               read_socket_buffer, NULL, NULL);
    formatContext->pb = avio_ctx;
    int ret = avformat_open_input(&formatContext, NULL, NULL, NULL);
    if (ret < 0) {
        LOGD("avformat_open_input error :%s\n", av_err2str(ret));
        return;
    }
    LOGD("Opened successfully");
    // The following call would normally be required when decoding a regular video file; here it never returns and just blocks the thread
    // Populate the allocated AVFormatContext structure with data
//    if (avformat_find_stream_info(formatContext, NULL) < 0) {
//        LOGD("Failed to read input video stream information.");
//        return;
//    }

    LOGD("Current video data, number of data streams contained: %d", formatContext->nb_streams);
    // The traditional approach below searches the streams for the video one; here the decoder is simply set to H.264 directly.
    // nb_streams in AVFormatContext stores the total number of streams in the input:
    // video streams, audio streams, subtitle streams.
//    for (int i = 0; i < formatContext->nb_streams; i++) {
//        // If this stream is of type AVMEDIA_TYPE_VIDEO, it is the video stream.
//        if (formatContext->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
//            video_stream_index = i; // Record the video stream index
//            break;
//        }
//    }
//    if (video_stream_index == -1) {
//        LOGD("No video stream found.");
//        return;
//    }
    // Get the corresponding stream decoder from the codec id -- codec_id





    AVCodec *videoDecoder = avcodec_find_decoder(AV_CODEC_ID_H264);

    if (videoDecoder == NULL) {
        LOGD("No corresponding stream decoder found.");
        return;
    }
    LOGD("Decoder found successfully.");
    // Assign a decoder context (initialized with default values) through the decoder
    AVCodecContext *codecContext = avcodec_alloc_context3(videoDecoder);

    if (codecContext == NULL) {
        LOGD("Failed to allocate decoder context.");
        return;
    }

    LOGD("Allocating decoder context succeeded.");
    // Populates the encoder context with the specified encoder value
//    if (avcodec_parameters_to_context(codecContext, codecParameters) < 0) {
//        LOGD("Failed to populate the codec context.");
//        return;
//    }
//
//    LOGD("The codec context was filled successfully.");
    // Initialize the codec context with the given codec
    if (avcodec_open2(codecContext, videoDecoder, NULL) < 0) {
        LOGD("Failed to initialize decoder context.");
        return;
    }
    LOGD("Initialization of decoder context succeeded.");
    AVPixelFormat dstFormat = AV_PIX_FMT_RGBA;
    codecContext->pix_fmt = AV_PIX_FMT_YUV420P;
    // Allocate AVPacket, the structure object storing compressed data
    // In the case of a video stream, AVPacket contains a frame of compressed data.
    // However, audio may contain multiple frames of compressed data
    AVPacket *packet = av_packet_alloc();
    // Allocate the structure of each decoded data message (pointer)
    AVFrame *frame = av_frame_alloc();
    // Allocate the final display of the target frame information structure (pointer)
    AVFrame *outFrame = av_frame_alloc();

    codecContext->width = width;
    codecContext->height = height;
    uint8_t *out_buffer = (uint8_t *) av_malloc(
            (size_t) av_image_get_buffer_size(dstFormat, codecContext->width, codecContext->height,
                                              1));
    // More specified data initialization/padding buffer
    av_image_fill_arrays(outFrame->data, outFrame->linesize, out_buffer, dstFormat,
                         codecContext->width, codecContext->height, 1);
    // Initialize SwsContext
    SwsContext *swsContext = sws_getContext(
            codecContext->width,    // Width of the source image
            codecContext->height,   // Height of the source image
            codecContext->pix_fmt,  // Source pixel format
            codecContext->width,    // Width of the target image
            codecContext->height,   // Height of the target image
            dstFormat, SWS_BICUBIC, NULL, NULL, NULL
    );
    if (swsContext == NULL) {
        LOGD("swsContext==NULL");
        return;
    }
    LOGD("SwsContext initialized successfully")
    //Android native drawing tool
    ANativeWindow *nativeWindow = ANativeWindow_fromSurface(env, surface);
    // Define the drawing buffer
    ANativeWindow_Buffer outBuffer;
    // Limit the number of pixels in the buffer by setting the width and height, not the size of the screen.
    // If the buffer does not match the display size of the physical screen, the actual display may be a stretched or compressed image
    ANativeWindow_setBuffersGeometry(nativeWindow, codecContext->width, codecContext->height,
                                     WINDOW_FORMAT_RGBA_8888);
    // Loop to read the next frame of the data stream
    LOGD("Decoding");


    while (av_read_frame(formatContext, packet) == 0) {

        // Send the raw data to the decoder
        int sendPacketState = avcodec_send_packet(codecContext, packet);
        if (sendPacketState == 0) {
            int receiveFrameState = avcodec_receive_frame(codecContext, frame);
            if (receiveFrameState == 0) {
                // Lock the window drawing interface
                ANativeWindow_lock(nativeWindow, &outBuffer, NULL);
                // The output image for color, resolution scaling, filtering processing
                sws_scale(swsContext, (const uint8_t *const *) frame->data, frame->linesize, 0,
                          frame->height, outFrame->data, outFrame->linesize);
                uint8_t *dst = (uint8_t *) outBuffer.bits;
                // The first address of the decoded pixel data
                // Because RGBA is used here, the converted image data lives only in data[0];
                // with YUV it would be spread across data[0], data[1] and data[2].
                uint8_t *src = outFrame->data[0];
                // Get a line of bytes
                int oneLineByte = outBuffer.stride * 4;
                // Copy the actual amount of memory in a row
                int srcStride = outFrame->linesize[0];
                for (int i = 0; i < codecContext->height; i++) {
                    memcpy(dst + i * oneLineByte, src + i * srcStride, srcStride);
                }
                // unlock
                ANativeWindow_unlockAndPost(nativeWindow);
                // A short sleep could be inserted here: too long and every frame feels delayed, too short and playback feels sped up.
                // At roughly 60 frames per second that would be about 16 ms per frame.

            } else if (receiveFrameState == AVERROR(EAGAIN)) {
                LOGD("Failure from decoder - receive - data: AVERROR(EAGAIN)");
            } else if (receiveFrameState == AVERROR_EOF) {
                LOGD("Failed from decoder - receive - data: AVERROR_EOF");
            } else if (receiveFrameState == AVERROR(EINVAL)) {
                LOGD("Failed to receive data from decoder: AVERROR(EINVAL)");
            } else {
                LOGD("Failed to receive data from decoder: unknown"); }}else if (sendPacketState == AVERROR(EAGAIN)) {// Send data denied, must try to read data first
            LOGD("Failed to send decoder - packet: AVERROR(EAGAIN)");// The decoder has refreshed data but no new packets can be sent to the decoder
        } else if (sendPacketState == AVERROR_EOF) {
            LOGD("Failed to send decoder - data: AVERROR_EOF");
        } else if (sendPacketState == AVERROR(EINVAL)) {// The decoder is not open, or is currently an encoder, or needs to refresh data
            LOGD("Failed to send decoder - data: AVERROR(EINVAL)");
        } else if (sendPacketState == AVERROR(ENOMEM)) {// The packet cannot be pressed into the decoder queue, or the decoder decoded incorrectly
            LOGD("Failed to send decoder - data: AVERROR(ENOMEM)");
        } else {
            LOGD("Failed to send decoder data: unknown");
        }

        av_packet_unref(packet);
    }
    ANativeWindow_release(nativeWindow);
    av_frame_free(&outFrame);
    av_frame_free(&frame);
    av_packet_free(&packet);
    avcodec_free_context(&codecContext);
    avformat_close_input(&formatContext);
    avformat_free_context(formatContext);

Note that the commented-out sections above are left disabled on purpose: anyone familiar with audio/video decoding will recognize them as the steps a normal file-based playback pipeline would go through, but they are not workable here (with a live socket stream, avformat_find_stream_info never returns).
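
One piece not shown above is read_socket_buffer, the read callback handed to avio_alloc_context. A minimal sketch, assuming the SocketConnection pointer is passed as the opaque argument (as in the call above):

    extern "C" {
    #include <libavutil/error.h>   // AVERROR_EOF
    }
    #include <cstdint>
    #include "SocketConnection.h"

    // Called by FFmpeg whenever it needs more input: pull bytes from the video socket.
    static int read_socket_buffer(void *opaque, uint8_t *buf, int buf_size) {
        SocketConnection *conn = static_cast<SocketConnection *>(opaque);
        int ret = conn->recv_from_(buf, buf_size);
        if (ret <= 0) {
            // A closed or failed socket is treated as end of stream
            return AVERROR_EOF;
        }
        return ret;
    }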

How to render

On the plugin side we obtain the corresponding textureId and hand it to Flutter so the Texture widget can render the picture, and we pass the matching Surface object down into the function body above, so the C++ code draws into that window directly.
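
On the native side this comes down to a JNI entry point that receives the Surface built from the Flutter texture entry. The class and method names below are made up for illustration; only ANativeWindow_fromSurface matches the decode code above:

    #include <jni.h>
    #include <android/native_window_jni.h>

    // Hypothetical JNI bridge: the plugin creates a SurfaceTexture entry via Flutter's
    // TextureRegistry, wraps it in a Surface and passes it down here.
    extern "C" JNIEXPORT void JNICALL
    Java_com_example_videoplugin_NativeBridge_nativeStart(JNIEnv *env, jobject /* thiz */,
                                                          jobject surface) {
        // The decode loop above obtains its drawing target from this Surface:
        ANativeWindow *window = ANativeWindow_fromSurface(env, surface);
        // ... connect to the video socket and run the decode loop, drawing into `window`
        (void) window;
    }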

In Dart:

  init() async {
    SystemChrome.setEnabledSystemUIOverlays([SystemUiOverlay.bottom]);
    texTureId = await videoPlugin.invokeMethod("");
    setState(() {});
    networkManager = NetworkManager("127.0.0.1".5005);
    await networkManager.init();
  }

NetworkManager is a simple socket wrapper class

class NetworkManager {
 final String host;
 final int port;
 Socket socket;
 static Stream<List<int>> mStream;
 Int8List cacheData = Int8List(0);
 NetworkManager(this.host, this.port);

 Future<void> init() async {
   try {
     socket = await Socket.connect(host, port, timeout: Duration(seconds: 3));
   } catch (e) {
     print("The socket connection is abnormal, e=${e.toString()}");
   }
   mStream = socket.asBroadcastStream();
   // socket.listen(decodeHandle,
   //     onError: errorHandler, onDone: doneHandler, cancelOnError: false);
 }
 ***
}

We obtained the device size in C++, but Dart does not have it. The Flutter Texture widget would otherwise just fill the whole screen, so we also need the remote device's width and height on the Dart side:

    ProcessResult _result = await Process.run(
      "sh",
      ["-c", "adb -s $currentIp shell wm size"],
      environment: {"PATH": EnvirPath.binPath},
      runInShell: true,
    );
    var tmp = _result.stdout
        .replaceAll("Physical size: ", "")
        .toString()
        .split("x");
    int width = int.tryParse(tmp[0]);
    int height = int.tryParse(tmp[1]);

$currentIp is the IP address of the current target device. The adb used here is an adb binary cross-compiled for Android, and -s pins the command to that device so that having multiple devices connected does not cause problems.

AspectRatio(
    aspectRatio:
    width / height,
    ***

This way the Texture displayed on the client side keeps the aspect ratio of the remote device's screen.

Control

For control, a GestureDetector is of course used, as follows:

 GestureDetector(
            behavior: HitTestBehavior.translucent,
            onPanDown: (details) {
              onPanDown = Offset(details.globalPosition.dx / fulldx,
                  (details.globalPosition.dy) / fulldy);
              int x = (onPanDown.dx * fulldx*window.devicePixelRatio).toInt();
              int y = (onPanDown.dy * fulldy*window.devicePixelRatio).toInt();

              networkManager.sendByte([
                2, 0, 2, 3, 4, 5, 6, 7, 8, 9,
                x >> 24,
                x << 8 >> 24,
                x << 16 >> 24,
                x << 24 >> 24,
                y >> 24,
                y << 8 >> 24,
                y << 16 >> 24,
                y << 24 >> 24, 1080 >> 8, 1080 << 8 >> 8, 2280 >> 8, 2280 << 8 >> 8, 0, 0, 0, 0, 0, 0
              ]);
              newOffset = Offset(details.globalPosition.dx / fulldx,
                  (details.globalPosition.dy) / fulldy);
            },
            onPanUpdate: (details) {
              newOffset = Offset(details.globalPosition.dx / fulldx,
                  (details.globalPosition.dy) / fulldy);
              int x = (newOffset.dx * fulldx*window.devicePixelRatio).toInt();
              int y = (newOffset.dy * fulldy*window.devicePixelRatio).toInt();
              networkManager.sendByte([
                2, 2, 2, 3, 4, 5, 6, 7, 8, 9,
                x >> 24,
                x << 8 >> 24,
                x << 16 >> 24,
                x << 24 >> 24,
                y >> 24,
                y << 8 >> 24,
                y << 16 >> 24,
                y << 24 >> 24, 1080 >> 8, 1080 << 8 >> 8, 2280 >> 8, 2280 << 8 >> 8, 0, 0, 0, 0, 0, 0
              ]);
            },
            onPanEnd: (details) async {
              int x = (newOffset.dx * fulldx*window.devicePixelRatio).toInt();
              int y = (newOffset.dy * fulldy*window.devicePixelRatio).toInt();
              networkManager.sendByte([
                2, 1, 2, 3, 4, 5, 6, 7, 8, 9,
                x >> 24,
                x << 8 >> 24,
                x << 16 >> 24,
                x << 24 >> 24,
                y >> 24,
                y << 8 >> 24,
                y << 16 >> 24,
                y << 24 >> 24, 1080 >> 8, 1080 << 8 >> 8, 2280 >> 8, 2280 << 8 >> 8, 0, 0, 0, 0, 0, 0
              ]);
            },
            child: Container(
              alignment: Alignment.topLeft,
              // color: MToolkitColors.appColor.withOpacity(0.5),
              // child: Image.memory(Uint8List.fromList(list)),
              width: fulldx,
              height: fulldy,
            ),
          ),
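
For reference, those hand-written byte arrays follow scrcpy's inject-touch control message: 1 byte message type (2 = inject touch), 1 byte motion action (0 down, 2 move, 1 up), 8 bytes of pointer id (the arbitrary 2, 3, 4, ..., 9 above), 4 bytes each for x and y, 2 bytes each for the screen width and height (the hard-coded 1080x2280 above), 2 bytes of pressure and 4 bytes of button state, 28 bytes in total, all big-endian. A sketch of the same packing (the Dart code above builds it inline; the helper name is mine):

    #include <cstdint>
    #include <vector>

    // Sketch of the 28-byte inject-touch message that the Dart code assembles by hand.
    // Layout: type(1) action(1) pointerId(8) x(4) y(4) screenW(2) screenH(2) pressure(2) buttons(4)
    static std::vector<uint8_t> build_touch_message(uint8_t action, uint64_t pointer_id,
                                                    int32_t x, int32_t y,
                                                    uint16_t screen_w, uint16_t screen_h,
                                                    uint16_t pressure, uint32_t buttons) {
        std::vector<uint8_t> msg;
        msg.push_back(2);                       // TYPE_INJECT_TOUCH_EVENT
        msg.push_back(action);                  // 0 = down, 2 = move, 1 = up
        for (int i = 7; i >= 0; --i) msg.push_back((pointer_id >> (i * 8)) & 0xff);
        for (int i = 3; i >= 0; --i) msg.push_back((x >> (i * 8)) & 0xff);
        for (int i = 3; i >= 0; --i) msg.push_back((y >> (i * 8)) & 0xff);
        msg.push_back(screen_w >> 8); msg.push_back(screen_w & 0xff);
        msg.push_back(screen_h >> 8); msg.push_back(screen_h & 0xff);
        msg.push_back(pressure >> 8); msg.push_back(pressure & 0xff);
        for (int i = 3; i >= 0; --i) msg.push_back((buttons >> (i * 8)) & 0xff);
        return msg;                             // 28 bytes, sent as-is over the control socket
    }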

OK, the final effect (allow me to reuse the GIF from the previous post):

  • There is noticeable display latency
  • Currently you can only control an Android device from another Android device, over LAN or OTG
  • Linux and other desktop platforms have no video playback solution yet (I tried creating an OpenGL surface myself, but the player I ended up with was too poor to use); the control side of this solution does work on Linux, though, once I sort out some local issues