Author: Hao Yang, engineer at Agora

If you are interested in our Flutter plugin development process, or if you have any questions related to real-time audio and video development, please visit the Agora Q&A section and post your questions to our engineers.

In response to developer demand, we have launched the Agora Flutter SDK, a Flutter plugin that adds real-time audio and video capabilities to Flutter apps, along with a quickstart demo.

Since Flutter is still new to some developers, we have also been sharing our Flutter development experience with authors from the RTC developer community. In fact, at the beginning of Agora Flutter SDK development, our technical team researched in depth how to implement real-time audio and video on Flutter. This article shares some of the experience and results of that research.

How does Flutter call native code?

What we want to do is enable real-time audio and video on Flutter. Before we can start, we need to understand how Flutter calls native platform APIs, such as "get media devices".

This is clear from the official architecture diagram above: Flutter initiates a method call via a MethodChannel; the native platform receives the message, executes the corresponding implementation (Java/Kotlin/Objective-C/Swift), and returns the result asynchronously. Take getUserMedia as an example. The method is first declared in the Flutter layer, where the implementation simply sends a message containing the method name and its parameters over the MethodChannel.

Future<MediaStream> getUserMedia(
    Map<String, dynamic> mediaConstraints) async {
  // Get the MethodChannel
  MethodChannel channel = WebRTC.methodChannel();
  try {
    // Call the corresponding native method, getUserMedia
    final Map<dynamic, dynamic> response = await channel.invokeMethod(
      'getUserMedia',                                       // method name
      <String, dynamic>{'constraints': mediaConstraints},   // arguments
    );
    // Wrap a MediaStream object for use in the Flutter layer,
    // based on the asynchronously returned result.
    String streamId = response['streamId'];
    MediaStream stream = new MediaStream(streamId);
    stream.setMediaTracks(response['audioTracks'], response['videoTracks']);
    // Return the result
    return stream;
  } on PlatformException catch (e) {
    // Throw an exception
    throw 'Unable to getUserMedia: ${e.message}';
  }
}

A Future represents an asynchronous call, similar to a JavaScript Promise; async/await works similarly as well. In an async function, awaited calls execute one after another, as if they were synchronous, even though the methods after await are asynchronous.
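As a minimal illustration (the method and values below are made up purely for demonstration), awaited calls inside an async function run in order:

// Minimal sketch: an awaited call suspends the async function until the
// Future completes, so the statements run in source order.
Future<String> fetchStreamId() async {
  await Future.delayed(Duration(milliseconds: 100)); // simulate async work
  return 'stream-42';
}

Future<void> main() async {
  print('before');
  String id = await fetchStreamId(); // suspends here until the Future completes
  print('got $id');                  // runs only after the line above finishes
  print('after');
}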

On the native side, a MethodChannel is registered in MainActivity. After receiving the method name and arguments of a call through the MethodChannel, the platform executes the corresponding logic and returns the result. Here we use Android as the example:

// Register a MethodChannel to receive calls from Flutter
import android.os.Bundle;
import io.flutter.app.FlutterActivity;
import io.flutter.plugin.common.MethodCall;
import io.flutter.plugin.common.MethodChannel;
import io.flutter.plugin.common.MethodChannel.MethodCallHandler;
import io.flutter.plugin.common.MethodChannel.Result;
import io.flutter.plugins.GeneratedPluginRegistrant;

public class MainActivity extends FlutterActivity {
    private static final String CHANNEL = "FlutterWebRTC";

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        GeneratedPluginRegistrant.registerWith(this);
        // Register the MethodChannel. The channel name must be the same as
        // the one used when creating the MethodChannel on the Flutter side.
        new MethodChannel(getFlutterView(), CHANNEL).setMethodCallHandler(
            // Provide the implementation for each method
            new MethodCallHandler() {
                @Override
                public void onMethodCall(MethodCall call, Result result) {
                    // TODO
                }
            });
    }
}
@Override
public void onMethodCall(MethodCall call, Result result) {
    // If the method name is getUserMedia
    if (call.method.equals("getUserMedia")) {
        // The Android implementation of getUserMedia
        // ...
        // result.success(/* related information */);
        // result.error(/* error message */);
    } else {
        result.notImplemented();
    }
}

For more detailed information about Flutter, see the official Flutter examples and explanations.

Ideas for implementing the audio and video SDK

Now that we know how Flutter calls native platform methods, there are two approaches to implementing an audio and video SDK.

1. Implement the audio and video SDK on the native platforms first, then have Flutter call the methods provided by that SDK directly through a MethodChannel.

The concrete plan here is to call the existing Agora native SDKs directly through a MethodChannel, and smooth over any differences, such as parameter formats and some method names, in the Flutter layer.

The main advantage of this approach is that it maximizes reuse of the existing SDKs; it essentially adds a bridging layer.
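A rough sketch of this first approach might look like the following. Note that the channel and method names here are hypothetical, chosen only for illustration, and are not the actual Agora Flutter SDK API:

import 'package:flutter/services.dart';

// Hypothetical bridge: every call is forwarded to the native Agora SDK
// over a MethodChannel.
class AgoraBridge {
  static const MethodChannel _channel = MethodChannel('agora_rtc_bridge');

  // Ask the native SDK to join a channel. Differences in parameter names
  // between platforms would be smoothed over here, in the Dart layer.
  Future<void> joinChannel(String channelName, String token) {
    return _channel.invokeMethod('joinChannel', <String, dynamic>{
      'channel': channelName,
      'token': token,
    });
  }
}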

2. First implement the WebRTC standard on each native platform, then call the WebRTC interfaces through a MethodChannel in the Flutter layer, and implement the audio and video SDK logic on top of that.

This solution implements the WebRTC standard on the native platforms (the getUserMedia method from the previous section is part of this standard) and registers it with the Flutter layer as a WebRTC plugin. The audio and video SDK is then built on top of this Flutter WebRTC plugin to connect to Agora SD-RTN™, the global virtual communication network.

This approach amounts to implementing an entirely new SDK in the Dart language, making more use of the Dart standard library (such as dart:math, dart:io, dart:convert) and the third-party ecosystem (such as flutter_webrtc) than the previous one. If you want to support more platforms (Windows, for example), you only need that platform to implement the WebRTC standard.
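To make the idea concrete, here is a sketch only of what the Dart-side SDK code could look like with the flutter_webrtc plugin; the exact flutter_webrtc API differs between versions, and the signaling with the Agora Gateway is omitted:

import 'package:flutter_webrtc/flutter_webrtc.dart';

// Sketch: capture local media and open a peer connection, which the SDK
// logic would then negotiate with the Agora Gateway over signaling.
Future<RTCPeerConnection> startLocalMedia() async {
  final MediaStream localStream = await navigator.mediaDevices.getUserMedia(
    <String, dynamic>{'audio': true, 'video': true},
  );

  final RTCPeerConnection pc = await createPeerConnection(
    <String, dynamic>{'iceServers': <Map<String, dynamic>>[]},
    <String, dynamic>{},
  );
  for (final MediaStreamTrack track in localStream.getTracks()) {
    await pc.addTrack(track, localStream);
  }

  // Create an SDP offer; sending it to the gateway and applying the answer
  // would be part of the SDK's signaling logic.
  final RTCSessionDescription offer = await pc.createOffer(<String, dynamic>{});
  await pc.setLocalDescription(offer);
  return pc;
}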

If you are familiar with WebRTC, you may know the concept of an adapter when building WebRTC applications for browsers; its purpose is to hide the differences between the WebRTC interfaces of the major browsers. The idea here is the same, except that the differences to hide are between Windows/iOS/Android rather than Firefox/Chrome/Safari.

In the end, for research purposes, and to better fit Flutter's promise of one codebase running across multiple platforms (in theory the SDK then becomes a layer of pure client logic: as long as WebRTC is well supported on each platform, the remaining work is to implement the audio and video communication logic in Dart on top of the WebRTC standard), we chose the second approach. Readers may therefore notice that this Flutter SDK is, conceptually, closer to the Agora Web SDK.

The structure of the SDK

The main functions of the SDK include audio and video capture and playback, establishing and managing the peer connection with the Agora Gateway, and exchanging and handling messages with the Gateway.

Although the Flutter community is relatively new, Dart's standard library is fairly complete, and a number of excellent third-party plugins are already available.

The code can be broken down into the following modules:

Messaging with the Gateway (such as publish/subscribe messages and their replies), based on the WebSocket support in dart:io (a sketch follows after this list)

Audio and video capture and peer connections, based on the open-source community's flutter_webrtc project

An EventEmitter helper class for the SDK, implemented with Dart Stream objects or a simple Map (Dart's Stream/Sink concepts can also serve as an alternative)
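As a minimal sketch of the first module above: dart:io's WebSocket can carry the signaling messages. The URL and message format here are placeholders, not the actual Gateway protocol:

import 'dart:convert';
import 'dart:io';

// Sketch of the signaling module; the URL and JSON fields are placeholders.
Future<void> signalingDemo() async {
  final WebSocket ws = await WebSocket.connect('wss://example.com/gateway');

  // Handle replies and server-pushed messages from the gateway.
  ws.listen((dynamic data) {
    final Map<String, dynamic> message = jsonDecode(data as String);
    print('received: ${message['type']}');
  });

  // Send a publish request as a JSON message.
  ws.add(jsonEncode(<String, dynamic>{'type': 'publish', 'streamId': 'demo'}));
}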

Once these modules are complete, Client and Stream objects similar to those in the Agora Web SDK can be implemented.

One thing worth mentioning is video stream playback, which can be implemented with the RTCVideoView object in the flutter_webrtc plugin. If you want to understand the underlying principle, look into Flutter's Texture concept.
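A minimal sketch of rendering a stream with RTCVideoView is shown below; again, details vary with the flutter_webrtc version, and the widget name is ours:

import 'package:flutter/material.dart';
import 'package:flutter_webrtc/flutter_webrtc.dart';

// Sketch: attach a MediaStream to an RTCVideoRenderer and display it.
// The renderer is backed by a Flutter Texture under the hood.
class LocalPreview extends StatefulWidget {
  final MediaStream stream;
  LocalPreview(this.stream, {Key? key}) : super(key: key);

  @override
  _LocalPreviewState createState() => _LocalPreviewState();
}

class _LocalPreviewState extends State<LocalPreview> {
  final RTCVideoRenderer _renderer = RTCVideoRenderer();

  @override
  void initState() {
    super.initState();
    _initRenderer();
  }

  Future<void> _initRenderer() async {
    await _renderer.initialize();           // create the underlying texture
    setState(() {
      _renderer.srcObject = widget.stream;  // bind the stream to the renderer
    });
  }

  @override
  void dispose() {
    _renderer.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return RTCVideoView(_renderer);         // draw the video frames
  }
}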

With the SDK basically in place, we then developed the UI layer. This part of Flutter is largely inspired by the React framework, so Web developers familiar with React can easily build a video calling app running on Android/iOS on top of this SDK. The demo we shared earlier already interoperates with the existing Agora Android/iOS/Web SDKs, and the corresponding code may be open-sourced in the near future.

Conclusion

Although the Flutter community is still young, a number of excellent third-party plugins have already emerged. Combined with Dart's fairly comprehensive standard library, implementing an audio and video SDK like this, or similar functionality, doesn't require reinventing many wheels. On top of that, the Flutter environment itself is easy to set up, build, and debug, so we hit few hiccups during development.

In addition, developing the application layer feels very similar to Web development with React, and with Flutter's sub-second hot reload, the development experience and efficiency have clear advantages over native development.

Add to that the improving cross-platform story of flutter-desktop-embedding on the desktop and Hummingbird in the browser, as well as the likely arrival of Google's new operating system Fuchsia: for Web developers who want to get into native development, and for native developers looking for greater productivity and a better development experience, Flutter is an ideal addition to their technology stack in the new year.


Want to learn more about the technical team's Flutter development experience? On March 23rd, RTC Dev Meetup will invite engineers from LeanCloud, Agora, Damai.com, and Meituan-Dianping to share more with you. Click here to learn more.