Zoom (zoom.us) is a widely used online meeting tool; chances are you have already used it for office meetings, conferences, or casual chats. As mature commercial software, Zoom offers stable real-time audio and video call quality along with common features such as whiteboard, chat, screen sharing, and slide presentation. But in an era when the browser is the dominant way to deliver services, why should live audio and video be left behind? A similar meeting product built directly for the Web would arguably attract more users than Zoom, which requires installing a client: when it’s time for a meeting, you simply follow a link and jump straight in. With the Qiniu real-time audio and video Web SDK, this idea is easy to turn into reality.

First, let’s look at the key problems a Web online meeting product needs to solve:

  • Browser compatibility: it must support most mainstream desktop browsers.

Qiniu real-time audio and video is built on WebRTC, the protocol first implemented by Google in Chrome. WebRTC has since been formally written into the Web standard, and all modern browsers support it well.

  • Good call quality, low latency, high definition

Unlike traditional WebRTC, which relies on direct peer-to-peer connections between users, we use nodes deployed around the world to form a low-latency real-time network that each endpoint connects to, guaranteeing both latency and call quality.

  • Rich meeting features: slide presentation, whiteboard, screen sharing, and more.

Our SDK provides a rich feature list that covers the needs of most meeting scenarios; in theory, a complete Web version of Zoom could be reproduced with it.

  • That all sounds good, but how hard is it to integrate? Are there examples and documentation?

Of course! You can try our Web demo right now (open it in a desktop browser) at demo-rtc.qnsdk.com. The demo source code is also open source on GitHub for your reference: github.com/pili-engine…

This demo implements most of the functions provided directly by the SDK; a demo that integrates whiteboard, PPT sharing, chat, and other scenarios is still being prepared for launch, so stay tuned. We briefly walk through the integration process below; for detailed instructions, refer to the documentation on our developer site: developer.qiniu.com/rtn/sdk/441…

The development process

A simple meeting product generally goes through the following flow:

  • User registration/login (integrated by the developer; the SDK only needs a userID to distinguish users)
  • Create a meeting room/join a meeting room
  • Collect your own camera/microphone data
  • Release the collected media data to the room
  • Subscribe to other people’s media data in the room and play it in real time
  • Handle user joining/leaving, publishing/unpublishing

Mapped onto the SDK, these steps boil down to: joining a room, collecting local media streams, publishing media streams, subscribing to media streams, and handling events. The SDK encapsulates each step into just a few lines of code.

Importing the SDK

We recommend importing the SDK via npm with npm i pili-rtc-web, or you can include the packaged JS file from github.com/pili-engine…
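For example, a minimal setup could look like the following sketch; the namespace import is an assumption about the package’s export shape, so check the SDK documentation for the exact form:

// Install the SDK
// npm i pili-rtc-web

// Import it in your application code (assumed namespace import; adjust to the package's actual exports)
import * as QNRTC from "pili-rtc-web";

console.log(QNRTC); // the QNRTC namespace used in the snippets below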

Asynchronous processing

Real-time audio and video is a heavily asynchronous scenario: because everything depends on the network, all kinds of operations are asynchronous. To let developers keep better control of this asynchronous logic while writing code, the SDK avoids the cumbersome callback style and instead uses the async/await and Promise features of modern JavaScript, so callback hell is avoided as much as possible (all await code below is assumed to be wrapped inside an async function).
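As a rough sketch of this style, the whole flow can be wrapped in a single async function with ordinary try/catch error handling. The startMeeting wrapper below is only illustrative and previews the SDK calls that the following sections introduce one by one:

async function startMeeting(roomToken, domElement) {
  try {
    const myRTC = new QNRTC.QNRTCSession();      // initialize a session
    await myRTC.joinRoomWithToken(roomToken);    // join the room
    const localStream = await QNRTC.deviceManager.getLocalStream({
      video: { enabled: true },
      audio: { enabled: true },
    });                                          // collect camera/microphone
    localStream.play(domElement);                // local preview
    await myRTC.publish(localStream);            // publish to the room
  } catch (e) {
    console.error("Meeting setup failed:", e);   // every step may fail because of the network
  }
}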

Join the room

When everything is ready, the first step is to join a room. We say “join a room”, but abstracted a little, it is really “which user joins which room with which role”. That gives three unknowns: the user ID, the role ID (permissions), and the room ID. In fact there are more unknowns behind the scenes, such as which app the room belongs to (rooms in different apps are independent) and which Qiniu account the app belongs to. We therefore encode and sign all of these values into a single roomToken that is handed to the front end; the client only needs this token to join the room. (The token can be generated on the Qiniu console, or dynamically with the server-side SDK, as required.)
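In a typical setup the page asks its own backend for this roomToken before joining. The helper below is purely hypothetical; the endpoint and parameters are placeholders for whatever server API you build on top of the Qiniu server-side SDK or the console:

// Hypothetical helper: ask your own backend for a roomToken generated with the
// Qiniu server-side SDK (endpoint and parameters here are just placeholders).
async function fetchRoomToken(userId, roomName) {
  const resp = await fetch(`/api/room-token?userId=${userId}&room=${roomName}`);
  return resp.text(); // the signed roomToken string
}

const ROOM_TOKEN = await fetchRoomToken("alice", "my-meeting-room");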

const myRTC = new QNRTC.QNRTCSession(); // Initialize
await myRTC.joinRoomWithToken(ROOM_TOKEN); // Join the room

Collect local media streams

Generally, audio and video are collected at the same time, that is, from the microphone and the camera, but the SDK also supports audio-only or video-only collection if required. Calling the method is very simple: just change the options.

const DOM_ELEMENT = ... // the element the stream will be played into

// Capture the local stream
const localStream = await QNRTC.deviceManager.getLocalStream({
    video: { enabled: true },
    audio: { enabled: true }
});

// Play the collected stream
localStream.play(DOM_ELEMENT);
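For an audio-only call, as mentioned above, only the options change. The variant below assumes the option shape simply mirrors the camera example; check the documentation for the exact fields:

// Audio-only capture: same call, just different options
// (option shape assumed to mirror the camera example above; check the docs).
const audioOnlyStream = await QNRTC.deviceManager.getLocalStream({
    video: { enabled: false },
    audio: { enabled: true }
});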

Release media streams

Just call the publish method and pass in the stream object you just obtained:

await myRTC.publish(localStream);

Subscribe to media streams

After joining the room successfully, we can read the users member at any time to get the current state of the users in the room. If there are users in the room who are publishing and who are not ourselves, we can subscribe to them:

const users = myRTC.users;
users.forEach(async (user) => {
    if (user.published && user.userId !== myRTC.userId) {
        const remoteStream = await myRTC.subscribe(user.userId);
        // Play the remote stream
        remoteStream.play(DOM_ELEMENT);
    }
});

Event handling

The SDK exposes a rich list of events covering most scenarios, and handling them is simple. Take the “another user has published” event as an example:

// Listen for the event
myRTC.on('user-publish', handleUserPublish);
// Listen for the event only once
myRTC.once('user-publish', handleUserPublish);
// Cancel a listener
myRTC.off('user-publish', handleUserPublish);
// Cancel all listeners for the event
myRTC.removeAllListeners('user-publish');
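A typical handler for this event simply subscribes to the newcomer’s stream. The sketch below assumes the callback receives the publishing user object; verify the exact payload against the event list linked below:

// Assumed payload: the user who just published (verify against the event list).
async function handleUserPublish(user) {
    const remoteStream = await myRTC.subscribe(user.userId);
    remoteStream.play(DOM_ELEMENT); // play the newcomer's stream as soon as it arrives
}

myRTC.on('user-publish', handleUserPublish);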

The full event list is documented at developer.qiniu.com/rtn/sdk/442…

Advanced features

In addition to these basic features, the SDK provides many powerful advanced features that further satisfy the needs of various industries.

Screen sharing

Besides capturing the camera directly, the SDK also supports screen capture (or window capture) for sharing your screen in a meeting. Seamless switching between screen sharing and camera capture is also supported to keep the user experience smooth.

// Screen sharing
const screenStream = await QNRTC.deviceManager.getLocalStream({
    screen: { enabled: true },
    audio: { enabled: true }
});
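The captured screen stream can then be published like any other local stream. This assumes the publish call shown earlier applies to screen streams as well; check the SDK documentation to confirm:

// Publish the screen stream to the room, just like the camera stream.
// Assumption: screen streams go through the same publish call; verify in the SDK docs.
await myRTC.publish(screenStream);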

Live streaming relay

In an online meeting there may be only a dozen or so people taking part in the discussion, while most attendees only need to watch in real time without speaking. This is where real-time audio and video meets live streaming: the small number of users who need real-time interaction go through the real-time audio and video cloud with ultra-low latency (around 200 ms), while the majority who only need to watch go through the live streaming cloud with low latency (2-3 s), which keeps costs down while meeting everyone’s needs. Once the real-time streams have been relayed to the live streaming cloud, the Qiniu live streaming cloud APIs can also persist the stream data to storage for long-term preservation, completing a full business flow from real-time interaction to live viewing and finally to file storage (for on-demand playback and so on).

First, associate the corresponding live streaming cloud space on the console page of the real-time audio and video cloud, then turn on the merged-stream push switch.

If you want to push to a custom RTMP address instead of the Qiniu live streaming cloud, you can also configure that through the back-end API of the real-time audio and video cloud (see the documentation for details).

After that, the rest is done in the SDK. Enabling the live relay with the SDK is very simple: after joining the room, a single line of code is enough.

myRTC.setDefaultMergeStream(WIDTH, HEIGHT); // The width and height here correspond to the merged-stream output size configured above

With this call, the SDK will by default lay out all the streams in the room evenly and push the merged stream to the target RTMP address. If you want a custom layout, use the following API:

myRTC.setMergeStreamLayout("Target user ID", {
    w: 100, h: 100,
    x: 0, y: 0,
    muted: false,
    hidden: false
});

In addition, we also provide a real-time whiteboard service for the Web. Just like in Zoom, users can share a whiteboard on the page with other users in the room for auxiliary presentations, and the whiteboard also supports PPT and PDF presentations. For a demo of this feature, visit our online classroom demo at edu-demo.qnsdk.com.

The above only lists the common feature scenarios of meeting software; by combining these basic functions with your own scenario, a simple meeting product can be put together easily. If you are ready to get started, visiting our demo (see above) is a good first step. If you want to connect your product to our real-time audio and video service, there is a more detailed application development guide at https://developer.qiniu.com/rtn/sdk/5043/rtc-application-development-process, which explains everything from HTML/CSS to JS, with detailed explanations and examples for every line of code and every feature to help you integrate quickly.