
WebRTC is not the Web platform's only media API. The WebVR specification was introduced a few years ago to support virtual reality devices in the browser, and it has since been succeeded by the new WebXR Device API specification.

Dan Jenkins says it's relatively simple to add a WebRTC video conferencing stream to a virtual reality environment using WebVR and FreeSWITCH. FreeSWITCH is a popular open source telephony platform that has supported WebRTC for several years.

Dan is a Google Developer Expert who likes to talk about combining the latest Web APIs with RTC apps. In the article below he walks through the code he used to turn a FreeSWITCH Verto WebRTC video conference into a virtual reality meeting with WebVR.

A few weeks ago, I attended an event about WebRTC and WebVR. Adding VR content to an app running in the browser or on a phone widens its potential audience. Over the past two or three years VR has become affordable and widely available: Google Cardboard needs nothing more than a phone, and the Oculus Go requires no phone at all. I wanted to explore how WebRTC apps could use this cheap new medium.

In fact, I didn't have a clue about WebVR when I submitted my talk to the Call For Papers, but I knew I would learn something after seeing other demos. Sometimes you just submit a crazy talk proposal and see who bites.

A-Frame

There are several ways to get started with WebVR; I chose a framework called A-Frame, which lets me write some HTML and pull in a JavaScript library to build a VR experience directly. While the demo didn't go exactly as I expected, it shows that you can achieve amazing VR experiences with very little code.

If you're familiar with Web Components, you'll recognize firsthand what A-Frame is doing. You may be wondering why I used it instead of working directly with WebGL, the WebVR polyfill, and three.js, or another framework. In short, I like writing less code, and A-Frame seemed to fit the bill.

If you don't like A-Frame, you can try the other options listed on webvr.info, such as React 360.

Build a WebVR experience using WebRTC

A variety of WebRTC VR experiences are now possible using A-Frame. The Mozilla team has built a VR scene where users can see each other represented as dots and hear each other's voices. They did this using WebRTC data channels and WebRTC audio, but I couldn't find anywhere that WebRTC video was being used, which raised the challenge: how do you use real-time video in a 3D environment?

My demo is based on the open source FreeSWITCH Verto Communicator. Verto uses WebRTC, and I already knew how to use the Verto client library to talk to the Verto module in FreeSWITCH, so I was halfway through the challenge. The Verto client library handles the signalling side – it replaces SIP over WebSocket, connecting the FreeSWITCH PBX to WebRTC endpoints.

HTML

Check out the A-Frame code I added to Verto Communicator – it's only eight lines.
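A sketch of what those eight lines look like, reconstructed from the elements described below – the exact attributes, preset names, and class names are assumptions, not the verbatim original:

```html
<!-- Sketch only: attribute values are assumptions based on the description below -->
<a-scene embedded>
  <a-assets></a-assets>                          <!-- WebRTC video tag is injected here -->
  <a-entity environment="preset: default"></a-entity>
  <a-entity class="video-holder"></a-entity>     <!-- a-video element is appended here -->
  <a-entity camera look-controls>
    <a-cursor></a-cursor>
  </a-entity>
</a-scene>
```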



First, the a-scene element creates the scene that contains everything in the VR experience. The empty a-assets tag is where we will put our WebRTC video tag.

The next line, the a-entity with the environment component, is the most important for a simple immersive experience. It is an A-Frame entity with a pre-configured environment, giving you a whole scene in a single line.

The other entities handle the camera and cursor controls. Check out the components A-Frame supports – you can use them to create 3D shapes and objects.

That's it for the scene markup. Next we set up the control logic, using JavaScript.

JavaScript

Verto Communicator is an Angular-based app, so elements can be added to or removed from the main application view via directives. We need some logic to connect Verto and A-Frame, and it takes less than 40 lines of code:

function link(scope, element, attrs) {

  var newVideo = document.createElement('a-video');
  newVideo.setAttribute('height', '9');
  newVideo.setAttribute('width', '16');
  newVideo.setAttribute('position', '0 5 -15');
  console.log('ATTACH NOW');
  var newParent = document.getElementsByClassName('video-holder');
  newParent[0].appendChild(newVideo);

  window.attachNow = function(stream) {
    var video = document.createElement('video');

    var assets = document.querySelector('a-assets');

    assets.addEventListener('loadeddata', () => {
      console.log('loaded asset data');
    });

    video.setAttribute('id', 'newStream');
    video.setAttribute('autoplay', true);
    video.setAttribute('src', '');
    assets.appendChild(video);

    video.addEventListener('loadeddata', () => {
      video.play();

      // Point this A-Frame entity at the video as its source
      newVideo.setAttribute('src', '#newStream');
    });

    video.srcObject = stream;
  };
}

The link function above runs when you enter the conference view in the Verto Communicator app.

Modify Verto

As you can see, when link is called it creates a new a-video element with width and height attributes and adds it to the 3D environment.

The attachNow function is where the real magic happens. I modified the Verto library so that it calls attachNow when a session is established. By default, the Verto library takes a jQuery-style tag at initialization and attaches or removes media from that tag itself. I needed to manage the stream myself so that I could add the video tag to the empty a-assets element shown above. That lets A-Frame do its thing – taking the video data and painting it onto a canvas inside the a-video element in the 3D environment.
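The exact change lives inside the Verto library itself, but the pattern is simple; here is a minimal sketch of it – the names below are illustrative, not Verto's real API:

```javascript
// Sketch of the hook pattern only – Verto's real internals differ.
// Instead of attaching media to a jQuery-style tag itself, the modified
// library hands the incoming MediaStream to a globally registered hook.
const hooks = {};

function registerAttachHook(fn) {
  hooks.attachNow = fn;
}

// Called by the (modified) library when a session's remote stream arrives.
function onRemoteStream(stream) {
  if (typeof hooks.attachNow === 'function') {
    hooks.attachNow(stream);
  }
}

// Usage: the A-Frame glue code registers its handler once at startup.
let received = null;
registerAttachHook(function (s) { received = s; });
onRemoteStream({ id: 'demo-stream' });
console.log(received.id); // "demo-stream"
```

In the real app, window.attachNow from the directive above plays the role of the registered hook.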

I also added a function to vertoService.js:

function updateVideoRes() {
  // unmutedMembers is tracked elsewhere in vertoService.js
  data.conf.modCommand('vid-res', (unmutedMembers * 1280) + 'x720');
  attachNow();
  document.getElementsByTagName('a-video')[0].setAttribute('width', unmutedMembers * 16);
}

updateVideoRes changes the resolution of the video that FreeSWITCH outputs for the Verto session. As users join the meeting, we want the video strip in the 3D environment to grow wider: each time a new member joins, we widen the output so that everyone appears side by side.
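To make the arithmetic concrete, here is the resolution string that modCommand sends for a few member counts (vidResFor is a hypothetical helper for illustration, not part of vertoService.js):

```javascript
// Hypothetical helper mirroring the arithmetic in updateVideoRes:
// each unmuted member contributes one 1280x720 tile, side by side.
function vidResFor(unmutedMembers) {
  return (unmutedMembers * 1280) + 'x720';
}

console.log(vidResFor(1)); // "1280x720" – a single participant
console.log(vidResFor(3)); // "3840x720" – three tiles wide
```

The matching a-video width of unmutedMembers * 16 keeps each tile at a 16:9 aspect ratio against the fixed height of 9.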

The result

This is the final result, a VR environment that includes me and Simon Woodhead.

One of the great things about WebVR is that you don't need a VR headset for it to work: you can just click a button and get a full-screen VR experience much like what you would see inside a headset. You can check out the video on YouTube.

What have we learned?

The demo only half worked, but the big takeaway is that even so, it was a great way to watch a video conference. For viewers inside VR, though, there is no practical way to add their own video to the stream while they are wearing a headset. Maybe that's why Microsoft's HoloLens tackles this with mixed reality instead.