By Very Good Ventures Team

We (the Very Good Ventures team) partnered with Google to launch the interactive I/O Photo Booth experience at Google I/O this year. You can take photos with beloved Google mascots — Dash for Flutter, Android Jetpack, Dino for Chrome, and Sparky for Firebase — and decorate your photos with a variety of stickers, including party hats, pizza, funky glasses, and more. Of course, you can also download your photo and share it on social media, or use it as your avatar!

Dash for Flutter, Sparky for Firebase, Android Jetpack, and Dino for Chrome

We built the I/O Photo Booth using Flutter web and Firebase. Because Flutter now supports building web applications, we thought this would be a great way for attendees around the world to easily access the experience at this year's online Google I/O conference. The web removes the barrier of having to install apps through an app store and gives users the flexibility to choose which device to run the app on: mobile, desktop, or tablet. As long as a browser is available, users can open the I/O Photo Booth directly without downloading anything.

Although the I/O Photo Booth was designed as a web experience, all of the code is written with a platform-independent architecture. Once native features such as the camera plugin are supported on every platform, the same code can run everywhere: desktop, web, and mobile.

Build a virtual photo booth using Flutter

Build the Web version of the Flutter camera plugin

The first challenge was to build a webcam plugin for Flutter on the web. We initially reached out to the Baseflow team, who maintain the existing open source Flutter camera plugin. Baseflow is focused on building first-class camera plugin support for iOS and Android, and we were happy to work with them to add web support to the plugin using a federated plugin approach. We aimed to conform to the official plugin interface so that our implementation could be merged back into the official plugin when ready.

We identified two APIs that are critical to building the I/O Photo Booth camera experience in Flutter:

  • Initialize the camera: The application first needs access to your device's camera. On a desktop device this is typically a webcam; on a mobile device we chose the front-facing camera. We also request an ideal resolution of 1080p to maximize photo quality, depending on the user's device.
  • Take photos: We used the built-in HtmlElementView, which uses platform views to render native web elements as Flutter widgets. In this project, we render a VideoElement as a native HTML element, which is what you see on screen before taking a photo. We also use a CanvasElement to capture a frame from the media stream when you click the take photo button.
Future<CameraImage> takePicture() async {
  final videoWidth = videoElement.videoWidth;
  final videoHeight = videoElement.videoHeight;
  final canvas = html.CanvasElement(
    width: videoWidth,
    height: videoHeight,
  );
  canvas.context2D
    ..translate(videoWidth, 0)
    ..scale(-1, 1)
    ..drawImageScaled(videoElement, 0, 0, videoWidth, videoHeight);
  final blob = await canvas.toBlob();
  return CameraImage(
    data: html.Url.createObjectUrl(blob),
    width: videoWidth,
    height: videoHeight,
  );
}
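The initialization step described above boils down to a getUserMedia call with resolution and facing-mode constraints. As a sketch (with a hypothetical helper name; the actual plugin internals differ), the constraints might be built like this:

```typescript
type Facing = 'user' | 'environment';

interface VideoConstraints {
  audio: boolean;
  video: {
    facingMode: Facing;
    width: { ideal: number };
    height: { ideal: number };
  };
}

// Ask for an *ideal* resolution of 1080p and the front-facing camera.
// Browsers treat `ideal` as a preference, falling back gracefully when
// the hardware cannot satisfy it.
function buildVideoConstraints(facingMode: Facing = 'user'): VideoConstraints {
  return {
    audio: false,
    video: {
      facingMode,
      width: { ideal: 1920 },
      height: { ideal: 1080 },
    },
  };
}

// In the browser, the plugin would then do roughly:
//   const stream = await navigator.mediaDevices.getUserMedia(buildVideoConstraints());
//   videoElement.srcObject = stream;
```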

Camera permission

After completing the web implementation of the Flutter camera plugin, we created an abstraction to display different UIs based on the camera permission state. For example, we can display an explanatory message while waiting for you to allow or deny camera access in the browser, or when there is no camera available to access.

Camera(
 controller: _controller,
 placeholder: (_) => const SizedBox(),
 preview: (context, preview) => PhotoboothPreview(
   preview: preview,
   onSnapPressed: _onSnapPressed,
 ),
 error: (context, error) => PhotoboothError(error: error),
)

In the abstraction above, the placeholder builder returns the initial screen shown while the application waits for you to grant camera permission. The preview builder returns the real camera screen once permission is granted and displays the live video stream. The error builder at the end catches any error that occurs and displays the corresponding message.

Generate mirrored photos

Our next challenge was generating mirrored photos. If we used the camera feed as-is, the photo you see would not match what you see when you look in a mirror. Some devices have settings that handle this, so that when you take a picture with the front-facing camera, you are actually looking at a mirrored version of your photo.

In our first approach, we tried to capture the default camera view and flip it 180 degrees around the Y axis. This seemed to work, but we then ran into a problem where Flutter would occasionally overwrite the flip, causing the video to revert to the unmirrored version.

With the help of the Flutter team, we solved this by placing the VideoElement inside a DivElement and updating the VideoElement to fill the width and height of the DivElement. This let us apply the mirror transform to the video element without it being overridden by Flutter, because the parent element is the div. With that, we had the mirrored camera view we needed!
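For intuition, the translate-then-scale trick used when capturing the photo can be read as a simple coordinate mapping: translating by the video width and then scaling x by -1 sends a horizontal coordinate x to width - x, exactly the flip a mirror produces. A tiny sketch of that mapping (pure math, not the plugin's actual code):

```typescript
// Composite of translate(width, 0) then scale(-1, 1): a source
// coordinate x is first negated to -x, then shifted by width,
// landing at width - x.
function mirroredX(x: number, width: number): number {
  return width - x;
}

// Applying the mirror twice returns the original coordinate,
// which is why flipping an already-flipped view looks unmirrored.
```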

△ Unmirrored view

△ Mirror view

Maintain the aspect ratio

Maintaining a 4:3 aspect ratio on large screens and 3:4 on small screens is harder than it looks! Maintaining the aspect ratio matters both for fitting the overall design of the web application and for ensuring photos look crisp and natural when shared on social media. This is challenging because the aspect ratios of built-in cameras vary widely across devices.

To enforce the aspect ratio, the application first requests the maximum resolution available from the device's camera using the JavaScript getUserMedia API. We then feed the resulting stream to the VideoElement, which is what you see in the camera view (the mirrored version, of course). We also apply the object-fit CSS property to ensure the video element covers its parent container, and we use Flutter's built-in AspectRatio widget to set the aspect ratio. The camera therefore makes no assumptions about the aspect ratio it displays; it always returns the maximum supported resolution and then conforms to the constraints provided by Flutter (4:3 or 3:4 in this case).

final orientation = MediaQuery.of(context).orientation;
final aspectRatio = orientation == Orientation.portrait
    ? PhotoboothAspectRatio.portrait
    : PhotoboothAspectRatio.landscape;
return Scaffold(
  body: _PhotoboothBackground(
    aspectRatio: aspectRatio,
    child: Camera(
      controller: _controller,
      placeholder: (_) => const SizedBox(),
      preview: (context, preview) => PhotoboothPreview(
        preview: preview,
        onSnapPressed: () => _onSnapPressed(
          aspectRatio: aspectRatio,
        ),
      ),
      error: (context, error) => PhotoboothError(error: error),
    ),
  ),
);
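The sizing logic can be sketched as two small functions: one picking the target ratio from the orientation, and one computing the cover-fit size that the object-fit CSS property effectively applies to the raw camera frame (illustrative math under assumed names, not the app's actual code):

```typescript
// 4:3 in landscape, 3:4 in portrait, as described above.
function targetAspectRatio(portrait: boolean): number {
  return portrait ? 3 / 4 : 4 / 3;
}

// object-fit: cover scales the source uniformly until BOTH dimensions
// cover the target box, preserving the source's own aspect ratio
// (overflow is clipped by the parent container).
function coverSize(
  srcW: number, srcH: number, boxW: number, boxH: number,
): { width: number; height: number } {
  const scale = Math.max(boxW / srcW, boxH / srcH);
  return { width: srcW * scale, height: srcH * scale };
}
```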

Drag and drop to add stickers

A big part of the I/O Photo Booth experience is taking pictures with your favorite Google mascots and adding props. You can drag and drop mascots and props in your photo, and resize and rotate them until the image looks just right. You'll also notice that mascots animate as you add them to the screen, an effect made possible by sprite sheets.

for (final character in state.characters)
 DraggableResizable(   
   canTransform: character.id == state.selectedAssetId,
   onUpdate: (update) {
     context.read<PhotoboothBloc>().add(
       PhotoCharacterDragged(
         character: character, 
         update: update,
       ),
     );
   },
   child: _AnimatedCharacter(name: character.asset.name),
 ),

To support resizing, we created a widget that can be dragged and resized and that holds other Flutter widgets, in this case the mascots and props. The widget uses a LayoutBuilder to handle scaling based on window constraints. Internally, we use a GestureDetector to hook into the onScaleStart, onScaleUpdate, and onScaleEnd events; these callbacks provide the gesture details needed to reflect the user's actions on the mascots and props.

Using the data from the GestureDetector callbacks, the Transform widget and a Matrix4 transform scale and rotate the mascots and props based on the user's gestures.

Transform(
  alignment: Alignment.center,
  transform: Matrix4.identity()
    ..scale(scale)
    ..rotateZ(angle),
  child: _DraggablePoint(...),
)
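The scale and angle fed into that matrix come from the gesture callbacks. Conceptually, a two-finger gesture's scale is the ratio of the current finger distance to the starting distance, and its rotation is the change in the angle of the line between the fingers. Flutter's ScaleUpdateDetails reports these for you; the underlying math looks roughly like this illustrative sketch:

```typescript
type Point = { x: number; y: number };

// Scale: how much the distance between the two fingers has grown
// or shrunk since the gesture started.
function gestureScale(start: [Point, Point], current: [Point, Point]): number {
  const dist = (a: Point, b: Point) => Math.hypot(b.x - a.x, b.y - a.y);
  return dist(current[0], current[1]) / dist(start[0], start[1]);
}

// Rotation: how far the line between the fingers has turned, in radians.
function gestureRotation(start: [Point, Point], current: [Point, Point]): number {
  const angle = (a: Point, b: Point) => Math.atan2(b.y - a.y, b.x - a.x);
  return angle(current[0], current[1]) - angle(start[0], start[1]);
}
```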

Finally, we created a separate package to determine whether your device supports touch input, and the draggable, resizable widget adjusts accordingly. On touch devices you won't see resize anchors or a rotate icon, because you can manipulate images directly with pinch and spread gestures; on devices without touch input, such as a desktop, we added anchor points and a rotate icon to support click-and-drag.
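The touch check itself can be as small as inspecting the reported touch points. A sketch of the idea (the project's actual package name and logic differ), with the navigator-like object passed in so the logic can run outside a browser:

```typescript
interface NavigatorLike {
  maxTouchPoints?: number;
}

// A device "supports touch" if the environment reports at least one
// touch point. In a browser you would pass the real `navigator`.
function supportsTouch(nav: NavigatorLike): boolean {
  return (nav.maxTouchPoints ?? 0) > 0;
}

// The UI then branches on this: touch devices get pinch-to-resize
// with no visible anchors; pointer devices get drag anchors and a
// rotate icon.
```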

Optimize Flutter for Web

Develop for the Web using Flutter

This was one of the first web-only projects we built with Flutter, and it has different characteristics from a mobile app.

We needed to make sure the application was responsive and adaptive on any browser and any device. In other words, the I/O Photo Booth had to scale with the browser size and handle input from both mobile devices and the web. We did this in several ways:

  • Responsive resizing: The user can resize the browser at will, and the interface responds. If your browser window is portrait, the camera flips from a 4:3 landscape view to a 3:4 portrait view.
  • Responsive design: On desktop browsers, Dash, Android Jetpack, Dino, and Sparky appear on the right; on mobile devices they appear at the top. On desktop we used a drawer to the right of the camera, while on mobile we used the BottomSheet class.
  • Adaptive input: Mouse clicks are treated as input if you access the I/O Photo Booth from a desktop device, and touch is used if you access it from a tablet or phone. This matters especially when resizing stickers and placing them in photos: mobile devices support pinch gestures, while desktop devices support click-and-drag.

Extensible architecture

We also built this application the way we build scalable mobile applications. Our I/O Photo Booth rests on solid foundations, including sound null safety, internationalization, and 100% unit and widget test coverage from the very first commit. We use flutter_bloc for state management because it lets us easily test business logic and observe every state change in the application. This is especially useful for developer logging and traceability, since we can observe exactly how the app moves from one state to another and isolate problems faster.

We also adopted a feature-driven, single-codebase structure. Stickers, sharing, and the live camera preview, for example, each live in their own folder containing that feature's UI components and business logic. Features draw on external dependencies, such as the camera plugin, kept in a packages subdirectory. With this architecture, our team could work on multiple features in parallel without interfering with each other, minimizing merge conflicts and reusing code effectively. For example, the UI component library is a separate package called photobooth_ui, and the camera plugin is separate as well.

By splitting components into separate packages, we can extract the ones that aren't tied to this particular project and open source them. Much like the Material and Cupertino component libraries, we could even open source the photobooth_ui package for the Flutter community to use.

Firebase + Flutter = Perfect combination

Firebase Auth, storage, hosting, etc

The photo booth leverages the Firebase ecosystem for various backend integrations. The firebase_auth package signs users in anonymously as soon as the application starts: each session uses Firebase Auth to create an anonymous user with a unique ID.

This setup comes into play when you reach the share page. You can download your photo to save as a profile picture or share it directly to social media. If you download the photo, it is stored on your local device. If you share it, we use the firebase_storage package to store it in Firebase for later retrieval when generating the social media post.

We defined Firebase security rules on the storage bucket to ensure photos are immutable once created. This prevents other users from modifying or deleting photos in the bucket. In addition, we used Google Cloud's object lifecycle management to define a rule that deletes all objects older than 30 days, though you can request that your photo be deleted sooner by following the instructions listed in the app.
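A storage rules file enforcing that kind of immutability might look like the following sketch; the uploads/ path is an assumption for illustration, not the project's actual layout:

```
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    // Photos can be read and created, but never changed or
    // removed by clients once written.
    match /uploads/{photo} {
      allow read, create;
      allow update, delete: if false;
    }
  }
}
```

Server-side cleanup (the 30-day deletion) is handled separately by the bucket's lifecycle rule, so clients never need delete permission.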

The application is also hosted quickly and securely with Firebase Hosting. Using the action-hosting-deploy GitHub Action, we automatically deploy the application to Firebase Hosting based on the target branch. When we merge changes into the main branch, the action triggers a workflow that builds the development version of the app and deploys it to Firebase Hosting; likewise, merging into the release branch triggers a production deployment. Combining GitHub Actions with Firebase Hosting let our team iterate quickly and always preview the latest version.
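A deploy workflow along those lines might look like this sketch; the branch names, secret names, and build steps are assumptions rather than the project's actual configuration:

```yaml
# Sketch: deploy the web build to Firebase Hosting on merge to main.
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: subosito/flutter-action@v1
      - run: flutter build web
      - uses: FirebaseExtended/action-hosting-deploy@v0
        with:
          repoToken: ${{ secrets.GITHUB_TOKEN }}
          firebaseServiceAccount: ${{ secrets.FIREBASE_SERVICE_ACCOUNT }}
          channelId: live
```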

Finally, we used Firebase Performance Monitoring to monitor key web performance metrics.

Use Cloud Functions for networking

Before generating your social post, we first make sure the photo is pixel perfect. The final image includes a nice frame featuring the I/O Photo Booth and is cropped to a 4:3 or 3:4 aspect ratio so it looks great in social posts.

We use the browser's canvas APIs, via CanvasElement, to composite the layers, that is, the original photo, mascots, and props, into a single image that you can download. This processing step is handled by the image_compositor package.

We then leverage Firebase's powerful Cloud Functions to share photos to social media. When you click the share button, you are taken to a new tab with a pre-populated post for the social platform of your choice. The post also includes a link to a Cloud Function we wrote. When the social platform resolves that URL, it detects the dynamic metadata generated by the Cloud Function and displays a nice preview of your photo in the post, along with a link to a share page where your friends can view the photo and navigate back to the I/O Photo Booth app to take their own.

function renderSharePage(imageFileName: string, baseUrl: string): string {
 const context = Object.assign({}, BaseHTMLContext, {
   appUrl: baseUrl,
   shareUrl: `${baseUrl}/share/${imageFileName}`,
   shareImageUrl: bucketPathForFile(`${UPLOAD_PATH}/${imageFileName}`),
 });
 return renderTemplate(shareTmpl, context);
}
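The dynamic metadata that makes the preview work is essentially a set of Open Graph and Twitter card tags rendered into the page's head. Illustratively (placeholder values, not the project's real template):

```html
<meta property="og:title" content="Check out my I/O Photo Booth photo!" />
<meta property="og:image" content="https://storage.googleapis.com/<bucket>/uploads/<photo>.png" />
<meta property="og:url" content="https://<app-url>/share/<photo>.png" />
<meta name="twitter:card" content="summary_large_image" />
```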

The finished product is as follows:

For more information on how to use Firebase in the Flutter project, check out this Codelab.

The final results

This project shows in detail how to build an application for the web. To our surprise, the web workflow was very similar to the experience of building mobile apps with Flutter. We had to consider window sizes, adaptivity, touch and mouse input, image load times, browser compatibility, and all the other factors that go into building a web application, yet we could still use the same patterns, architecture, and coding standards when writing Flutter code, which made building for the web feel familiar. The tooling and growing ecosystem of Flutter packages, including the Firebase suite, helped us bring the I/O Photo Booth to life.

The Very Good Ventures team that built I/O photo booths

We've open-sourced all the code, so check out the photo booth project on GitHub, and don't forget to show off your photos!