How it's made: Holobooth
A virtual photo booth experience showcasing Flutter and Machine Learning
Introducing the Flutter Forward Holobooth, a new virtual photo booth experience showcasing the power of Flutter, Firebase, and machine learning (using MediaPipe and TensorFlow.js). Start by selecting your avatar (Dash or Sparky) and transport yourself to a tropical beach, volcanic mountain, outer space, the ocean floor, or somewhere else! Since we can't transport everyone to Nairobi to attend Flutter Forward in person, we wanted to provide a virtual experience that is just as exciting. With Holobooth, you can capture a short video to commemorate your virtual visit, then show your friends by sharing it on Twitter or Facebook.
The Holobooth builds on the first version of the Photo Booth app from Google I/O 2021. Instead of taking photos of you alongside Dash or Sparky, Holobooth uses machine learning to animate Dash or Sparky based on your facial expressions.
We'll break down how our team collaborated with Google to create a more immersive and futuristic photo booth experience by tapping into the power of Google tools. We used Flutter and Firebase to build the Holobooth app, and web ML in JavaScript took the experience to the next level with virtual, interactive 3D avatars. Let's dive into how we built it!
Detecting faces with TensorFlow.js
One of the most exciting features of the Holobooth is the ability to map live video of your face onto a 3D model of Dash (or Sparky) as you travel through their virtual world. If your face expresses surprise, Dash's face expresses surprise, and so on. To achieve this, we used the camera plugin for Flutter web and TensorFlow.js to detect the user's face within the frame of the camera. More specifically, we used the MediaPipe FaceMesh model, which estimates 468 3D face landmarks in real time, to detect features of the user's face across web and mobile browsers.
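To give a sense of how the Flutter app reaches the model, here is a minimal interop sketch. The wrapper names below are illustrative assumptions rather than the project's actual bindings (those live in the tensorflow_models package folder); the underlying createDetector and estimateFaces calls mirror the TensorFlow.js face-landmarks-detection JavaScript API:

@JS()
library tensorflow_interop;

import 'package:js/js.dart';
import 'package:js/js_util.dart' as js_util;

/// Binding for faceLandmarksDetection.createDetector from the
/// TensorFlow.js face-landmarks-detection library (loaded via a script tag).
@JS('faceLandmarksDetection.createDetector')
external Object _createDetector(String model, DetectorConfig config);

@JS()
@anonymous
class DetectorConfig {
  external factory DetectorConfig({String? runtime, bool? refineLandmarks});
}

/// Creates the MediaPipeFaceMesh detector, which estimates 468 3D
/// landmarks per face.
Future<Object> createFaceDetector() => js_util.promiseToFuture<Object>(
      _createDetector(
        'MediaPipeFaceMesh',
        DetectorConfig(runtime: 'tfjs', refineLandmarks: true),
      ),
    );

/// Runs detection against an HTML video element and returns one result
/// per detected face, each carrying keypoints and a bounding box.
Future<List<dynamic>> estimateFaces(Object detector, Object video) =>
    js_util.promiseToFuture<List<dynamic>>(
      js_util.callMethod(detector, 'estimateFaces', [video]),
    );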
Based on the position of each facial feature, we can determine whether the user is in frame, whether their eyes or mouth are open, and more. As the user moves around the camera view, the MediaPipe FaceMesh model (available via the TensorFlow.js Face Landmarks Detection package) lets us track the exact coordinates of the user's features so that we can mirror them on Dash or Sparky. While there isn't an official Dart package for TensorFlow.js yet, the Dart js package allowed us to import the JavaScript library into a Flutter web app (see the tensorflow_models package folder for more details). For the full picture, you can dig into the face_geometry.dart file:
/// Distills a detected face into the handful of measurements the avatar needs.
class FaceGeometry {
  FaceGeometry({
    required tf.Face face,
    required tf.Size size,
  }) : this._(
          rotation: FaceRotation(keypoints: face.keypoints),
          leftEye: LeftEyeGeometry(
            keypoints: face.keypoints,
            boundingBox: face.boundingBox,
          ),
          rightEye: RightEyeGeometry(
            keypoints: face.keypoints,
            boundingBox: face.boundingBox,
          ),
          mouth: MouthGeometry(
            keypoints: face.keypoints,
            boundingBox: face.boundingBox,
          ),
          distance: FaceDistance(
            boundingBox: face.boundingBox,
            imageSize: size,
          ),
        );

  const FaceGeometry._({
    required this.rotation,
    required this.mouth,
    required this.leftEye,
    required this.rightEye,
    required this.distance,
  });

  final FaceRotation rotation;
  final MouthGeometry mouth;
  final LeftEyeGeometry leftEye;
  final RightEyeGeometry rightEye;
  final FaceDistance distance;
}
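Each geometry class boils a subset of the 468 keypoints down to a single value the avatar can use. As an illustrative sketch (the actual math in face_geometry.dart differs), mouth openness can be estimated from the gap between the inner lips, normalized by the size of the face:

import 'dart:math' as math;

/// Illustrative sketch of how a geometry class distills keypoints into
/// one value; the real MouthGeometry in the repo is more involved.
class MouthGeometrySketch {
  MouthGeometrySketch({
    required List<Keypoint> keypoints,
    required double faceHeight,
  })  // Indices 13 and 14 are the inner upper and lower lip centers in
      // the MediaPipe FaceMesh topology.
      : openness = _distance(keypoints[13], keypoints[14]) / faceHeight;

  /// 0 when the lips touch; grows as the mouth opens. Normalizing by the
  /// face's bounding-box height keeps the value independent of how close
  /// the user stands to the camera.
  final double openness;

  static double _distance(Keypoint a, Keypoint b) {
    final dx = a.x - b.x;
    final dy = a.y - b.y;
    return math.sqrt(dx * dx + dy * dy);
  }
}

/// 2D projection of a FaceMesh landmark.
class Keypoint {
  const Keypoint(this.x, this.y);
  final double x;
  final double y;
}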
Animating backgrounds and avatars with Rive and TensorFlow.js
We turned to Rive to bring Holobooth animations to life. Rive is a web app built in Flutter that provides tools for building highly performant, lightweight, interactive animations that are easy to integrate into a Flutter app. We collaborated with talented designers at Rive and HOPR design studio to create animated Rive graphics that work seamlessly within our Flutter app. The animated backgrounds and avatars are Rive animations.
The avatars use Rive State Machines that allow us to control how an avatar behaves and looks. In the Rive State Machine, designers specify all of the inputs. Inputs are values that are controlled by your app; you can think of them as the contract between the design and engineering teams. Your product's code can change the values of the inputs at any time, and the State Machine reacts to those changes.
For Holobooth, we used inputs to control things like how wide the mouth is open. Using the features detected by the FaceMesh model, we map each one to the corresponding input on our avatar models. With the StateMachineController, we transform the model output to determine how the avatar appears on screen.
class CharacterStateMachineController extends StateMachineController {
  // Attaches to the state machine the designers named 'State Machine 1'
  // on the avatar's artboard.
  CharacterStateMachineController(Artboard artboard)
      : super(
          artboard.animations.whereType<StateMachine>().firstWhere(
            (stateMachine) => stateMachine.name == 'State Machine 1',
          ),
        );
}
For example, the avatar models have a property that measures how open the mouth is, from 0 (fully closed) to 1 (fully open). If the user closes their mouth within the camera view, the app computes the corresponding value and feeds it into the avatar model, so your avatar's mouth closes on screen as well.
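Feeding a value into the state machine is a one-liner through the Rive runtime. A minimal sketch, assuming an input named 'MouthOpen' (the real input names are defined by the designers in the Rive files):

import 'package:rive/rive.dart';

/// Pushes the latest mouth measurement into the avatar's state machine.
/// 'MouthOpen' is an assumed input name for illustration.
void applyMouthOpenness(StateMachineController controller, double openness) {
  final input = controller.findInput<double>('MouthOpen');
  // Clamp to the 0-1 range the state machine expects.
  input?.value = openness.clamp(0.0, 1.0).toDouble();
}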
Capturing the dynamic photo with Firebase
The main feature of Holobooth is the GIF or video that you can share to celebrate Flutter Forward. We turned to Cloud Functions for Firebase to generate your dynamic photo and upload it to Cloud Storage for Firebase. Once you press the camera button, the app captures frames for five seconds. A Cloud Function then uses ffmpeg to convert the frames into a single GIF and a video, which are uploaded to Cloud Storage for Firebase. You can download your GIF or video for later viewing, or manually upload it to social media.
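The deployed function isn't reproduced here, but conceptually the conversion boils down to a couple of ffmpeg invocations. A minimal Dart sketch, with illustrative file names and frame rate:

import 'dart:io';

/// Sketch of the conversion step: stitch the captured frames
/// (frame_0.png, frame_1.png, ...) into a GIF and an MP4 with ffmpeg.
/// Paths and frame rate are illustrative, not the deployed function's values.
Future<void> convertFrames(String framesDir, String outDir) async {
  final gif = await Process.run('ffmpeg', [
    '-framerate', '12',
    '-i', '$framesDir/frame_%d.png',
    '$outDir/holobooth.gif',
  ]);
  final mp4 = await Process.run('ffmpeg', [
    '-framerate', '12',
    '-i', '$framesDir/frame_%d.png',
    '-pix_fmt', 'yuv420p', // widest player compatibility
    '$outDir/holobooth.mp4',
  ]);
  if (gif.exitCode != 0 || mp4.exitCode != 0) {
    throw Exception('ffmpeg conversion failed');
  }
}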
To share your GIF directly to Twitter or Facebook, click the share button. You are then taken to the selected platform with a pre-populated post containing a photo of the first frame of your video. To see the full video, click the link to your holocard: a web page that displays your video in full, along with a button directing visitors to try out Holobooth for themselves!
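The share links themselves are ordinary web intents. A small sketch, with illustrative post text, using the public Twitter web intent and Facebook sharer URL formats:

/// Builds the pre-populated share links; the holocard URL points at the
/// page hosting the full video.
Uri twitterShareUri(Uri holocardUrl) =>
    Uri.https('twitter.com', '/intent/tweet', {
      'text': 'Check out my Flutter Forward Holobooth video!',
      'url': holocardUrl.toString(),
    });

Uri facebookShareUri(Uri holocardUrl) =>
    Uri.https('www.facebook.com', '/sharer.php', {'u': holocardUrl.toString()});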
Challenges and how we addressed them
Holobooth contains a lot of elements that expand what's possible with Flutter, like machine learning models and Rive graphics, all while ensuring a smooth, performant experience for users.
Working with TensorFlow.js was a first for us at Very Good Ventures. There are currently no official Flutter libraries, so much of our early work on this project focused on experimenting with the available models to figure out which one fit our needs. Once we settled on the landmark detection model, we had to make sense of the data the model outputs and map it onto the Rive animations.
The official Flutter camera plugin gave us a lot of functionality out of the box, but it doesn't currently support streaming images on the web. For Holobooth, we forked the camera plugin to add this functionality. We hope the official plugin supports it in the future.
Another challenge was optimizing for performance. Recording the screen can be an expensive operation because the app captures a lot of data. We also had to account for users accessing the app from different browsers and devices, and we wanted the app to be performant and smooth no matter what device they're using. When you access Holobooth on desktop, the video backgrounds are animated and reflect a landscape orientation. To optimize for mobile browsers, the backgrounds are static and cropped to fit a portrait orientation. Since mobile screens are smaller than desktop screens, we also reduced the resolution of image assets to cut the initial page load and the amount of data used by the device.
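In practice this comes down to branching on the viewport size. A minimal sketch, with an assumed breakpoint and asset names:

import 'package:flutter/widgets.dart';
import 'package:rive/rive.dart';

/// Sketch of the responsive background split; the 1024px breakpoint and
/// asset names are illustrative, not the app's actual values.
Widget holoboothBackground(BuildContext context) {
  final isDesktop = MediaQuery.of(context).size.width >= 1024;
  if (isDesktop) {
    // Wide viewports get the animated, landscape Rive background.
    return RiveAnimation.asset('assets/backgrounds/beach.riv');
  }
  // Mobile browsers get a static, portrait-cropped image at a lower
  // resolution to cut the initial page load.
  return Image.asset('assets/backgrounds/beach_portrait.png');
}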
For more details on how we addressed these challenges and more, you can check out the open source code. We hope that this can serve as inspiration for developers wanting to experiment with TensorFlow.js, Rive, and videos, or even those just looking to optimize their web apps.
Looking forward
In creating this demo, we wanted to explore the potential for Flutter web apps to integrate with TensorFlow.js models in an easy, performant, and fun way. While a lot of what we've built is still experimental, we're excited for the future of machine learning in Flutter apps to create delightful experiences for users on any device! Join the community conversation and let us know what you think, and how you might use machine learning in your next Flutter project.
Take a video in the Holobooth and share it with us on social media!