With Instagram, Snap, and TikTok upping the filter and effects game, the bar is as high as ever for photo and video apps. Keep the fun going in your app by making sure your machine vision models stay on the leading edge with our pre-made, easy-to-use datasets.
Portrait Segmentation & Matting
Quality photo & video transformation often starts with accurate, anti-aliased background removal. Developing better models for this task requires a broader set of people, hairstyles, accompanying objects, and backgrounds, which the datasets below provide.
See things through their eyes by implementing gaze detection. Understanding gaze lets your application gauge attentiveness levels as well as deduce spatial relationships between important objects in meetings. Gaze correction to adjust for camera offsets also helps meetings feel more natural and connected.
Automatically choose the photo where everyone is looking with gaze detection, and deduce the spatial relationships between the important objects in a scene.
Anyone can be a professional photographer or videographer with the right re-lighting. With advances in machine-vision upscaling and re-lighting techniques, your app can turn an average shot into an extraordinary one.
The average cost of full segmentation labels for a single image is $6.40* – any additional label types cost more on top of that. Our synthetic data includes full segmentation, landmarks, surface normals, and more – for as little as $0.03 per image.
Of course, that’s only the labeling cost. Procuring the images to label is incredibly time-consuming as well. It can take weeks or months to legally collect diverse images of individuals’ faces for most companies. Our datasets are available immediately, and our programmatic API returns generated images and labels in minutes to hours.
*Based on scale.ai pricing, January 2021.
Not only do our datasets provide training data affordably and nearly instantaneously, they also offer far more than human-collected & labeled data can, so you can build more advanced, more ethical computer vision models.
100% accurate ground truth – every time. Eliminate your QA step on every label.
Get peace of mind: with non-real humans, privacy concerns are history.
Even sampling across skin tones, ethnicities, and ages, for more ethical machine vision.
New Label Types
Use cutting-edge models with depth, normals, dense 3D landmarks, & sub-segmentation.
Combine identities, hair styles, facial hair, makeup, hats, glasses, face masks, lighting conditions, and camera angles for trillions of possibilities – all at the speed of writing JSON with our API.
Get Going Fast
Check out our snippets to jump-start the training process.
As with all synthetic data, there’s a shift from our domain to the one captured by real cameras. Although there’s no universal domain adaptation approach for every use-case, we stand on the shoulders of giants to get great results.
Adaptive Batch Normalization
Adaptive Batch Normalization is a simple technique: it can be applied to any network with batch-normalization layers, and it combines with all the other techniques here for surprisingly good results.
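The core idea can be sketched in a few lines: at adaptation time, normalize activations with statistics computed on the (unlabeled) target domain rather than the source-domain statistics saved during training. This is a minimal numpy sketch, not our production snippet; `features`, `gamma`, and `beta` are placeholder names for your own per-layer activations and learned affine parameters.

```python
import numpy as np

def adabn_normalize(features, gamma, beta, eps=1e-5):
    """Adaptive Batch Normalization sketch: re-normalize activations
    using statistics estimated on target-domain data, while keeping the
    affine parameters (gamma, beta) learned on the source domain."""
    # features: (N, C) activations collected from target-domain images
    mu = features.mean(axis=0)    # per-channel mean on the target domain
    var = features.var(axis=0)    # per-channel variance on the target domain
    x_hat = (features - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```

In a framework like PyTorch, the equivalent move is simply resetting each BatchNorm layer's running statistics and re-estimating them with forward passes over target-domain images, with gradients disabled so the weights stay frozen.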
Image-to-image translation methods coupled with a self-regularization loss allow dataset-level refinement. While these methods require an additional pipeline to train, that pipeline is completely independent and requires no modifications to the main training pipeline.
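The refiner's objective in this family of methods (e.g. SimGAN-style refinement) is typically an adversarial term plus the self-regularization term, which keeps the refined image close to its synthetic input so the original labels remain valid. A minimal numpy sketch of such a combined loss, with `disc_real_prob` standing in for a discriminator's predicted "real" probabilities:

```python
import numpy as np

def refiner_loss(refined, synthetic, disc_real_prob, lam=0.5):
    """Sketch of a refiner objective: adversarial realism plus
    self-regularization toward the labeled synthetic input."""
    # Adversarial term: push the discriminator to score refined
    # images as real (log-loss on its predicted probabilities).
    adv = -np.log(disc_real_prob + 1e-8).mean()
    # Self-regularization term: L1 distance to the synthetic input,
    # so refinement changes low-level style (texture, noise) without
    # invalidating segmentation masks or landmark labels.
    reg = np.abs(refined - synthetic).mean()
    return adv + lam * reg
```

The weight `lam` trades realism against label preservation; too small and the refiner may drift from the annotated content, too large and little style transfer happens.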