Virtual Camera Prototyping

Iterating on camera system design is a lengthy process if you have to collect data from real hardware for each iteration. Instead, our synthetic data platform lets you understand model performance trade-offs entirely in software. Simulate camera placement, image modality, resolution, and more to inform the overall design of your computer vision system, all without touching a single wire.

100x Cheaper.
1000x Faster Turnaround.

The average amount spent on a single image for full segmentation is $6.40*, and any additional labels cost more on top of that. Our synthetic data provides full segmentation, landmarks, surface normals, and more, for as little as $0.03 per image.

Of course, that’s only the labeling cost. Procuring the images to label is incredibly time-consuming as well: for most companies, legally collecting diverse images of individuals’ faces can take weeks or months. Our datasets are available immediately, and our programmatic API returns generated images and labels in minutes to hours.

*Based on pricing, January 2021.

Better. Stronger.
Our datasets don’t just provide training data affordably and nearly instantaneously; they also deliver far more than human-collected and human-labeled data can. So you can build more advanced, more ethical computer vision models.
Pixel-Perfect Accuracy
100% accurate ground truth, every time. Eliminate the QA step on every label.
Get peace of mind: with non-real humans, privacy concerns are history.
Less Bias
Even sampling across skin tones, ethnicities, and ages for more ethical machine vision.
New Label Types
Use cutting-edge models with depth, normals, dense 3D landmarks, and subsegmentation.
Broader Distributions
Combine identities, hair styles, facial hair, makeup, hats, glasses, face masks, lighting conditions, and camera angles for trillions of possibilities – all at the speed of writing JSON with our API.
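As an illustration only, here is what such a request might look like as a Python dict serialized to JSON. The field names below are hypothetical, invented for this sketch; consult the actual API documentation for the real schema.

```python
import json

# Hypothetical job specification. Field names are illustrative only,
# not the actual API schema.
job = {
    "num_images": 100,
    "identities": {"count": 20, "ethnicity_sampling": "balanced"},
    "facial_attributes": {"hair_styles": "all", "facial_hair": "all"},
    "accessories": {"glasses": 0.3, "headwear": 0.2, "face_masks": 0.1},
    "environment": {"lighting": "randomized_hdri"},
    "camera": {"yaw_degrees": [-45, 45], "pitch_degrees": [-20, 20]},
}
payload = json.dumps(job, indent=2)
```

Sweeping any of these fields across their ranges multiplies the number of distinct scenes, which is where the trillions of combinations come from.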
Get Going Fast
Check out our snippets to jump-start the training process.
from face_api_dataset import FaceApiDataset, Modality

# Open a downloaded dataset and fetch the first sample.
dataset = FaceApiDataset("test_dataset")
item = dataset[0]

# Display the RGB image overlaid with its dense landmark labels
# (landmark_show is a visualization helper).
landmark_show(item[Modality.RGB], item[Modality.LANDMARKS])
Domain Adapt
As with all synthetic data, there is a shift from our domain to the one captured by real cameras. Although no single domain adaptation approach works for every use case, we stand on the shoulders of giants to get great results.
Adaptive Batch Normalization
Adaptive Batch Normalization is a simple technique: it can be applied to any network with batch-normalization layers and combined with all of the other techniques here, with surprisingly good results.
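The idea can be sketched in a few lines (a minimal NumPy illustration under simplifying assumptions; in practice you swap the re-estimated statistics into each batch-norm layer of the trained network):

```python
import numpy as np

def adapt_bn_stats(target_features):
    """Re-estimate batch-norm statistics on target-domain activations (AdaBN).

    target_features: array of shape (N, C), activations entering a BN layer,
    collected by running target-domain images through the network.
    Returns the per-channel mean and variance to plug into BN at inference.
    """
    return target_features.mean(axis=0), target_features.var(axis=0)

def batch_norm_inference(x, mean, var, gamma, beta, eps=1e-5):
    # Standard BN inference transform, now using target-domain statistics.
    return gamma * (x - mean) / np.sqrt(var + eps) + beta
```

Because only running statistics change, the trained weights are untouched, which is why the technique composes cleanly with the other methods below.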
Adversarial Domain Adaptation
Adversarial domain adaptation and its task-specific modifications usually yield a strong improvement. The downside is that they typically require heavy modifications to the training pipeline.
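As a toy illustration of the mechanism (a scalar NumPy sketch with made-up names, not a real training loop), DANN-style methods use a gradient reversal trick: the domain classifier descends its loss while the feature extractor receives the flipped gradient, pushing the two domains to be indistinguishable:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dann_step(w_feat, w_dom, x, domain_label, lr=0.1, lam=1.0):
    """One hypothetical DANN-style update on scalar features.

    The domain classifier (w_dom) does gradient descent on its loss; the
    feature extractor (w_feat) gets the *reversed* gradient, so it ascends
    the same loss. All names here are illustrative, not a real API.
    """
    f = w_feat * x                       # feature extractor (linear, scalar)
    p = sigmoid(w_dom * f)               # domain classifier probability
    dlogit = p - domain_label            # d(BCE loss) / d(logit)
    grad_wd = dlogit * f                 # gradient for the domain classifier
    grad_f = dlogit * w_dom              # gradient reaching the features
    w_dom_new = w_dom - lr * grad_wd            # classifier: descent
    w_feat_new = w_feat - lr * (-lam * grad_f) * x  # features: reversed
    return w_feat_new, w_dom_new
```

The "heavy pipeline modifications" the text mentions come from wiring this second classifier and its reversed gradient into every training step.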
Image-to-Image Translation
Image-to-image translation methods coupled with a self-regularization loss allow dataset-level refinement. While these methods require an additional pipeline to train, that pipeline is completely independent and needs no changes to the main training pipeline.
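The self-regularization term itself is simple. A minimal sketch in the style of SimGAN's per-pixel L1 penalty (illustrative, not our implementation): the refiner is pushed toward realism by an adversarial loss, while this term keeps the refined image close to the original synthetic image so the labels stay valid:

```python
import numpy as np

def self_regularization_loss(refined, synthetic):
    """Per-pixel L1 penalty between the refined and original synthetic image.

    Keeping the refined image close to its source preserves the synthetic
    annotations (landmarks, segmentation) through the translation step.
    """
    return float(np.abs(refined - synthetic).mean())
```

Because refinement happens once, at the dataset level, the main training pipeline simply consumes the refined images unchanged.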
Combined methods
For the best results, the methods above should typically be combined.
Ready to Grow With You
We’re here to help you build your solution on our programmatic data platform.
Scales out of the box

Our technology seamlessly scales in the cloud with our customers’ demands, from R&D phases with small amounts of data to production requirements of terabytes of data.

With everything available via an API, we integrate seamlessly with your workflows from day 1.

Learn More about our API

Machine Learning Development Support

If your team needs a little more machine learning muscle, our experts are ready to jump in. We’ll help reduce your time to market, so don’t hesitate to reach out.

Contact Us
