Avatar Creation
Synthetic data for computer vision to enable more capable and ethical AI.
Avatars and digital humans play a key role in gaming, AR/VR, and emerging metaverse applications. Developing high-quality 3D human avatars requires diverse facial and body data with accurate labels captured across environments and activities. However, capturing and annotating 3D human data to train AI models is time-consuming, labor-intensive, and expensive. In addition to draining resources, capturing human data is complicated by privacy and regulatory concerns.
Synthesis Humans, built on top of the Synthesis AI data generation platform, is a tool that lets ML engineers create their own labeled 3D datasets for training and optimizing avatar models.
Advanced data labels for creating avatars include the following (a loading sketch follows the list):
- Detailed segmentation maps
- Depth maps
- Surface normals
- 2D/3D landmarks
- Custom variables
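These labels are typically delivered as per-frame image maps plus structured metadata. The snippet below is a minimal loading sketch in Python; the file names, formats, and JSON keys are assumptions for illustration, not the actual Synthesis Humans export schema.

```python
# Illustrative only: file names, formats, and JSON keys are assumed,
# not taken from the real Synthesis Humans export schema.
import json
import numpy as np
from PIL import Image

frame = "frame_000001"  # hypothetical frame identifier

# Segmentation map: one integer class ID per pixel.
segmentation = np.array(Image.open(f"{frame}_segmentation.png"))

# Depth map: per-pixel distance from the camera.
depth = np.load(f"{frame}_depth.npy")

# Surface normals: H x W x 3 unit vectors in camera space.
normals = np.load(f"{frame}_normals.npy")

# Landmarks: 2D pixel coordinates and 3D camera-space points.
with open(f"{frame}_landmarks.json") as f:
    landmarks = json.load(f)
landmarks_2d = np.array(landmarks["landmarks_2d"])  # shape (N, 2)
landmarks_3d = np.array(landmarks["landmarks_3d"])  # shape (N, 3)

print(segmentation.shape, depth.shape, normals.shape, landmarks_2d.shape)
```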
Digital Human Datasets for Metaverse Applications
Synthesis AI is the technology leader in creating realistic digital humans. Our digital human datasets cover the wide range of avatar characteristics, including skin tones, gestures, and facial expressions, needed for next-generation virtual experiences in the Metaverse.
We focus on the fine details of human movement and appearance, as well as human interactions with a given environment, to support the development of augmented and virtual reality applications, avatars, and experiences in the Metaverse.
High-density Facial Landmarks
Synthesis Humans for generating labeled 3D avatar datasets offers:
- 100K unique identities spanning sex, age, skin tone, and ethnicity
- Ability to modify a wide range of facial attributes
- A proprietary set of 5K 3D facial landmarks
- Full control of lighting, environments, and cameras (see the projection sketch below)
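Because lighting, environments, and cameras are fully specified at render time, the 3D landmarks can be projected into any generated view with known intrinsics. The sketch below shows a standard pinhole projection; the intrinsic values and landmark array are placeholder assumptions.

```python
# Minimal sketch: project camera-space 3D facial landmarks to pixel
# coordinates with a pinhole camera model. All values are placeholders.
import numpy as np

def project_landmarks(points_cam: np.ndarray, fx: float, fy: float,
                      cx: float, cy: float) -> np.ndarray:
    """points_cam: (N, 3) landmarks in camera space with z > 0."""
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    u = fx * x / z + cx
    v = fy * y / z + cy
    return np.stack([u, v], axis=1)  # (N, 2) pixel coordinates

# Example with dummy data: 5,000 landmarks rendered into a 1024x1024 image.
rng = np.random.default_rng(0)
pts = rng.uniform(-0.1, 0.1, size=(5000, 3))
pts[:, 2] += 0.5  # push the points in front of the camera
uv = project_landmarks(pts, fx=1200.0, fy=1200.0, cx=512.0, cy=512.0)
print(uv.shape)  # (5000, 2)
```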


Datasets with Diverse Identities
AI bias results from training models on imbalanced or non-representative datasets across sex, skin tone, and age. Synthesis AI’s platform enables machine learning engineers to create limitless human data with any distribution of characteristics, including those associated with communities typically underrepresented in publicly available datasets. Training ML models on more representative data reduces bias in AI systems. With 100K identities spanning sex, age, skin tone, and ethnicity, Synthesis AI provides the broadest and most inclusive synthetic data offering.
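As one illustration of controlling that distribution, the sketch below draws identity specifications uniformly across a few demographic attributes so that no group dominates the dataset. The attribute names and category values are hypothetical and are not the Synthesis AI API.

```python
# Hypothetical sketch: sample a balanced identity specification for a
# synthetic dataset. Attribute names and categories are illustrative only.
import random

ATTRIBUTES = {
    "age_group": ["18-29", "30-44", "45-59", "60+"],
    "skin_tone": ["type_1", "type_2", "type_3", "type_4", "type_5", "type_6"],
    "sex": ["female", "male"],
}

def sample_identity(rng: random.Random) -> dict:
    # Uniform sampling over each attribute yields an evenly balanced spec,
    # unlike many scraped datasets that over-represent certain groups.
    return {name: rng.choice(values) for name, values in ATTRIBUTES.items()}

rng = random.Random(42)
identity_specs = [sample_identity(rng) for _ in range(100_000)]
print(identity_specs[0])
```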
Emotion Tracking
Computer vision systems can detect human emotion through facial expressions, which are often culture- and context-dependent. Recognizing emotion is vital for several applications, including avatar creation. With over 150 action-unit-based expressions, Synthesis AI provides the nuanced facial movements needed for robust emotion tracking model development.
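As a rough illustration of how action-unit (AU) intensities feed an emotion-tracking pipeline, the sketch below maps per-frame AU activations to a coarse emotion label using simplified FACS-style heuristics; the thresholds and rules are illustrative, not a production classifier.

```python
# Simplified sketch: map action unit (AU) intensities to a coarse emotion
# label. The AU-to-emotion rules are rough FACS-style heuristics, not a
# validated classifier.
from typing import Dict

def coarse_emotion(au: Dict[str, float], threshold: float = 0.5) -> str:
    active = {name for name, value in au.items() if value >= threshold}
    if {"AU6", "AU12"} <= active:         # cheek raiser + lip corner puller
        return "happiness"
    if {"AU1", "AU4", "AU15"} <= active:  # inner brow raiser + brow lowerer + lip corner depressor
        return "sadness"
    if {"AU4", "AU5", "AU7"} <= active:   # brow lowerer + upper lid raiser + lid tightener
        return "anger"
    return "neutral/other"

print(coarse_emotion({"AU6": 0.8, "AU12": 0.9, "AU4": 0.1}))  # happiness
```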


Activity & Gesture Recognition
Activity and gesture recognition models help computer vision teams develop avatars with realistic facial, body, and hand motion across a wide variety of body types, camera angles, backgrounds, and environments. Building these models requires diverse data with nuanced 3D labels that are difficult to obtain through traditional human annotation. Synthetic data provides high-density 3D landmarks for hand, joint, and body positions to help build more performant models.
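One common way such 3D labels are consumed is by deriving kinematic features, for example joint angles, that a gesture or activity classifier can learn from. The sketch below computes an elbow angle from three hypothetical camera-space landmarks; the landmark names and coordinates are placeholders.

```python
# Minimal sketch: derive a joint-angle feature from 3D body landmarks.
# Landmark coordinates below are placeholder values in meters.
import numpy as np

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle (degrees) at joint b formed by the segments b->a and b->c."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

shoulder = np.array([0.00, 1.40, 2.00])
elbow    = np.array([0.05, 1.15, 2.00])
wrist    = np.array([0.30, 1.20, 1.95])

print(f"Elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} degrees")
```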
Pose Estimation
Synthesis Humans offers control of body types, clothing, and pose, all with accurate joint and body landmarks to help developers incorporate robust pose estimation into their models. A large library of actions and movements is mapped to thousands of identities and body types. Because clothing variations can confuse pose estimation models, it is important to use training data that spans body types and reflects the full spectrum of clothing materials, styles, textures, and layers.
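Because every synthetic frame ships with exact joint annotations, pose estimators can be scored directly against ground truth. The sketch below computes PCK (percentage of correct keypoints) at a fixed pixel threshold; the joint arrays are simulated placeholders rather than real model output.

```python
# Illustrative sketch: evaluate 2D pose predictions against synthetic
# ground-truth joints using PCK (percentage of correct keypoints).
import numpy as np

def pck(pred: np.ndarray, gt: np.ndarray, threshold_px: float = 10.0) -> float:
    """pred, gt: (num_joints, 2) joint locations in pixels."""
    dists = np.linalg.norm(pred - gt, axis=1)
    return float(np.mean(dists <= threshold_px))

rng = np.random.default_rng(0)
gt = rng.uniform(0, 512, size=(17, 2))         # e.g., a 17-joint skeleton
pred = gt + rng.normal(0, 6, size=gt.shape)    # simulated predictions
print(f"PCK@10px: {pck(pred, gt):.2f}")
```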

Learn More About Our Synthetic Data Solutions And Schedule A Demo Today
Take a moment to submit a bit more information below about the problem you’re trying to solve with synthetic data. We’d love the opportunity to help you build the AR, VR, and Metaverse applications of tomorrow.
