Synthesis Blog

Together, we’re building the future of computer vision and machine learning.
Featured Post

Generative AI II: Discrete Latent Spaces

Last time, we discussed one of the models that have made modern generative AI possible: variational autoencoders (VAEs). We reviewed the structure and basic assumptions of a VAE, and by now we understand how a VAE makes the latent space more regular by using distributions instead of single points. However, the VAE variants most often used in modern generative models are a little different: they use discrete latent spaces with a fixed vocabulary of vectors. Let’s see what that means and how it can help generation!
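The core idea of a discrete latent space can be illustrated in a few lines: instead of keeping a continuous latent vector, we replace it with the nearest entry from a fixed, finite codebook. Here is a minimal sketch with toy, hand-picked values (the codebook and latent below are illustrative, not taken from any real model):

```python
import math

# A fixed "vocabulary" of code vectors; in a trained model these are learned.
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def quantize(z):
    """Map a continuous latent z to the index of its nearest code vector."""
    return min(range(len(codebook)),
               key=lambda k: math.dist(z, codebook[k]))

z = (0.9, 0.2)          # a continuous encoder output
k = quantize(z)         # its discrete code: an integer index
print(k, codebook[k])   # -> 1 (1.0, 0.0)
```

Because every latent is reduced to an integer index into the codebook, downstream models can treat images as sequences of discrete tokens, much like words in a sentence.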

All Posts
March 21, 2023

Last time, we discussed one of the models that have made modern generative AI possible: variational autoencoders (VAE). We reviewed…

February 7, 2023

It might seem like generative models are going through new phases every couple of years: we heard about Transformers, then…

January 5, 2023

Some of the most widely publicized results in machine learning in recent years have been related to image generation. You’ve…

December 14, 2022

Today we have something very special for you: fresh results of our very own machine learning researchers! We discuss a…

November 16, 2022

https://youtu.be/2I-rLtUP12A
Synthesis AI CEO and Founder Yashar Behzadi recently sat down for an Ask Me Anything with our friends at…

November 1, 2022

Introducing Synthesis Humans & Synthesis Scenarios
Our mission here at Synthesis AI has been the same since our initial launch:…


