Synthesis Blog

Together, we’re building the future of computer vision and machine learning.
Featured Post

Lost in Context: How Much Can You Fit into a Transformer?

The announcement of Gemini 1.5 by Google was all but eclipsed by OpenAI’s video generation model Sora. Still, the announcement contained one very important thing: the promise of processing a context window of up to 1 million tokens. A recent announcement of new Claude models by Anthropic also boasts context windows of up to 1M tokens, with 200K tokens available at launch. Today, we discuss what context windows are, why they are a constraint for Transformer-based models, how researchers have been trying to extend the context windows of modern LLMs, and how we can tell whether a large context window actually works. By virtue of nominative determinism, this is a very long post even by the standards of this blog, so brace yourself and let’s go!
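As a quick aside on why context length is a constraint in the first place: vanilla self-attention compares every token with every other token, materializing an n-by-n score matrix, so compute and memory for that step grow quadratically with context length. Below is a minimal NumPy sketch of that quadratic bottleneck; the function, names, and toy dimensions are chosen here purely for illustration and are not taken from the post.

import numpy as np

def attention(q, k, v):
    # Single-head scaled dot-product attention (no masking).
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)   # shape (n, n): the quadratic bottleneck
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v

d = 64                              # toy embedding dimension
for n in (256, 1024, 4096):         # growing context lengths
    q = k = v = np.random.randn(n, d)
    out = attention(q, k, v)
    # The (n, n) score matrix alone holds n*n floats; at n = 1M tokens
    # that would be 10^12 entries, which is why long contexts are hard.
    print(f"n={n}: score matrix entries = {n*n:,}, output shape = {out.shape}")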

Continue reading
All Posts
April 8, 2024

The announcement of Gemini 1.5 by Google was all but eclipsed by OpenAI’s video generation model Sora. Still, there was…

February 13, 2024

One of the most interesting pieces of AI news for me recently was a paper by DeepMind researchers that presented a new…

December 4, 2023

Here at Synthesis AI, we have decided to release the "Generative AI" series in e-book form; expect a full-fledged…

October 10, 2023

This is the last post in the "Generative AI" series. Today, we look into the future and discuss where the…

September 20, 2023

Last time, we finished all intended mathematical content, so it is time for us to wrap up the generative AI…

August 9, 2023

Congratulations, my friends, we have finally come to the end of the series! Although… well, not quite (see below), but…

All Series
