Synthesis Blog

Together, we’re building the future of computer vision and machine learning.
Featured Post

AI Safety I: Concepts and Definitions

In October 2023, I wrote a long post on the dangers of AGI and why we as humanity might not be ready for the upcoming AGI revolution. A year and a half is an eternity in current AI timelines—so what is the current state of the field? Are we still worried about AGI? Rather than rehashing how perception of the risks has shifted (it has not shifted much, and most recent scenarios such as AI 2027 still warn about loss of control and existential risk), today we begin to review the positive side of this question: the emerging research fields of AI safety and AI alignment. This is still a very young field, and one much smaller than it should be. Most research questions are wide open or not even well-defined yet, so if you are an AI researcher, please take this series as an invitation to dive in!

All Posts
April 17, 2025

In October 2023, I wrote a long post on the dangers of AGI and why we as humanity might not…

March 21, 2025

Today, I want to discuss two recently developed AI systems that can help with one of the holy grails of…

February 25, 2025

Some of the most important AI advances in 2024 were definitely test-time reasoning LLMs, or large reasoning models (LRMs), that…

January 28, 2025

We interrupt your regularly scheduled programming to discuss a paper released on New Year’s Eve: on December 31, 2024, Google…

January 17, 2025

It is time to discuss some applications. Today, I begin with using LLMs for programming. There is at least one…

November 20, 2024

We have already discussed how to extend the context size for modern Transformer architectures, but today we explore a different…
