Synthesis Blog

Together, we’re building the future of computer vision and machine learning.
Featured Post

Kolmogorov-Arnold Networks: KAN You Make It Work?

Although deep learning is a relatively new branch of computer science, the foundations of neural networks have been in place since the 1950s: we have always trained directed graphs composed of artificial neurons (perceptrons), where each individual neuron computes a linear combination of its inputs followed by a fixed nonlinear function such as ReLU. In April 2024, a new paradigm emerged: Kolmogorov-Arnold networks (KANs) rest on a different theoretical foundation and promise not only a better fit to the data but also much improved interpretability and the ability to cross over into symbolic discovery. In this post, we discuss this new paradigm, its main differences from standard architectures, and where KANs can get us right now.
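The contrast described above can be sketched in a few lines of NumPy. Below, a classic perceptron layer applies a fixed nonlinearity (ReLU) after a learned linear combination, while a KAN-style layer instead places a learnable univariate function on every edge and simply sums the results. This is an illustrative simplification: the coefficient shapes and the choice of cubic polynomials for the edge functions are assumptions for brevity (actual KANs parameterize edges with B-splines).

```python
import numpy as np

def mlp_layer(x, W, b):
    # Classic perceptron layer: learned linear combination,
    # then a fixed nonlinearity (ReLU).
    return np.maximum(W @ x + b, 0.0)

def kan_layer(x, coeffs):
    # KAN-style layer sketch: each edge (i, j) applies its own learnable
    # univariate function phi_ij to input x_j; outputs are plain sums.
    # Here phi_ij is a cubic polynomial with coefficients coeffs[i, j, :];
    # real KANs use B-splines, but the edge-wise structure is analogous.
    degree_plus_one = coeffs.shape[-1]
    powers = np.stack([x**k for k in range(degree_plus_one)], axis=-1)  # (n_in, 4)
    # phi_ij(x_j) = sum_k coeffs[i, j, k] * x_j**k
    edge_vals = np.einsum('ijk,jk->ij', coeffs, powers)  # (n_out, n_in)
    return edge_vals.sum(axis=1)

rng = np.random.default_rng(0)
x = rng.normal(size=3)
y_mlp = mlp_layer(x, rng.normal(size=(2, 3)), rng.normal(size=2))
y_kan = kan_layer(x, rng.normal(size=(2, 3, 4)))
```

Note that in the KAN layer all the learnable capacity sits in the edge functions themselves; there is no separate weight matrix and no shared activation, which is exactly what makes the learned functions easy to inspect, prune, or replace with symbolic expressions.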

All Posts
November 7, 2024

Although deep learning is a very new branch of computer science, foundations of neural networks have been in place since…

September 25, 2024

OpenAI’s o1-preview has been all the buzz lately. While this model is based on the GPT-4o general architecture, it boasts…

September 18, 2024

We continue our series on LLMs and various ways to make them better. We have already discussed ways to increase…

August 13, 2024

We continue our series on generative AI. We have discussed Transformers, large language models, and some specific aspects of Transformers…

July 2, 2024

One of the most striking AI advances this spring was OpenAI's Sora, a video generation model that sets new standards…

April 8, 2024

The announcement of Gemini 1.5 by Google was all but eclipsed by OpenAI’s video generation model Sora. Still, there was…
