Synthesis Blog

Together, we’re building the future of computer vision and machine learning.
Featured Post

Fine-Tuning LLMs: RLHF, LoRA, and Instruction Tuning

We continue our series on generative AI. We have discussed Transformers, large language models, and some specific aspects of Transformers – but are modern LLMs still running on the exact same Transformer decoders as the original GPT? Yes and no: while the basics remain the same, there has been a lot of progress in recent years. Today, we briefly review some of the most important ideas in fine-tuning LLMs: RLHF, LoRA, instruction tuning, and recursive self-improvement. These ideas are key to turning a token prediction machine into a useful tool for practical applications.
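As a quick taste of one of these ideas, here is a minimal illustrative sketch of LoRA in PyTorch (the class name, rank, and scaling hyperparameters are our own choices for illustration, not taken from the post): the pretrained weight matrix is frozen, and only a low-rank update B·A is trained on top of it.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # Low-rank factors: A projects down to rank r, B projects back up.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B starts at zero, so the adapted layer initially matches the original.
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```

Only A and B (a tiny fraction of the original parameter count) receive gradients, which is what makes LoRA-style fine-tuning so cheap in practice.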

All Posts
August 13, 2024

We continue our series on generative AI. We have discussed Transformers, large language models, and some specific aspects of Transformers…

July 2, 2024

One of the most striking AI advances this spring was OpenAI's Sora, a video generation model that sets new standards…

April 8, 2024

The announcement of Gemini 1.5 by Google was all but eclipsed by OpenAI’s video generation model Sora. Still, there was…

February 13, 2024

One of the most interesting pieces of AI-related news for me recently was a paper by DeepMind researchers that presented a new…

December 4, 2023

Here at Synthesis AI, we have decided to release the "Generative AI" series in e-book form; expect a full-fledged…

October 10, 2023

This is the last post in the "Generative AI" series. Today, we look into the future and discuss where the…
