LIMA, QLoRA, Microsoft Build and more
Welcome to the third edition of our newsletter, The Token! In this issue we take a brief look at LIMA from Meta and at QLoRA, a new technique that lets you fine-tune an LLM as large as 65B parameters on a single GPU. We also cover the Microsoft Build conference and Falcon, a new large language model that dethroned LLaMA on the Open LLM Leaderboard.