LLM architectures, Open LLMs and more
Welcome to the fifth edition of our newsletter The Token! In this episode we take a brief look at emerging LLM architectures from the Andreessen Horowitz blog, two new open source LLMs, and the current state of open source LLMs. We also dive into a discrepancy in the Open LLM Leaderboard that showed Falcon 🦅 to be superior to Llama 🦙, and close with a look at a new research paper that trains a rather small LLM on an equally small dataset, achieving performance competitive with much bigger models.