The table below lists some of the most influential models, ordered from newest to oldest by release date.
| Model | Release Date | Developer | License | Description |
|---|---|---|---|---|
| GPT-4 | March 2023 | OpenAI | Custom | Successor to GPT-3, built on a similar architecture but with improvements. |
| GPT-3 | June 2020 | OpenAI | Custom | 175 billion parameters, known for its versatility and capability. |
| Turing-NLG | February 2020 | Microsoft | Custom | 17 billion parameters, aimed at natural language understanding and generation. |
| GPT-2 | February 2019 | OpenAI | Modified MIT | Initially withheld from public release due to concerns over potential misuse. |
| Transformer-XL | January 2019 | Google/CMU | Apache 2.0 | Extended the Transformer to handle longer sequences of text. |
| BERT | October 2018 | Google | Apache 2.0 | Designed to understand the context of words in search queries. |
| GPT | June 2018 | OpenAI | Modified MIT | First Generative Pre-trained Transformer with 117M parameters. |
| ELMo | March 2018 | Allen Institute | Apache 2.0 | Deep contextualized word representations, allowing for rich word meanings. |
Mixture of Experts (MoE) is an architecture that dramatically increases a model's capacity without a proportional increase in compute: the model holds many expert subnetworks, but a learned gate routes each input to only a small subset of them, so most parameters sit idle on any given forward pass. A minimal routing sketch follows below.
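To make the routing idea concrete, here is a minimal sketch of a sparsely gated MoE layer in PyTorch, assuming a top-2 gating scheme in the spirit of Shazeer et al. (2017). The class and parameter names (`MoELayer`, `num_experts`, `top_k`) are illustrative, not taken from any particular library.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Sparsely gated Mixture of Experts: each token is routed to its
    top-k experts, so only a fraction of parameters run per input."""

    def __init__(self, dim: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Each expert is an independent feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        # The gate scores every expert for every token.
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, dim)
        scores = self.gate(x)                             # (tokens, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)              # normalize over the chosen experts
        out = torch.zeros_like(x)
        # Run each expert only on the tokens routed to it.
        for e, expert in enumerate(self.experts):
            token_idx, slot = (indices == e).nonzero(as_tuple=True)
            if token_idx.numel():
                out[token_idx] += weights[token_idx, slot].unsqueeze(-1) * expert(x[token_idx])
        return out

# Usage: the layer holds 8 experts, but each token touches only 2 of them.
layer = MoELayer(dim=64)
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64])
```

The key design point is that total parameter count grows with `num_experts` while per-token compute grows only with `top_k`, which is exactly how MoE models scale capacity without a matching rise in cost.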