Discover the Secrets Behind AI's Long Context Capabilities
Explore the importance of context length in AI models, its impact on memory, computation, and practical applications. Learn cutting-edge methods for efficient long-context handling.
Welcome to the "Practical Application of AI Large Language Model Systems" Series
In this lesson, I'll introduce a crucial technical metric for large models: context length.
AI Q&A products differ from traditional Q&A products mainly in their use of context. Because they can draw on earlier turns of a conversation, they give deeper, more nuanced answers, which makes them feel intelligent and human-like.
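To make the idea concrete, here is a minimal sketch (hypothetical, not any vendor's actual API) of what "using context" means in practice: a chat system keeps recent conversation turns within a fixed context budget, dropping the oldest turns when the budget is exceeded. Real systems measure length in tokens; this sketch approximates it with word counts for simplicity.

```python
def fit_to_context(messages, max_words):
    """Keep the most recent messages whose total word count fits max_words.

    Hypothetical illustration: production systems count tokens, not words,
    and often use smarter strategies (summarizing old turns, retrieval, etc.).
    """
    kept = []
    total = 0
    # Walk the history from newest to oldest, keeping turns until the budget runs out.
    for msg in reversed(messages):
        n = len(msg.split())
        if total + n > max_words:
            break
        kept.append(msg)
        total += n
    # Restore chronological order for the model prompt.
    return list(reversed(kept))


history = [
    "User: summarize chapter one",
    "Assistant: chapter one introduces the main characters",
    "User: and chapter two",
]
# With a tight budget, only the newest turn survives; a larger context
# length lets the model "remember" the whole conversation.
print(fit_to_context(history, max_words=10))
print(fit_to_context(history, max_words=100))
```

A longer context length simply means this budget is larger, so fewer turns (or documents) have to be thrown away before the model sees the prompt.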
A recently popular AI Q&A product, Kimi, is known for its long context: it supports inputs of up to 2 million words, enough to process several books' worth of text at once. Another example is GPT-4-turbo, which supports a 128K context length; the latest version of the 6B model supports a 32K context length.
In the past, companies mainly touted the parameter scale of their products. Now, besides parameter scale, they often highlight the supported context length. This has led to a joke in the industry: after competing on parameters, models are now competing on context length.
In March this year, Alibaba Cloud's Tongyi Qianwen increased its context length to 10 million words, five times Kimi's limit, and offered it to customers for free, escalating the competition even further.
Why are major companies competing on context length?