As companies like Google, Anthropic, and OpenAI update and upgrade their AI models, the way that those LLMs interact with users is sure to change as well. However, getting used to the new system can become a hassle for users who then have to adjust how they pose their queries in order to get the results they’ve come to expect. An Apple research team has developed a new method to streamline that upgrade transition while reducing inconsistencies between the two versions by as much as 40%.
As part of their study, “MUSCLE: A Model Update Strategy for Compatible LLM Evolution,” published July 15, the researchers argue that when upgrading their models, developers tend to focus more on upping the overall performance, rather than making sure that the transition between models is seamless for the user. That includes making sure that negative flips, wherein the new model predicts the incorrect output for a test sample that was correctly predicted by the older model, are kept to a minimum.
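The negative-flip idea is easy to make concrete. The sketch below is an illustrative metric, not code from the paper: it counts test samples the old model answered correctly that the new model now gets wrong, as a fraction of all samples (the labels and predictions are made-up toy data).

```python
def negative_flip_rate(labels, old_preds, new_preds):
    """Fraction of samples the old model got right but the new model gets wrong."""
    flips = sum(
        1 for y, old, new in zip(labels, old_preds, new_preds)
        if old == y and new != y
    )
    return flips / len(labels)

# Toy example: 5 test samples with gold labels and each model's predictions
labels    = ["A", "B", "C", "D", "E"]
old_preds = ["A", "B", "C", "D", "X"]  # old model: 4/5 correct
new_preds = ["A", "X", "C", "X", "E"]  # new model: 3/5 correct, but flips B and D

print(negative_flip_rate(labels, old_preds, new_preds))  # 0.4
```

Note that the new model here can even have higher overall accuracy while still regressing on specific samples, which is exactly the inconsistency users notice.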
This is because, the study authors argue, each user has their own quirks, quibbles, and personalized ways of interacting with chatbots. Having to continually adjust and adapt the manner in which they interact with a model can become an exhausting affair — one that is antithetical to Apple’s desired user experience.
The research team even argues that the AI's incorrect predictions should stay consistent between versions. "There is value in being consistent when both models are incorrect," they wrote. "A user may have developed coping strategies on how to interact with a model when it is incorrect."

To address this, the researchers first developed metrics to measure the degree of regression between model versions, then devised a strategy to minimize those regressions. The result is MUSCLE, a strategy that doesn't require developers to retrain the entire base model and instead relies on training adapters. Adapters are small AI modules that can be integrated at different points along the overall LLM.
Developers can then fine-tune these specific modules instead of the entire model. This enables the model as a whole to perform distinct tasks at a fraction of the training cost and with only a small increase in the number of parameters. They’re essentially plug-ins for large language models that allow us to fine-tune specific sections of the overall AI instead of the whole thing.
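To see why adapters are so cheap, here is a minimal low-rank adapter sketch in NumPy. This is a generic illustration of the adapter technique (in the style of LoRA-like methods), not the paper's specific compatibility adapter; all names and sizes are invented for the example. The base weight stays frozen, and only two small factor matrices would be trained.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8   # hidden size of the frozen base layer (toy value)
r = 2   # adapter rank, much smaller than d

W = rng.normal(size=(d, d))          # frozen base-model weight: d*d = 64 params
A = rng.normal(size=(r, d)) * 0.01   # trainable adapter factor
B = np.zeros((d, r))                 # zero-initialized, so the adapter starts as a no-op

def base_layer(x):
    return x @ W.T

def adapted_layer(x):
    # Base output plus a low-rank correction; only A and B are trained.
    return x @ W.T + x @ A.T @ B.T

x = rng.normal(size=(d,))
# With B zero-initialized, the adapted layer initially matches the base layer.
print(np.allclose(base_layer(x), adapted_layer(x)))  # True
print(A.size + B.size)  # 32 trainable params vs. 64 frozen ones
```

The parameter math is the point: the adapter adds 2·d·r parameters, which for realistic model sizes is a small fraction of the d·d weights it sits beside, so fine-tuning it costs far less than retraining the full layer.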
The research team upgraded LLMs including Meta’s Llama and Microsoft’s Phi as part of their study, using specific math queries as samples, and found that negative flips occurred as much as 60% of the time. By incorporating the MUSCLE strategy, the team wasn’t able to fully eliminate negative flips, but they did manage to reduce their occurrence by as much as 40% compared to the control.