While large language models are becoming exceptionally good at learning from vast amounts of data, a new technique that does the opposite has tech companies abuzz: machine unlearning.
This relatively new approach teaches LLMs to forget or “unlearn” sensitive, untrusted or copyrighted data. It is faster than retraining models from scratch and retroactively removes specific unwanted data or behavior.
No surprise then that tech giants like IBM, Google and Microsoft are hustling to get machine unlearning ready for prime time. The growing focus on unlearning, however, also highlights some hiccups with this technique: models that forget too much and a lack of industry-wide tools to evaluate the effectiveness of the unlearning.
From learning to unlearning
Trained on terabytes of data, LLMs “learn” to make decisions and predictions without being explicitly programmed to do so. This branch of AI, known as machine learning, has soared in popularity as algorithms imitate the way that humans learn, gradually improving the accuracy of the content they generate.
But more data also means more problems. Or as IBM Senior Research Scientist Nathalie Baracaldo puts it, “Whatever data is learned—the good and the bad—it will stick.”
And so ever larger models can also generate more toxic, hateful language and contain sensitive data that defies cybersecurity standards. Why? These models are trained on unstructured and untrusted data from the internet. Even with rigorous attempts to filter data, to align models so they know which questions not to answer and which answers to provide, and to use other guardrails that inspect a model’s output, unwanted behavior, malware, and toxic and copyrighted material still creep through.
Retraining these models to remove the undesirable data takes months and costs millions of dollars. Furthermore, when models are open sourced, any vulnerabilities in the base model are carried over into the many other models and applications built on top of it.
Unlearning approaches aim to alleviate these problems. By identifying unlearning targets, such as specific data points containing harmful, unethical or copyrighted language, or unwanted text prompts, unlearning algorithms efficiently remove the effect of that content from the model.
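To make the idea concrete, here is a minimal sketch, in PyTorch on a toy classifier rather than an LLM, of one common family of unlearning methods: gradient ascent on a “forget” set paired with gradient descent on a “retain” set, so the targeted content is erased while general behavior is preserved. The model, synthetic data and hyperparameters are illustrative assumptions, not the recipe used by any of the teams mentioned in this article.

```python
# A minimal sketch of forget/retain unlearning: ascend on the forget loss,
# descend on the retain loss. Toy model and synthetic data are stand-ins.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Tiny classifier standing in for a much larger network.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

# Synthetic "retain" data (behavior to keep) and "forget" data (behavior to remove).
retain = TensorDataset(torch.randn(256, 16), torch.randint(0, 2, (256,)))
forget = TensorDataset(torch.randn(32, 16), torch.randint(0, 2, (32,)))

def unlearn(model, forget_set, retain_set, epochs=3, forget_weight=1.0, lr=1e-2):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    forget_loader = DataLoader(forget_set, batch_size=8, shuffle=True)
    retain_loader = DataLoader(retain_set, batch_size=32, shuffle=True)
    for _ in range(epochs):
        for (xf, yf), (xr, yr) in zip(forget_loader, retain_loader):
            opt.zero_grad()
            # Negative sign = gradient ascent on the forget examples,
            # ordinary descent on the retain examples keeps other skills intact.
            loss = -forget_weight * loss_fn(model(xf), yf) + loss_fn(model(xr), yr)
            loss.backward()
            opt.step()
    return model

unlearn(model, forget, retain)
```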
Forgetting Harry Potter
A team of researchers from Microsoft used this unlearning approach to see if they could make Meta’s Llama2-7b model forget copyrighted material from the Harry Potter books, which it had absorbed from its internet training data. Before unlearning, when the researchers entered a prompt such as “Who is Harry Potter?” the model responded: “Harry Potter is the main protagonist in J.K. Rowling’s series of fantasy novels.”
After fine-tuning the model to “unlearn” the copyrighted material, the model responds to the same prompt with: “Harry Potter is a British actor, writer, and director…”.
“In essence, every time the model encounters a context related to the target data, it ‘forgets’ the original content,” explained researchers Ronen Eldan and Mark Russinovich in a blog post. The team shared their model on Hugging Face so the AI community could explore unlearning and tinker with it as well.
In addition to removing copyrighted material, removing sensitive material to protect individuals’ privacy is another high-stakes use case. A team led by Radu Marculescu at the University of Texas at Austin, working with AI specialists at JP Morgan Chase, is applying machine unlearning to image-to-image generative models. In a recent paper, they showed that they could eliminate unwanted elements of images (the “forget set”) without degrading the model’s performance on the rest of the images.
This technique could be helpful in scenarios such as drone surveys of real estate properties, said Professor Marculescu. “If there were faces of children clearly visible, you could blot those out to protect their privacy.”
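As a rough illustration of how such claims can be checked, the sketch below compares an unlearned model’s outputs against the original model’s on forget-set and retain-set images: large drift on the former and small drift on the latter suggests the forgetting was targeted. The toy convolution models and the plain pixel-space metric are assumptions for illustration, not the evaluation protocol from the UT Austin and JP Morgan Chase paper.

```python
# A hedged sketch for sanity-checking image-to-image unlearning: compare the
# edited model to the original on forget vs. retain images.
import torch

def mean_output_distance(model_a, model_b, images):
    """Average pixel-space squared distance between two models' outputs."""
    with torch.no_grad():
        return torch.mean((model_a(images) - model_b(images)) ** 2).item()

def check_unlearning(unlearned_model, original_model, forget_images, retain_images):
    forget_gap = mean_output_distance(unlearned_model, original_model, forget_images)
    retain_gap = mean_output_distance(unlearned_model, original_model, retain_images)
    # Success looks like: large drift on forgotten content, small drift elsewhere.
    print(f"forget-set drift: {forget_gap:.4f}, retain-set drift: {retain_gap:.4f}")
    return forget_gap > retain_gap

if __name__ == "__main__":
    # Randomly initialized conv layers stand in for the original and edited models.
    original = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
    unlearned = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
    forget_images = torch.randn(4, 3, 64, 64)
    retain_images = torch.randn(4, 3, 64, 64)
    print(check_unlearning(unlearned, original, forget_images, retain_images))
```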
Google is also busy tackling unlearning within the broader open-source developer community. In June 2023, Google launched its first machine unlearning challenge. The competition featured an age predictor that had been trained on face images. After the training, a certain subset of the training images had to be forgotten to protect the privacy or rights of the individuals concerned.
While it’s not perfect, the early results from various teams are promising. Using machine unlearning on a Llama model, for instance, Baracaldo’s team at IBM was able to reduce the toxicity score from 15.4% to 4.8% without affecting the model’s accuracy on the other tasks it performed. And instead of taking months to retrain the model, not to mention the cost, unlearning took all of 224 seconds.
Speed bumps
So why isn’t machine unlearning widely used?
“Methods to unlearn are still in their infancy and they don’t yet scale well,” explains Baracaldo.
The first challenge that looms large is “catastrophic forgetting”: the model forgets more than the researchers intended, to the point that it no longer performs the key tasks it was designed for.
The IBM team has developed a new framework to improve how models function after unlearning. Using an approach they describe as split-unlearn-then-merge, or SPUNGE, they were able to unlearn undesirable behavior such as toxicity and hazardous knowledge such as biosecurity or cybersecurity risks, while preserving the models’ general capabilities.
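In spirit, the workflow looks something like the sketch below: split the unlearning targets by attribute, unlearn each split on its own copy of the model, then merge the copies back together. The simple weight averaging and the placeholder callbacks are simplifying assumptions for illustration, not IBM’s exact SPUNGE implementation.

```python
# A minimal split-unlearn-then-merge sketch: split targets, unlearn per split
# on separate model copies, then merge the copies by averaging parameters.
import copy
import torch

def split_unlearn_merge(model, forget_data, split_fn, unlearn_fn):
    # 1. Split the forget data into subsets (e.g. by toxicity type or domain).
    splits = split_fn(forget_data)

    # 2. Unlearn each subset on an independent copy of the base model.
    unlearned_copies = []
    for subset in splits:
        model_copy = copy.deepcopy(model)
        unlearn_fn(model_copy, subset)
        unlearned_copies.append(model_copy)

    # 3. Merge the copies back into one model by averaging their parameters.
    merged = copy.deepcopy(model)
    with torch.no_grad():
        for name, param in merged.named_parameters():
            stacked = torch.stack(
                [dict(m.named_parameters())[name] for m in unlearned_copies]
            )
            param.copy_(stacked.mean(dim=0))
    return merged

# Illustrative usage with a toy model; both callbacks are placeholders.
toy = torch.nn.Linear(8, 2)
merged = split_unlearn_merge(
    toy,
    forget_data=list(range(10)),
    split_fn=lambda data: [data[:5], data[5:]],
    unlearn_fn=lambda m, subset: None,  # no-op stand-in for a real unlearning step
)
```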
Developing comprehensive and reliable evaluation tools to measure the effectiveness of unlearning efforts also remains an issue to resolve, say researchers across the board.
The future of machine unlearning
While unlearning may still be finding its feet, researchers are doubling down since there is such a broad array of potential applications, industries and geographies where it could prove useful.
In Europe, for instance, the EU’s General Data Protection Regulation protects individuals’ “right to be forgotten.” If an individual opts to have their data removed, machine unlearning could help companies comply with the legislation by removing that data from their models. Beyond security and privacy, machine unlearning could also be useful wherever data needs to be added or removed, for instance when licenses expire or when clients leave a large financial institution or hospital consortium.
“What I love about unlearning,” says Baracaldo, “is that we can keep using all our other lines of defense, like filtering data. But we can also ‘patch’ or modify the model whenever we see something go wrong, to remove everything that’s unwanted.”