Catastrophic Forgetting // is fine-tuning not the answer?

LLMs/NNs lose previously learned knowledge when their weights are updated for new tasks, but several techniques can mitigate this

sbagency
Jul 20, 2024

Catastrophic forgetting is a significant challenge faced by large language models (LLMs) and neural networks (NNs). It refers to the tendency of these models to abruptly forget previously learned information upon learning new information. This phenomenon poses a substantial hurdle for developing robust and reliable artificial intelligence systems, particularly in dynamic environments where continuous learning from new data is essential.

At the heart of catastrophic forgetting lies the architecture of neural networks. These models typically employ backpropagation to adjust their weights based on new input data. However, when presented with new tasks, the adjustments made to accommodate new information can overwrite or significantly alter the weights associated with previously learned tasks. As a result, the network’s performance on earlier tasks deteriorates, sometimes dramatically. This issue is especially prevalent in sequential learning scenarios, where the model is trained on one task after another without revisiting previous tasks.
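
To make the mechanism concrete, here is a minimal PyTorch sketch (not from the article): a small network is fit to a synthetic "task A", then trained only on a synthetic "task B", and its task-A error is measured before and after. The tasks, network, and hyperparameters are arbitrary illustrations; the point is only that the task-A loss typically climbs once the same weights are repurposed for task B.

```python
# Minimal illustration of catastrophic forgetting: sequential training on
# two synthetic regression tasks with no rehearsal of the first task.
import torch
import torch.nn as nn

torch.manual_seed(0)

net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
loss_fn = nn.MSELoss()

xa, xb = torch.randn(256, 10), torch.randn(256, 10)
wa, wb = torch.randn(10, 1), torch.randn(10, 1)
ya, yb = xa @ wa, xb @ wb  # task A and task B use different input-output mappings

def fit(x, y, steps=500):
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()

fit(xa, ya)                                   # learn task A
loss_a_before = loss_fn(net(xa), ya).item()
fit(xb, yb)                                   # learn task B, never revisiting A
loss_a_after = loss_fn(net(xa), ya).item()
print(f"task-A loss: {loss_a_before:.4f} before, {loss_a_after:.4f} after task B")
```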

The impact of catastrophic forgetting is multifaceted. In practical applications, it can lead to a loss of valuable knowledge that the model has previously acquired. For instance, a language model trained extensively on legal texts to assist in legal document analysis might lose its proficiency if subsequently trained on medical texts without measures to mitigate forgetting. This loss of knowledge is not just a theoretical concern but a practical impediment to the deployment of AI systems in real-world, multi-domain environments.

Researchers have proposed several strategies to address catastrophic forgetting. One approach is regularization, which penalizes changes to important weights to preserve knowledge. Elastic Weight Consolidation (EWC) is a notable technique in this category, which slows down learning on weights crucial to previous tasks. Another method is replay, which involves periodically retraining the model on a mixture of old and new data. This can be done through explicit storage of past data (rehearsal) or by generating synthetic data that mimic past experiences (generative replay). Additionally, architectural solutions such as dynamic network expansion allow the model to allocate new resources for new tasks, thereby minimizing interference with previously learned tasks.
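
As an illustration of the regularization family, below is a hedged sketch of an EWC-style penalty: estimate a diagonal Fisher information matrix on the old task, snapshot the old weights, and add a quadratic term that anchors each weight to its old value in proportion to its estimated importance. The function names and the lambda value are illustrative choices, not a reference implementation.

```python
# Sketch of an EWC-style penalty (after Kirkpatrick et al., 2017).
# `fisher_diagonal` and `ewc_penalty` are illustrative names, not a library API.
import torch

def fisher_diagonal(model, old_task_loader, loss_fn):
    """Estimate per-parameter importance (diagonal Fisher) on the old task."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in old_task_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(old_task_loader), 1) for n, f in fisher.items()}

def ewc_penalty(model, fisher, old_params, lam=100.0):
    """Quadratic anchor that slows learning on weights important to the old task."""
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return 0.5 * lam * penalty

# After finishing the old task:
#   old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
#   fisher = fisher_diagonal(model, old_task_loader, loss_fn)
# While training the new task:
#   loss = new_task_loss + ewc_penalty(model, fisher, old_params)
```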

Despite these advances, the problem of catastrophic forgetting remains unresolved. Each proposed solution comes with trade-offs. Regularization techniques might slow down the learning process, while replay methods can be computationally expensive and require significant storage resources. Moreover, dynamic network expansion can lead to scalability issues as the model grows in size with each new task. Therefore, the search for more efficient and effective solutions continues to be an active area of research in the field of artificial intelligence.

Understanding and mitigating catastrophic forgetting is crucial for the development of lifelong learning systems, which are expected to learn and adapt continuously over time. Achieving this capability would represent a significant step forward in artificial intelligence, enabling the deployment of models that maintain their competencies across various domains and tasks. It would also bring AI closer to human-like learning, where new knowledge is acquired without erasing previously learned information.

Catastrophic forgetting is a major challenge for LLMs and NNs, impacting their ability to retain knowledge over time. While several methods have been proposed to address this issue, none offer a perfect solution, and the trade-offs involved necessitate further research. Overcoming catastrophic forgetting is essential for creating more adaptable, reliable, and human-like AI systems capable of continuous learning.

Catastrophic forgetting and overtraining (or overfitting) are distinct challenges in training neural networks and large language models. Catastrophic forgetting occurs when a model loses previously learned information upon learning new information, particularly in sequential learning scenarios. This issue arises from the adjustments in model weights that interfere with earlier tasks. Conversely, overtraining happens when a model learns the training data too well, capturing noise and specific details rather than general patterns, resulting in poor generalization to new data. While catastrophic forgetting impedes knowledge retention in dynamic learning environments, overfitting hampers the model’s ability to generalize from the training set to unseen data. Addressing these problems involves different strategies: regularization, replay methods, and dynamic architectures for catastrophic forgetting, and techniques like regularization, dropout, early stopping, and data augmentation for overtraining.
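
To make the replay side concrete, here is a minimal sketch of rehearsal: keep a small buffer of stored old-task examples and mix a fraction of them into every new-task batch, so the weights keep seeing the old distribution. The helper name, buffer handling, and 25% mix ratio are illustrative assumptions, not recommendations.

```python
# Sketch of rehearsal: mix stored old-task examples into new-task batches.
import random

def mixed_batches(new_data, replay_buffer, batch_size=32, replay_fraction=0.25):
    """Yield batches combining new-task samples with replayed old-task samples."""
    n_replay = int(batch_size * replay_fraction)
    n_new = batch_size - n_replay
    data = list(new_data)
    random.shuffle(data)
    for i in range(0, len(data), n_new):
        batch = data[i:i + n_new]
        if replay_buffer:
            batch += random.sample(replay_buffer, min(n_replay, len(replay_buffer)))
        random.shuffle(batch)
        yield batch
```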

https://www.nightfall.ai/ai-security-101/catastrophic-forgetting
https://pub.towardsai.net/fine-tuning-and-evaluating-large-language-models-key-benchmarks-and-metrics-47f695437ac0
https://arxiv.org/pdf/2403.01244

Large language models (LLMs) suffer from catastrophic forgetting during continual learning. Conventional rehearsal-based methods rely on previous training data to retain the model’s ability, which may not be feasible in real-world applications. When conducting continual learning based on a publicly-released LLM checkpoint, the availability of the original training data may be non-existent. To address this challenge, we propose a framework called Self-Synthesized Rehearsal (SSR) that uses the LLM to generate synthetic instances for rehearsal. Concretely, we first employ the base LLM for in-context learning to generate synthetic instances. Subsequently, we utilize the latest LLM to refine the instance outputs based on the synthetic inputs, preserving its acquired ability. Finally, we select diverse high-quality synthetic instances for rehearsal in future stages. Experimental results demonstrate that SSR achieves superior or comparable performance compared to conventional rehearsal-based approaches while being more data-efficient. Besides, SSR effectively preserves the generalization capabilities of LLMs in general domains.
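
Reading that abstract as a recipe, the SSR loop could be outlined roughly as follows. `base_llm`, `latest_llm`, `quality_score`, and `diverse_subset` are hypothetical stand-ins for the paper's components; this is a sketch of the three described stages, not the authors' code.

```python
# High-level sketch of a Self-Synthesized Rehearsal (SSR) loop.
# All callables passed in are hypothetical placeholders.
from typing import Callable, List, Tuple

def self_synthesized_rehearsal(
    base_llm: Callable[[str], str],      # frozen base checkpoint, used for in-context generation
    latest_llm: Callable[[str], str],    # most recently trained checkpoint, used to refine outputs
    icl_demonstrations: List[Tuple[str, str]],
    n_candidates: int,
    quality_score: Callable[[str, str], float],
    diverse_subset: Callable[[List[Tuple[str, str]], int], List[Tuple[str, str]]],
    n_keep: int,
) -> List[Tuple[str, str]]:
    # Stage 1: the base LLM synthesizes new inputs via in-context learning.
    prompt = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in icl_demonstrations) + "\n\nInput:"
    candidates = []
    for _ in range(n_candidates):
        synthetic_input = base_llm(prompt)
        # Stage 2: the latest LLM regenerates the output, so the rehearsal
        # target reflects the ability the model has already acquired.
        refined_output = latest_llm(f"Input: {synthetic_input}\nOutput:")
        candidates.append((synthetic_input, refined_output))
    # Stage 3: keep a diverse, high-quality subset for rehearsal in later stages.
    candidates.sort(key=lambda pair: quality_score(*pair), reverse=True)
    return diverse_subset(candidates, n_keep)
```

The returned instances would then be mixed into subsequent continual-learning stages in place of the unavailable original training data.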

https://nexusflow.ai/blogs/athene

We are excited to announce the release of Athene-Llama3-70B by Nexusflow, a strong open-weights chat model fine-tuned from Meta AI’s Llama-3-70B. Athene-70B has achieved an impressive Arena-Hard-Auto score of 77.8%, placing it close to leading proprietary models such as GPT-4o (79.2%) and Claude-3.5-Sonnet (79.3%). This represents a significant leap from its predecessor, Llama-3-70B-Instruct, which scored 46.6%. Athene-70B is now under public testing on Chatbot Arena. The improvement comes from Nexusflow’s targeted post-training pipeline to enhance desired model behaviors.

https://arxiv.org/pdf/2404.03592

Parameter-efficient finetuning (PEFT) methods seek to adapt large neural models via updates to a small number of weights. However, much prior interpretability work has shown that representations encode rich semantic information, suggesting that editing representations might be a more powerful alternative. We pursue this hypothesis by developing a family of Representation Finetuning (ReFT) methods. ReFT methods operate on a frozen base model and learn task-specific interventions on hidden representations. We define a strong instance of the ReFT family, Low-rank Linear Subspace ReFT (LoReFT), and we identify an ablation of this method that trades some performance for increased efficiency. Both are drop-in replacements for existing PEFTs and learn interventions that are 15×–65× more parameter-efficient than LoRA. We showcase LoReFT on eight commonsense reasoning tasks, four arithmetic reasoning tasks, instruction-tuning, and GLUE. In all these evaluations, our ReFTs deliver the best balance of efficiency and performance, and almost always outperform state-of-the-art PEFTs. We release a generic ReFT training library publicly at https://github.com/stanfordnlp/pyreft.
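
For intuition, the LoReFT intervention can be pictured as a small module that edits a frozen hidden state inside a learned low-rank subspace, roughly h <- h + R^T (W h + b - R h), with R having orthonormal rows. The module below is a conceptual PyTorch re-implementation based on that reading of the paper, not the pyreft API, and it only enforces orthonormality of R at initialization.

```python
# Conceptual sketch of a LoReFT-style intervention on hidden representations.
import torch
import torch.nn as nn

class LoReFTIntervention(nn.Module):
    def __init__(self, hidden_dim: int, rank: int):
        super().__init__()
        # R: low-rank projection defining the subspace being edited
        # (orthonormal rows at initialization only, in this sketch).
        self.R = nn.Parameter(torch.empty(rank, hidden_dim))
        nn.init.orthogonal_(self.R)
        # W, b: learned linear map producing the target subspace values.
        self.proj = nn.Linear(hidden_dim, rank)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Difference between desired and current subspace coordinates...
        delta = self.proj(h) - h @ self.R.T
        # ...mapped back into the full hidden space and added to h.
        return h + delta @ self.R

# Usage idea: apply to hidden states at selected layers and token positions,
# keeping the base model frozen; only R, W, and b are trained.
```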

https://github.com/arcee-ai/mergekit
