
Machine Unlearning: The New Art

In an increasingly data-driven world, Machine Learning (ML) has become a core component of many sectors of society. However, as we continue to expand the use of ML, significant issues such as bias and hallucination are preventing its mass adoption. Machine unlearning is gaining attention as one solution for minimizing bias and helping AI function more neutrally. In this article, we will explore what machine unlearning is, why it is important, and how it can be implemented in practice.


Picture: Eraser. Source: Unsplash


What is Machine Unlearning?

Machine unlearning refers to the process of removing or modifying the knowledge or patterns that a machine learning model has acquired during its training phase. While machine learning models are designed to learn and adapt to new data, there are situations where a model needs to unlearn certain information, whether because of incorrect or biased learning, privacy concerns, or changes in the underlying data distribution. Moreover, given the sheer volume of data that modern models are trained on, identifying biased or unwanted instances and removing them before training is almost impossible.


The first step in machine unlearning is identifying the specific knowledge or patterns that need to be unlearned. This can be achieved by analyzing the model's behavior on sensitive topics, examining its output, and evaluating the impact of the undesired knowledge. Once the undesired knowledge is identified, various techniques can be employed to modify or remove it. Before turning to more advanced unlearning methods, one common approach is simply to fine-tune the model or adjust its weights to avoid the unwanted outcome. One widely discussed example is language models responding differently when asked to write poems about Trump and Biden.


Picture: Biden and Trump. Source: Bloomberg


Why is Machine Unlearning Popular?

Machine unlearning is gaining popularity as a way to minimize bias in machine learning models because of its potential to address the ethical and fairness concerns associated with biased decision-making. One key reason for its popularity in bias mitigation is that it enables a proactive approach. Rather than focusing solely on post hoc bias detection and remediation, machine unlearning allows biased knowledge to be identified and removed during or after the model's training or deployment phase. By lifting the burden of manually pre-processing unwanted data, it brings online ML closer to true automation.


Furthermore, machine unlearning can help establish trust and transparency in AI. As concerns about biased decision-making continue to grow, there is an increasing demand for explainable and accountable AI models. While it is virtually impossible to fully explain a black-box neural network as complex as PaLM or GPT-4 at this point, we can still hold companies accountable for using them. By incorporating machine unlearning techniques, organizations can demonstrate their commitment to addressing bias and ensuring fairness.


Picture: Trust. Source: Unsplash


Implementing Machine Unlearning

It's all great talk so far. How about actually using it? Machine unlearning can be implemented in several ways, depending on the ML model and data involved. Some possible methods include:


Retraining Models

A straightforward form of unlearning involves retraining the model after removing the data you wish to forget. Though simple, this method can be computationally expensive for large models and datasets, and it might not be feasible if the system needs to respond quickly to unlearning requests. As mentioned above, it is also a hefty task: it relies on manual labour, and it does not guarantee that all related data is fully removed.
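As a minimal sketch of the idea, assuming NumPy arrays, a scikit-learn-style classifier, and a hypothetical `forget_ids` set of sample identifiers:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def retrain_without(X, y, sample_ids, forget_ids):
    """Exact unlearning by retraining: drop the samples to forget,
    then fit a fresh model from scratch on what remains."""
    keep = np.array([sid not in forget_ids for sid in sample_ids])
    model = LogisticRegression(max_iter=1000)
    model.fit(X[keep], y[keep])  # the new model has never seen the forgotten points
    return model
```

The guarantee here is strong (the forgotten points never touched the new model), but the cost is a full retraining run per unlearning request.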


Data Deletion

Machine unlearning can leverage data version control systems. These systems keep track of changes in data and model files, similar to version control systems used in software development. If you want to "unlearn" certain aspects, you can revert to any previous version of your data or model.
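A toy illustration of that idea in Python; a real project would typically reach for a dedicated tool such as DVC rather than this hypothetical in-memory store:

```python
import hashlib
import pickle

class SnapshotStore:
    """Toy version store: commit data/model states keyed by content hash,
    then check out an earlier key to revert to a pre-ingestion version."""
    def __init__(self):
        self._snapshots = {}

    def commit(self, obj) -> str:
        blob = pickle.dumps(obj)
        key = hashlib.sha256(blob).hexdigest()[:12]
        self._snapshots[key] = blob
        return key  # record this key alongside the training run that used it

    def checkout(self, key):
        return pickle.loads(self._snapshots[key])
```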


Also, practitioners in this field must always consider local regulations and ethical standards around data privacy. For example, the European Union's General Data Protection Regulation (GDPR) includes a "right to be forgotten," which requires systems to fully and effectively remove data if a user requests it. Designing your machine learning models with unlearning capabilities from the outset can help you comply with these regulations and respect user privacy.


Incorporating Differential Privacy

Another strategy is incorporating differential privacy into your machine learning models. Differential privacy is a method for sharing information about a dataset by describing the patterns of groups within the dataset while withholding information about individuals in the dataset. It provides a mathematically rigorous definition of privacy and can ensure that the removal of a single data point doesn't significantly impact the output of the learning algorithm. If a particular data point is removed, the ML model doesn't retain significant information about that data point.


If you are interested in learning more about this, Harvard provides a great page about it.
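As a simple illustration of the core mechanism, here is a Laplace mechanism applied to a counting query (the epsilon value is arbitrary; a count has sensitivity 1, so the noise scale is 1/epsilon):

```python
import numpy as np

def private_count(data, predicate, epsilon=1.0):
    """Differentially private count: the true count plus Laplace noise,
    so adding or removing any single record changes the output
    distribution by at most a factor tied to epsilon."""
    true_count = sum(predicate(x) for x in data)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise
```

Smaller epsilon means more noise and stronger privacy; the point for unlearning is that no single data point leaves a strong fingerprint on the output.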


Learning with Membership Negotiations

A more complex method involves designing learning algorithms that can update the current model when a data point needs to be removed. This is an active area of research, and there are algorithms like "version space algebra" and "ensemble learning" that can support this kind of dynamic membership.
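One concrete case where such in-place updates are exact is models whose parameters are simple sufficient statistics, such as count-based classifiers. A simplified sketch (smoothing and probability computation omitted for brevity):

```python
from collections import Counter, defaultdict

class CountModel:
    """Toy count-based classifier whose parameters are sufficient statistics,
    so removing a point is an exact O(1) update rather than a full retrain."""
    def __init__(self):
        self.class_counts = Counter()
        self.feature_counts = defaultdict(Counter)

    def learn(self, features, label):
        self.class_counts[label] += 1
        for f in features:
            self.feature_counts[label][f] += 1

    def unlearn(self, features, label):
        # exact inverse of learn(): decrement the same statistics
        self.class_counts[label] -= 1
        for f in features:
            self.feature_counts[label][f] -= 1
```

For most modern models the update is not this clean, which is why this remains an active research area.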


Decoupled Learning

In decoupled learning, the data is partitioned into smaller subsets, and models are trained on these subsets. If you need to unlearn some data, you can identify which subsets include that data, unlearn only those subsets, and retrain the models. This approach can be more efficient than retraining a model on the entire dataset but needs careful management of the subsets.
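A rough sketch of the sharded setup in the spirit of this approach, assuming integer sample IDs and that every shard contains examples of each class; majority voting is one simple way to aggregate:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

class ShardedModel:
    """Partition data into shards so unlearning a point retrains one shard only."""
    def __init__(self, n_shards=4):
        self.n_shards = n_shards
        self.shards = [[] for _ in range(n_shards)]  # (sample_id, x, y) per shard
        self.models = [None] * n_shards

    def add(self, sample_id, x, y):
        # assumes integer sample IDs; any stable id-to-shard mapping works
        self.shards[sample_id % self.n_shards].append((sample_id, x, y))

    def _fit_shard(self, i):
        X = np.array([x for _, x, _ in self.shards[i]])
        y = np.array([label for _, _, label in self.shards[i]])
        self.models[i] = SGDClassifier().fit(X, y)

    def fit_all(self):
        for i in range(self.n_shards):
            self._fit_shard(i)

    def unlearn(self, sample_id):
        i = sample_id % self.n_shards
        self.shards[i] = [s for s in self.shards[i] if s[0] != sample_id]
        self._fit_shard(i)  # only this shard's model is retrained

    def predict(self, x):
        votes = [m.predict([x])[0] for m in self.models]
        return max(set(votes), key=votes.count)  # simple majority vote
```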


Logging Updates

Some unlearning methods involve storing extra information during the initial learning process. For example, recording the order in which data was added can facilitate unlearning later by allowing you to revert the model to a state before the data was included. This requires extra storage and adds some complexity to the model training process, but it could be beneficial for systems where unlearning is a frequent requirement.
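A minimal sketch of this checkpoint-and-replay idea, assuming a model that supports incremental `partial_fit`-style updates and has already been initialized appropriately:

```python
import copy

class LoggedTrainer:
    """Record a checkpoint before each batch; to unlearn batch k, roll the
    model back to its state before k and replay every later batch."""
    def __init__(self, model):
        self.model = model
        self.checkpoints = []  # model state saved before each batch
        self.batches = []

    def train_on(self, X, y):
        self.checkpoints.append(copy.deepcopy(self.model))
        self.batches.append((X, y))
        self.model.partial_fit(X, y)

    def unlearn_batch(self, k):
        self.model = copy.deepcopy(self.checkpoints[k])
        replay = self.batches[k + 1:]
        self.checkpoints = self.checkpoints[:k]
        self.batches = self.batches[:k]
        for X, y in replay:  # re-apply everything after the forgotten batch
            self.train_on(X, y)
```

The storage cost of the checkpoints is the price paid for fast, targeted rollback.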


Picture: Challenges in a Journey. Source: Unsplash


Challenges in Machine Unlearning

Machine unlearning, while incredibly useful, brings about its own unique set of challenges that researchers and practitioners must navigate. One such challenge is the "catastrophic forgetting" problem. This issue arises when a model, in the process of unlearning certain data points, inadvertently forgets crucial information or patterns that were valuable for its predictions. This problem is particularly prominent in neural networks, which often struggle to retain their performance on old tasks when trained on new ones. If the deleted data is a single instance, the impact may be low. However, if a new regulation forces the model to forget entire features of its training input, then depending on those features' weights, the outcome can be devastating and effectively destroy the whole model.
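One practical way to watch for this is to evaluate the model on both the forgotten data and a held-out retain set before and after unlearning. A small harness sketch, where the model and datasets are assumed to exist:

```python
from sklearn.metrics import accuracy_score

def unlearning_report(model, X_forget, y_forget, X_retain, y_retain):
    """After unlearning, forget-set accuracy should drop toward chance,
    while retain-set accuracy should stay near its original level;
    a large retain-set drop signals catastrophic forgetting."""
    return {
        "forget_acc": accuracy_score(y_forget, model.predict(X_forget)),
        "retain_acc": accuracy_score(y_retain, model.predict(X_retain)),
    }
```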


Moreover, ensuring a reversible unlearning process is another significant hurdle. There might be cases where a model needs to return to its previous state after unlearning, such as when the removal of certain data points was accidental or when the forgotten data becomes relevant again. Designing such a reversible process is non-trivial, particularly for complex models and large datasets. Therefore, there is a need for a thoughtful design of the learning and unlearning mechanisms for new models in the future.


Furthermore, the time and computational resources required for machine unlearning can be substantial, especially for large-scale models and datasets. Unlearning typically involves modifying the model in some way, which can be as computationally intensive as the initial learning process. As data scales, this becomes an increasingly difficult challenge.


Lastly, there is a delicate balancing act between privacy and utility, the ongoing debate of recent decades. While the aim of machine unlearning is often to enhance privacy, it is also important to ensure that the model continues to perform well and provide utility. Striking the right balance requires more than just trust in developers; it also calls for strict monitoring by lawmakers.


Picture: Looking forward. Source: Unsplash


The Future of Machine Unlearning

Machine unlearning, despite its challenges, has immense potential for future applications. It's a nascent field, but numerous promising approaches are emerging, pointing towards a dynamic and innovative future.


Incremental learning algorithms, for example, are capable of adapting over time, learning from new data while forgetting old data that is no longer relevant. These algorithms are well-suited to applications where data streams continually, and the model must adapt on-the-fly. As these algorithms become more sophisticated, they offer a promising approach to addressing the challenges of machine unlearning.
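For instance, scikit-learn's `SGDClassifier` supports incremental updates through `partial_fit`, which fits naturally with streaming data. A minimal sketch; the `data_stream` of mini-batches is hypothetical:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = np.array([0, 1])  # all labels must be declared up front for incremental fitting

for X_batch, y_batch in data_stream:  # hypothetical stream of (features, labels) batches
    model.partial_fit(X_batch, y_batch, classes=classes)
    # older batches are never revisited; their influence is limited to past gradient steps
```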


Meanwhile, the development of privacy-preserving techniques such as differential privacy and federated learning is helping to address the tension between privacy and utility in machine unlearning. These techniques enable models to learn valuable patterns from data while ensuring that the data of individual users is not exposed. As these techniques mature, they will likely play an increasingly important role in machine unlearning.


Blockchain technologies are another interesting development. By leveraging the inherent transparency and immutability of blockchains, it may be possible to create secure, auditable records of what a model has learned and unlearned. This could potentially help address the challenges of data integrity and reversibility in machine unlearning.
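The core mechanism can be illustrated with a simple hash chain; this is a toy append-only audit log, and a real deployment would anchor these hashes on an actual blockchain:

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log where each entry embeds the hash of the previous one,
    making any tampering with the learn/unlearn history detectable."""
    def __init__(self):
        self.entries = []

    def record(self, event: str, detail: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"event": event, "detail": detail, "time": time.time(), "prev": prev_hash}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body["hash"]

log = AuditLog()
log.record("learn", {"dataset_version": "v1"})
log.record("unlearn", {"sample_id": 42})  # hypothetical unlearning request
```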


As we move forward, machine unlearning will undoubtedly play a pivotal role in ensuring responsible and ethical applications of machine learning. By understanding and addressing its challenges, we have the opportunity to build more robust, fair, and trustworthy models. Improvements in performance and accuracy, along with enhanced user trust, will drive the acceptability and sustainability of machine learning models in the long run. In the future of machine learning, it's not just about how our models learn, but also about how they forget. This balance will guide the evolution of the field, ensuring it can responsibly harness the power of data while respecting individual privacy and agency.
