
    Machine Unlearning

    Also known as:
    Model Unlearning
    Data Deletion from Models
    Forget Learning
    Right to be Forgotten ML
    Updated: 2/11/2026

    Techniques to remove the influence of specific training data from an ML model without retraining the entire model.

    Quick Summary

    Machine unlearning removes the influence of specific data from trained models – essential for GDPR compliance (right to erasure).

    Explanation

    Exact unlearning retrains the model from scratch on the remaining data, which is expensive. Approximate unlearning uses gradient-based methods, SISA training, or influence functions to remove the influence of individual data points far more cheaply, though with weaker guarantees.
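
    As a minimal sketch of one of the gradient-based methods mentioned above (gradient-ascent unlearning), the snippet below assumes a small PyTorch classifier; the names `forget_loader`, the learning rate, and the step count are illustrative choices, not values from this article.

    ```python
    # Sketch of gradient-ascent unlearning, assuming a PyTorch model and a
    # DataLoader over the examples to be forgotten (hypothetical names).
    import torch
    import torch.nn as nn


    def gradient_ascent_unlearn(model, forget_loader, lr=1e-4, steps=5):
        """Approximately remove the forget set's influence by taking gradient
        *ascent* steps on its loss, pushing the model away from those examples.
        Offers no formal guarantee; see Common Pitfalls."""
        criterion = nn.CrossEntropyLoss()
        optimizer = torch.optim.SGD(model.parameters(), lr=lr)
        model.train()
        for _ in range(steps):
            for inputs, targets in forget_loader:
                optimizer.zero_grad()
                loss = criterion(model(inputs), targets)
                # Negate the loss so the optimizer ascends on the forget set
                # instead of descending on it.
                (-loss).backward()
                optimizer.step()
        return model
    ```

    In practice such updates are usually followed by a check on held-out data, since too many ascent steps can damage accuracy on the data the model is supposed to retain.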

    Marketing Relevance

    GDPR Article 17 (the right to erasure) requires deleting a user's personal data, and that obligation can extend to the data's influence in trained models – machine unlearning makes honoring such requests practical without retraining from scratch.

    Example

    A user requests data deletion under GDPR. The company deletes their raw data AND removes its influence from the recommendation model by retraining only the SISA shard that contained it (see the sketch below).
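
    The following is a hypothetical sketch of that SISA workflow: the training data is split into shards, one constituent model is trained per shard, and a deletion request triggers retraining of only the affected shard. `train_model` and the data structures are placeholders, not a real library API.

    ```python
    # Illustrative SISA-style ensemble (hypothetical, simplified).
    from dataclasses import dataclass, field


    @dataclass
    class SISAEnsemble:
        num_shards: int
        shards: list = field(default_factory=list)  # one list of examples per shard
        models: list = field(default_factory=list)  # one constituent model per shard

        def fit(self, dataset, train_model):
            # Split the dataset into disjoint shards and train each one in isolation.
            # `train_model` is a user-supplied callable: shard -> fitted model.
            self.shards = [list(dataset[i::self.num_shards]) for i in range(self.num_shards)]
            self.models = [train_model(shard) for shard in self.shards]

        def unlearn(self, example, train_model):
            # Retrain only the shard that contained the deleted example,
            # instead of retraining on the full dataset.
            for i, shard in enumerate(self.shards):
                if example in shard:
                    shard.remove(example)
                    self.models[i] = train_model(shard)
                    break

        def predict(self, x):
            # Aggregate constituent predictions, e.g. by majority vote over labels.
            votes = [model(x) for model in self.models]
            return max(set(votes), key=votes.count)
    ```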

    Common Pitfalls

    Verifying that unlearning actually succeeded is difficult, and approximate methods offer no perfect guarantees. For LLMs, unlearning is particularly challenging.

    Origin & History

    Cao & Yang (2015) formalized machine unlearning. SISA Training (Bourtoule et al., 2021) made it practical. Google's Machine Unlearning Challenge (2023) advanced research. LLM unlearning is an active research area.

    Comparisons & Differences

    Machine Unlearning vs. Differential Privacy

    Differential privacy limits how much any individual example can influence (and be inferred from) the model during training; unlearning removes a specific example's influence retroactively, after the model has been trained.

    Machine Unlearning vs. Data Deletion

    Data deletion removes raw data; unlearning additionally removes the learned influence of that data from the model.
