
    Adafactor

    Also known as:
    Adafactor Optimizer
    Memory-Efficient Adam
    Updated: 2/10/2026

    Memory-efficient optimizer that replaces Adam's full second-moment estimate with a factorized approximation, cutting optimizer memory by roughly half.

    Quick Summary

    Adafactor saves roughly 50% of optimizer memory by approximating the second moment with factored row and column statistics. It is the standard optimizer for T5 and PaLM and a good fit when GPU memory is limited.

    Explanation

    Adam keeps a second-moment estimate with the same shape as every parameter tensor. For a weight matrix, Adafactor replaces this with exponential moving averages of the row and column sums of the squared gradients; their outer product approximates the full estimate, shrinking the per-matrix cost from O(nm) to O(n + m). The savings are especially large for big embedding tables.
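    A minimal NumPy sketch of the factored second-moment update (illustrative only; it omits the paper's time-dependent decay rate, update clipping, and relative step size):

        import numpy as np

        def adafactor_second_moment_step(G, R, C, beta2=0.999, eps=1e-30):
            """One factored second-moment step for a 2-D gradient G of shape (n, m).

            Instead of an n x m running average of G**2 (as Adam keeps), only the
            row sums R (length n) and column sums C (length m) are stored; their
            outer product, rescaled by the total, approximates the full matrix.
            """
            sq = G * G + eps
            R = beta2 * R + (1.0 - beta2) * sq.sum(axis=1)   # per-row statistics
            C = beta2 * C + (1.0 - beta2) * sq.sum(axis=0)   # per-column statistics
            V_hat = np.outer(R, C) / R.sum()                 # rank-1 approximation of E[G**2]
            update = G / np.sqrt(V_hat)                      # Adam-style preconditioned step
            return update, R, C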

    Marketing Relevance

    Adafactor is the standard optimizer for T5 and PaLM. It matters most when GPU memory is tight, especially for models with more than 1B parameters.

    Common Pitfalls

    Training can be less stable than with Adam, the step-size schedule and clipping threshold need careful tuning, and final quality does not always match AdamW.
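    A hedged usage sketch, assuming the Adafactor implementation shipped with Hugging Face transformers (transformers.optimization.Adafactor); argument names and defaults may differ across library versions:

        import torch.nn as nn
        from transformers.optimization import Adafactor

        model = nn.Linear(512, 512)  # stand-in for a real network

        # Paper-style setup: relative step sizes with the built-in schedule,
        # so no external learning rate is passed.
        optimizer = Adafactor(
            model.parameters(),
            lr=None,               # use the internal relative step size
            scale_parameter=True,  # scale updates by the parameter RMS
            relative_step=True,    # roughly 1/sqrt(step) schedule
            warmup_init=True,      # gentler steps early in training
        )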

    Origin & History

    Shazeer & Stern (Google, 2018) developed Adafactor for training transformer models with limited memory. It became standard for T5 (2020) and PaLM (2022) at Google.

    Comparisons & Differences

    Adafactor vs. AdamW

    AdamW stores full first- and second-moment buffers for every parameter; Adafactor factorizes the second moment, saving roughly 50% of optimizer memory, but can be less stable.
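    A back-of-the-envelope comparison of optimizer-state sizes for a single weight matrix (illustrative numbers; assumes the first moment is kept in both cases):

        # Number of extra values the optimizer keeps for one n x m weight matrix.
        n, m = 4096, 4096

        adamw_state = 2 * n * m            # first moment + second moment, both n x m
        adafactor_state = n * m + n + m    # first moment + factored row/column statistics

        print(adamw_state, adafactor_state)   # 33554432 vs 16785408
        print(adafactor_state / adamw_state)  # ~0.50, the "~50% memory" figure

    With the first moment disabled (beta1 = None, a common configuration), Adafactor drops that buffer as well and the per-matrix state shrinks to just n + m values.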

    Adafactor vs. Lion

    Both use less memory than Adam, but in different ways: Adafactor keeps the second moment in factored form, while Lion drops it entirely and works from the sign of a single momentum buffer (see the sketch below).
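    A compact sketch of Lion's update rule for contrast, assuming the standard formulation (Adafactor's factored statistics are sketched above):

        import numpy as np

        def lion_step(w, g, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
            # One full-size momentum buffer is the only optimizer state;
            # the update direction is just the sign of an interpolated momentum.
            update = np.sign(beta1 * m + (1.0 - beta1) * g)
            w = w - lr * (update + wd * w)
            m = beta2 * m + (1.0 - beta2) * g
            return w, m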
