
    LAMB (Layer-wise Adaptive Moments for Batch Training)

    Also known as:
    LAMB Optimizer
    Layer-wise Adaptive Moments for Batch Training
    Updated: 2/12/2026

    Optimizer for extremely large batch sizes (up to 64K and beyond) that adapts the learning rate per layer, enabling stable training with massive data parallelism.

    Quick Summary

    LAMB adapts learning rates per layer for extremely large batches, which enabled BERT pre-training in 76 minutes instead of 3 days.

    Explanation

    LAMB computes an Adam-style update for each layer and then rescales it by a trust ratio: the norm of the layer's weights divided by the norm of its update. This keeps each layer's step size proportional to its weight scale, which allows enormous batch size increases without losing training quality, making it ideal for fast pre-training runs.
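    A minimal sketch of one LAMB step for a single layer, assuming NumPy; the function name lamb_step and the default hyperparameters are illustrative, and real implementations also add warmup schedules, gradient clipping, and mixed-precision handling:

```python
import numpy as np

def lamb_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-6, weight_decay=0.01):
    """One illustrative LAMB update for a single layer's weight tensor."""
    # Adam-style first and second moment estimates with bias correction
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)

    # AdamW-style update direction, including decoupled weight decay
    update = m_hat / (np.sqrt(v_hat) + eps) + weight_decay * w

    # Layer-wise trust ratio: ||w|| / ||update||, falling back to 1.0
    w_norm = np.linalg.norm(w)
    u_norm = np.linalg.norm(update)
    trust_ratio = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0

    # The global learning rate is rescaled per layer by the trust ratio
    w = w - lr * trust_ratio * update
    return w, m, v

# Toy usage: one step on a single randomly initialized layer
w = np.random.randn(4, 4)
g = np.random.randn(4, 4)
m, v = np.zeros_like(w), np.zeros_like(w)
w, m, v = lamb_step(w, g, m, v, t=1)
```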

    Marketing Relevance

    LAMB enabled BERT training in 76 minutes instead of 3 days. Essential for cost-effective training with large GPU clusters.

    Common Pitfalls

    Only useful with very large batch sizes; it offers no advantage over AdamW with small batches. Tuning the base learning rate, warmup, and schedule for large-batch runs can still be complex.

    Origin & History

    You et al. (2020) developed LAMB at Google to train BERT with batch sizes up to 64K; training time dropped from 3 days to 76 minutes. LAMB combines Adam with a layer-wise trust ratio (inspired by LARS).

    Comparisons & Differences

    LAMB (Layer-wise Adaptive Moments for Batch Training) vs. AdamW

    AdamW uses a single global learning rate; LAMB additionally rescales each layer's update by its trust ratio. LAMB is typically only worthwhile at batch sizes above roughly 8K.

    LAMB (Layer-wise Adaptive Moments for Batch Training) vs. LARS

    LARS is based on SGD + layer scaling; LAMB is based on Adam + layer scaling. LAMB tends to work better for NLP, LARS for vision.
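    For contrast with the LAMB sketch above, here is a comparable single-layer LARS step, again assuming NumPy with an illustrative function name lars_step and default hyperparameters: the trust-ratio idea is the same, but the base update is SGD with momentum instead of Adam.

```python
import numpy as np

def lars_step(w, g, momentum_buf, lr=0.1, momentum=0.9, weight_decay=1e-4):
    """One illustrative LARS update for a single layer's weight tensor."""
    # SGD base update with weight decay (no adaptive moments as in Adam)
    update = g + weight_decay * w

    # Same layer-wise trust ratio as LAMB: ||w|| / ||update||
    w_norm = np.linalg.norm(w)
    u_norm = np.linalg.norm(update)
    trust_ratio = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0

    # Momentum buffer accumulates the trust-ratio-scaled step
    momentum_buf = momentum * momentum_buf + lr * trust_ratio * update
    w = w - momentum_buf
    return w, momentum_buf
```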

