
    Deep Compression

    Also known as:
    Deep Compression Pipeline
    Han Compression
    Pruning-Quantization-Huffman Pipeline
    Updated: 2/11/2026

    A three-stage compression pipeline (pruning → quantization → Huffman coding) that shrinks trained neural networks by 35-49x with little to no accuracy loss; the foundational work of model compression.

    Quick Summary

    Deep Compression combines pruning, weight quantization, and Huffman coding for 35-49x model compression. It is the foundational work that launched the model compression field.

    Explanation

    Stage 1: Magnitude pruning removes 90%+ of the weights, with retraining to recover accuracy. Stage 2: The surviving weights are quantized via weight sharing, so each weight is stored as a 5-8 bit index into a small codebook of shared values. Stage 3: Huffman coding exploits the skewed distribution of the quantized indices for further lossless compression. Results: AlexNet 240MB → 6.9MB (35x); VGG-16 552MB → 11.3MB (49x).
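The first two stages can be sketched in a few lines of NumPy. This is a simplified illustration, not the paper's exact procedure: the paper uses k-means to pick the shared values and retrains between stages, whereas the linearly spaced centroids and single-shot pruning here are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0, 0.1, size=(256, 256)).astype(np.float32)

# Stage 1: magnitude pruning - zero out the 90% smallest-magnitude weights
threshold = np.quantile(np.abs(weights), 0.90)
mask = np.abs(weights) >= threshold
pruned = weights * mask

# Stage 2: weight sharing - map surviving weights to 2**5 = 32 shared values
# (the paper fits these with k-means; linspace is a simplification here)
nonzero = pruned[mask]
centroids = np.linspace(nonzero.min(), nonzero.max(), 32)
indices = np.abs(nonzero[:, None] - centroids[None, :]).argmin(axis=1)
quantized = centroids[indices]

# Surviving weights now need only a 5-bit index each plus a tiny codebook;
# stage 3 (Huffman coding) would further compress the index stream.
print(mask.mean())                # fraction of weights kept (~0.10)
print(len(np.unique(quantized)))  # at most 32 distinct values remain
```

Storing 5-bit indices instead of 32-bit floats, on only ~10% of the positions, is where most of the headline compression ratio comes from.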

    Marketing Relevance

    Deep Compression proved in 2015 that drastic compression without significant quality loss is possible. The paper inspired the entire model compression research field.

    Example

    VGG-16 is compressed from 552MB to 11.3MB (49x) with only 0.2% accuracy loss on ImageNet. This enabled CNN inference on smartphones and IoT devices for the first time.

    Common Pitfalls

    The three-stage pipeline is complex to tune and requires retraining. Huffman coding reduces storage only, not computation: the variable-length codes must be decoded back to numeric weights before inference. For modern LLMs, more specialized compression methods have since been developed.
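The storage-only nature of the Huffman stage is easy to demonstrate: it shortens the bit stream of quantized weight indices, but nothing about the arithmetic changes. A toy sketch with an invented (but realistically skewed) index distribution:

```python
import heapq
from collections import Counter

def huffman_code_lengths(symbols):
    """Return a {symbol: code_length} map for an optimal Huffman code.
    Note: this saves bits on disk only; the codes must be decoded back
    to fixed-width values before any computation can use them."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    # Heap entries carry a tie-breaking counter so dicts are never compared.
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**c1, **c2}.items()}
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

# Skewed index stream, typical after pruning + weight sharing
indices = [0] * 600 + [1] * 200 + [2] * 100 + [3] * 50 + [4] * 50
lengths = huffman_code_lengths(indices)
fixed_bits = len(indices) * 3  # 3 bits suffice for 5 fixed-width symbols
huff_bits = sum(lengths[s] for s in indices)
print(fixed_bits, huff_bits)  # Huffman beats fixed-width on skewed data
```

The more skewed the index distribution (and pruning makes it very skewed), the larger the saving; on a uniform distribution Huffman coding gains nothing.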

    Origin & History

    Song Han et al. (Stanford, 2015) published "Deep Compression" and won the ICLR 2016 Best Paper Award. The paper and the Lottery Ticket Hypothesis (2018) are among the most influential works in model compression.

    Comparisons & Differences

    Deep Compression vs. Post-Training Quantization

    PTQ only quantizes; Deep Compression combines three techniques (pruning + quantization + Huffman) for maximum compression.
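The contrast is easy to see in code. Below is a minimal symmetric int8 PTQ sketch (a common linear scheme, not any specific library's implementation): one scale factor, no pruning, no entropy coding, so the compression is a flat 4x from float32, versus the 35-49x of the full pipeline.

```python
import numpy as np

def ptq_int8(w):
    """Minimal symmetric post-training quantization to int8:
    a single linear scale, no retraining, no pruning, no Huffman stage."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(1)
w = rng.normal(0, 0.05, size=(64, 64)).astype(np.float32)
q, scale = ptq_int8(w)
dequant = q.astype(np.float32) * scale

# Rounding error is bounded by half a quantization step
print(np.abs(w - dequant).max() <= 0.5 * scale)
```

PTQ's appeal is exactly this simplicity: no retraining loop and a fixed 4x saving, where Deep Compression trades pipeline complexity for an order of magnitude more compression.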
