
    Scaled Dot-Product Attention

    Also known as:
    Dot-Product Attention
    QKV Attention
    Softmax Attention
    Updated: 2/10/2026

    The base attention computation: Attention(Q,K,V) = softmax(QK^T / √d_k) · V – the mathematical foundation of all Transformers.

    Quick Summary

    Scaled Dot-Product Attention = softmax(QK^T/√d_k)V – the mathematical formula behind every Transformer, computing similarity between tokens.

    Explanation

    Q (Query) asks: "What am I looking for?" K (Key) answers: "What do I offer?" V (Value) provides: "Here is the content." The dot product QK^T measures the similarity between every query and every key. Dividing by √d_k keeps the logits from growing with the key dimension; without it, large d_k leads to overly peaked softmax distributions and vanishing gradients.
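
    A minimal sketch in NumPy, assuming single-head attention with no masking and no batching; the toy shapes and variable names are illustrative only:

    ```python
    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V."""
        d_k = Q.shape[-1]
        # Similarity between every query and every key, scaled by sqrt(d_k).
        scores = Q @ K.T / np.sqrt(d_k)
        # Row-wise softmax (subtract the max for numerical stability).
        scores -= scores.max(axis=-1, keepdims=True)
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)
        # Each output row is a weighted sum of the value vectors.
        return weights @ V

    # Toy example: 4 tokens, d_k = d_v = 8.
    rng = np.random.default_rng(0)
    Q = rng.normal(size=(4, 8))
    K = rng.normal(size=(4, 8))
    V = rng.normal(size=(4, 8))
    print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
    ```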

    Marketing Relevance

    The exact formula running in every Transformer – from the smallest DistilBERT to the largest GPT-5.

    Common Pitfalls

    Complexity is quadratic, O(n²), in sequence length, because every token attends to every other token. The scaling factor 1/√d_k is easy to forget in custom implementations, which destabilizes training for large d_k. Softmax over large logits can also overflow numerically; subtracting the row-wise maximum before exponentiating avoids this.
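
    A quick illustration of why the scaling factor matters, assuming query and key components with roughly unit variance: the spread of the unscaled dot products grows like √d_k, while the scaled logits stay at a similar magnitude for any d_k.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    for d_k in (16, 256, 4096):
        q = rng.normal(size=(1_000, d_k))
        k = rng.normal(size=(1_000, d_k))
        raw = (q * k).sum(axis=-1)        # unscaled dot products
        scaled = raw / np.sqrt(d_k)       # with the 1/sqrt(d_k) factor
        print(f"d_k={d_k}: raw std ≈ {raw.std():.1f}, scaled std ≈ {scaled.std():.2f}")
    # The unscaled standard deviation grows like sqrt(d_k) (≈ 4, 16, 64),
    # while the scaled version stays near 1 regardless of d_k.
    ```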

    Origin & History

    Dot-product attention was introduced by Luong et al. (2015) for machine translation. Vaswani et al. (2017) added the scaling factor 1/√d_k and made it the core of the Transformer.

    Comparisons & Differences

    Scaled Dot-Product Attention vs. Additive Attention (Bahdanau)

    Additive attention scores each query-key pair with a small learned feed-forward network; dot-product attention is simpler, faster in practice, and maps directly onto highly optimized GPU matrix multiplications.
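
    For contrast, a minimal sketch of the additive (Bahdanau-style) scoring function, assuming NumPy and randomly initialized parameters W_q, W_k, and v; in a real model these are learned, and the names here are illustrative:

    ```python
    import numpy as np

    def additive_scores(Q, K, W_q, W_k, v):
        """Bahdanau-style scores: v^T tanh(W_q q + W_k k) for every query/key pair."""
        Qp = Q @ W_q.T                                      # (n_q, d_h)
        Kp = K @ W_k.T                                      # (n_k, d_h)
        hidden = np.tanh(Qp[:, None, :] + Kp[None, :, :])   # (n_q, n_k, d_h)
        return hidden @ v                                   # (n_q, n_k) score matrix

    rng = np.random.default_rng(0)
    d_model, d_h = 8, 16
    Q, K = rng.normal(size=(4, d_model)), rng.normal(size=(5, d_model))
    W_q, W_k = rng.normal(size=(d_h, d_model)), rng.normal(size=(d_h, d_model))
    v = rng.normal(size=d_h)
    print(additive_scores(Q, K, W_q, W_k, v).shape)  # (4, 5)
    ```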

    Scaled Dot-Product Attention vs. Linear Attention

    Scaled dot-product attention has O(n²) complexity; linear attention approximates it in O(n) through kernel tricks – faster, but generally less precise.
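
    A minimal sketch of the kernel-trick idea, assuming the elu(x) + 1 feature map popularized for linear attention; this illustrates the O(n) structure and is not a drop-in replacement for softmax attention:

    ```python
    import numpy as np

    def elu_plus_one(x):
        # One common feature map choice: elu(x) + 1, which keeps values positive.
        return np.where(x > 0, x + 1.0, np.exp(x))

    def linear_attention(Q, K, V):
        """O(n) attention: phi(Q) (phi(K)^T V) / (phi(Q) · sum_j phi(k_j))."""
        Qf, Kf = elu_plus_one(Q), elu_plus_one(K)  # feature-mapped queries/keys
        KV = Kf.T @ V                              # (d, d_v) summary of keys and values
        Z = Qf @ Kf.sum(axis=0)                    # (n,) per-query normalizer
        return (Qf @ KV) / Z[:, None]

    rng = np.random.default_rng(0)
    Q, K, V = (rng.normal(size=(6, 8)) for _ in range(3))
    print(linear_attention(Q, K, V).shape)  # (6, 8)
    ```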

