
    Multi-Query Attention (MQA)

    Also known as:
    MQA
    Shared Key-Value Attention
    Single KV Head Attention
    Updated: 2/9/2026

    Multi-Query Attention shares a single key-value head across all query heads – it shrinks the KV cache by a factor equal to the number of query heads (e.g., 32x for a 32-head model) with minimal quality loss.

    Quick Summary

    MQA shares key-value heads across query heads – dramatically shrinks KV cache and makes long contexts affordable for LLM inference.

    Explanation

    In standard multi-head attention, each head has its own query, key, and value projections: a model with 32 heads caches 32 K/V pairs per layer. In MQA, all query heads share a single K/V pair, so the KV cache is 32x smaller in that example. Grouped-Query Attention (GQA) is the compromise between the two: query heads are divided into groups (e.g., 8) that each share one K/V head.
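The cache savings follow directly from the KV head count. A back-of-the-envelope sketch (the 32-layer, 32-head, fp16 configuration below is illustrative, not tied to any specific model):

```python
def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len,
                   batch_size=1, bytes_per_elem=2):
    """Total KV cache size: one K and one V tensor per layer, each of
    shape (batch, num_kv_heads, seq_len, head_dim), at fp16 (2 bytes)."""
    return (2 * num_layers * num_kv_heads * head_dim
            * seq_len * batch_size * bytes_per_elem)

# Illustrative 32-head model at a 4096-token context:
mha = kv_cache_bytes(32, 32, 128, 4096)  # 32 KV heads (standard MHA)
gqa = kv_cache_bytes(32, 8, 128, 4096)   # 8 KV-head groups (GQA)
mqa = kv_cache_bytes(32, 1, 128, 4096)   # 1 shared KV head (MQA)
print(f"MHA: {mha / 2**30:.1f} GiB, GQA: {gqa / 2**30:.2f} GiB, "
      f"MQA: {mqa / 2**20:.0f} MiB")
# → MHA: 2.0 GiB, GQA: 0.50 GiB, MQA: 64 MiB
```

The ratio between the MHA and MQA lines is exactly the head count (32x here); GQA with 8 groups lands at 4x savings.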

    Marketing Relevance

    MQA/GQA enable longer contexts and larger batches in LLM inference – LLaMA 2/3 and Mistral use GQA, and Gemini uses MQA.

    Origin & History

    Shazeer (2019) introduced Multi-Query Attention at Google. PaLM (2022) used MQA successfully. Ainslie et al. (2023) developed Grouped-Query Attention (GQA) as a more flexible compromise. LLaMA 2 (Meta, 2023) adopted GQA and made it the standard for open-source LLMs.

    Comparisons & Differences

    Multi-Query Attention (MQA) vs. Multi-Head Attention

    Multi-head: each head has its own K/V (more expressiveness, more memory); MQA: all heads share one K/V pair (far less memory, minimally lower quality).

    Multi-Query Attention (MQA) vs. Grouped-Query Attention (GQA)

    MQA: one KV head shared by all query heads; GQA: groups of query heads each share a KV head (a more flexible compromise between MQA and full multi-head attention).
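Both variants can be expressed in one function that repeats each KV head across its group of query heads – one KV head gives MQA, and as many KV heads as query heads recovers standard multi-head attention. A minimal NumPy sketch (shapes and names are illustrative, not taken from any particular library):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def grouped_query_attention(q, k, v):
    """q: (num_q_heads, seq, d); k, v: (num_kv_heads, seq, d).
    num_kv_heads must divide num_q_heads.
    num_kv_heads == 1 is MQA; num_kv_heads == num_q_heads is MHA."""
    hq, hkv = q.shape[0], k.shape[0]
    assert hq % hkv == 0, "KV heads must evenly divide query heads"
    group = hq // hkv
    # broadcast each KV head to the query heads in its group
    k = np.repeat(k, group, axis=0)
    v = np.repeat(v, group, axis=0)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

# MQA: 8 query heads attend over a single shared K/V head
rng = np.random.default_rng(0)
q = rng.standard_normal((8, 5, 16))
k = rng.standard_normal((1, 5, 16))
v = rng.standard_normal((1, 5, 16))
out = grouped_query_attention(q, k, v)   # shape (8, 5, 16)
```

Note that only the cached tensors shrink: the attention math per query head is unchanged, which is why quality loss is small.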

