
    Message Passing

    Also known as:
    MPNN
    Message Passing Neural Network
    Updated: 2/10/2026

    Message Passing is the fundamental computation paradigm of Graph Neural Networks, in which nodes exchange information with their neighbors.

    Quick Summary

    Message Passing lets graph nodes exchange and aggregate information from their neighbors, the core principle behind all Graph Neural Networks.

    Explanation

    In each message passing step, a node collects messages from its neighbors, aggregates them with a permutation-invariant function (e.g., sum, mean, max), and updates its own representation. After k steps, each node's representation incorporates information from its entire k-hop neighborhood.
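The collect-aggregate-update cycle described above can be sketched in a few lines of NumPy. This is an illustrative toy, not any particular library's API: the graph, the mean aggregation, and the simple averaging update rule are all assumptions chosen for clarity.

```python
import numpy as np

# Toy undirected path graph 0-1-2-3, given as an adjacency list.
# Each node carries a 2-dimensional feature vector.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
h = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [0.0, 0.0]])

def message_passing_step(h, neighbors):
    """One step: each node aggregates its neighbors' features (mean),
    then updates by averaging the aggregate with its own representation."""
    new_h = np.empty_like(h)
    for v, nbrs in neighbors.items():
        msg = h[nbrs].mean(axis=0)       # collect + aggregate messages
        new_h[v] = 0.5 * (h[v] + msg)    # update (a hypothetical simple rule)
    return new_h

h1 = message_passing_step(h, neighbors)   # each node now sees its 1-hop neighborhood
h2 = message_passing_step(h1, neighbors)  # after 2 steps: 2-hop neighborhood
```

After one step, node 0's representation already mixes in node 1's features; after two steps it is also influenced by node 2, illustrating how k steps expose the k-hop neighborhood.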

    Marketing Relevance

    As the theoretical foundation of all modern GNN architectures, Message Passing underpins applications ranging from social network profiling to molecular property prediction.

    Common Pitfalls

    Over-smoothing when stacking many layers (node representations become indistinguishable); over-squashing when information must travel along long paths; high memory demands on dense graphs.

    Origin & History

    Gilmer et al. (2017) unified various GNN approaches under the "Message Passing Neural Network" (MPNN) framework, which became the de facto standard formulation in GNN research. PyTorch Geometric implements Message Passing as a base class.

    Comparisons & Differences

    Message Passing vs. Self-Attention (Transformer)

    Self-Attention lets every token attend to every other token (effectively a complete graph); Message Passing aggregates only over local neighbors (a sparse graph).
