
    Secure Aggregation

    Also known as:
    SecAgg
    Privacy-Preserving Aggregation
    Cryptographic Aggregation
    Updated: 2/11/2026

    A cryptographic protocol that allows a server to compute aggregate values from individual contributions without seeing the individual values.

    Quick Summary

    Secure Aggregation enables a server to compute sums over individual contributions without seeing any single value, a key building block for privacy-preserving Federated Learning.

    Explanation

    In Federated Learning, each client adds random masks to its model update before sending it to the server. The masks are constructed (for example, pairwise between clients) so that they cancel out during aggregation. The server therefore sees only the sum, never any individual update.
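A minimal sketch of the pairwise-masking idea, assuming each pair of clients already shares a random mask (the real Bonawitz et al. protocol adds key agreement and dropout handling, which this toy omits):

```python
import random

MOD = 2**32  # work in a fixed modulus so masks cancel exactly

def masked_updates(updates):
    """Return each client's update with pairwise masks applied.

    For each pair (i, j) with i < j, client i ADDS the shared mask and
    client j SUBTRACTS it, so every mask cancels in the overall sum.
    """
    n = len(updates)
    masks = {(i, j): random.randrange(MOD)
             for i in range(n) for j in range(i + 1, n)}
    result = []
    for i, u in enumerate(updates):
        m = u
        for j in range(n):
            if i < j:
                m = (m + masks[(i, j)]) % MOD
            elif j < i:
                m = (m - masks[(j, i)]) % MOD
        result.append(m)
    return result

updates = [5, 17, 3]                      # each client's private value
masked = masked_updates(updates)
total = sum(masked) % MOD                 # the server only ever sums masked values
assert total == sum(updates)              # masks cancel: server learns only the sum
```

Each individual masked value is uniformly random modulo 2^32, so the server learns nothing from it in isolation; only the sum is meaningful.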

    Marketing Relevance

    Essential for privacy-compliant Federated Learning. Prevents the aggregating server from inferring individual gradients (and thus training data).

    Example

    Apple uses Secure Aggregation for iCloud ML: Emoji usage statistics are aggregated without Apple ever seeing which emojis an individual user uses.

    Common Pitfalls

    High communication overhead. Dropouts (clients going offline mid-protocol) must be handled without breaking the mask cancellation or leaking individual values. Scaling to thousands of clients is complex.
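The dropout pitfall can be seen directly in the toy pairwise-masking setup (a hypothetical illustration, not the real protocol, which repairs this with secret-shared masks that surviving clients help reconstruct):

```python
import random

# Toy setup: 3 clients, pairwise masks shared between each pair.
random.seed(0)  # fixed seed only so the example is reproducible
MOD = 2**32
updates = [5, 17, 3]
n = len(updates)
masks = {(i, j): random.randrange(MOD)
         for i in range(n) for j in range(i + 1, n)}

def masked(i):
    """Client i's update with its pairwise masks applied."""
    m = updates[i]
    for j in range(n):
        if i < j:
            m = (m + masks[(i, j)]) % MOD
        elif j < i:
            m = (m - masks[(j, i)]) % MOD
    return m

# All clients present: every mask cancels and the sum is exact.
all_in = sum(masked(i) for i in range(n)) % MOD
assert all_in == sum(updates)

# Client 2 drops out: its masks with clients 0 and 1 no longer cancel,
# so the server's partial sum is off by exactly those leftover masks.
partial = sum(masked(i) for i in (0, 1)) % MOD
leftover = (partial - (updates[0] + updates[1])) % MOD
assert leftover == (masks[(0, 2)] + masks[(1, 2)]) % MOD
```

This is why the 2017 protocol secret-shares each client's mask material: if a client disappears, the remaining clients can jointly reconstruct just enough to remove the orphaned masks.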

    Origin & History

    Bonawitz et al. (Google, 2017) introduced "Practical Secure Aggregation for Privacy-Preserving Machine Learning". It has since become a standard component in Google's and Apple's on-device ML systems.

    Comparisons & Differences

    Secure Aggregation vs. Differential Privacy

    Differential Privacy adds noise to results; Secure Aggregation cryptographically hides individual contributions from the server.
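A toy contrast of the two guarantees (illustrative only; the noise parameters are assumed, and a real deployment would calibrate them to a formal privacy budget):

```python
import random

values = [5, 17, 3]  # individual contributions

# Differential Privacy: the *released result* is perturbed with noise,
# so even the exact sum is never revealed.
epsilon, sensitivity = 1.0, 1.0                 # assumed toy parameters
dp_sum = sum(values) + random.gauss(0, sensitivity / epsilon)

# Secure Aggregation: the server receives the *exact* sum, but the
# inputs are cryptographically hidden (computed over masked shares).
secagg_sum = sum(values)

assert secagg_sum == 25        # exact result, hidden inputs
# dp_sum is close to 25 but noisy: privacy of the output, not the inputs
```

In practice the two are complementary: Secure Aggregation hides individual contributions from the server, while Differential Privacy limits what the aggregate itself reveals.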

    Secure Aggregation vs. Federated Learning

    Federated Learning is the training approach; Secure Aggregation is a protection layer within it.
