
    Integrated Gradients

    Also known as:
    IG
    Path-integrated Gradients
    Axiomatic Attribution
    Updated: 2/11/2026

    An explainable AI (XAI) method that computes feature attributions by integrating gradients of the model output along a path from a baseline input to the actual input.

    Quick Summary

    Integrated Gradients computes feature attributions for deep learning models that satisfy formal axioms, making it among the most theoretically grounded gradient-based XAI approaches.

    Explanation

    Integrated Gradients attributes a model's prediction to its input features by integrating the gradient of the output along a straight-line path from a baseline (for example, a black image or an all-zero vector) to the actual input. The method satisfies two important axioms: Sensitivity (if the input and baseline differ in a single feature and produce different outputs, that feature receives a non-zero attribution) and Implementation Invariance (two networks that compute the same function yield identical attributions). Vanilla gradients can violate Sensitivity and DeepLIFT can violate Implementation Invariance, which makes Integrated Gradients theoretically more robust than either.
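    The path integral above can be sketched numerically. Below is a minimal, framework-free illustration using NumPy and a toy quadratic model with a hand-coded analytic gradient (all names here are illustrative, not from any particular library); in practice the gradient would come from an autodiff framework, e.g. Captum's `IntegratedGradients` for PyTorch.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=64):
    # Riemann (midpoint-rule) approximation of the path integral of
    # gradients along the straight line from `baseline` to `x`.
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)  # scale by input delta

# Toy model F(x) = sum(x**2); its gradient 2x is known analytically.
grad_fn = lambda z: 2.0 * z
x = np.array([1.0, 2.0])
baseline = np.zeros(2)
attr = integrated_gradients(grad_fn, x, baseline)
# Completeness: the attributions sum to F(x) - F(baseline) = 5 - 0.
```

    The final comment illustrates the Completeness property, which follows from the two axioms: the attributions add up exactly to the difference between the model's output at the input and at the baseline.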

    Marketing Relevance

    A standard attribution method for deep learning in production. Implemented by Google in Cloud AI Explanations and by Meta in the Captum library for PyTorch.

    Common Pitfalls

    The choice of baseline strongly influences the resulting attributions. The axioms hold for the straight-line path, but other paths through a non-linear model yield different attributions. The method is also more compute-intensive than a single-gradient saliency map, since it requires one gradient evaluation per integration step.
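    The baseline sensitivity can be demonstrated with the same kind of toy setup as before (a hypothetical quadratic model, names illustrative): a zero baseline and a non-zero baseline yield different attributions for the identical input.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=64):
    # Midpoint-rule approximation of the IG path integral.
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

# Toy model F(x) = sum(x**2), gradient 2x.
grad_fn = lambda z: 2.0 * z
x = np.array([1.0, 2.0])

attr_zero = integrated_gradients(grad_fn, x, np.zeros(2))          # [1., 4.]
attr_mean = integrated_gradients(grad_fn, x, np.array([1.0, 1.0])) # [0., 3.]
# Same input, different baselines -> different attributions; each set
# still sums to F(x) - F(baseline) for its own baseline.
```

    Neither answer is wrong; each is correct relative to its baseline, which is why the baseline should represent a meaningful "absence of signal" for the task at hand.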

    Origin & History

    Sundararajan, Taly & Yan introduced Integrated Gradients in "Axiomatic Attribution for Deep Networks" (ICML 2017). Google implemented it in Cloud AI Explanations, and Meta integrated it into Captum. The method became a standard for deep learning attribution.

    Comparisons & Differences

    Integrated Gradients vs. SHAP (DeepSHAP)

    Integrated Gradients computes attributions via path integration and carries axiomatic guarantees; DeepSHAP approximates Shapley values using DeepLIFT-style rules, which is typically faster but less exact for deep networks.

    Integrated Gradients vs. Saliency Map

    A saliency map uses a single gradient evaluated at the input, which is noisy and vanishes wherever the model saturates; Integrated Gradients accumulates gradients over the entire baseline-to-input path and is therefore more robust.
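    The saturation failure is concrete on the example used in the original paper, F(x) = 1 - ReLU(1 - x). A sketch with a hand-coded gradient (function names illustrative): at x = 2 the gradient is zero, so a saliency map assigns nothing, while Integrated Gradients recovers the full contribution.

```python
import numpy as np

def relu_model_grad(z):
    # Gradient of F(x) = 1 - relu(1 - x): equals 1 where x < 1, else 0.
    return (z < 1.0).astype(float)

x = np.array([2.0])
baseline = np.array([0.0])

# Saliency: a single gradient at the input. The model is saturated
# at x = 2, so the gradient (and hence the attribution) is zero.
saliency = relu_model_grad(x)              # [0.]

# Integrated Gradients: average gradients along the baseline-to-input
# path, then scale by (x - baseline). Recovers F(x) - F(baseline) = 1.
steps = 1000
alphas = (np.arange(steps) + 0.5) / steps
grads = np.stack([relu_model_grad(baseline + a * (x - baseline)) for a in alphas])
ig = (x - baseline) * grads.mean(axis=0)   # [1.]
```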
