    Surrogate Model (Surrogatmodell)

    Updated: 2/10/2026

    A simple, interpretable model that approximates a complex black-box model to explain its decisions.

    Quick Summary

    Surrogate models explain black-box AI by training a simple, interpretable model on the predictions of the complex model.

    Explanation

    A surrogate model is an interpretable model (e.g., a decision tree or linear model) trained to mimic the predictions of a black-box model. LIME fits local surrogates around individual predictions, while global surrogates approximate the black box's behavior across the entire input space.
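A minimal sketch of a global surrogate, using only the standard library: the `black_box` function below is a hypothetical stand-in for an opaque model, and the surrogate is a simple polynomial regression fit on the black box's outputs rather than on ground-truth labels.

```python
import random

# Hypothetical black-box model (stand-in): we only query it, never inspect it.
def black_box(x):
    return 0.5 * x * x + 1.0

# Sample inputs and record the black box's predictions.
random.seed(0)
xs = [random.uniform(-3, 3) for _ in range(200)]
ys = [black_box(x) for x in xs]  # surrogate trains on these, not on true labels

def fit_surrogate(xs, ys):
    """Least-squares fit of an interpretable model y ~ w0 + w1*x + w2*x^2."""
    feats = [(1.0, x, x * x) for x in xs]
    n = 3
    # Normal equations (F^T F) w = F^T y, solved by Gaussian elimination.
    A = [[sum(f[i] * f[j] for f in feats) for j in range(n)] for i in range(n)]
    b = [sum(f[i] * y for f, y in zip(feats, ys)) for i in range(n)]
    for i in range(n):                      # forward elimination
        for j in range(i + 1, n):
            r = A[j][i] / A[i][i]
            A[j] = [a - r * c for a, c in zip(A[j], A[i])]
            b[j] -= r * b[i]
    w = [0.0] * n
    for i in reversed(range(n)):            # back substitution
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return w  # readable coefficients: intercept, linear term, quadratic term

w = fit_surrogate(xs, ys)
```

Because the stand-in black box is itself quadratic, the surrogate recovers it almost exactly here; for a real black box the surrogate's coefficients are only an approximation, which is why fidelity (see Common Pitfalls) matters.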

    Marketing Relevance

    Surrogate models are the basis of LIME and enable explainability without access to model internals, which is useful when teams rely on third-party or proprietary models they can only query.
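The local-surrogate idea behind LIME can be sketched in a few lines (this is an illustration of the principle, not the LIME library itself; `black_box` is again a hypothetical stand-in): perturb the instance to be explained, weight the samples by proximity, and fit a weighted linear model whose slope serves as the local explanation.

```python
import math
import random

def black_box(x):
    # Hypothetical opaque model; internals assumed inaccessible.
    return math.sin(x)

def local_surrogate_slope(x0, width=0.5, n=500, seed=0):
    """Fit a proximity-weighted linear model around x0 (LIME-style sketch)."""
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0, 1) for _ in range(n)]          # perturb the instance
    ws = [math.exp(-((x - x0) ** 2) / (2 * width ** 2)) for x in xs]  # proximity
    ys = [black_box(x) for x in xs]                        # query the black box
    # Closed-form weighted least squares for the local slope.
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    cov = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    var = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    return cov / var

# Near x0 = 0, sin(x) is approximately x, so the local slope should be near 1.
slope = local_surrogate_slope(0.0)
```

The slope is only valid near the queried instance; a local surrogate fitted at a different point would report a different, possibly opposite, feature effect.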

    Common Pitfalls

    A surrogate only approximates the black-box model and cannot represent it perfectly. Its fidelity, i.e., how closely its predictions track the black box's predictions, must therefore be measured before the explanation is trusted.

    Origin & History

    The concept originates in simulation optimization in the 1970s. Ribeiro et al. introduced local surrogate models with LIME in 2016. Global surrogates later became popular in XAI research as an alternative to SHAP.

    Comparisons & Differences

    Surrogate Model vs. SHAP

    SHAP attributes a prediction to individual features using Shapley values, with consistency guarantees from cooperative game theory; surrogate models instead approximate the black box's overall behavior with a simpler model and offer no such guarantees.

    Surrogate Model vs. Distillation

    Knowledge distillation trains a smaller model to replace the original for prediction; a surrogate model is trained on the same predictions but for explanation, so interpretability takes priority over raw accuracy.
