
    ELBO (Evidence Lower Bound)

    Also known as:
    Evidence Lower Bound
    Variational Lower Bound
    ELBO Loss
    VLB
    Updated: 2/11/2026

    ELBO is a lower bound on the log-likelihood (the "evidence") in variational inference – the central objective function for VAEs and diffusion models.

    Quick Summary

    ELBO = reconstruction term minus KL divergence – the objective function that makes VAEs and diffusion models trainable by replacing an intractable likelihood with a tractable bound.

    Explanation

    ELBO = reconstruction term (the expected log-likelihood of the input under the decoder) − KL divergence (how far the learned approximate posterior deviates from the prior). Maximizing the ELBO approximates maximum-likelihood training. In diffusion models, the ELBO decomposes into a sum over T denoising steps.
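    This decomposition can be sketched numerically. Below is a minimal NumPy example for a VAE with a diagonal-Gaussian posterior, standard-normal prior, and Bernoulli decoder; the function name and toy values are illustrative, not a reference implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def gaussian_elbo(x, x_recon, mu, log_var):
        """ELBO for a VAE with prior N(0, I) and Bernoulli decoder.

        x, x_recon : arrays in (0, 1), shape (batch, dim)
        mu, log_var: parameters of the approximate posterior q(z|x)
        """
        eps = 1e-7  # numerical safety for the logs
        # Reconstruction term: log p(x|z), a Bernoulli cross-entropy
        # summed over data dimensions.
        recon = np.sum(x * np.log(x_recon + eps)
                       + (1 - x) * np.log(1 - x_recon + eps), axis=1)
        # KL(q(z|x) || N(0, I)) in closed form for diagonal Gaussians.
        kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=1)
        # ELBO = reconstruction - KL; training maximizes its batch mean.
        return np.mean(recon - kl)

    # Toy batch: a "perfect" reconstruction and a posterior equal to
    # the prior (mu = 0, log_var = 0), so the KL term is exactly zero.
    x = rng.uniform(0.01, 0.99, size=(4, 8))
    elbo = gaussian_elbo(x, x_recon=x,
                         mu=np.zeros((4, 2)), log_var=np.zeros((4, 2)))
    ```

    With the posterior equal to the prior, the KL term vanishes and the ELBO reduces to the (negative) reconstruction cross-entropy; in real training the two terms trade off against each other.
    
    
    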

    Marketing Relevance

    ELBO is the central training objective behind modern generative models – understand the ELBO and you understand why VAEs and diffusion models work.

    Example

    During VAE training, the ELBO increases as reconstruction improves and the latent space becomes more structured. Monitoring the ELBO's decomposition shows which of the two terms dominates the loss.

    Common Pitfalls

    The ELBO is only a lower bound – a high ELBO doesn't guarantee good samples. The KL divergence term can also cause posterior collapse, where the posterior matches the prior so closely that the decoder ignores the latent code.

    Origin & History

    ELBO originates from variational inference (Jordan et al., 1999). Kingma & Welling (2013) made ELBO practically relevant through the VAE. Ho et al. (2020) showed that the DDPM loss is a weighted ELBO decomposition.

    Comparisons & Differences

    ELBO (Evidence Lower Bound) vs. Maximum Likelihood

    Maximum likelihood optimizes exact likelihood; ELBO optimizes a lower bound (tractable approximation).
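    The gap between the two objectives can be written explicitly: for any approximate posterior $q_\phi(z \mid x)$, the log-evidence splits into the ELBO plus a nonnegative KL term, so maximizing the ELBO pushes up a floor under the true log-likelihood:

    ```latex
    \log p_\theta(x)
      = \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log \frac{p_\theta(x, z)}{q_\phi(z \mid x)}\right]}_{\text{ELBO}}
      + \underbrace{\mathrm{KL}\!\left(q_\phi(z \mid x) \,\|\, p_\theta(z \mid x)\right)}_{\ge\, 0}
      \;\ge\; \text{ELBO}
    ```

    The bound is tight exactly when $q_\phi(z \mid x)$ matches the true posterior $p_\theta(z \mid x)$.
    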

    ELBO (Evidence Lower Bound) vs. GAN Loss

    ELBO maximizes a likelihood approximation; GAN loss optimizes an adversarial game without explicit likelihood.


    Related Terms

    VAE (Variational Autoencoder)
    KL Divergence
    Variational Inference
    DDPM (Denoising Diffusion Probabilistic Model)
    Maximum Likelihood