
    SiLU / Swish

    Also known as:
    SiLU
    Swish
    Sigmoid Linear Unit
    Swish Activation
    Updated: 2/12/2026

    SiLU/Swish = x · σ(x) – a smooth, self-gated activation function that outperforms ReLU in many benchmarks and is the basis of SwiGLU.

    Quick Summary

    SiLU/Swish = x · sigmoid(x) – smoother than ReLU, outperforms it in many benchmarks and is the basis of SwiGLU in modern LLMs.

    Explanation

    Swish was discovered by Google Brain (2017) through an automated search over activation functions: f(x) = x · sigmoid(βx), where β = 1 is the standard setting and recovers SiLU. Unlike ReLU, the function is smooth, non-monotonic, and unbounded above (while bounded below). SwiGLU (Shazeer, 2020) combines Swish with Gated Linear Units for even better LLM results.
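
    A minimal sketch of the formula, assuming PyTorch (which ships a built-in SiLU); the function name swish and the test values are illustrative only:

        import torch

        def swish(x: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
            # f(x) = x * sigmoid(beta * x); beta = 1 reduces to SiLU
            return x * torch.sigmoid(beta * x)

        x = torch.linspace(-3.0, 3.0, 7)
        print(swish(x))                     # hand-rolled SiLU (beta = 1)
        print(torch.nn.functional.silu(x))  # PyTorch's built-in SiLU, should match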

    Marketing Relevance

    SiLU/Swish is the bridge from ReLU to SwiGLU – central for understanding modern LLM architectures (LLaMA, PaLM).

    Common Pitfalls

    SiLU/Swish is more expensive to compute than ReLU. β is rarely tuned as a hyperparameter, since β = 1 is almost always optimal (a trainable-β sketch follows below). In the latest LLMs it has already been superseded by SwiGLU.
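
    For the rare case where tuning β is worthwhile, a sketch of Swish with a learnable β, assuming PyTorch; the module name SwishBeta is made up for illustration:

        import torch
        import torch.nn as nn

        class SwishBeta(nn.Module):
            """Swish with a learnable beta, initialized to 1.0 (i.e. plain SiLU)."""
            def __init__(self, beta: float = 1.0):
                super().__init__()
                self.beta = nn.Parameter(torch.tensor(beta))  # trained with the rest of the model

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                return x * torch.sigmoid(self.beta * x)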

    Origin & History

    Ramachandran, Zoph & Le (Google Brain, 2017) found Swish through automated search over activation functions. SiLU (Elfwing et al., 2018) was independently proposed. PyTorch and JAX standardized on SiLU. SwiGLU (Shazeer, 2020) became the dominant variant in LLaMA and PaLM.

    Comparisons & Differences

    SiLU / Swish vs. ReLU

    ReLU: piecewise linear, hard thresholding at 0. SiLU/Swish: smooth, non-monotonic, self-gated; usually better results at higher computational cost.
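
    A quick numerical illustration, assuming PyTorch (printed values are approximate):

        import torch
        import torch.nn.functional as F

        x = torch.tensor([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0])
        print(F.relu(x))  # [0.00, 0.00, 0.00, 0.00, 0.50, 1.00, 2.00]  hard-clips negatives
        print(F.silu(x))  # roughly [-0.24, -0.27, -0.19, 0.00, 0.31, 0.73, 1.76]

    SiLU lets small negative inputs through (its minimum is about -0.28), which is the smooth, non-monotonic behavior that ReLU lacks.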

    SiLU / Swish vs. GELU

    GELU weights the input by the standard normal CDF, i.e. x · Φ(x); SiLU/Swish uses the sigmoid instead. Performance is practically similar, with SiLU slightly faster to evaluate.
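
    A side-by-side check of the two curves, again assuming PyTorch; the closeness of the outputs reflects their similar benchmark performance:

        import torch
        import torch.nn.functional as F

        x = torch.linspace(-3.0, 3.0, 7)
        print(F.gelu(x))  # normal-CDF weighting
        print(F.silu(x))  # sigmoid weighting, slightly cheaper per element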
