
    PReLU (Parametric Rectified Linear Unit)

    Also known as:
    Parametric ReLU
    PReLU
    Parametric Rectifier
    Updated: 2/12/2026

A ReLU variant with a learnable negative-slope parameter: the leak factor is optimized during training rather than fixed by hand.

    Quick Summary

PReLU makes the leak factor of Leaky ReLU learnable: instead of fixing the negative slope as a hyperparameter, the network learns the optimal value during training.

    Explanation

PReLU is defined as f(x) = x for x > 0 and f(x) = aᵢx for x ≤ 0. The slope aᵢ is learned by backpropagation, either per channel or shared across a layer. He et al. showed that PReLU improved accuracy on ImageNet in their deep convolutional networks.
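As a rough sketch of the mechanics (assuming PyTorch; the built-in torch.nn.PReLU provides the same behavior), the learnable slope is just an ordinary parameter updated alongside the weights:

```python
import torch
import torch.nn as nn

class MyPReLU(nn.Module):
    """Minimal PReLU: f(x) = x for x > 0, f(x) = a * x for x <= 0."""

    def __init__(self, num_parameters: int = 1, init: float = 0.25):
        super().__init__()
        # One learnable slope per channel (num_parameters = C) or a single
        # shared slope (num_parameters = 1); He et al. initialize a = 0.25.
        self.a = nn.Parameter(torch.full((num_parameters,), init))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reshape so a per-channel slope broadcasts over (N, C, ...) inputs.
        a = self.a if self.a.numel() == 1 else self.a.view(1, -1, *([1] * (x.dim() - 2)))
        return torch.where(x > 0, x, a * x)

x = torch.randn(4, 8, 16, 16)
print(MyPReLU(num_parameters=8)(x).shape)  # torch.Size([4, 8, 16, 16])
```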

    Marketing Relevance

PReLU showed that activation functions themselves can carry learnable parameters, an early step toward neural architecture search (NAS) and adaptive architectures.

    Origin & History

He et al. (2015) introduced PReLU in "Delving Deep into Rectifiers", together with Kaiming initialization. The paper was the first to report surpassing human-level accuracy on the ImageNet classification benchmark.

    Comparisons & Differences

    PReLU (Parametric Rectified Linear Unit) vs. Leaky ReLU

Leaky ReLU uses a fixed negative slope set as a hyperparameter (commonly 0.01); PReLU learns the slope aᵢ from data, gaining flexibility at the cost of only a few extra parameters (one per channel or per layer). The contrast is visible directly in code, as shown below.
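A minimal PyTorch illustration (0.01 here is a common Leaky ReLU default, not a value mandated by the paper):

```python
import torch.nn as nn

leaky = nn.LeakyReLU(negative_slope=0.01)  # slope fixed by hand
prelu = nn.PReLU(num_parameters=1)         # slope is a trainable Parameter

print(list(leaky.parameters()))  # [] -- nothing for the optimizer to update
print(list(prelu.parameters()))  # one tensor, initialized to 0.25 by default
```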
