Log Loss
A loss function that evaluates the quality of predicted probabilities. It penalizes confident wrong predictions heavily: the penalty grows without bound as a wrong prediction approaches certainty.
Explanation
Log Loss = -1/n × Σᵢ [yᵢ·log(pᵢ) + (1-yᵢ)·log(1-pᵢ)], where yᵢ ∈ {0, 1} is the true label of example i and pᵢ is the predicted probability of the positive class. Perfect predictions give a loss of 0; a confident prediction that turns out wrong (pᵢ → 0 while yᵢ = 1) drives the loss toward infinity.
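The formula translates directly into code. A minimal sketch in pure Python (the function name log_loss is illustrative, not tied to any particular library):

```python
import math

def log_loss(y_true, p_pred):
    """Average negative log-likelihood of binary labels under predicted probabilities."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(y_true)

# Confident, correct predictions: loss near 0
print(log_loss([1, 0, 1], [0.99, 0.01, 0.99]))  # ≈ 0.010

# One confident wrong prediction (last example) dominates the average
print(log_loss([1, 0, 1], [0.99, 0.01, 0.01]))  # ≈ 1.542
```

Note how a single bad prediction raises the average loss by two orders of magnitude, which is exactly the behavior described above.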
Marketing Relevance
For marketing models that output probabilities (e.g. click, conversion, or churn likelihood), Log Loss is the standard evaluation metric: it measures not only whether predictions are correct but how well calibrated the predicted probabilities are.
Common Pitfalls
Sensitive to outlier predictions: a single confident wrong prediction can dominate the average. Predicted probabilities of exactly 0 or 1 produce infinite loss, so values are typically clipped to a range like [ε, 1-ε]. The raw number is also hard to interpret without a baseline, such as the loss obtained by always predicting the base rate of the positive class.
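Both pitfalls can be sketched in a few lines of Python. The clipping bound of 1e-15 is a common choice, not a universal standard, and the function name is illustrative:

```python
import math

EPS = 1e-15  # common clipping bound; an assumption, not a universal standard

def clipped_log_loss(y_true, p_pred, eps=EPS):
    """Log Loss with probabilities clipped away from 0 and 1 to avoid log(0)."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1 - eps)  # clip into [eps, 1 - eps]
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(y_true)

# Without clipping, p = 0.0 for a positive label would give infinite loss.
print(clipped_log_loss([1, 0], [0.0, 1.0]))  # ≈ 34.5, large but finite

# Baseline: always predict the base rate of the positive class.
y = [1, 1, 0, 0, 0]
base_rate = sum(y) / len(y)  # 0.4
print(clipped_log_loss(y, [base_rate] * len(y)))  # ≈ 0.673
```

A model's Log Loss is only meaningful relative to that base-rate baseline: a model that cannot beat it adds no information.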
Origin & History
Log Loss derives from maximum likelihood estimation: minimizing it is equivalent to maximizing the likelihood of the observed labels under the predicted probabilities, and for binary outcomes it is mathematically identical to binary cross-entropy.
Comparisons & Differences
Log Loss vs. AUC
AUC measures ranking quality (does the model score positives above negatives?), while Log Loss measures calibration (are the predicted probabilities themselves accurate?). A model can achieve a perfect AUC while producing nearly useless probabilities, so the two metrics together give a more complete picture.
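The distinction can be demonstrated with a toy example. Below, AUC is computed as the probability that a random positive outscores a random negative (a standard equivalent definition); the example predictions and function names are illustrative:

```python
import math

def log_loss(y_true, p_pred):
    """Average negative log-likelihood of binary labels under predicted probabilities."""
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(y_true, p_pred)) / len(y_true)

def auc(y_true, p_pred):
    """Probability that a random positive example scores higher than a random negative."""
    pos = [p for y, p in zip(y_true, p_pred) if y == 1]
    neg = [p for y, p in zip(y_true, p_pred) if y == 0]
    wins = sum((pp > pn) + 0.5 * (pp == pn) for pp in pos for pn in neg)
    return wins / (len(pos) * len(neg))

y = [1, 1, 0, 0]
calibrated = [0.9, 0.8, 0.2, 0.1]      # correct order, informative probabilities
uninformative = [0.52, 0.51, 0.49, 0.48]  # same order, probabilities near 0.5

# Both prediction sets rank the classes perfectly:
print(auc(y, calibrated), auc(y, uninformative))  # 1.0 1.0

# ...but only the calibrated one achieves a low Log Loss:
print(round(log_loss(y, calibrated), 3))     # 0.164
print(round(log_loss(y, uninformative), 3))  # 0.664
```

Identical AUC, very different Log Loss: ranking alone says nothing about whether the probabilities can be trusted.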