
    ROC Curve

    Also known as:
    ROC
    Receiver Operating Characteristic
    ROC Plot
    Updated: 2/12/2026

A plot showing the True Positive Rate (TPR) against the False Positive Rate (FPR) across all classification thresholds.

    Quick Summary

The ROC curve plots the TPR against the FPR across all thresholds; the area under it (AUC) summarizes classification performance in a single number.

    Explanation

    The ROC curve visualizes the tradeoff between sensitivity and specificity. The area under the curve (AUC) summarizes overall performance in one number (0.5 = random, 1.0 = perfect).
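The sweep described above can be sketched directly: sort predictions by score, treat each distinct score as a threshold, and accumulate true and false positives. This is a minimal NumPy sketch (the function names are illustrative, and it assumes scores have no ties; tied scores would need grouping):

```python
import numpy as np

def roc_curve_points(y_true, scores):
    """Compute (FPR, TPR) pairs by sweeping the decision threshold
    over the predictions sorted by descending score."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    order = np.argsort(-scores)          # highest scores first
    y_sorted = y_true[order]
    tps = np.cumsum(y_sorted)            # true positives at each cut
    fps = np.cumsum(1 - y_sorted)        # false positives at each cut
    tpr = np.concatenate(([0.0], tps / y_sorted.sum()))
    fpr = np.concatenate(([0.0], fps / (1 - y_sorted).sum()))
    return fpr, tpr

def auc(fpr, tpr):
    """Area under the curve via the trapezoidal rule."""
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))

# Toy example: 2 negatives, 2 positives
y = [0, 0, 1, 1]
s = [0.1, 0.4, 0.35, 0.8]
fpr, tpr = roc_curve_points(y, s)
area = auc(fpr, tpr)
print(f"AUC: {area:.2f}")  # → AUC: 0.75
```

In practice one would use a library implementation such as `sklearn.metrics.roc_curve`, which also handles tied scores.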

    Marketing Relevance

The ROC curve and AUC are standard tools for comparing classification models: they are threshold-independent and easy to communicate.

    Example

A fraud detector with an AUC of 0.95 might, at a suitable threshold, catch 90% of fraud cases while flagging only 5% of legitimate transactions (the exact trade-off depends on the curve's shape, not on the AUC alone).
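Choosing such an operating point means reading the curve: fix a false-positive budget and take the highest TPR within it. A short sketch, using hypothetical ROC points (illustrative values, not from a real model):

```python
import numpy as np

# Hypothetical ROC points for a fraud detector (illustrative values)
fpr = np.array([0.0, 0.01, 0.05, 0.10, 0.30, 1.0])
tpr = np.array([0.0, 0.60, 0.90, 0.94, 0.98, 1.0])

budget = 0.05                # tolerate at most 5% false positives
ok = fpr <= budget           # operating points within the budget
best_tpr = tpr[ok].max()     # highest detection rate we can afford
print(f"TPR at FPR <= {budget:.0%}: {best_tpr:.0%}")  # → TPR at FPR <= 5%: 90%
```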

    Common Pitfalls

The ROC curve can be overly optimistic under severe class imbalance; the precision-recall (PR) curve is more informative in such cases.

    Origin & History

    The ROC curve was developed during WWII for radar signal detection and became an ML standard in the 1990s.

    Comparisons & Differences

    ROC Curve vs. PR Curve

    ROC shows TPR vs FPR; PR curve shows precision vs recall. PR is more informative with class imbalance.
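The difference shows up numerically on imbalanced data: the two summary scores can diverge even for the same ranking. A small sketch with illustrative toy data (2 positives among 10 samples), using scikit-learn's `roc_auc_score` and `average_precision_score` (the latter summarizes the PR curve):

```python
from sklearn.metrics import roc_auc_score, average_precision_score

# Imbalanced toy data: one positive ranked first, the other fourth
# (illustrative values, not from a real model)
y_true = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]

auc = roc_auc_score(y_true, scores)            # ranking quality vs. all negatives
ap = average_precision_score(y_true, scores)   # area under the PR curve
print(f"ROC AUC: {auc:.3f}, Average precision: {ap:.3f}")
# → ROC AUC: 0.875, Average precision: 0.750
```

The same misranked positive costs more under the PR view, because precision is computed against the small positive class rather than the large negative one.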

