1. Binary AUROC Recap

  • AUROC = Probability that a randomly chosen positive sample has a higher predicted score than a randomly chosen negative sample.
  • Well-defined only for binary classification.
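This pairwise definition can be computed directly. A minimal NumPy sketch with made-up scores (ties between a positive and a negative count as 1/2):

```python
import numpy as np

# Made-up model scores for illustration
pos_scores = np.array([0.9, 0.8, 0.6])  # samples whose true label is positive
neg_scores = np.array([0.7, 0.3, 0.2])  # samples whose true label is negative

# Compare every positive against every negative; ties count as 1/2
diff = pos_scores[:, None] - neg_scores[None, :]
auroc = (diff > 0).mean() + 0.5 * (diff == 0).mean()
print(auroc)  # 8 of 9 pairs are ordered correctly -> 0.888...
```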

2. Multiclass Case

When we have $K > 2$ classes, AUROC is not directly defined.

  • Solution: Extend the binary idea using One-vs-Rest (OvR).
  • For each class $i$:
    • Treat class $i$ as positive.
    • Treat all other classes as negative.
    • Compute AUROC$_i$ for this binary problem.

3. Definition

$AUROC_{OvR}(i) = AUROC(\text{class}_i \;\text{vs}\; \text{rest})$

Then you can:

  • Report AUROC$_i$ for each class separately.
  • Or average them to get Macro AUROC:

$AUROC_{macro} = \frac{1}{K} \sum_{i=1}^K AUROC_{OvR}(i)$
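The per-class OvR computation and the macro average can be sketched in plain NumPy (the function names here are illustrative, not a library API):

```python
import numpy as np

def binary_auroc(pos_scores, neg_scores):
    """P(random positive outscores random negative); ties count as 1/2."""
    diff = np.asarray(pos_scores)[:, None] - np.asarray(neg_scores)[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def ovr_auroc(y_true, y_score, k):
    """AUROC for class k vs rest, using column k of the score matrix."""
    y_true = np.asarray(y_true)
    return binary_auroc(y_score[y_true == k, k], y_score[y_true != k, k])

def macro_auroc(y_true, y_score):
    """Unweighted mean of the K one-vs-rest AUROCs."""
    K = y_score.shape[1]
    return np.mean([ovr_auroc(y_true, y_score, k) for k in range(K)])

# Toy 2-class check: the scores rank both classes perfectly
y_true = np.array([0, 0, 1, 1])
y_score = np.array([[0.9, 0.1], [0.6, 0.4], [0.4, 0.6], [0.2, 0.8]])
print(macro_auroc(y_true, y_score))  # 1.0 (perfect separation)
```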


4. Example

Suppose we have 3 classes (A, B, C):

  • Compute OvR AUROC:
    • AUROC(A vs B+C) = 0.83
    • AUROC(B vs A+C) = 0.76
    • AUROC(C vs A+B) = 0.70
  • Each AUROC$_i$ tells you how well the model separates that class from all others.
  • Macro AUROC = (0.83 + 0.76 + 0.70) / 3 ≈ 0.763
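If scikit-learn is available, `roc_auc_score` implements the same OvR and macro computations. The probabilities below are invented for illustration and do not reproduce the 0.83/0.76/0.70 values above:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Invented probabilities for 6 samples over classes A=0, B=1, C=2 (rows sum to 1)
y_true = np.array([0, 0, 1, 1, 2, 2])
y_proba = np.array([
    [0.7, 0.2, 0.1],
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.5, 0.2, 0.3],
    [0.2, 0.2, 0.6],
    [0.4, 0.2, 0.4],
])

# Per-class OvR AUROC: binarize the labels, score with that class's column
per_class = [roc_auc_score((y_true == k).astype(int), y_proba[:, k]) for k in range(3)]

# Macro AUROC in one call: the unweighted mean of the per-class values
macro = roc_auc_score(y_true, y_proba, multi_class="ovr", average="macro")
print(per_class, macro)  # macro -> 0.875
```

The per-class values are what let you spot which classes the model struggles to separate; `average="macro"` then collapses them into a single number, exactly as in the formula above.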

5. Interpretation

  • One-vs-Rest AUROC gives a class-specific discrimination measure.
  • Useful when you want to know: “How well does the model detect THIS class compared to the rest?”
  • Especially important in imbalanced data, e.g., fraud detection (fraud = rare class).

Summary

  • One-vs-Rest AUROC = AUROC computed for one class vs all others.
  • Extends AUROC from binary → multiclass.
  • You can report per-class values or average (macro AUROC).
  • Tells you how well each class is separable from the rest.