
Calculates the Area Under the Receiver Operating Characteristic (ROC) Curve from prediction probabilities and true binary outcomes. AUC summarizes the ROC curve as a single number measuring the classifier's ability to distinguish between the two classes: 1 indicates perfect separation, 0.5 indicates no discrimination beyond chance.

Usage

dx_auc(truth, predprob, detail = "full")

Arguments

truth

Vector of true binary class outcomes (0 and 1).

predprob

Vector of prediction probabilities corresponding to the true outcomes.

detail

Character string specifying the level of detail in the output: "simple" returns just the AUC value; "full" (the default) returns the AUC along with its confidence interval.

Value

Depending on the detail argument, returns either a single numeric AUC value ("simple") or a data frame containing the AUC and its confidence interval ("full").

Examples

# Assuming you have a vector of true class labels and predicted probabilities
true_classes <- c(1, 0, 1, 1, 0, 0, 1)
predicted_probs <- c(0.9, 0.1, 0.8, 0.75, 0.33, 0.25, 0.67)
simple_auc <- dx_auc(true_classes, predicted_probs, detail = "simple")
#> Warning: ci.auc() of a ROC curve with AUC == 1 is always 1-1 and can be misleading.
#> Warning: ci.auc() of a ROC curve with AUC == 1 is always 1-1 and can be misleading.
detailed_auc <- dx_auc(true_classes, predicted_probs)
#> Warning: ci.auc() of a ROC curve with AUC == 1 is always 1-1 and can be misleading.
#> Warning: ci.auc() of a ROC curve with AUC == 1 is always 1-1 and can be misleading.
print(simple_auc)
#> [1] 1
print(detailed_auc)
#> # A tibble: 1 × 8
#>   measure summary           estimate conf_low conf_high fraction conf_type notes
#>   <chr>   <chr>                <dbl>    <dbl>     <dbl> <chr>    <chr>     <chr>
#> 1 AUC ROC 1.000 (1.000, 1.…        1        1         1 ""       DeLong    ""
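The AUC of 1 above can be cross-checked by hand. AUC equals the probability that a randomly chosen positive case receives a higher predicted probability than a randomly chosen negative case (the Mann–Whitney U formulation). The sketch below is an illustration of that equivalence, not dx_auc()'s internal implementation (the warnings above suggest it relies on pROC's ci.auc()); manual_auc() is a hypothetical helper defined here for the example.

```r
# Hedged sketch: AUC via the Mann-Whitney U statistic.
# For every (positive, negative) pair, count 1 if the positive case is
# ranked higher, 0.5 on ties, then average over all pairs.
manual_auc <- function(truth, predprob) {
  pos <- predprob[truth == 1]
  neg <- predprob[truth == 0]
  # outer() builds the full grid of positive-vs-negative comparisons
  wins <- outer(pos, neg, ">") + 0.5 * outer(pos, neg, "==")
  mean(wins)
}

true_classes <- c(1, 0, 1, 1, 0, 0, 1)
predicted_probs <- c(0.9, 0.1, 0.8, 0.75, 0.33, 0.25, 0.67)
manual_auc(true_classes, predicted_probs)
#> [1] 1
```

Here every positive case (0.9, 0.8, 0.75, 0.67) outranks every negative case (0.1, 0.33, 0.25), so all pairwise comparisons are wins and the AUC is exactly 1, matching dx_auc() above.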