Evaluation metrics


For performance evaluation we adopt the recently standardized ISO/IEC 30107-3 metrics: Attack Presentation Classification Error Rate (APCER), Normal Presentation Classification Error Rate (NPCER) and Average Classification Error Rate (ACER), where APCER and NPCER measure the error rate on fake (attack) and live (bona fide) samples, respectively. Inspired by face recognition, the Receiver Operating Characteristic (ROC) curve is introduced for large-scale face anti-spoofing on our dataset; it can be used to select a threshold that trades off the false positive rate (FPR) against the true positive rate (TPR) according to the requirements of real applications.

The FINAL evaluation metric is the value of TPR@FPR=10^-4.

We will also report other metrics, such as TPR@FPR=10^-2, TPR@FPR=10^-3 and ACER.
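As a minimal, non-official sketch of how this metric can be computed, the snippet below derives TPR at a fixed FPR from per-sample liveness scores via an ROC curve; the toy labels and scores, the label convention (1 = live, 0 = attack) and the linear interpolation at the operating point are illustrative assumptions, not the organizers' scoring code.

```python
# Minimal sketch (not the official scoring code): TPR at a fixed FPR.
# Assumption: label 1 = live (bona fide), label 0 = attack; a higher
# score means "more likely live". The data below is toy data.
import numpy as np
from sklearn.metrics import roc_curve

def tpr_at_fpr(labels, scores, target_fpr=1e-4):
    # roc_curve sweeps all score thresholds and returns (FPR, TPR) pairs
    fpr, tpr, _ = roc_curve(labels, scores)
    # Linearly interpolate the TPR at the requested FPR operating point
    return float(np.interp(target_fpr, fpr, tpr))

labels = [1, 1, 0, 0, 1, 0]
scores = [0.90, 0.80, 0.30, 0.10, 0.70, 0.40]
print(tpr_at_fpr(labels, scores, target_fpr=1e-4))
```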

Other metrics used for face anti-spoofing are also given below; a short code sketch after the list shows how they are computed:

  • Attack Presentation Classification Error Rate (APCER):

APCER = FP / (TN + FP)

  • Normal Presentation Classification Error Rate (NPCER):

NPCER = FN / (FN + TP)

  • Average Classification Error Rate (ACER):

ACER = (APCER + NPCER) / 2

  • False Positive Rate (FPR):

FPR = FP / (FP + TN)

  • True Positive Rate (TPR):

TPR = TP / (TP + FN)
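To make the definitions above concrete, here is a minimal sketch that computes all five quantities from the confusion counts at a fixed decision threshold; the 0.5 threshold and the toy data are assumptions for illustration only.

```python
# Minimal sketch of the error rates defined above. Assumption: label 1 =
# live, label 0 = attack; a sample is predicted live when score >= threshold.
import numpy as np

def anti_spoofing_metrics(labels, scores, threshold=0.5):
    y = np.asarray(labels, dtype=bool)        # True = live ground truth
    p = np.asarray(scores) >= threshold       # True = predicted live
    tp = np.sum(p & y)      # live correctly accepted
    tn = np.sum(~p & ~y)    # attack correctly rejected
    fp = np.sum(p & ~y)     # attack wrongly accepted
    fn = np.sum(~p & y)     # live wrongly rejected
    apcer = fp / (tn + fp)  # error rate on attack samples
    npcer = fn / (fn + tp)  # error rate on live samples
    acer = (apcer + npcer) / 2
    fpr = fp / (fp + tn)    # identical to APCER by definition
    tpr = tp / (tp + fn)    # equals 1 - NPCER
    return {"APCER": apcer, "NPCER": npcer, "ACER": acer,
            "FPR": fpr, "TPR": tpr}

print(anti_spoofing_metrics([1, 1, 0, 0, 1, 0],
                            [0.90, 0.80, 0.30, 0.10, 0.70, 0.40]))
```

Note that with live samples as the positive class, FPR coincides with APCER and TPR equals 1 - NPCER, which is what links the challenge's TPR@FPR metric to the ISO/IEC 30107-3 error rates.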
