DEV Community
Arnaldo Gualberto
Posted on • Originally published at Medium

Custom metrics for Keras/TensorFlow

Recently, I published an article about binary classification metrics, which you can check here. The article gives a brief explanation of the most traditional metrics and presents less famous ones like NPV, Specificity, MCC, and EER. If you don't know some of these metrics, take a look at the article: it's only a 7-minute read, and I'm sure it will be useful for you.

In this article, I decided to share the implementation of these metrics for Deep Learning frameworks. It includes recall, precision, specificity, negative predictive value (NPV), f1-score, Matthews' Correlation Coefficient (MCC), and the equal error rate (EER). You can use them in both Keras and TensorFlow v1/v2.

The Code

Here's the complete code for all metrics:

```python
import numpy as np
import tensorflow as tf
from keras import backend as K


def recall(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    recall_keras = true_positives / (possible_positives + K.epsilon())
    return recall_keras


def precision(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision_keras = true_positives / (predicted_positives + K.epsilon())
    return precision_keras


def specificity(y_true, y_pred):
    tn = K.sum(K.round(K.clip((1 - y_true) * (1 - y_pred), 0, 1)))
    fp = K.sum(K.round(K.clip((1 - y_true) * y_pred, 0, 1)))
    return tn / (tn + fp + K.epsilon())


def negative_predictive_value(y_true, y_pred):
    tn = K.sum(K.round(K.clip((1 - y_true) * (1 - y_pred), 0, 1)))
    fn = K.sum(K.round(K.clip(y_true * (1 - y_pred), 0, 1)))
    return tn / (tn + fn + K.epsilon())


def f1(y_true, y_pred):
    p = precision(y_true, y_pred)
    r = recall(y_true, y_pred)
    return 2 * ((p * r) / (p + r + K.epsilon()))


def fbeta(y_true, y_pred, beta=2):
    y_pred = K.clip(y_pred, 0, 1)

    tp = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)), axis=1)
    fp = K.sum(K.round(K.clip(y_pred - y_true, 0, 1)), axis=1)
    fn = K.sum(K.round(K.clip(y_true - y_pred, 0, 1)), axis=1)

    p = tp / (tp + fp + K.epsilon())
    r = tp / (tp + fn + K.epsilon())

    num = (1 + beta ** 2) * (p * r)
    den = (beta ** 2 * p + r + K.epsilon())
    return K.mean(num / den)


def matthews_correlation_coefficient(y_true, y_pred):
    tp = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    tn = K.sum(K.round(K.clip((1 - y_true) * (1 - y_pred), 0, 1)))
    fp = K.sum(K.round(K.clip((1 - y_true) * y_pred, 0, 1)))
    fn = K.sum(K.round(K.clip(y_true * (1 - y_pred), 0, 1)))

    num = tp * tn - fp * fn
    den = (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    return num / K.sqrt(den + K.epsilon())


def equal_error_rate(y_true, y_pred):
    n_imp = tf.count_nonzero(tf.equal(y_true, 0), dtype=tf.float32) + tf.constant(K.epsilon())
    n_gen = tf.count_nonzero(tf.equal(y_true, 1), dtype=tf.float32) + tf.constant(K.epsilon())

    scores_imp = tf.boolean_mask(y_pred, tf.equal(y_true, 0))
    scores_gen = tf.boolean_mask(y_pred, tf.equal(y_true, 1))

    loop_vars = (tf.constant(0.0), tf.constant(1.0), tf.constant(0.0))
    cond = lambda t, fpr, fnr: tf.greater_equal(fpr, fnr)
    body = lambda t, fpr, fnr: (
        t + 0.001,
        tf.divide(tf.count_nonzero(tf.greater_equal(scores_imp, t), dtype=tf.float32), n_imp),
        tf.divide(tf.count_nonzero(tf.less(scores_gen, t), dtype=tf.float32), n_gen))

    t, fpr, fnr = tf.while_loop(cond, body, loop_vars, back_prop=False)
    eer = (fpr + fnr) / 2

    return eer
```

Almost all the metrics in the code are described in the article mentioned above, so you can find a detailed explanation of each one there.
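If you want to sanity-check the formulas outside a Keras/TensorFlow graph, the same quantities can be computed with plain NumPy. The snippet below is just a sketch on a made-up batch of hard 0/1 labels (it is not part of the original code); it mirrors the confusion-matrix arithmetic used by the functions above:

```python
import numpy as np

# Toy batch with made-up hard 0/1 labels and predictions (for illustration only)
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 1])

# Confusion-matrix entries
tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives:  3
tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives:  2
fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives: 2
fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives: 1

recall = tp / (tp + fn)                      # 3/4 = 0.75
precision = tp / (tp + fp)                   # 3/5 = 0.60
specificity = tn / (tn + fp)                 # 2/4 = 0.50
npv = tn / (tn + fn)                         # 2/3 ≈ 0.67
f1 = 2 * precision * recall / (precision + recall)
mcc = (tp * tn - fp * fn) / np.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
```

The Keras versions add `K.epsilon()` to each denominator so the graph never divides by zero on a batch with no positives (or no negatives); the NumPy sketch skips that guard for clarity.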

How to use in Keras or TensorFlow

If you use Keras or TensorFlow (especially v2), it’s quite easy to use such metrics. Here’s an example:

```python
model = ...  # define your model as usual
model.compile(optimizer="adam",  # you can use any other optimizer
              loss='binary_crossentropy',
              metrics=["accuracy",
                       precision,
                       recall,
                       f1,
                       fbeta,
                       specificity,
                       negative_predictive_value,
                       matthews_correlation_coefficient,
                       equal_error_rate])
model.fit(...)  # train your model
```

As you can see, you can compute all the custom metrics at once. Please remember that:

  • since they are binary classification metrics, you can only use them in binary classification problems. They may still produce numbers for multiclass or regression problems, but those numbers will be incorrect.
  • they are meant to be used as metrics only; you can't use them as losses. In fact, your loss must still be "binary_crossentropy", since it's a binary classification problem.
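The `equal_error_rate` function above is the least obvious of the metrics: it sweeps a threshold `t` upward in steps of 0.001 until the false-positive rate drops below the false-negative rate, then averages the two. The same logic can be sketched in plain NumPy (the helper name `equal_error_rate_np` is mine, not from the original code), assuming scores in [0, 1]:

```python
import numpy as np


def equal_error_rate_np(y_true, y_pred, step=0.001):
    """NumPy sketch of the threshold sweep in the TF equal_error_rate.

    Raises the threshold t until the false-positive rate (impostor
    scores accepted) falls below the false-negative rate (genuine
    scores rejected), then returns the average of the two rates.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    scores_imp = y_pred[y_true == 0]  # impostor (negative-class) scores
    scores_gen = y_pred[y_true == 1]  # genuine (positive-class) scores

    t, fpr, fnr = 0.0, 1.0, 0.0
    while fpr >= fnr:
        fpr = np.mean(scores_imp >= t)  # negatives wrongly accepted at t
        fnr = np.mean(scores_gen < t)   # positives wrongly rejected at t
        t += step
    return (fpr + fnr) / 2
```

Note that the TF version disables gradients with `back_prop=False`; like the rounding in the other metrics, this is another reason these functions can only serve as metrics, never as losses.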

Final Words

You can also check my work in:
