> Error metric

In my opinion, misclassification rate is not an ideal measure of model performance for this task, because it gives no credit in the many cases where a prediction is close to the target (i.e., where either the dominant or the complementary emotion is predicted correctly). For instance, suppose the target is 'angrily contempt', model1 predicts 'angrily disgusted' and model2 predicts 'happy'. Model1 should be evaluated as the better model, but it will not be if misclassification rate is used (a small sketch below illustrates this).
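To make this concrete, here is a minimal sketch of a partial-credit score (the label strings and the 0/0.5/1 weighting are my own, purely for illustration, not a proposed official metric) that would separate model1 from model2:

```python
# Partial-credit score: 1.0 if both emotions match,
# 0.5 if exactly one of (dominant, complementary) matches, 0.0 otherwise.
def pair_score(target, prediction):
    matches = sum(t == p for t, p in zip(target, prediction))
    return matches / 2.0

target = ("angry", "contempt")    # 'angrily contempt'
model1 = ("angry", "disgust")     # 'angrily disgusted'
model2 = ("happy", "none")        # 'happy'

print(pair_score(target, model1))  # 0.5 -- dominant emotion correct
print(pair_score(target, model2))  # 0.0 -- same as plain misclassification
```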
So, is it possible to change the current error metric?
My suggestions:
- cross-entropy loss
- top-k accuracy (e.g., top-5)
- a combined cross-entropy loss with three terms: (cross-entropy loss for the dominant prediction) + (cross-entropy loss for the complementary prediction) + (an additional term that penalises predicting the dominant and complementary emotions in the wrong order), or something along those lines (a rough sketch follows below)
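For the third suggestion, here is a rough PyTorch sketch of what I have in mind, assuming a single softmax over the basic emotion classes (so the dominant and complementary emotions are read off as the top two predicted classes); the hinge-style ordering term and the margin value are just one possible choice:

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, dom_target, comp_target, margin=0.1):
    # Cross-entropy pulls probability mass onto both labelled emotions.
    ce_dom = F.cross_entropy(logits, dom_target)
    ce_comp = F.cross_entropy(logits, comp_target)

    probs = F.softmax(logits, dim=1)
    p_dom = probs.gather(1, dom_target.unsqueeze(1)).squeeze(1)
    p_comp = probs.gather(1, comp_target.unsqueeze(1)).squeeze(1)

    # Ordering term: the dominant emotion should receive more
    # probability than the complementary one, by at least `margin`.
    order = F.relu(margin - (p_dom - p_comp)).mean()

    return ce_dom + ce_comp + order
```

The ordering term vanishes once the true dominant emotion receives at least `margin` more probability than the complementary one, so the loss reduces to the two cross-entropies for correctly ordered predictions.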

Best,
Boris

Posted by: bknyaz @ Jan. 24, 2017, 3:44 p.m.

Dear Boris,
Thanks for your message. Indeed, there are several ways to define the metric. I strongly encourage you to carry out the metric analysis you suggest for a workshop paper and submit it to our workshop.
Best
Shahab

Posted by: icv @ Jan. 24, 2017, 6:37 p.m.