HAHA@IberLEF2021: Humor Analysis based on Human Annotation Forum


> Seems like there is a problem with humor_rating

From what I can see on the leaderboard and from my own models' performance, the rating values seem to be random or misplaced.

Posted by: moradnejad @ May 24, 2021, 7:02 p.m.

Hi,

Humor rating can be very subjective, as different people have different ideas about how funny a joke is. In our case we use the average rating from several annotators (at least three, ideally five or more), but of course the scores can still vary widely. Besides the average rating we calculated, you can also see the individual votes for each tweet in the training data.
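For illustration, here is a minimal sketch of how such an average could be recomputed from the individual votes. The file name and the vote columns (vote_1 through vote_5) are hypothetical placeholders, not the actual field names in the training file:

```python
# Sketch: recompute an average humor rating from per-annotator votes.
# File name and column names below are assumptions for illustration only.
import pandas as pd

def average_rating(row, vote_columns):
    """Mean of the non-missing annotator votes for a single tweet."""
    votes = [row[c] for c in vote_columns if pd.notna(row[c])]
    return sum(votes) / len(votes) if votes else None

train = pd.read_csv("haha_2021_train.csv")        # hypothetical file name
vote_cols = [f"vote_{i}" for i in range(1, 6)]    # hypothetical column names
train["avg_rating_recomputed"] = train.apply(
    lambda r: average_rating(r, vote_cols), axis=1
)
```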

As mentioned in a previous post (https://competitions.codalab.org/forums/26786/5626/), our baseline algorithm is an SVM regression that uses TF-IDF features computed over the training corpus. Although no one has beaten the baseline score yet, some participants have achieved results similar to it.
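For reference, a TF-IDF plus SVM regression baseline of the kind described above could look roughly like the sketch below in scikit-learn. The exact preprocessing and hyperparameters of the official baseline are not given here, so the vectorizer settings and kernel choice are illustrative assumptions:

```python
# Sketch of a TF-IDF + SVM regression baseline (illustrative, not the
# official implementation; hyperparameters are assumptions).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

def train_baseline(texts, ratings):
    """Fit a TF-IDF + SVR pipeline on tweet texts and their average ratings."""
    model = make_pipeline(
        TfidfVectorizer(lowercase=True, ngram_range=(1, 2)),
        SVR(kernel="linear"),
    )
    model.fit(texts, ratings)
    return model

# Toy usage example
model = train_baseline(["qué chiste tan bueno", "no me hizo gracia"], [4.2, 1.5])
preds = model.predict(["otro chiste"])
```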

Regards,
Luis

Posted by: luischir @ May 24, 2021, 10:42 p.m.

Mr. Chiruzzo,
Thank you for your quick reply.

Posted by: moradnejad @ May 25, 2021, 6:35 a.m.