SemEval 2020 - Task 10: Emphasis Selection For Written Text in Visual Media

**NOTE:** This is a Natural Language Processing task; no experience in Computer Vision or Graphic Design is needed.

Overview

Visual communication relies heavily on images and short texts. Whether in flyers, posters, ads, social media posts, or motivational messages, such content is usually carefully designed to grab a viewer’s attention and convey a message as efficiently as possible. For text, word emphasis is used to better capture the intent, removing ambiguity that may exist in plain text. Word emphasis can clarify or even change the meaning of a sentence by drawing attention to specific information, and it can be realized with colors, backgrounds, or fonts such as italics and boldface. Our shared task is designed to invite research in this area. We expect to see a variety of traditional and modern NLP techniques used to model emphasis. Whether you are an expert in Natural Language Processing or new to it, we encourage you to participate in this fun new task.

Task

The purpose of this shared task is to design automatic methods for emphasis selection, i.e., choosing candidates for emphasis in short written text, to enable automated design assistance in authoring.

Here are some examples from our dataset:

  • Hard work never killed a man.
  • Never give up on the things that make you smile.
  • Throw like a Girl
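
Concretely, a system should assign each word a score reflecting how suitable it is for emphasis and propose the highest-scoring words as candidates. The sketch below illustrates this for the second example; the scores and the output layout are purely hypothetical and are not the official dataset or submission format.

```python
# Hypothetical per-word emphasis scores for the second example above.
# Scores and layout are illustrative only; refer to the released dataset
# and evaluation script for the official format.
sentence = "Never give up on the things that make you smile."
scores = {
    "Never": 0.71, "give": 0.44, "up": 0.40, "on": 0.05, "the": 0.03,
    "things": 0.18, "that": 0.04, "make": 0.12, "you": 0.10, "smile.": 0.62,
}

# The highest-scoring words are the system's emphasis candidates.
top_candidates = sorted(scores, key=scores.get, reverse=True)[:3]
print(top_candidates)  # ['Never', 'smile.', 'give']
```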

Challenges

No additional context from the user or from the rest of the design, such as a background image, is provided. The datasets contain very short texts, usually fewer than 10 words. Word emphasis patterns are author- and domain-specific, so without knowing the author’s intent and considering only the input text, multiple emphasis selections can be valid. A good model, however, should be able to capture the inter-subjectivity, or common sense, within the given annotations and label words according to higher agreement.
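
For example, if each word carries binary emphasis labels from several annotators, the per-word agreement can serve as the probability a model is trained to predict. The sketch below assumes nine annotators per word; the annotator count and data layout are assumptions for illustration, not the official dataset format.

```python
# Rough sketch: turning per-annotator emphasis labels into per-word
# agreement scores (1 = the annotator marked the word for emphasis).
# The annotator count and layout are assumptions, not the official format.
labels = {
    "Throw": [1, 1, 0, 1, 1, 0, 1, 1, 1],
    "like":  [0, 0, 0, 1, 0, 0, 0, 0, 0],
    "a":     [0, 0, 0, 0, 0, 0, 0, 0, 0],
    "Girl":  [1, 1, 1, 1, 0, 1, 1, 0, 1],
}

# Fraction of annotators who emphasized each word.
agreement = {word: round(sum(votes) / len(votes), 2) for word, votes in labels.items()}
print(agreement)  # {'Throw': 0.78, 'like': 0.11, 'a': 0.0, 'Girl': 0.78}
```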

Important Notes

We will announce an award for each of the following categories:

  • The winner(s) of the task – based on the ranking score
  • The best system description paper (best results interpretation)
  • The best negative results paper

We encourage all teams to describe their submission in a SemEval-2020 paper (ACL format), including teams with negative results.
We encourage all teams to open source their implementations.
During the evaluation phase, only the final valid submission on CodaLab will be taken as the official submission to the competition.

 

Important Dates

  • Trial data ready ------------- July 31, 2019
  • Training data ready ------------- September 4, 2019
  • Test data ready ------------- December 3, 2019
  • Evaluation start ------------- January 10, 2020
  • Evaluation end ------------- January 31, 2020
  • Paper submission due ------------- February 23, 2020
  • Notification to authors ------------- March 29, 2020
  • Camera ready due ------------- April 5, 2020
  • SemEval workshop ------------- Summer 2020

 

Register and Participate

Get started by filling out this form, then register your team under the "Participate" tab. You can now download the dataset and evaluation script.

Feel free to join the Google group for task-related news and discussions: semeval-2020-task-10-all@googlegroups.com

Competition website: http://ritual.uh.edu/semeval2020-task10-emphasis-selection/

 

References

“Learning Emphasis Selection for Written Text in Visual Media from Crowd-Sourced Label Distributions.” In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019).

 

Organizers

Reza (Amirreza) Shirani, University of Houston
Franck Dernoncourt, Adobe Research
Jose Echevarria, Adobe Research
Nedim Lipka, Adobe Research
Paul Asente, Adobe Research
Seokhwan Kim, Amazon Alexa AI
Thamar Solorio, University of Houston

Evaluation Criteria

$\text{Match}_m$: For each instance $x$ in the test set $D_{test}$, we select a set $S_m(x)$ of $m \in \{1, \ldots, 4\}$ words with the top $m$ probabilities according to the ground truth. Analogously, we select a prediction set $\hat{S}_m(x)$ for each $m$, based on the predicted probabilities.

We define the metric $\text{Match}_m$ as follows:
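
The equation itself did not survive on this page; the definition below is reconstructed from the set definitions above and is consistent with the ACL 2019 paper listed under References:

$$
\text{Match}_m \;=\; \frac{1}{\left|D_{test}\right|} \sum_{x \in D_{test}} \frac{\left|S_m(x) \cap \hat{S}_m(x)\right|}{m}
$$

The task's ranking score is the average of $\text{Match}_m$ over $m \in \{1, \dots, 4\}$. A minimal sketch of this computation in Python follows; the function name and data layout are assumptions for illustration, not the official evaluation script.

```python
# Minimal sketch of Match_m: average over instances of |S_m(x) ∩ Ŝ_m(x)| / m.
# Each instance is a (ground-truth probabilities, predicted probabilities)
# pair with one probability per word. Names and layout are illustrative only.

def match_m(instances, m):
    total = 0.0
    for probs_true, probs_pred in instances:
        # Indices of the top-m words under each probability assignment.
        top_true = set(sorted(range(len(probs_true)),
                              key=lambda i: probs_true[i], reverse=True)[:m])
        top_pred = set(sorted(range(len(probs_pred)),
                              key=lambda i: probs_pred[i], reverse=True)[:m])
        total += len(top_true & top_pred) / m
    return total / len(instances)

# Example: a six-word instance whose ground truth emphasizes words 0 and 5 most.
instance = ([0.9, 0.1, 0.2, 0.1, 0.3, 0.8],   # ground-truth emphasis probabilities
            [0.3, 0.2, 0.1, 0.1, 0.6, 0.9])   # model-predicted probabilities
print(match_m([instance], m=2))  # 0.5: the prediction recovers one of the two top words
```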

