Task 10 - SemEval-2020

Organized by shallowLearner


SemEval 2020 - Task 10: Emphasis Selection For Written Text in Visual Media

**NOTE: We have released the test set; you can find it on the GitHub page.

**NOTE: The SemEval-2020 workshop has been accepted to co-locate with COLING-2020 (originally scheduled for September 13-14, since postponed to December; see Important Dates below). The evaluation will start on Feb 19, not Jan 10. Please find all our new tentative dates below.

**NOTE: This is a Natural Language Processing task; no experience in Computer Vision or Graphic Design is needed.

**NOTE: Training and development sets are available to download.

Overview

Visual communication relies heavily on images and short texts. Flyers, posters, ads, social media posts, and motivational messages are usually carefully designed to grab a viewer’s attention and convey a message as efficiently as possible. In text, word emphasis is used to better capture the author’s intent, removing ambiguity that may exist in plain text. Word emphasis can clarify or even change the meaning of a sentence by drawing attention to specific information, and it can be expressed with colors, backgrounds, or font styles such as italics and boldface. Our shared task is designed to invite research in this area. We expect to see a variety of traditional and modern NLP techniques used to model emphasis. Whether you are an expert or new to Natural Language Processing, we encourage you to participate in this fun new task.

Task

The purpose of this shared task is to design automatic methods for emphasis selection, i.e., choosing candidates for emphasis in short written text, to enable automated design assistance in authoring.

Here are some examples from our dataset:

  • Hard work never killed a man.
  • Never give up on the things that make you smile.
  • Throw like a Girl
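To make the expected input and output concrete, here is a toy Python sketch: a system receives a tokenized sentence and returns one emphasis score per token. It is purely illustrative; the function name and the trivial length-based scoring rule are ours, not part of the task definition or the organizers' code, and a real system would replace the heuristic with a learned model.

    def emphasis_scores(tokens):
        """Toy heuristic: score each token by its relative length.

        The output contract is what matters here: one emphasis
        probability per token of the input sentence.
        """
        lengths = [len(t) for t in tokens]
        total = sum(lengths)
        return [l / total for l in lengths]

    tokens = "Never give up on the things that make you smile .".split()
    scores = emphasis_scores(tokens)

    # Show the four tokens this baseline would emphasize most
    for tok, s in sorted(zip(tokens, scores), key=lambda p: -p[1])[:4]:
        print(f"{tok}\t{s:.3f}")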

Challenges

No additional context from the user or from the rest of the design, such as a background image, is provided. The datasets contain very short texts, usually fewer than 10 words. Word emphasis patterns are author- and domain-specific: without knowing the author’s intent and considering only the input text, multiple emphasis selections are valid. A good model, however, should capture the inter-subjectivity, or common sense, within the given annotations and ultimately label words according to the higher-agreement choices.

Important Notes

We will announce awards for each of the following categories:

  • The winner(s) of the task – based on the ranking score
  • The best system description paper (best results interpretation)
  • The best negative results paper

We encourage all teams to describe their submission in a SemEval-2020 paper (ACL format), including teams with negative results.
We encourage all teams to open source their implementations.
During the evaluation phase, only the final valid submissions on CodaLab will be taken as the official submissions to the competition.

 

Important Dates

  • 8 January 2020: Test set made available to SemEval co-organizers
  • 19 February 2020: Evaluation starts (test data released to participants)
  • 11 March 2020: Evaluation ends
  • 21 March 2020: Results posted
  • 15 May 2020: System description paper submissions due
  • 22 May 2020: Task description paper submissions due
  • 24 June 2020: Author notifications
  • 8 July 2020: Camera-ready submissions due
  • 12-13 December 2020: SemEval 2020

 

Register and Participate

Get started by filling out this form, then register your team on the "Participate" tab. You can now download the dataset and evaluation script.

Feel free to join the Google group for task-related news and discussions: semeval-2020-task-10-all@googlegroups.com

Competition website: http://ritual.uh.edu/semeval2020-task10-emphasis-selection/

 

References

Shirani, Amirreza, Franck Dernoncourt, Paul Asente, Nedim Lipka, Seokhwan Kim, Jose Echevarria, and Thamar Solorio. “Learning Emphasis Selection for Written Text in Visual Media from Crowd-Sourced Label Distributions.” In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), pp. 1167-1172. 2019.

 

Organizers

Reza (Amirreza) Shirani, University of Houston
Franck Dernoncourt, Adobe Research
Jose Echevarria, Adobe Research
Nedim Lipka, Adobe Research
Paul Asente, Adobe Research
Thamar Solorio, University of Houston

Evaluation Criteria

Match_m: For each instance x in the test set D_test, we select a set S_m(x) of m ∈ {1, …, 4} words with the top m probabilities according to the ground truth. Analogously, we select a prediction set Ŝ_m(x) for each m, based on the predicted probabilities.

We define the metric Match_m as follows:

    \mathrm{Match}_m = \frac{1}{|D_{\mathrm{test}}|} \sum_{x \in D_{\mathrm{test}}} \frac{\left| S_m(x) \cap \hat{S}_m(x) \right|}{\min(m, |x|)}

where |x| is the number of words in instance x.
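For reference, here is a minimal NumPy sketch of this metric. It is illustrative only: the official evaluation script (available for download) is authoritative, and details such as how ties between equal probabilities are broken may differ from the stable, position-based tie-breaking assumed here.

    import numpy as np

    def top_m_set(probs, m):
        # Indices of the m tokens with the highest emphasis probability;
        # ties are broken by token position (stable sort on negated scores).
        order = np.argsort(-np.asarray(probs), kind="stable")
        return set(order[:m].tolist())

    def match_m(gold, pred, m):
        # gold, pred: one list of per-token probabilities per instance,
        # both over the same tokenization.
        total = 0.0
        for g, p in zip(gold, pred):
            total += len(top_m_set(g, m) & top_m_set(p, m)) / min(m, len(g))
        return total / len(gold)

    # One 5-token instance, m = 2: gold top-2 is {1, 2}, predicted top-2
    # is {1, 3}, so Match_2 = 1 / min(2, 5) = 0.5
    gold = [[0.1, 0.8, 0.7, 0.2, 0.0]]
    pred = [[0.2, 0.9, 0.1, 0.6, 0.0]]
    print(match_m(gold, pred, 2))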

Terms and Conditions

This page enumerates the terms and conditions of the competition.

Guidelines for SemEval System Papers

 

**NOTE: System papers are due May 15, 2020 by 23:59, Anywhere on Earth (AoE) time.

Participants of SemEval-2020 are encouraged to submit a system-description paper to provide details of their system, resources used, results, and analysis. These description papers will be part of the official SemEval-2020 proceedings, which will be published on http://aclanthology.info/.

 Dates

  • COLING 2020, the host conference for SemEval 2020, has been postponed to December and has shifted its deadlines accordingly. SemEval will keep its schedule mostly the same to avoid too much overlap with SemEval 2021. Deadlines for system description papers, task description papers, notifications, and camera-ready submissions have all been shifted by two weeks. Updated timeline:

    • 15 May 2020 - System description paper submissions due

    • 22 May 2020 - Task description paper submissions due

    • 24 June 2020 - Author notifications

    • 8 July 2020 - Camera-ready submissions due

Guidelines and Formatting

  • All papers for SemEval 2020 should follow the COLING camera-ready formatting guidelines. NOTE: this is a change from the previously-recommended formatting! Updated style files can be found here: http://alt.qcri.org/semeval2020/index.php?id=papers

  • System description papers may be at most 5 pages long, not including the bibliography (there is no page limit for references). An extra page will be given for the camera-ready version to incorporate reviewer suggestions.

  • You can find important details about system description paper submissions such as submission site, Best Task and Best Paper Awards here.

  • Papers should follow the SemEval guidelines for writing system description papers.

  • Note that SemEval submissions are not anonymous; author names should be included.

  • Paper titles should be in this format: "<team name> at SemEval-2020 Task 10: [Some More Title Text]"

  • It may also be helpful to look at some of the papers from past SemEval competitions, e.g., here.

  • We strongly encourage authors to report results on the dev and test sets when training only on the training set. In this way, future work can more easily be compared with SemEval task and system papers.

  • We would very much like to see interesting interpretations of the results, as well as discussion of the qualitative strengths and weaknesses of the approach.

  • You do not have to repeat the details of the task and data. Just summarize the task and cite the task description paper (details below). Then you can get into details of your submissions, experiments, results, and analyses.

Citation

  • Task description paper:

      @InProceedings{shirani2020semeval,
        author    = {Shirani, Amirreza and Dernoncourt, Franck and Lipka, Nedim and Asente, Paul and Echevarria, Jose and Solorio, Thamar},
        title     = {SemEval-2020 Task 10: Emphasis selection for written text in visual media},
        booktitle = {Proceedings of the 14th International Workshop on Semantic Evaluation},
        year      = {2020}
      }

  • All system papers must cite the task description paper above. They should also cite the ACL 2019 paper below, which introduces the task and dataset:

  • Shirani, Amirreza, Franck Dernoncourt, Paul Asente, Nedim Lipka, Seokhwan Kim, Jose Echevarria, and Thamar Solorio. "Learning emphasis selection for written text in visual media from crowd-sourced label distributions." In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 1167-1172. 2019.

Awards

Please consider submitting your system description paper regardless of your rank on the leaderboard; any results, including negative results, are valuable scientific contributions to the community.

In addition to the winners on the leaderboard, we will announce the following:

  • The best result interpretation paper

  • The best negative results paper

 

Competition Phases

  • Practice: starts July 30, 2019, midnight UTC
  • Train: starts Sept. 4, 2019, midnight UTC
  • Evaluation: starts Feb. 19, 2020, midnight UTC
  • Post-Evaluation: starts March 12, 2020, midnight UTC
  • Competition ends: never
