Predicting Emphasis in Presentation Slides Shared Task


CAD21@AAAI21 Shared Task on Predicting Emphasis in Presentation Slides 

**NOTE: The Evaluation phase starts on November 23.

**NOTE: The second part of the training set has been released. 

**NOTE: This shared task is part of the AAAI-21 Workshop on Content Authoring and Design (CAD21).

**NOTE: This is a Natural Language Processing task; no experience in Computer Vision or Graphic Design is needed.

**NOTE: Publishing work on this shared task’s dataset outside CAD21 is not allowed until after December 2020.

 

Overview

The use of presentation slides has become so commonplace that researchers have developed resources meant to guide presenters in the design of effective slides. However, these guidelines offer advice only on overall style, such as choosing colors and font sizes that keep text readable from a distance, along with considerations for graphical representations of content. In this shared task, we ask participants to design automated approaches to predict emphasis in presentation slides, with the goal of improving the comprehension and visual appeal of the slides. By emphasis, we mean the use of special formatting (e.g., boldface or italics) to make a word or set of words stand out from the rest.

This shared task builds on our recent SemEval 2020 shared task on “Emphasis Selection in Short Texts”, which attracted 197 participants and received submissions from 31 teams. Unlike most existing work on slide generation, this shared task focuses on the design aspect of presentation slides and on automating it.




Task

The purpose of this shared task is to design automatic methods for emphasis selection, i.e., choosing candidates for emphasis in presentation slide text, to enable automated design assistance in authoring.

Participants are expected to leverage semantic features of the content to predict which fragments are appropriate to highlight, providing design assistance to slide creators. As an example of the expected results, consider the slides shown in the figure below. The slide on the left is plain, and while its text is readable, the slide on the right is easier for the audience to process. Emphasis can guide the audience to focus on a few key words: instead of reading the entire slide, the audience can read only the emphasized parts and keep their attention on the speaker.
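
To make the expected input and output concrete, here is a minimal baseline sketch in Python. It assigns each token an emphasis probability from simple surface cues; the heuristics and the token/score layout are illustrative assumptions, not the official data format, and real systems would rely on semantic features such as contextual embeddings.

```python
# A minimal heuristic baseline (illustrative only): score each token with a
# fixed emphasis probability based on surface cues.
from typing import List, Tuple

STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "or", "is", "are"}

def emphasis_scores(tokens: List[str]) -> List[Tuple[str, float]]:
    """Assign each token a heuristic emphasis probability in [0, 1]."""
    scored = []
    for tok in tokens:
        if tok.lower() in STOPWORDS:
            scored.append((tok, 0.05))  # function words are rarely emphasized
        elif tok[:1].isupper():
            scored.append((tok, 0.80))  # capitalized content words often stand out
        else:
            scored.append((tok, 0.40))  # default for other content words
    return scored

if __name__ == "__main__":
    for tok, p in emphasis_scores("Emphasis guides the audience".split()):
        print(f"{tok}\t{p:.2f}")
```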

Important Notes

We will announce an award in each of the following categories:

  • The winner(s) of the task, based on the ranking score

  • The best system description paper (best interpretation of results)

  • The best negative results paper

We encourage all teams, including those with negative results, to describe their submissions in AAAI format.

We encourage all teams to open source their implementations.

During the evaluation phase, only the final valid submissions on CodaLab will be taken as the official submissions to the competition.

 

Challenges

No additional context from the user or from the rest of the design, such as a background image, is provided. Word emphasis patterns are author- and domain-specific, and without knowing the author’s intent, multiple emphasis selections over the input text alone are valid. A good model, however, should capture the inter-subjectivity, or common sense, within the given annotations and label words according to higher agreement, as sketched below.
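
To illustrate labeling by agreement, the sketch below assumes each word receives a binary emphasis label from several annotators, as in our SemEval 2020 task, and derives the target probability as the fraction of annotators who emphasized it. The annotation layout is an assumption for illustration, not a specification of the released data.

```python
# Aggregate multiple annotators' binary emphasis labels into per-word target
# probabilities: the fraction of annotators who emphasized each word.
from typing import List

def agreement_probabilities(annotations: List[List[int]]) -> List[float]:
    """annotations: one binary label list per annotator, all of equal length."""
    n_annotators = len(annotations)
    return [sum(labels) / n_annotators for labels in zip(*annotations)]

# Three annotators labeling a four-word line:
print(agreement_probabilities([[1, 0, 1, 0],
                               [1, 0, 0, 0],
                               [1, 1, 1, 0]]))  # -> [1.0, 0.33..., 0.66..., 0.0]
```

A model that predicts these aggregated probabilities, rather than any single annotator's choices, captures the higher-agreement selections.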



Shared task Dates

  • Release Train data to participants: October 29, 2020

  • Evaluation start (release test data to participants): November 23, 2020

  • Evaluation end: November 25, 2020

  • Results posted: November 26, 2020

  • System description paper submissions due: December 3, 2020

  • Camera-ready submissions due: December 15, 2020

  • Workshop date: February 8-9, 2021



Register and Participate

Get started by filling out this form and then register your team under the "Participate" tab. You can now download the dataset and evaluation script.

Feel free to join the Google group for task-related news and discussions: cad21@googlegroups.com

The workshop website: https://ritual.uh.edu/aaai-21-workshop-on-content-authoring-and-design/

 


Organizers

Reza (Amirreza) Shirani, University of Houston
Franck Dernoncourt, Adobe Research
Jose Echevarria, Adobe Research
Nedim Lipka, Adobe Research
Paul Asente, Adobe Research
Thamar Solorio, University of Houston
--------------------------------------------
Giai Tran, University of Houston
Hieu Trinh, University of Houston

Evaluation Criteria

Match_m: For each instance x in the test set D_test, we select a set S_m(x) of m ∊ {1, 5, 10} words with the top-m probabilities according to the ground truth. Analogously, we select a prediction set Ŝ_m(x) for each m, based on the predicted probabilities.

We define the metric Match_m as follows:

  Match_m = (1 / |D_test|) · Σ_{x ∊ D_test} |S_m(x) ∩ Ŝ_m(x)| / min(m, |x|),

where |x| is the number of words in instance x.
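
For concreteness, the following Python sketch implements Match_m as defined above, assuming each test instance is given as parallel lists of ground-truth and predicted per-word probabilities. This input format is an assumption; the official evaluation script may differ.

```python
# Reference sketch of the Match_m metric. Each instance is a pair
# (gold, pred) of equal-length per-word probability lists.
from typing import List, Tuple

def top_m(probs: List[float], m: int) -> set:
    """Indices of the m words with the highest probabilities."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    return set(order[:m])

def match_m(instances: List[Tuple[List[float], List[float]]], m: int) -> float:
    total = 0.0
    for gold, pred in instances:
        overlap = top_m(gold, m) & top_m(pred, m)
        total += len(overlap) / min(m, len(gold))  # min handles short instances
    return total / len(instances)

# One five-word instance, evaluated at m = 1 and m = 5:
inst = [([0.9, 0.1, 0.8, 0.2, 0.0], [0.7, 0.2, 0.9, 0.1, 0.3])]
print(match_m(inst, 1), match_m(inst, 5))  # -> 0.0 1.0
```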



Leaderboard (top three)

  #  Username      Score
  1  SreyanGhosh   0.543
  2  zouwuhe       0.530
  3  hugq          0.525