SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning

Organized by Boyuan Zheng


Practice (Training data ready): Oct. 1, 2020, midnight UTC

Evaluation (Test data ready): Jan. 10, 2021, midnight UTC

Competition ends: Jan. 31, 2021, midnight UTC
ReCAM: Reading Comprehension of Abstract Meaning

Boyuan Zheng, Xiaoyu Yang, Yu-Ping Ruan, Quan Liu, Zhen-Hua Ling, Si Wei, Xiaodan Zhu

We consider computers' ability to understand, represent, and express abstract meaning a basic component of true natural language understanding. In the past decade, significant advances have been achieved or claimed in representation learning on many NLP problems. How does such success help develop models for understanding and modelling abstract meaning?

The aim of this shared task is to provide a benchmark for studying machines' ability to represent and understand abstract concepts. Specifically, computers are given passages to read and understand. If computers can digest the passages as humans do, we expect them to predict the abstract words that people use when writing summaries of those passages. Note that unlike some previous datasets such as CNN/Daily Mail (Hermann et al., 2015), which require computers to predict concrete concepts, e.g., named entities, our task asks algorithms to fill in abstract words removed from human-written summaries.


Our shared task has three subtasks. Subtasks 1 and 2 evaluate machine learning models' performance with regard to two definitions of abstractness (Spreen and Schulz, 1966; Changizi, 2008), which we call imperceptibility and nonspecificity, respectively. Subtask 3 aims to provide some insight into their relationship.

• Subtask 1: ReCAM-Imperceptibility

Concrete words refer to things, events, and properties that we can perceive directly with our senses (Spreen and Schulz, 1966; Coltheart, 1981; Turney et al., 2011), e.g., donut, trees, and red. In contrast, abstract words refer to ideas and concepts that are distant from immediate perception, e.g., objective, culture, and economy. In Subtask 1, participating systems are required to perform reading comprehension of abstract meaning for imperceptible concepts.

Below is an example. Given a passage and a question, your model needs to choose, from five candidates, the best word to replace @placeholder.
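As an illustration only, a question in this format might be represented as follows. The record below and its field names are hypothetical, not the official data schema:

```python
# Illustrative only: a hypothetical record for this task format.
# Field names and content are assumptions, not the official data schema.
sample = {
    "passage": "The central bank cut interest rates again, hoping to spur growth.",
    "question": "Analysts said the move reflected the weak @placeholder .",
    "options": ["economy", "donut", "tree", "river", "chair"],
    "label": 0,  # index of the correct option
}

def fill_placeholder(question: str, option: str) -> str:
    """Replace @placeholder with a candidate word."""
    return question.replace("@placeholder", option)

# A model scores each of the five filled-in questions and picks the best one.
candidates = [fill_placeholder(sample["question"], o) for o in sample["options"]]
```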


• Subtask 2: ReCAM-Nonspecificity

Subtask 2 focuses on a second definition of abstractness. Compared with concrete concepts such as groundhog and whale, hypernyms such as vertebrate are regarded as more abstract (Changizi, 2008).
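As a toy sketch of this notion (hand-coded hypernym chains for illustration; a real system would derive them from a resource such as WordNet), walking up a hypernym chain yields increasingly nonspecific, and hence more abstract, terms:

```python
# Toy hypernym chains, hand-coded purely for illustration.
HYPERNYMS = {
    "groundhog": "rodent",
    "rodent": "mammal",
    "whale": "mammal",
    "mammal": "vertebrate",
    "vertebrate": "animal",
}

def hypernym_chain(word: str) -> list[str]:
    """Walk up the chain: each step is a less specific, more abstract term."""
    chain = [word]
    while chain[-1] in HYPERNYMS:
        chain.append(HYPERNYMS[chain[-1]])
    return chain

print(hypernym_chain("groundhog"))
# ['groundhog', 'rodent', 'mammal', 'vertebrate', 'animal']
```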
• Subtask 3: ReCAM-Interaction

Subtask 3 aims to provide more insight into the relationship between the two views of abstractness. In this subtask, we test the performance of a system trained on one definition and evaluated on the other.

Important Dates

Trial data ready: July 31, 2020

Training data ready: October 1, 2020

Test data ready: December 3, 2020

Evaluation start: January 10, 2021

Evaluation end: January 31, 2021

Paper submission due: February 23, 2021

Notification to authors: March 29, 2021

Camera ready due: April 5, 2021

SemEval workshop: Summer 2021

Contact Us



 [1] Hermann, Karl Moritz and Kocisky, Tomas and Grefenstette, Edward and Espeholt, Lasse and Kay, Will and Suleyman, Mustafa and Blunsom, Phil. "Teaching Machines to Read and Comprehend."  Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, Quebec, Canada, 2015, December, 1693-1701 [PDF]

 [2] Otfried Spreen and Rudolph W. Schulz. "Parameters of abstraction, meaningfulness, and pronunciability for 329 nouns." Journal of Verbal Learning and Verbal Behavior, 5(5), 1966, 459-468 [PDF]

 [3] Mark A. Changizi. "Economically organized hierarchies in WordNet and the Oxford English Dictionary." Cogn. Syst. Res., 9(3), 2008, 214-228 [PDF]

 [4] Coltheart, Max. "The MRC psycholinguistic database." The Quarterly Journal of Experimental Psychology Section A, 33(4), 1981, 497-505 [PDF]

 [5] Turney, Peter D. and Neuman, Yair and Assaf, Dan and Cohen, Yohai. "Literal and Metaphorical Sense Identification through Concrete and Abstract Context."  Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, Edinburgh, United Kingdom, 2011, July, 680-690 [PDF]

Evaluation Method

Accuracy is used as the evaluation metric for all three subtasks.
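As a minimal sketch (the function below is illustrative, not the official scorer), accuracy is simply the fraction of questions whose predicted option index matches the gold label:

```python
def accuracy(predictions: list[int], gold: list[int]) -> float:
    """Fraction of questions whose predicted option index equals the gold index."""
    if len(predictions) != len(gold):
        raise ValueError("predictions and gold must have the same length")
    if not gold:
        return 0.0
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

print(accuracy([0, 2, 1, 4], [0, 2, 3, 4]))  # 0.75
```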

Participants need to submit the files of their trained models.

(Details will come soon)

Terms & Conditions

By submitting results to this competition, you consent to the public release of your scores at the SemEval-2021 workshop and in the associated proceedings, at the task organizers' discretion. Scores may include, but are not limited to, automatic and manual quantitative judgments, qualitative judgments, and such other metrics as the task organizers see fit. You accept that the ultimate decision of metric choice and score value is that of the task organizers.

You further agree that the task organizers are under no obligation to release scores and that scores may be withheld if it is the task organizers' judgment that the submission was incomplete, erroneous, deceptive, or violated the letter or spirit of the competition's rules. Inclusion of a submission's scores is not an endorsement of a team or individual's submission, system, or science.

You further agree that your system may be named according to the team name provided at the time of submission, or to a suitable shorthand as determined by the task organizers.

You agree not to redistribute the test data except in the manner prescribed by its license.


Since the three subtasks share the same task format, baselines are applicable to all subtasks.

GA Reader

Instructions for GA Reader

You are free to build a system from scratch using any available software packages and resources, as long as they do not violate the spirit of fair competition. To assist with testing ideas, we also provide the GA Reader, which you can build on. The use of this system is completely optional. The system is available.
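The GA (Gated-Attention) Reader (Dhingra et al., 2017) builds query-aware token representations: each passage token attends over the query tokens, and the resulting query summary gates the token vector elementwise. A minimal NumPy sketch of one gated-attention layer (dimensions and inputs are illustrative, not the provided implementation):

```python
import numpy as np

def gated_attention(doc: np.ndarray, query: np.ndarray) -> np.ndarray:
    """One gated-attention layer: doc is (T_d, h), query is (T_q, h).

    For each document token, compute soft attention over the query tokens,
    form a per-token query summary, and gate the token vector elementwise.
    """
    scores = doc @ query.T                          # (T_d, T_q) token-query affinities
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    alpha = np.exp(scores)
    alpha /= alpha.sum(axis=1, keepdims=True)       # softmax over query tokens
    q_summary = alpha @ query                       # (T_d, h) query summary per token
    return doc * q_summary                          # elementwise gating

rng = np.random.default_rng(0)
doc, query = rng.normal(size=(7, 16)), rng.normal(size=(4, 16))
out = gated_attention(doc, query)
print(out.shape)  # (7, 16)
```

In the full model, several such layers are stacked with recurrent encoders in between, and the final representations are scored against the candidate answers.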


  • Boyuan Zheng, Northeastern University
  • Xiaoyu Yang, Queen's University
  • Yu-Ping Ruan, University of Science and Technology of China
  • Quan Liu, iFlytek Research
  • Zhen-Hua Ling, University of Science and Technology of China
  • Xiaodan Zhu, Queen's University

