Word-level Post-Editing Effort QE shared task 2020

Organized by fblain

QE Shared Task 2020

The official shared task on Quality Estimation aims to further examine automatic methods for estimating the quality of neural machine translation output at run-time, without relying on reference translations. As in previous years, we cover estimation at various levels. Important elements introduced this year include: a new task where sentences are annotated with Direct Assessment (DA) scores instead of labels based on post-editing; a new multilingual sentence-level dataset mainly from Wikipedia articles, where the source articles can be retrieved for document-wide context; the availability of NMT models to explore system-internal information for the task.

In addition to generally advancing the state of the art at all prediction levels for modern neural MT, our specific goals are:

  • to create a new set of public benchmarks for tasks in quality estimation,
  • to investigate models for predicting DA scores and their relationship with models trained for predicting post-editing effort,
  • to study the feasibility of multilingual (or even language-independent) approaches to QE,
  • to study the influence of source-language document-level context on the task of QE, and
  • to analyse the applicability of NMT model information for QE.

Official task webpage: QE Shared Task 2020

This submission platform covers Task 2: Word-level *Post-Editing Effort*.

In Task 2, participating systems are required to detect errors on both the source side (to detect which words caused errors) and the target side (to detect mistranslated or missing words):

  • Target. Each token is tagged as either OK or BAD. Additionally, each gap between two words is tagged as BAD if one or more missing words should have been there, and OK otherwise. Note that the number of tags for each target sentence is 2*N+1, where N is the number of tokens in the sentence (see the sketch after this list).
  • Source. Tokens are tagged as OK if they were correctly translated, and BAD otherwise. Gaps are not tagged.
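For concreteness, the following is a minimal Python sketch (with made-up tokens and tags, not part of the official tools, and assuming the tag sequence starts and ends with a gap tag) of how word and gap tags interleave to give the 2*N+1 target-side tags:

    # Hypothetical MT output with N = 4 tokens
    mt_tokens = ["Das", "ist", "ein", "Beispiel"]
    word_tags = ["OK", "OK", "BAD", "OK"]            # one tag per MT token
    gap_tags = ["OK", "OK", "BAD", "OK", "OK"]       # one tag per gap (N + 1 gaps)

    # Interleave as gap_0, word_0, gap_1, word_1, ..., word_{N-1}, gap_N
    target_tags = []
    for gap, word in zip(gap_tags, word_tags):
        target_tags.extend([gap, word])
    target_tags.append(gap_tags[-1])

    assert len(target_tags) == 2 * len(mt_tokens) + 1    # 2*N + 1 = 9 tags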

Submission Format

We request up to three separate files, one for each type of label: MT words, MT gaps and source words. You can submit predictions for any of these label types, or for all of them, independently. The output of your system for each type of label should be word-level labels formatted as follows:

<LANGUAGE PAIR> <METHOD NAME> <TYPE> <SEGMENT NUMBER> <WORD INDEX> <WORD> <BINARY SCORE>

Where:

  • LANGUAGE PAIR is the ID (e.g., en-de) of the language pair.
  • METHOD NAME is the name of your quality estimation method.
  • TYPE is the type of label predicted: mt, gap or source.
  • SEGMENT NUMBER is the line number of the plain text translation file you are scoring (starting at 0).
  • WORD INDEX is the index of the word in the tokenised sentence, as given in the training/test sets (starting at 0). This will be the word index within the MT sentence or the source sentence, or the gap index for MT gaps.
  • WORD is the actual word. For the 'gap' submission, use a dummy symbol: 'gap'.
  • BINARY SCORE is either 'OK' for no issue or 'BAD' for any issue.

Each field should be delimited by a single tab character.
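As an illustration only, here is a minimal Python sketch for writing one of these files (the method name "baselineQE" and the prediction tuples are made up; adapt the language pair and type to your own submission):

    # Hypothetical predictions as (segment_number, word_index, word, label) tuples
    predictions = [
        (0, 0, "Das", "OK"),
        (0, 1, "ist", "BAD"),
    ]

    with open("predictions_mt.txt", "w", encoding="utf-8") as out:
        for seg, idx, word, label in predictions:
            # Tab-separated fields: language pair, method, type, segment, word index, word, label
            fields = ["en-de", "baselineQE", "mt", str(seg), str(idx), word, label]
            out.write("\t".join(fields) + "\n")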

To allow the automatic evaluation of your predictions, please submit them in files named as follows:

  • Words in the MT: predictions_mt.txt
  • Source words: predictions_src.txt
  • Gaps in the MT: predictions_gaps.txt

and package them in a single zipped file (.zip).
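A minimal sketch for packaging the files (the archive name "submission.zip" is an arbitrary choice; only the files you actually produced are included):

    import os
    import zipfile

    files = ["predictions_mt.txt", "predictions_src.txt", "predictions_gaps.txt"]

    with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as archive:
        for name in files:
            if os.path.exists(name):    # include only the sub-tasks you predicted
                archive.write(name)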

If you don't have predictions for some of the sub-tasks, only include what you have in your submission. If one of the files is missing, the scoring program will simply assign a score of 0 to the missing predictions.

Submissions will be evaluated in terms of MCC (Matthews correlation coefficient).
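For reference, MCC can be computed with scikit-learn as in the sketch below (the gold and predicted tags are made up; the official scorer may aggregate tags differently):

    from sklearn.metrics import matthews_corrcoef

    gold = ["OK", "OK", "BAD", "OK", "BAD"]
    pred = ["OK", "BAD", "BAD", "OK", "OK"]

    print(matthews_corrcoef(gold, pred))    # value in [-1, 1]; 1 = perfect agreement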

The data is publicly available, but since it has been provided by our industry partners it is subject to specific terms and conditions. However, these have no practical implications for the use of this data for research purposes.

Participants are allowed to explore any additional data and resources deemed relevant.

The provided QE labelled data is publicly available under Creative Commons Attribution Share Alike 4.0 International (https://github.com/facebookresearch/mlqe/blob/master/LICENSE).

Each participating team can submit at most 30 systems for each of the language pairs of each subtask (at most 5 per day).

English-German

Start: April 19, 2020, midnight UTC

English-Chinese

Start: April 19, 2020, midnight UTC

Competition Ends

Never
