The SVC-onGoing competition is based on the ICDAR 2021 Competition on On-Line Signature Verification (SVC 2021). Due to the importance of the competition for the research community, we have decided to establish SVC 2021 as an on-going competition (SVC-onGoing), where researchers can easily benchmark their systems against the state of the art on an open web platform (CodaLab) using large-scale public databases such as DeepSignDB and SVC2021_EvalDB, and standard experimental protocols.
The goal of SVC-onGoing is to evaluate the limits of on-line signature verification systems using large-scale public databases and popular scenarios (office/mobile), with the stylus or the finger as writing input. On-line signature verification technology has been evolving fast in recent years due to several factors: i) the evolution of acquisition technology, from the original Wacom devices specifically designed to capture handwriting and signatures in office-like scenarios through a pen stylus, to the touch screens of current mobile scenarios in which signatures can be captured anywhere with a personal smartphone using the finger; and ii) the widespread adoption of deep learning in many different areas, overcoming traditional handcrafted approaches and even human performance.
Therefore, the goal of this competition is to carry out a benchmark evaluation of the latest on-line signature verification technology using large-scale public databases, covering both traditional office-like scenarios (pen stylus) and the more challenging mobile scenarios with signatures performed using the finger over a touch screen. SVC-onGoing provides a complete panorama of the state of the art in the on-line signature verification field under realistic scenarios. Three tasks are considered:
• Task 1: Analysis of office scenarios using the stylus as input.
• Task 2: Analysis of mobile scenarios using the finger as input.
• Task 3: Analysis of both office and mobile scenarios simultaneously.
In addition, SVC-onGoing simulates realistic operational conditions considering random and skilled forgeries simultaneously in each task.
Participants need to register to take part in the competition. Please follow these instructions:
1) Fill out this form, including your information.
2) Sign up in CodaLab using the same email address provided in step 1).
3) Join the SVC-onGoing competition in CodaLab. Just click the “Participate” tab to register.
The evaluation metric considered is the popular Equal Error Rate (EER, %), as in most on-line signature verification studies in the literature. We expect to receive scores close to 1 for impostor comparisons and close to 0 for genuine comparisons.
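For reference, below is a minimal sketch of how the EER could be computed from a set of scores and their ground-truth labels. It assumes, as stated above, scores in [0, 1] that are low for genuine comparisons and high for impostor comparisons; the function is illustrative only and is not the official CodaLab scoring code.

```python
import numpy as np

def compute_eer(scores, labels):
    """Equal Error Rate (%) from dissimilarity scores.

    scores: values in [0, 1]; low = genuine, high = impostor (as expected by SVC-onGoing).
    labels: 1 for genuine comparisons, 0 for impostor comparisons.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    thresholds = np.sort(np.unique(scores))
    fars, frrs = [], []
    for thr in thresholds:
        # A comparison is accepted as genuine when its score is <= threshold.
        accepted = scores <= thr
        fars.append(np.mean(accepted[labels == 0]))   # impostors wrongly accepted
        frrs.append(np.mean(~accepted[labels == 1]))  # genuines wrongly rejected
    fars, frrs = np.array(fars), np.array(frrs)
    idx = np.argmin(np.abs(fars - frrs))              # operating point where FAR ~= FRR
    return 100.0 * (fars[idx] + frrs[idx]) / 2.0      # EER in %
```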
The SVC-onGoing competition is based on two different databases: DeepSignDB and SVC2021_EvalDB. Both databases contain an evaluation subset that researchers can use to test their systems on CodaLab (DeepSignDB and SVC2021_EvalDB tabs).
Development (DeepSignDB database):
In order to simulate conditions similar to those considered in the final evaluation stage of SVC-onGoing (based on SVC2021_EvalDB), we divide the DeepSignDB database into training and evaluation datasets.
The training dataset comprises 1,084 subjects, whereas the evaluation dataset comprises the remaining 442 subjects of the database. For the training of the systems (1,084 subjects), no instructions are given to the participants: they can use the data as they like. Nevertheless, for the evaluation of the systems (442 subjects), we provide the participants with the signature comparisons to run (you will receive an email with them after your registration in SVC-onGoing).
Participants can run their on-line signature verification systems over the signature comparison files provided to obtain the scores and test the EER performance on the public web platform (CodaLab) created for the competition (DeepSignDB tab). This way, participants can obtain a quantitative measure of the performance of their systems before the final evaluation stage of SVC-onGoing, which uses a different database (SVC2021_EvalDB).
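As an illustration only (not official competition code), the sketch below shows one possible way to turn a provided comparison file into a scores file. It assumes one comparison per line listing an enrolment signature and a test signature, and uses a hypothetical compare_signatures placeholder standing in for the participant's own verification system; both the file format and the function are assumptions.

```python
# Illustrative sketch only: the exact format of the comparison files and the
# verification model are the participant's responsibility.

def compare_signatures(enrol_path, test_path):
    """Hypothetical placeholder for the participant's verifier.
    It should return a score in [0, 1]: close to 0 for genuine pairs and
    close to 1 for impostor pairs, as required by SVC-onGoing."""
    raise NotImplementedError("plug in your own on-line signature verification system")

def run_comparisons(comparisons_file, output_file):
    """Read one comparison per line and write one score per row."""
    with open(comparisons_file) as f_in, open(output_file, "w") as f_out:
        for line in f_in:
            if not line.strip():
                continue
            # Assumption: each line lists an enrolment signature and a test signature.
            enrol_path, test_path = line.split()[:2]
            score = compare_signatures(enrol_path, test_path)
            f_out.write(f"{score:.6f}\n")

# Example usage (file names are placeholders):
# run_comparisons("task1_comparisons.txt", "task1_predictions.txt")
```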
Results are updated in CodaLab in real time and they are visible to everyone in a ranking dashboard.
Final Evaluation (SVC2021_EvalDB database):
Results are updated in CodaLab in real time and they are visible to everyone in a ranking dashboard.
The organizers may verify the truthfulness of the submitted scores if necessary.
As mentioned before, this is an on-going competition. Participants can test their developed systems using the DeepSignDB and SVC2021_EvalDB datasets and the corresponding signature comparison files through the CodaLab platform of the competition. Validation results are updated in CodaLab in real time!
A valid submission for CodaLab is a zip-compressed file including the .txt files containing the score predictions made for each task you want to participate in (i.e., one .txt file per task). We expect to receive scores close to 1 for impostor comparisons and close to 0 for genuine comparisons.
The signature comparison files (one .txt file per task) provided together with the evaluation dataset (442 users) must be used to obtain the score predictions.
Note that even if you upload multiple submissions, only your latest submission is displayed on the leaderboard.
The .txt files included in the submitted zip-compressed file must follow this nomenclature: task1_predictions.txt, task2_predictions.txt, task3_predictions.txt.
In case you want to participate in only one task (e.g., Task 1), submit the zip-compressed file including only the .txt file associated with that task (e.g., task1_predictions.txt). The result of that specific task will be updated in the leaderboard, whereas the value 999.999 will appear for the other tasks, indicating that no results have been submitted.
Finally, each prediction .txt file is expected to contain one prediction per row (column format), with the same number of rows as comparisons included in the corresponding signature comparison file (one .txt file per task).
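To make the packaging step concrete, a minimal sketch for building the submission zip is given below. The zip name is a placeholder, and placing the .txt files at the root of the archive is an assumption about what the CodaLab scorer expects; the file names follow the taskN_predictions.txt nomenclature above.

```python
import os
import zipfile

# Each .txt file must contain one score per row, in the same order and with the
# same number of rows as the corresponding signature comparison file.
prediction_files = ["task1_predictions.txt",
                    "task2_predictions.txt",
                    "task3_predictions.txt"]

# Include only the tasks you participate in; the zip name is a placeholder.
with zipfile.ZipFile("svc_ongoing_submission.zip", "w") as zf:
    for path in prediction_files:
        if os.path.exists(path):
            zf.write(path, arcname=os.path.basename(path))  # keep files at the zip root
```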
The submission procedure follows the same rules indicated above for the DeepSignDB database.
Biometrics and Data Pattern Analytics - BiDA Lab (UAM)
For further information, please contact: svc2021.contact@gmail.com
Start: March 22, 2021, midnight
Description: This benchmark is based on the evaluation dataset of DeepSignDB (442 users). Participants can obtain the DeepSignDB database after sending us the corresponding license agreement. In addition, we provide the participants with the signature comparisons to run in this benchmark after we confirm your registration to SVC-onGoing. The ranking is based on the Equal Error Rate (%), considering both random and skilled forgeries simultaneously. In any case, for completeness, participants can see the corresponding EER results for skilled and random forgeries on the main website: https://sites.google.com/view/SVC2021. In case you do not submit the predictions file of a specific task (e.g., Task 2), this will be indicated in the corresponding table with the value 999.999. We expect to receive scores close to 1 for impostor comparisons and close to 0 for genuine comparisons.
Start: March 22, 2021, midnight
Description: This benchmark is based on the complete dataset of SVC2021_EvalDB. Participants can obtain the SVC2021_EvalDB database after sending us the corresponding license agreement. In addition, we provide the participants with the signature comparisons to run in this benchmark after we confirm your registration to SVC-onGoing. The ranking is based on the Equal Error Rate (%), considering both random and skilled forgeries simultaneously. In any case, for completeness, participants can see the corresponding EER results for skilled and random forgeries on the main website: https://sites.google.com/view/SVC2021. In case you do not submit the predictions file of a specific task (e.g., Task 2), this will be indicated in the corresponding table with the value 999.999. We expect to receive scores close to 1 for impostor comparisons and close to 0 for genuine comparisons.
March 24, 2030, midnight