The 2nd edition of AIM: Advances in Image Manipulation workshop will be held on August 28th, 2020 in conjunction with ECCV 2020 in Glasgow, United Kingdom.
Image manipulation is a key computer vision task, aiming at the restoration of degraded image content, the filling in of missing information, or the transformation and/or manipulation needed to achieve a desired target (with respect to perceptual quality, content, or the performance of applications working on such images). Recent years have witnessed an increased interest from the vision and graphics communities in these fundamental topics of research. Not only has there been a constantly growing flow of related papers, but substantial progress has also been achieved.
Each step forward eases the use of images by people or computers for the fulfillment of further tasks, as image manipulation serves as an important frontend. Not surprisingly, there is an ever-growing range of applications in fields such as surveillance, the automotive industry, electronics, remote sensing, and medical image analysis. The emergence and ubiquitous use of mobile and wearable devices offer another fertile ground for additional applications and faster methods.
This workshop aims to provide an overview of the new trends and advances in those areas. Moreover, it will offer an opportunity for academic and industrial attendees to interact and explore collaborations.
Jointly with the AIM workshop, we run an AIM challenge on example-based video temporal super-resolution, that is, the task of super-resolving an input video in the temporal domain (increasing the number of frames) with an upsampling factor of x4, based on a set of prior examples of high-frame-rate videos. The aim of the challenge is to obtain a solution capable of producing high-temporal-resolution results with the best fidelity (PSNR, SSIM) to the ground truth.
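For intuition only, here is a minimal, hypothetical baseline sketch in Python (the function name and linear-blending strategy are our assumptions, not part of the challenge): it produces an x4 temporally upsampled sequence by linearly blending consecutive input frames. Competitive entries would instead rely on learned, motion-aware frame interpolation.

import numpy as np

def naive_x4_temporal_upsample(frames):
    """Hypothetical baseline: x4 temporal upsampling by linear blending.
    frames: list of HxWxC uint8 arrays; returns len(frames) * 4 - 3 frames."""
    out = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        prev_f, nxt_f = prev.astype(np.float64), nxt.astype(np.float64)
        for t in (0.0, 0.25, 0.5, 0.75):  # the input frame plus 3 in-between frames
            out.append(((1 - t) * prev_f + t * nxt_f).round().astype(np.uint8))
    out.append(frames[-1])  # keep the last input frame
    return out

For example, the 46 frames of a 15 fps clip would yield the 181 frames of the corresponding 60 fps version (45 x 4 + 1 = 181).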
The top-ranked participants will be awarded and invited to describe their solutions in a paper, following the ECCV submission guide for workshops, and to submit it to the associated AIM workshop at ECCV 2020.
More details can be found in the data section of the competition.
To learn more about each competition, to participate in the challenge, and to access the dataset, everybody is invited to register.
The training data is available to the registered participants.
We evaluate the video temporal super-resolution result by comparing it with the ground truth video frames.
The final goal of this challenge is to generate 60 fps video frames from 15 fps input video frames. Higher-frame-rate videos are generated by filling in the intermediate frames. During the development and testing phases, challenge participants will submit generated 30 fps video frames. After the testing phase, 60 fps video frames will be submitted.
To measure fidelity, we use the standard Peak Signal-to-Noise Ratio (PSNR) and, complementarily, the Structural Similarity (SSIM) index, as they are often employed in the literature. PSNR and SSIM implementations can be found in most image processing toolboxes. We report the average results over all the processed frames belonging to the evaluation dataset. The snippet below computes both metrics for a single frame pair:
import imageio
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# read the ground-truth (reference) and the restored (result) frames
ref_img = imageio.imread('ref_img_name.png')
res_img = imageio.imread('res_img_name.png')

# fidelity metrics used for the challenge ranking
psnr = peak_signal_noise_ratio(ref_img, res_img)
ssim = structural_similarity(ref_img, res_img, multichannel=True,
                             gaussian_weights=True, use_sample_covariance=False)
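For completeness, the hedged sketch below shows how these per-frame scores could be averaged over an evaluation set; the directory layout and PNG file names are our assumptions, and this is not the official scoring script.

import glob
import os
import imageio
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def average_scores(ref_dir, res_dir):
    # ref_dir holds the ground-truth frames, res_dir the restored ones,
    # matched by identical file names (assumed layout).
    psnrs, ssims = [], []
    for ref_path in sorted(glob.glob(os.path.join(ref_dir, '*.png'))):
        res_path = os.path.join(res_dir, os.path.basename(ref_path))
        ref_img = imageio.imread(ref_path)
        res_img = imageio.imread(res_path)
        psnrs.append(peak_signal_noise_ratio(ref_img, res_img))
        ssims.append(structural_similarity(ref_img, res_img, multichannel=True,
                                           gaussian_weights=True,
                                           use_sample_covariance=False))
    return np.mean(psnrs), np.mean(ssims)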
The provided dataset, REDS_VTSR, is generated from original videos captured at 120 fps. To generate the lower-frame-rate videos, the original videos are temporally subsampled (together with some additional processing). Thus, the provided 15, 30, and 60 fps videos have gaps of 8, 4, and 2 frames between subsequent frames, respectively. The frame numbers reflect this temporal gap. For example, a 15 fps video with a duration of 3 seconds has frames 0, 8, 16, ..., 352, 360 (46 frames in total). Likewise, a 30 fps video consists of frames 0, 4, 8, ..., 352, 356, 360 (91 frames in total), and a 60 fps video contains frames 0, 2, 4, 6, ..., 354, 356, 358, 360 (181 frames in total). All frames are named by their frame number in 8-digit format.
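As a sanity check, the frame numbering above can be reproduced with a few lines of Python (the .png extension matches the snippet in the evaluation section and should be treated as an assumption):

def frame_names(fps, last_frame=360):
    gap = {15: 8, 30: 4, 60: 2}[fps]  # gap between subsequent frames
    return ['%08d.png' % i for i in range(0, last_frame + 1, gap)]

assert len(frame_names(15)) == 46   # 0, 8, ..., 360
assert len(frame_names(30)) == 91   # 0, 4, ..., 360
assert len(frame_names(60)) == 181  # 0, 2, ..., 360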
The submission protocol is twofold. During the development and testing phases, the participants will upload their intermediate results, i.e., 30 fps video frames, to this CodaLab competition site. To submit the results, participants should follow these rules:
The readme.txt file should contain the following lines, filled in with the runtime per image (in seconds) of the solution, 1 or 0 indicating whether the solution employs a CPU or a GPU at runtime, and 1 or 0 indicating whether extra data is employed for training the models. For example:
runtime per image [s] : 10.43
CPU / GPU : 1
Extra Data / No Extra Data : 1
Other description : Solution based on A+ of Timofte et al. ACCV 2014. We have a Matlab/C++ implementation, and report single core CPU runtime. The method was trained on Train 91 of Yang et al. and BSDS 200 of the Berkeley segmentation dataset.
The last part of the file can contain any description you want of the code producing the provided results (dependencies, links, scripts, etc.). The provided information is very important, both during the validation period, when different teams can compare their results/solutions, and for establishing the final ranking of the teams and their methods.
After the testing phase, the participants will submit 60 fps video frames by email to all of the challenge organizers: the submission account (aim2020.vtsr [at] gmail.com), Sanghyun Son (thstkdgus35 [at] snu.ac.kr), Seungjun Nah (seungjun.nah [at] gmail.com), Jaerin Lee (ironjr [at] snu.ac.kr), and Radu Timofte (Radu.Timofte [at] vision.ee.ethz.ch). The final submission should be made according to the following rules:
Please use the following format to submit your final results, fact sheet, code, and model (with trained parameters). We will run the testing code to reproduce the results. Training code does not necessarily have to be included. The code and the model are to be posted on the AIM 2020 website.
To: email@example.com; firstname.lastname@example.org; email@example.com; firstname.lastname@example.org; email@example.com
Title: AIM 2020 Video Temporal Super-Resolution Challenge - TEAM_NAME
Body contents should include:
a) the challenge name
b) team name
c) team leader's name and email address
d) rest of the team members
e) team members with AIM2020 sponsors (if any)
f) team name and user names on AIM2020 CodaLab competitions
g) executable/source code attached or download links.
h) fact sheet attached
i) download link to the results of all of the test frames (135 x 30 = 4050 frames)
These are the official rules (terms and conditions) that govern how the AIM 2020 challenge on example-based video temporal super-resolution will operate. This challenge will be simply referred to as the "challenge" or the "contest" throughout the remaining part of these rules, and may be named the "AIM" or "REDS" benchmark, challenge, or contest elsewhere (our webpage, our documentation, other publications).
In these rules, "we", "our", and "us" refer to the organizers (Sanghyun Son (thstkdgus35 [at] snu.ac.kr), Seungjun Nah (seungjun.nah [at] gmail.com), Jaerin Lee (ironjr [at] snu.ac.kr) and Radu Timofte (Radu.Timofte [at] vision.ee.ethz.ch)) of AIM challenge and "you" and "yourself" refer to an eligible contest participant.
Note that these official rules can change during the contest, up until the start of the final phase. If at any point during the contest a registered participant considers that they can no longer meet the eligibility criteria, or does not agree with changes in the official terms and conditions, then it is the responsibility of the participant to email the organizers so as to be removed from all records. Once the contest is over, no change is possible in the status of the registered participants and their entries.
This is a skill-based contest and chance plays no part in the determination of the winner(s).
The goal of the contest is to super-resolve an input video in the temporal domain with an upsampling factor of x4; the challenge is called video temporal super-resolution.
Focus of the contest: The REDS_VTSR dataset, adapted for the specific needs of the challenge, will be made available. The images have a large diversity of contents. We will refer to this dataset, its partition, and related materials as REDS_VTSR. The dataset is divided into training, validation, and testing data. The aim is to achieve temporally super-resolved output videos with the highest fidelity (PSNR) to the ground truth. The participants will not have access to the ground truth images from the test data. The ranking of the participants is according to the performance of their methods on the test data. The participants will provide descriptions of their methods, details on (run)time complexity, platform, and (extra) data used for modeling. The winners will be determined according to their entries, the reproducibility of the results and uploaded codes or executables, and the above-mentioned criteria, as judged by the organizers.
You are eligible to register and compete in this contest only if you meet all the following requirements:
This contest is void wherever it is prohibited by law.
Entries submitted but not qualified to enter the contest are considered voluntary; for any entry you submit, AIM reserves the right to evaluate it for scientific purposes, but under no circumstances will such entries qualify for sponsored prizes. If you are an employee, affiliate, or representative of any of the AIM challenge sponsors, then you are allowed to enter the contest and get ranked; however, if you rank among the winners with eligible entries, you will receive only a diploma award and none of the sponsored money, products, or travel grants.
NOTE: Industry and research labs are allowed to submit entries and to compete in both the validation phase and final test phase. However, in order to get officially ranked on the final test leaderboard and to be eligible for awards the reproducibility of the results is a must and, therefore, the participants need to make available and submit their codes or executables. All the top entries will be checked for reproducibility and marked accordingly.
We will have 3 categories of entries in the final test ranking:
1) checked with publicly released codes
2) checked with publicly released executable
3) unchecked (with or without released codes or executables)
In order to be eligible for judging, an entry must meet all the following requirements:
Entry contents: The participants are required to submit image results and code or executables. To be eligible for prizes, the top-ranking participants should publicly release their code or executables under a license of their choice, taken from among the popular OSI-approved licenses (http://opensource.org/licenses), and make their code or executables accessible online for a period of not less than one year following the end of the challenge (this applies only to the top three ranked participants of the competition). To enter the final ranking, the participants will need to fill out a survey (fact sheet) briefly describing their method. All participants are also invited (though it is not mandatory) to submit a paper for peer review and publication at the AIM Workshop and Challenges (to be held on August 28, 2020, in Glasgow, UK). To be eligible for prizes, a participant's score must improve on the baseline performance provided by the challenge organizers.
Use of the provided data: All data provided by AIM are freely available to the participants from the website of the challenge under license terms provided with the data. The data are available only for open research and educational purposes, within the scope of the challenge. AIM and the organizers make no warranties regarding the database, including but not limited to warranties of non-infringement or fitness for a particular purpose. The copyright of the images remains in the property of their respective owners. By downloading and making use of the data, you accept full responsibility for using the data. You shall defend and indemnify AIM and the organizers, including their employees, Trustees, officers and agents, against any and all claims arising from your use of the data. You agree not to redistribute the data without this notice.
Test data: The organizers will use the test data for the final evaluation and ranking of the entries. The ground truth test data will not be made available to the participants during the contest.
Training and validation data: The organizers will make available to the participants a training and a validation dataset with ground truth images.
Post-challenge analyses: The organizers may also perform additional post-challenge analyses using extra-data, but without effect on the challenge ranking.
Submission: The entries will be online submitted via the CodaLab web platform. During the development phase, while the validation server is online, the participants will receive immediate feedback on validation data. The final evaluation will be computed automatically on the test data submissions, but the final scores will be released after the challenge is over.
Original work, permissions: In addition, by submitting your entry into this contest you confirm that, to the best of your knowledge:
- your entry is your own original work; and
- your entry only includes material that you own, or that you have permission to use.
Other than what is set forth below, we are not claiming any ownership rights to your entry. However, by submitting your entry, you:
Are granting us an irrevocable, worldwide right and license, in exchange for your opportunity to participate in the contest and potential prize awards, for the duration of the protection of the copyrights to:
If you do not want to grant us these rights to your entry, please do not enter this contest.
The participants will follow the instructions on the CodaLab website to submit entries.
The participants will be registered as mutually exclusive teams. Each CodaLab account is allowed to submit only one single final entry. Multiple submissions from a single team are valid only when they are significantly different. It is recommended to check the validity with the organizers in advance. We are not responsible for entries that we do not receive for any reason, or for entries that we receive but do not work properly.
The participants must follow the instructions and the rules. We will automatically disqualify incomplete or invalid entries.
The board of AIM will select a panel of judges to judge the entries; all judges will be forbidden to enter the contest and will be experts in causality, statistics, machine learning, computer vision, or a related field, or experts in challenge organization. A list of the judges will be made available upon request. The judges will review all eligible entries received and select three winners for each of the two competition tracks based upon the prediction score on test data. The judges will verify that the winners complied with the rules, including that they documented their method by filling out a fact sheet.
The decisions of these judges are final and binding. The distribution of prizes according to the decisions made by the judges will be made within three (3) months after completion of the last round of the contest. If we do not receive a sufficient number of entries meeting the entry requirements, we may, at our discretion based on the above criteria, not award any or all of the contest prizes below. In the event of a tie between any eligible entries, the tie will be broken by giving preference to the earliest submission, using the time stamp of the submission platform.
The financial sponsors of this contest are listed on the AIM 2020 workshop web page. There will be economic incentive prizes and travel grants for the winners (based on availability) to boost contest participation; these prizes will not require participants to enter into an IP agreement with any of the sponsors, to disclose algorithms, or to deliver source code to them. The participants affiliated with the industry sponsors agree not to receive any sponsored money, product, or travel grant in case they are among the winners.
Incentive prizes for each competition track (tentative; the prizes depend on the funds attracted from the sponsors):
1st place: ?00$ + ?GPU + award certificate
2nd place: ?00$ + ?GPU + award certificate
3rd place: ?00$ + award certificate
Publishing papers is optional and is not a condition for entering the challenge or winning prizes. The top-ranking participants are invited to submit a paper to the AIM workshop for peer review, following the ECCV 2020 author rules.
The results of the challenge will be published together with AIM 2020 workshop papers in the 2020 ECCV Workshops proceedings.
The top-ranked participants, and participants contributing interesting and novel methods to the challenge, will be invited to be co-authors of the challenge report paper, which will be published in the 2020 ECCV Workshops proceedings. A detailed description of the ranked solution, as well as reproducibility of the results, are a must for eligible co-authorship.
If there is any change to data, schedule, instructions of participation, or these rules, the registered participants will be notified at the email they provided with the registration.
Within seven days following the determination of winners, we will send a notification to the potential winners. If the notification that we send is returned as undeliverable, or you are otherwise unreachable for any reason, we may award the prize to an alternate winner, unless forbidden by applicable law.
The prize, such as money, a product, or a travel grant, will be delivered to the registered team leader, provided the team is not affiliated with any of the sponsors. It is up to the team to share the prize. If this person becomes unavailable for any reason, the prize will be delivered to the authorized account holder of the e-mail address used to make the winning entry.
If you are a potential winner, we may require you to sign a declaration of eligibility, use, indemnity, and liability/publicity release, and applicable tax forms. If you are a potential winner and are a minor in your place of residence, we require that your parent or legal guardian be designated as the winner, and we may require that they sign a declaration of eligibility, use, indemnity, and liability/publicity release on your behalf. If you (or your parent/legal guardian, if applicable) do not sign and return these required forms within the time period listed in the winner notification message, we may disqualify you (or the designated parent/legal guardian) and select an alternate winner.
The terms and conditions are inspired by and use verbatim text from the `Terms and conditions' of ChaLearn Looking at People Challenges and of the NTIRE 2017, 2018, 2019, and 2020 challenges and the AIM 2019 challenges.
The AIM challenge on Video Temporal Super-Resolution is organized jointly with the AIM 2020 workshop. The results of the challenge will be published at AIM 2020 workshop and in the ECCV 2020 Workshops proceedings.
Sanghyun Son (thstkdgus35 [at] snu.ac.kr), Seungjun Nah (seungjun.nah [at] gmail.com), Jaerin Lee (ironjr [at] snu.ac.kr) and Radu Timofte (Radu.Timofte [at] vision.ee.ethz.ch) are the contact persons and direct managers of the AIM challenge.
More information about AIM workshop and challenge organizers is available here: https://data.vision.ee.ethz.ch/cvl/aim20/
Start: May 1, 2020, midnight
Description: Development phase - submit the results on the validation data.
Start: July 10, 2020, midnight
Description: Testing phase - submit the results on the test data.
End: July 17, 2020, 11:59 p.m.