Real-time distortion classification in laparoscopic videos

Organized by xak


Brief Introduction

Laparoscopic videos may be affected by different kinds of distortions during surgery, resulting in a loss of visual quality. To prevent disruptions to surgery caused by video quality issues, there is a great need for automated video enhancement systems. In any such system, the feedback loop plays an important part, whereby any change in video quality is handled by applying the correct enhancement approach [1]. One of the most critical steps in this feedback loop is identifying the distortion [2] affecting the video in real time, so that enhancement can be applied promptly. The purpose of this challenge is to target this problem by developing a fast, unified and effective algorithm for real-time classification of distortions in a laparoscopic video. For this challenge, we will provide our own dataset of short-duration laparoscopic videos, called the Laparoscopic Video Quality (LVQ) database. These videos have been carefully selected from an existing public dataset and distorted with either a single distortion or multiple simultaneous distortions at different levels. In total, 800 such videos will be provided, of which a sample of 200 is already publicly available.


Challenge Significance

Good video quality is an essential requirement for laparoscopic surgery. Distortions in a laparoscopic video not only impair a surgeon’s visibility but also degrade the results of subsequent computational tasks in robot-assisted surgery and image-guided navigation systems. These tasks include segmentation, instrument tracking [3] and augmented or mixed reality [4]. Distortions in a laparoscopic video appear either because of technical problems in the equipment [5] or as side-effects of the instruments being used (e.g. smoke with diathermy). Most existing solutions tackle such problems by making changes to the technical equipment using one of the many available troubleshooting options. However, these solutions are time-consuming and may not solve the problem at hand, eventually requiring a specialist technician or a change of equipment. To handle these problems more effectively, automated video enhancement systems need to be employed.

Evaluation Criteria

Submissions will be judged on the following two criteria:

1. Speed of the algorithm:  Submissions will be run on Windows on an Intel Core i7 system with 32 GB RAM and an NVIDIA GeForce GTX 1050. A shorter running time will receive a higher score, provided the algorithm also scores well on the second criterion.

2. Classification performance:  Submissions will be judged using a classification score based on a weighted combination of classification accuracy and F1 score, with equal weight given to both. The algorithms will also be tested on a different set of laparoscopic videos than the one provided. Moreover, performance will be judged separately for videos with a single distortion and for those with multiple distortions, with more weight given to methods that perform well on multi-distorted videos.
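The equal-weight scoring rule can be sketched as follows. Note that the averaging used for the F1 score is not specified by the organizers; macro averaging over distortion classes is an assumption here, and the class names are purely illustrative.

```python
# Sketch of the challenge score: an equal-weight combination of
# classification accuracy and (macro-averaged, by assumption) F1 score.

def accuracy(y_true, y_pred):
    # Fraction of samples whose predicted class matches the true class.
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def macro_f1(y_true, y_pred):
    # Per-class F1 from true/false positives and false negatives,
    # averaged over all classes that appear in either list.
    labels = set(y_true) | set(y_pred)
    f1s = []
    for label in labels:
        tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
        fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        f1s.append(f1)
    return sum(f1s) / len(f1s)

def challenge_score(y_true, y_pred):
    # Equal weight to both components, per the evaluation criteria.
    return 0.5 * accuracy(y_true, y_pred) + 0.5 * macro_f1(y_true, y_pred)

# Illustrative distortion labels (not the official LVQ class names):
y_true = ["smoke", "noise", "blur", "smoke"]
y_pred = ["smoke", "blur", "blur", "smoke"]
print(round(challenge_score(y_true, y_pred), 3))  # → 0.653
```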

Terms and Conditions

The LVQ dataset will be made available to participants once the challenge opens. Participants will be required to use this database to develop a single classification algorithm that can classify distortions in all the videos in real time. Participants must submit easily readable, commented code of their algorithm (preferably in MATLAB or Python), along with a document briefly summarizing their method and its steps. Moreover, each solution should contain a demo script that runs the submitted solution on a test video. The classification results should be displayed in real time on the tested video (or on a console window/terminal alongside) while the video is being played; in the case of multiple distortions, all classes should be displayed. Participants must also report the speed of their code and the specifications of the system on which it was run.
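A demo loop satisfying these requirements could look like the following sketch. `classify_frame` is a hypothetical stand-in for the participant's model, and the frame source is stubbed so the sketch runs without a video file; a real demo would read frames with OpenCV and could draw the labels on each frame (`cv2.putText`) before showing it (`cv2.imshow`), though printing to a console alongside the video is also accepted.

```python
# Hypothetical skeleton of the required demo: classify each frame, report
# all predicted distortion classes while the video plays, and measure speed.

import time

def classify_frame(frame):
    # Placeholder classifier: a real submission would run its model here.
    # Multiple simultaneous distortions are returned as a list of labels.
    return ["smoke", "blur"] if frame % 2 else ["noise"]

def run_demo(frames):
    start = time.perf_counter()
    for frame in frames:
        labels = classify_frame(frame)
        # Report every predicted class for this frame, as the rules require
        # in the multi-distortion case.
        print(f"frame {frame}: {', '.join(labels)}")
    elapsed = time.perf_counter() - start
    fps = len(frames) / elapsed if elapsed > 0 else float("inf")
    # The measured speed must be reported with the submission.
    print(f"processed {len(frames)} frames ({fps:.0f} fps)")
    return fps

run_demo(range(4))
```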

Testing Phase

Start: May 31, 2020, midnight UTC

Final Evaluation Phase

Start: June 13, 2020, midnight UTC

Competition Ends

June 15, 2020, 7 a.m. UTC
