In this competition, MAFAT's DDR&D (Directorate of Defense Research & Development) would like to tackle the challenge of classifying living, non-rigid objects detected by Doppler-pulse radar systems.
MAFAT Radar Challenge 1st Place Prize Winner Announcement
The MAFAT Radar Challenge has recently ended. This was the second competition in the MAFAT Challenge series. The competition focused on the classification of living, non-rigid objects detected by Doppler-pulse radar systems.
We had more than 1,000 registered participants and more than 4,300 submissions. During the competition, participants received a training data set containing 6,656 radar segments, labeled as either animals or humans, in addition to a supportive (auxiliary) data set containing 49,071 segments. Participants trained their machine learning models on the training data and were asked to make predictions for this binary classification task on untagged data (the test set).
After the private test phase came to its conclusion, the participants' submissions were evaluated against the true labels of the private test set (labels that were unknown to the participants). The results are available under the Results tab.
Since the competition ended, we have been evaluating the submissions of the leaders and verifying their prize-winning eligibility.
Today, we are delighted to formally announce GSI Technology as the competition winner! They put a lot of effort and creativity into this challenge, and they have passed all of our formal and technical eligibility tests.
If you want to study their technical approach to this challenge, take a look at this blog post published by Daphna Idelson from GSI.
We would like to congratulate Axon Pulse on their second-place finish; their technical approach to this challenge is described in the following blog post.
Some of you might recognize his CodaLab leaderboard username “rogueneuron”. Congratulations, Ido!
MAFAT would like to thank all of the participants for embracing the challenge and for their enthusiastic participation and cooperation!
We would like to remind you that MAFAT plans to launch additional competitions. You can subscribe to MAFAT Challenge's mailing list here and get information regarding future competitions straight to your inbox.
Private Leaderboard Revealed!
Dear participants,
Thank you very much for competing in MAFAT Radar Challenge.
The private leaderboard is now available at the results tab, and as usual in data science competitions - the results are interesting!
What an intense competition! We had more than 1,000 registered participants and more than 4,300 submissions (in the public and private phases).
We want to congratulate the leaders.
In the following few weeks, we will conduct a thorough validation of the leaders' winning eligibility. Once done, the validated leaders will officially become winners, and the competition winners will be formally announced.
In the next few days we will contact the leading contestants with further details and requests.
MAFAT would like to thank all of the competitors for their great effort and great results.
We hope you enjoyed this MAFAT Challenge and we hope to see you soon in our future challenges (register here in order to receive updates regarding upcoming MAFAT Challenge competitions).
MAFAT Challenge Team
The Competition Has Ended
Dear participants,
Congratulations to everyone who participated in the MAFAT Radar Challenge, and thank you for taking part in this competition!
We are very proud of this non-trivial and unique data challenge. Over 1,000 participants took part in the challenge.
We are looking forward to analyzing the results and hearing about your approaches, and we will work hard in the coming weeks on validating your submissions. We hope to publish the private leaderboard within the next two weeks (we will send you a notice).
Thank You and Good Luck,
MAFAT Challenge Team
Important - MAFAT Challenge - Stage 2 - Private test
Dear participants,
The private phase will start on October 8th and will last until October 15th, 11:00 AM Israel time (GMT+3).
During the week of the private phase, each team or individual competitor will be allowed only 2 submissions, so choose your two final submissions carefully. It is important to note that only 2 submissions in total are allowed for the entire week of the private phase (not 2 submissions per day, as in the public phase).*
On October 8th, two new datasets will be available for you in the competition's "DATA" folder in a new folder "Phase 2 - Private":
1) The private test set – includes 248 unlabeled segments. The private set will be used for final judging.**
2) A full public test set – contains the full tracks of the public test set, from which you received only 106 segments during the public phase. This set contains a total of 284 segments and includes the labels (human/animal) of all 284 segments.***
You can use the full, labeled, public test set, in addition to the training set and the auxiliary set, to retrain your model, but you must not use the private test set in any way for training your final model or preliminary analysis. The private test set needs to be used only for making a final prediction.
After submitting your 2 final submissions, you won't be able to see the final score until the competition ends and the winners are announced. Candidates for winning will be contacted with more details and requests.
Important Notes:
* Teams, please make sure that you submit only 2 submissions as a team. The automatic mechanism that prevents more than 2 successful submissions per user is not active at the team level. Therefore, if team members submit more than 2 submissions, only the first 2 submissions will be taken into account; the other submissions will be disqualified.
** The segments in the private test set were selected in a way that guarantees each track contributes at most a single segment, with the exception of very long tracks that may contribute more than one segment.
The segments in the private test set, the public test set, and the training set are not from the same tracks.
*** The segment_id values in the full public test set are not the same as in the public test set. This change guarantees that the segment_id sequence reflects the order of the segments in each track. A mapping file (in CSV format) will be attached so you will be able to identify the 106 selected segments out of the 284 segments in the full public test set.
Our team presented all aspects of the competition in the "MAFAT Radar Challenge Revealed" online meetup, which was broadcast live on the Israeli Ministry of Defense YouTube channel on July 20th. The meetup recording is available at https://youtu.be/mFYHInlwOL8?t=109. You can watch specific parts of the meetup via the following links:
1st MAFAT Challenge #1 place solution
In order to participate in the competition, you must complete the following stages:
1. Apply for the competition using the mafatchallenge.mod.gov.il application form.
2. Register for the competition with your CodaLab user in the competition "participate" tab.
Recent developments in radar technologies have enabled powerful Doppler-pulse radars. These radars are used in different areas and applications, including security. Radars monitor large areas, detecting and tracking moving objects. Object classification, i.e., determining the type of the tracked object, is essential for such systems. While some object types are easily distinguishable from one another by traditional signal processing techniques, distinguishing between humans and animals, which are non-rigid objects tracked by radar, is a difficult task.
Today, the task of classifying radar-tracked, non-rigid objects is mostly done by human operators and requires the integration of radar and optical systems. The growing workload of those operators and the complexity of the systems are major constraints on improving the systems' efficiency. The competition's objective is to explore automated, novel solutions that enable the classification of humans and animals tracked by radar systems with a high degree of confidence and accuracy.
The participants’ goal is to classify segments of radar tracks of humans or animals using the I/Q signal matrix as an input. The task at hand is a binary classification task; the tracked objects are either humans or animals.
The data is real-world data, gathered from diverse geographical locations at different times, with different sensors and different qualities (high and low signal-to-noise ratio, SNR).
For this competition, there is one target variable:
Target – whether the segment of the tracked object is an animal or a human.
This is a binary variable: 0 = Animal, 1 = Human.
Classification of radar-tracked objects is traditionally done using well-studied radar signal features. For example, the Doppler effect, also known as the Doppler shift, and the radar cross-section of an object can be utilized for the classification task. However, from the radar system's perspective, looking at the tracked objects through the lens of those traditional features, humans and animals appear very similar.
Microwave signals travel at the speed of light but still obey the Doppler effect. Microwave radars receive a Doppler frequency shifted reflection from a moving object. Frequency is shifted higher for approaching objects and lower for receding objects. The Doppler effect is a strong feature for some classification tasks, e.g., separating moving vehicles from animals. However, humans and animals are typically moving at the same range of velocities.
Humans and animals are non-rigid objects. While walking, different body parts move at different velocities and frequencies (arms, legs, body center or torso, etc.). The micro-Doppler signature of a tracked object is a time-varying, frequency-modulated contribution caused by the relative movement of separate parts of the moving object. Potentially, the micro-Doppler phenomenon can produce features that are useful for the task of classifying non-rigid moving objects.
The radar cross-section (RCS) is a measure of how detectable an object is by radar. An object reflects a limited amount of radar energy back to the source, and this reflected energy is used to calculate the RCS of an object. A larger RCS indicates that an object is more easily detected by radars. Multiple factors contribute to the RCS of an object, including its size, material, shape, orientation, and more. The RCS is a classic feature for classifying tracked objects. However, it turns out that the RCS of humans is similar to the RCS of many animals; thus, on its own, RCS is not a good enough separating feature either.
The task of automatically distinguishing between humans and animals based on their radar signature is, therefore, a challenging task!
The objective of this competition is to explore whether creative approaches and techniques, including deep convolutional neural networks, recurrent neural networks, transformers, classical machine learning, classical signal processing, and more, can be leveraged to provide better solutions for this difficult task. We are especially interested in approaches that are inspired by non-radar fields, including computer vision, audio analysis, sequential data analysis, and so on.
The competition has two stages:
Stage 1 – participants are asked to train their models on the training set and submit their results, predicting the labels of the public test set.
At the end of stage 1, we will release the labels of the public test set, and we will release a new, unseen, and unlabeled private test set.
Stage 2 – participants are allowed to re-train their models on the combined training set (including the original training set from stage 1 and the labeled public test set).
During stage 2, participants are required to submit up to 2 submissions for final judging, giving the prediction of the segments in the private test set.
Moving from stage 1 to stage 2, participants will not be required to upload their models.
Submissions are evaluated on the Area Under the Receiver Operating Characteristic Curve (ROC AUC) between the predicted probability and the observed target as calculated by roc_auc_score in scikit-learn (v 0.23.1).
Participants are asked to give a probability score for each segment in the provided test set, wherein humans are classified as 1 and animals as 0.
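For concreteness, here is a minimal sketch of how this metric can be computed locally, using hypothetical labels and predictions rather than real competition data:

```python
# Hypothetical example of the competition metric (not real competition data).
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 1]             # observed targets: 0 = animal, 1 = human
y_pred = [0.1, 0.4, 0.35, 0.8, 0.9]  # predicted probabilities of "human"

print(roc_auc_score(y_true, y_pred))  # ROC AUC in [0, 1]; higher is better
```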
The submission file must be a “.csv” file packed as a “.zip” file. The names of the columns must be “segment_id” and “prediction".
segment_id | prediction
-----------|-----------
1          | 0.19
2          | 0.03
3          | 0.45
4          | 0.23
You can download a template submission file here: Submission_Template.
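As an illustration, the following sketch builds a correctly formatted submission with pandas and packs it as a zip; the file names and prediction values are placeholders:

```python
import zipfile
import pandas as pd

# Hypothetical predictions; in practice these come from your model.
submission = pd.DataFrame({
    "segment_id": [1, 2, 3, 4],
    "prediction": [0.19, 0.03, 0.45, 0.23],  # probability that the target is human
})
submission.to_csv("submission.csv", index=False)

# The competition requires the CSV to be packed as a .zip file.
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("submission.csv")
```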
1. Competition title: "MAFAT Radar Challenge – Can you distinguish between humans and animals in Radar tracks?"
2. This competition is organized by the Israeli Ministry Of Defense (“Competition Organizer”). Webiks and Matrix shall assist the Competition Organizer with the execution of this competition, including disbursement of the award to the competition winners.
3. This competition is public, but the Competition Organizer approves each user’s request to participate and may elect to disallow participation according to its own considerations. You must register on the Competition Website prior to the Entry Deadline (specified in the competition website).
4. Submission format: Zipped CSV file containing participant’s predictions.
5. The competition has two stages. At the end of stage 1 we will release the labels of the public test set and we will release a new, unseen and unlabeled, private test set. During stage 2, participants are allowed to re-train their models on the combined training set (including the original training set from stage 1 and the labeled public test set). At the end of stage 2 participants are required to submit up to 2 submissions for final judging, giving the prediction of the segments in the private test set. Moving from stage 1 to stage 2, participants will not be required to upload their models.
6. Users: Each participant must create a CodaLab account to register. Only one account per user is allowed.
7. If you are entering as a representative of a company, educational institution, or other legal entity, or on behalf of your employer, these rules are binding for you individually and/or for the entity you represent or are an employee of. If you are acting within the scope of your employment as an employee, contractor, or agent of another party, you affirm that such party has full knowledge of your actions and has consented thereto, including your potential receipt of a prize. You further affirm that your actions do not violate your employer's or entity's policies and procedures.
8. Teams: Participants are allowed to form teams. There are no limitations on the number of participants on the team. You may not participate in more than one team. Each team member must be a single individual operating a separate CodaLab account. Team formation requests will not be permitted after the date specified on the competition website. Participants who would like to form a team should review the ‘Competition Teams’ section on CodaLab’s ‘user_teams’ Wiki page. In order to form a valid team, the total submission count of all a team’s participants must be less than or equal to the maximum allowed as of the merge date. The maximum allowed is the number of submissions per day multiplied by the number of days the competition has been running.
9. Team mergers are allowed and can be performed by the team leader. Team merger requests will not be permitted after the "Team mergers deadline" deadline listed on the competition website. In order to merge, the combined team must have a total submission count less than or equal to the maximum allowed as of the merge date. The maximum allowed is the number of submissions per day multiplied by the number of days the competition has been running. The organizers don’t provide any assistance regarding team mergers.
10. External data: You may use data other than the competition data to develop and test your models and submissions. However, any such external data you use for this purpose must be available for use by all other competition participants. Thus, if you use external data, you must make it publicly available and declare it in the competition discussion forum no later than the date specified in the competition website.
11. Submissions may not use or incorporate information from hand labeling or human prediction of the training dataset or test dataset for the competition’s target labels. Ergo, solutions involving human labeling of one of the columns in the submission CSV file will be disqualified.
12. The private test set should be used as is, for prediction generation and submissions only. Using the private test set data in order to train the model (“pseudo-labeling” or any other technique that exploits the test data in the training process) is strictly prohibited.
13. The delivered software code is expected to be capable of generating the winning submission and to operate automatically on new, unseen data without significant loss of performance.
14. This capability will be verified prior to the decision on the winning algorithm.
15. The training set includes multiple segments from the same track. Nevertheless, the classification should be done on a single segment level. I.e., the trained models should get a single segment as input and predict the class of this segment as an output. The class of every segment should be inferred separately, based on the features that are extracted only from this specific segment, regardless of any other segment in the data set.
16. Competition Duration: 3 months (July 15th to October 15th).
17. Total Prize Amount (USD): $ 40,000
18. Prize Allocation:
1st Place: $25,000
2nd Place: $10,000
3rd Place: $5,000
19. Upon being awarded a prize:
19.1. The prize winner must deliver to the Competition Organizer the final model’s software code as used to generate the winning submission and associated documentation written in English. The delivered software code must be capable of generating the winning submission and contain a description of resources required to build and/or run the executable code successfully.
19.2. The prize winner must also ensure that the delivered software code is well documented.
19.3. The prize winner must agree to an interview, in which the winning solution will be discussed.
19.4. The prize winner will grant to the Competition Organizer a nonexclusive license to the winning model’s software code and represent that the winner has the unrestricted right to grant that license.
19.5. The prize winner will sign and return all prize acceptance documents as may be required by the Competition Organizer.
20. If a team wins a monetary prize, Competition Organizer will allocate the prize money in even shares between team members unless the team unanimously contacts the Competition Organizer within three business days following the submission deadline to request an alternative prize distribution.
1. This competition is organized by the Israeli Ministry of Defense. Therefore, participation in this competition is subjected to Israeli law.
2. The competition is open worldwide with the exception of residents of Crimea, Cuba, Iran, Syria, North Korea, Sudan, Lebanon, Iraq, or those who are subject to Israel export controls or sanctions as set forth in Israeli law.
3. The competition is public, but the Competition Organizer may elect to disallow participation according to its own considerations.
4. The Competition Organizer reserves the right to disqualify any entrant from the competition if, in the Competition Organizer’s sole discretion, it reasonably believes that the entrant has attempted to undermine the legitimate operation of the competition through cheating, deception, or other unfair playing practices.
5. Submissions are void if they are in whole or part illegible, incomplete, damaged, altered, counterfeit, obtained through fraudulent means, or late. The Competition Organizer reserves the right, in its sole discretion, to disqualify any entrant who makes a submission that does not adhere to all requirements.
6. Officers, directors, employees, and advisory board members (and their immediate families and members of the same household) of the Competition Organizers (Webiks, Matrix, and Elta) and their respective affiliates are not eligible to participate in the competition.
7. Officers, directors, employees, and advisory board members (and their immediate families and members of the same household) of the Israeli Ministry of Defense and their respective affiliates are eligible to participate in the competition but not eligible to receive any prize.
8. The competition prize winners will be the highest-ranked competitors who are eligible to receive a prize.
9. You agree to use reasonable and suitable measures to prevent persons who have not formally agreed to these rules from gaining access to the competition data. You agree not to transmit, duplicate, publish, redistribute, or otherwise provide or make the data available to any party not participating in the competition. You agree to notify Competition Organizer immediately upon learning of any possible unauthorized transmission or unauthorized access of the data and agree to work with Competition Organizer to rectify any unauthorized transmission. You agree that participation in the competition shall not be construed as having or being granted a license (expressly, by implication, estoppel, or otherwise) under, or any right of ownership in, any of the data.
10. By downloading the data for this competition you agree to the following terms:
10.1. You will not distribute the data.
10.2. You accept full responsibility for your use of the data and shall defend and indemnify the Competition Organizer against any and all claims arising from your use of the data.
11. By joining the competition, you affirm and acknowledge that you may not infringe upon any copyrights, intellectual property, or patent of another party for the software you develop in the course of the competition.
12. The Competition Organizer reserves the right to verify eligibility and to adjudicate on any dispute at any time. If you provide any false information relating to the competition concerning your identity, residency, mailing address, telephone number, e-mail address, right of ownership, or information required for entering the competition, you may be immediately disqualified from the competition.
13. If you wish to use external data, you may do so provided that you declare it in the competition forum and provided that such public sharing does not violate the intellectual property rights of any third party. Adding and declaring external data is allowed no later than the "External data posting deadline" date specified on the competition website. Adding external data later than this date or using such data is cause for disqualification from the competition.
14. Participants grant to the Competition Organizer the right to use your winning submissions and the source code used to generate the submission for any purpose whatsoever and without further approval.
15. Prizes are subject to the Competition Organizer’s review and verification of the entrant’s eligibility and compliance with these rules as well as the compliance of the winning submissions with the submission requirements.
16. Prize winnings will be transferred to the winner by a third party.
17. Competition prizes do not include tax payment. Any potential winner is solely responsible for all applicable taxes related to accepting the prize.
18. This competition does not constitute an obligation on behalf of the Israeli Ministry of Defense to either purchase products or to continue working with any of the participants.
19. Right to cancel, modify, or disqualify. The Competition Organizer reserves the right at its sole discretion to terminate, modify, or suspend the competition.
20. For Israeli Ministry of Defense funded participants only: the knowledge and/or code presented by the participants of the competition is for the sole and exclusive use of the Competition Organizer. The Competition Organizer hereby commits not to transfer the knowledge and/or code to any third party for commercial use.
Full Dataset – Available to registered users only; please refer to "participate" tab.
Competition training set, public test set, and private test set.
Descriptive Statistics – Available to registered users only; please refer to "participate" tab.
In this notebook, you can learn about the distribution of the Training and the Auxiliary datasets as well as other characteristics of the datasets. Use this for a quick start.
The dataset consists of signals recorded by ground Doppler-pulse radars. Each radar "stares" at a fixed, wide area of interest. Whenever an animal or a human moves within the radar's covered area, it is detected and tracked. The dataset contains records of those tracks. The tracks in the dataset are split into 32-time-unit segments. Each record in the dataset represents a single segment. The dataset is split into training and test sets; the training set contains the actual labels (humans or animals).
A segment consists of a matrix with I/Q values and metadata. The matrix of each segment has a size of 32x128. The X-axis represents the pulse transmission time, also known as "slow-time". The Y-axis represents the reception time of signals with respect to pulse transmission time, divided into 128 equal-sized bins, also known as "fast-time". The Y-axis is usually referred to as "range" or "velocity", as wave propagation depends on the speed of light. For example, for a Pulse Repetition Interval (PRI) of 128 ms, each Y-axis bin covers 1 ms. For a pulse sent at t(n) and a signal received at t(n+m), where 0 < m <= 128, the signal is placed in bin m of pulse n (the numbers are not the real ones and are given only for the sake of the example).
The radar's raw, original received signal is a wave defined by amplitude, frequency, and phase. Frequency and phase are treated as a single phase parameter. Amplitude and phase are represented in polar coordinates relative to the transmitted burst/wave. Polar coordinate calculations require frequent sine operations, making them time-consuming. Therefore, upon reception, the raw data is converted to Cartesian coordinates, i.e., I/Q values. The values in the matrix are complex numbers: I represents the real part, and Q represents the imaginary part.
The I/Q matrices that are supplied to participants have been standardized, but they have not been transformed or processed in any other way. Therefore, the data represents the raw signal. Different preprocessing and transformation methods, such as Fourier transform, can and should be used in order to model the data and extract meaningful features. For more information, see “Signal Processing” methods or view the links at the bottom for more information.
The metadata of a segment includes track id, location id, location type, day index, sensor id, and SNR level. The segments were collected from several different geographic locations, and a unique id was given per location. Each location consists of one or more sensors, and each sensor belongs to a single location. A unique id was given per sensor. Each sensor was used on one or more days, and each day is represented by an index. A single track appears in a single location, sensor, and day. The segments were taken from longer tracks, and each track was given a unique id.
The data is divided into 14 files (data + metadata). 10 files are available in stage 1 of the competition; the last 4 files will become available in stage 2.
Stage 1:
5 Pickle files for the Training set, Public Test set (Public), and Auxiliary set (3 files).
The fields in the pickle files are: ‘segment_id’, 'doppler_burst', and 'iq_sweep_burst'.
5 CSV files for metadata of the Training set, Public Test set, and Auxiliary set (3 files).
In the Test set metadata CSV file, there are two fields: 'segment_id' and 'snr_type'.
All other CSV metadata files contain, in addition to the Test set metadata fields, the following fields:
'track_id', 'geolocation_type', 'geolocation_id', 'sensor_id', 'date_index', 'target_type'.
Stage 2:
In the second stage of the competition, the public test set will be released with full tracks: the previously unused and unseen segments adjacent to the public test segments will become available (as in the training and auxiliary data sets).
The fields in the pickle files are: ‘segment_id’, 'doppler_burst', and 'iq_sweep_burst'.
In the Private Test set metadata CSV file there are two fields 'segment_id' and 'snr_type'.
The CSV metadata file of the public test set with full tracks contains all the other metadata fields:
'track_id', 'geolocation_type', 'geolocation_id', 'sensor_id', 'date_index', 'target_type'.
All dataset segments have the following properties:
'segment_id' – The unique identifier of the segment.
'iq_sweep_burst' – The I/Q signal matrix of the segment.
'doppler_burst' – A time vector, X-axis, defining which velocity cell, Y-axis, contains the tracked object’s center-of-mass. This can be used to better understand the data and focus the model’s “attention” on the object.
'snr_type' (HighSNR, LowSNR, or SynthSNR) – Indicates the quality of the segment: whether it is a high or low signal-to-noise ratio segment.
The Training and Auxiliary data sets have additional fields with additional information on each segment:
'track_id' – Tracks are split into segments. This field identifies and groups the different segments of the same track with a unique identifier of the track.
'geolocation_type' – Identifies the surrounding terrain type of each location. There are 4 types of terrain (values A, B, C, D).
'geolocation_id' – Each location has a unique ID (in the training set and auxiliary it is a numerical value 1-8).
'sensor_id' – Each radar has a unique ID (numerical values from 1-16). Some locations had more than one radar.
'date_index' – Numerical, each number represents a unique calendar day.
‘target_type’ – ‘human’ (1) or ‘animal’ (0) - the identified object in the segment.
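To get a feel for these fields, the following sketch loads a metadata CSV with pandas and prints a few distributions; the file name is a placeholder for one of the metadata files in the "participate" tab:

```python
import pandas as pd

# "train_metadata.csv" is a placeholder for the training set metadata CSV.
meta = pd.read_csv("train_metadata.csv")

print(meta["target_type"].value_counts())   # class balance: 'human' vs 'animal'
print(meta["snr_type"].value_counts())      # HighSNR vs LowSNR segments
print(meta.groupby("geolocation_id")["segment_id"].count())  # segments per location
```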
Baseline Model – Available to registered users only; please refer to "participate" tab.
In this notebook, you can see the script used for creating the baseline model.
In addition to the Training set described above, the data set also contains auxiliary data:
Synthetic low-SNR segments that were created by transforming the high-SNR signals from the training set.
“Background” segments – Segments that were recorded by a sensor in parallel to segments with tracks but at a different range. These segments contain the recorded “noise.” Each segment also contains a field mapping to the original High or Low SNR track id.
“Experiment” locations – In these locations, only humans were recorded in a controlled environment, which doesn't necessarily reflect a “natural” recording.
All the Auxiliary datasets have the same fields as the Training set. The Experiment and Synthetic datasets do not represent real-world (nor Test) data but can be used to better understand the data. The Background dataset is available only for 2 locations in the Training set (locations that contain only animal recordings) and for 2 locations in the Experiment locations dataset.
The Auxiliary datasets can be used only in the context of training. The Test set does not include any such auxiliary data. Participants are encouraged to exploit the Auxiliary datasets in order to understand the domain better, to discover confounding variables, to explore interesting correlations, and more—all for the purpose of training models that generalize well to new, unseen data from new geographic locations.
Participants are provided with the following assisting python scripts:
Loading and using the data + Saving the data as a *.mat file – Available to registered users only; please refer to "participate" tab.
This is a script for loading the dataset pickle files into Python and using them. At the end, there is a script for converting the pickle files into MATLAB files for participants who would like to use MATLAB.
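A minimal sketch along the same lines is shown below; it assumes each pickle holds a dict with the fields listed above, and the file names are placeholders rather than the official ones:

```python
import pickle
import numpy as np
from scipy.io import savemat

# "train.pkl" is a placeholder for one of the dataset pickle files.
with open("train.pkl", "rb") as f:
    data = pickle.load(f)

print(data.keys())                       # expected: segment_id, iq_sweep_burst, doppler_burst
iq = np.asarray(data["iq_sweep_burst"])  # complex I/Q matrices, one per segment
print(iq.shape, iq.dtype)

# Convert to a .mat file for participants who prefer MATLAB.
savemat("train.mat", {k: np.asarray(v) for k, v in data.items()})
```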
Spectrogram Generator – Available to registered users only, please refer to "participate" tab.
The task of classifying signals as humans or animals is hard, and it is even harder for short segments and low-SNR signals. One way to view the data is to visualize the signals as spectrograms. A spectrogram is depicted as a heat map, with intensity shown by a color palette.
You can use this Python script to create and visualize spectrograms, and to learn the process of transforming the I/Q matrix into a spectrogram. In this specific script, the I/Q matrix is transformed and processed using Hann windowing, FFT (Fast Fourier Transform), calculating the median and setting it as the minimum value of the matrix, and, finally, pseudo-coloring.
The images shown below are spectrograms of low- and high-SNR segments of animals and humans. The white dots are the doppler burst vector, which marks the target's center of mass.
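For reference, here is a minimal sketch of that pipeline; the axis conventions (fast-time along axis 0) and the dB scaling are assumptions rather than the official script:

```python
import numpy as np
import matplotlib.pyplot as plt

def iq_to_spectrogram(iq: np.ndarray) -> np.ndarray:
    """iq: complex matrix of shape (128, 32) -- fast-time x slow-time (assumed layout)."""
    windowed = iq * np.hanning(iq.shape[0])[:, None]          # Hann window over fast-time
    spec = np.fft.fftshift(np.fft.fft(windowed, axis=0), axes=0)
    spec = 20 * np.log10(np.abs(spec) + 1e-12)                # magnitude in dB
    return np.maximum(spec, np.median(spec))                  # median as the minimum value

iq = np.random.randn(128, 32) + 1j * np.random.randn(128, 32)  # stand-in for a real segment
plt.imshow(iq_to_spectrogram(iq), aspect="auto", cmap="viridis")  # pseudo-coloring
plt.xlabel("slow-time (pulse index)")
plt.ylabel("Doppler / velocity bin")
plt.colorbar(label="dB")
plt.show()
```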
Use segments, not tracks
Adjacent segments that can be combined into a whole track can be found in the Training and Auxiliary datasets, but not in the Test set. The participants' goal is to classify every tracked object correctly based on a single segment and not to use the correlation that exists between multiple segments in a track for the classification task. Therefore, most of the records in the Test set are single segments that were randomly selected from a full track. In cases where the track was long enough, more than one segment of the same track may be in the Test set. Note that they won't be sequential.
The classification should be done on a single-segment level. I.e., the trained models should receive a single segment as input and predict the class of this segment as an output. The class of every segment should be inferred separately, based on the features that are extracted only from this specific segment, regardless of any other segment in the Test set. The prediction should also be stable: given the same segment, the same output is expected.
Generalize to new, unseen, geographic locations
Positioning a radar in a new location changes many things. The terrain, the weather, the objects in the location, reflections—all these factors may vary from one location to another. The ability to classify a tracked object correctly should be resilient to the changes involved in positioning a radar in new locations. The trained models will be challenged to classify humans or animals on radar tracks that were captured in new location sites, unrepresented in the Training set.
The Training and Test sets contain the following:
1,510 tracks in the Training set – use the 'track_id' in order to identify all the segments in a single track in the Training set. Note that the goal is to classify by segment and not by track correlation.
106 segments in the Public Test set and 6,656 segments in the Training set.
In total, there are 566 high-SNR tracks and 1,144 low-SNR tracks in the Training set. *200 tracks are high SNR in one part and low SNR in the other, which is why 566 + 1,144 = 1,710 counts cover only 1,510 unique tracks.
In total, there are 2,465 high SNR segments and 4,191 low SNR segments in the Training set.
Segments are taken from multiple locations. A location is not guaranteed to appear in only a single dataset, but since the goal is to train models that can generalize well to new, unseen locations, rest assured that several locations appear in the train or test datasets only.
It should be mentioned that the data in the Training set and in the Test set does not necessarily come from the same distribution. Participants are encouraged to split the Training set into Training and Validation sets (via cross-validation or other methods) in such a way that the Validation set resembles the Test set.
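One simple way to do this, sketched below, is to hold out whole locations using scikit-learn's GroupKFold, assuming metadata with the 'geolocation_id' and 'target_type' columns described above (the file name is a placeholder):

```python
import pandas as pd
from sklearn.model_selection import GroupKFold

meta = pd.read_csv("train_metadata.csv")  # placeholder file name

# Hold out entire locations so validation resembles the unseen-location test set.
gkf = GroupKFold(n_splits=4)
for fold, (train_idx, val_idx) in enumerate(
        gkf.split(meta, meta["target_type"], groups=meta["geolocation_id"])):
    val_locs = meta.iloc[val_idx]["geolocation_id"].unique()
    print(f"fold {fold}: validating on locations {sorted(val_locs)}")
```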
Radar
How radar works | Uses of radar
Signal
Frequency, Cycle, Wavelength, Amplitude and Phase
Radar signal characteristics (Wikipedia)
SNR – Signal to Noise Ratio
Signal-to-Noise ratio (abbreviated SNR or S/N) is a measure used in science and engineering that compares the level of a desired signal to the level of background noise. SNR is defined as the ratio of signal power to the noise power, often expressed in decibels. A ratio higher than 1:1 (greater than 0 dB) indicates more signal than noise.
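In equation form, this standard definition reads:

$$\mathrm{SNR} = \frac{P_{\text{signal}}}{P_{\text{noise}}}, \qquad \mathrm{SNR}_{\mathrm{dB}} = 10\log_{10}\!\left(\frac{P_{\text{signal}}}{P_{\text{noise}}}\right)$$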
Signal-to-noise ratio (Wikipedia)
Signal Processing
MIT Lecture
Introduction to Radar Systems – Lecture 8 – Signal Processing – Part 1 (Video)
Introduction to Radar Systems – Lecture 8 – Signal Processing – Part 2 (Video)
Introduction to Radar Systems – Lecture 8 – Signal Processing – Part 3 (Video)
Doppler
The Doppler effect (or the Doppler shift) is the change in frequency of a wave in relation to an observer who is moving relative to the wave source.
A Doppler radar is a specialized radar that uses the Doppler effect to produce velocity data about objects at a distance. It does this by bouncing a microwave signal off a desired target and analyzing how the object's motion has altered the frequency of the returned signal. This variation gives direct and highly accurate measurements of the radial component of a target's velocity relative to the radar.
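As a standard reference formula (not specific to the competition's sensors), the two-way Doppler shift measured by a monostatic radar for a target with radial velocity $v_r$ is

$$f_d = \frac{2 v_r}{\lambda} = \frac{2 v_r f_0}{c},$$

where $\lambda$ is the transmitted wavelength, $f_0$ the carrier frequency, and $c$ the speed of light.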
Using and Understanding Doppler Radar
IQ
IQ modulation is an efficient way to transfer information. It captures the amplitude, the frequency and the phase in a single complex number.
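In standard notation, each complex sample $s = I + jQ$ encodes the amplitude and phase of the received wave:

$$A = \sqrt{I^2 + Q^2}, \qquad \varphi = \operatorname{atan2}(Q, I)$$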
Basics of IQ Signals and IQ modulation & demodulation – A tutorial
Understanding I/Q Signals and Quadrature Modulation
Spectrogram
A spectrogram is a visual representation of the spectrum of frequencies of a signal as it varies with time. A spectrogram is usually depicted as a heat map, the intensity shown by varying the color in the spectrogram.
Step by step through a spectrogram (Video)
FFT
A fast Fourier transform (FFT) is an algorithm that computes the discrete Fourier transform (DFT) of a sequence, or its inverse (IDFT). Fourier analysis converts a signal from its original domain (often time or space) to a representation in the frequency domain and vice versa.
But what is the Fourier Transform? A visual introduction.
Understanding FFTs and Windowing
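As a tiny illustration of moving from the time domain to the frequency domain, the following sketch recovers the frequency of a pure tone with NumPy (all values are illustrative):

```python
import numpy as np

fs = 128.0                                  # sampling rate in Hz
t = np.arange(128) / fs                     # 1 second of samples
x = np.sin(2 * np.pi * 5.0 * t)             # a 5 Hz tone

spectrum = np.fft.rfft(x)                   # FFT of a real-valued signal
freqs = np.fft.rfftfreq(len(x), d=1 / fs)   # frequency of each FFT bin
print(freqs[np.argmax(np.abs(spectrum))])   # -> 5.0, the tone's frequency
```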
Windowing
In signal processing, windowing is the application of a window function to a signal. In signal processing and statistics, a window function is a mathematical function that is zero-valued outside of some chosen interval, normally symmetric around the middle of the interval, usually near a maximum in the middle, and usually tapering away from the middle. The main purpose of window functions is tapering.
Understanding FFTs and Windowing
Q: How do I register for the competition?
A: In order to participate in the competition, you must complete the following stages:
1. Apply for the competition using mafatchallenge.mod.gov.il application form.
2. Register to the competition with your CodaLab user under the competition "participate" tab.
Q: What is the Doppler vector?
A: The Doppler effect is the change in frequency of a wave when its source is moving relative to the observer. The Doppler vector is the center of mass location of the detected object on the spectrogram.
Q: What is IQ Data (“iq_sweep_burst”)?
A: The I/Q signal matrix of the segment. Further reading.
Q: What are the labels of the targets?
A: Human - 1, Animal - 0.
Q: In what format should I submit my file?
A: The submission file must be a “.csv” file packed as a “.zip” file.
You can see the template file here.
Here is an example of a submission file format:
segment_id | prediction
-----------|-----------
1          | 0.19
2          | 0.03
3          | 0.45
4          | 0.23
Q: How do I submit a result?
A: Participate > Submit/View Results > fill in the required fields > Submit > select the zip file you want to upload.
Q: Why can't I submit my zip file?
A: Please check the following: the submission must be a “.csv” file packed as a “.zip” file, and the column names must be “segment_id” and “prediction”.
Q: Why is my submission’s status not changing?
A: Submission results may take some time, usually a few minutes, and on rare occasions may even take a few hours.
Please try refreshing the page, or navigate to the Participate tab > Submit/View Results, select the desired submission in the table, and click "Refresh status".
Q: How many results can I submit per day?
A: The limit per day is 2.
Q: How many results can I submit throughout the competition?
A: In the first stage of the competition, you can submit twice per day.
In the second stage, after receiving the additional data in the last week of the competition, you can submit two final submissions.
Q: What is the result that decides my place in the leaderboard?
A: Your submission with the highest score determines your place on the leaderboard.
Q: Where can I find the results for my submissions?
A: At the bottom of “Submit / View Results” under the “Participate” tab, you can view all your submissions.
Q: What does a two stages competition mean?
A: A two-stage competition is composed of two stages:
In stage one, you will get the public test set without labels. When you submit your results on this public test set, your best result will be displayed on the leaderboard.
In stage two, you will receive the labels, geolocation_id, sensor_id, etc., of the public test set, allowing you to train your model on it or use it as validation. In this stage, you will also receive the private test set, which is used for the final evaluation of your models. Your mission is to submit your 2 best results, and we will take your best submission as the final score of your model.
Note: In the second stage, there is no leaderboard. You cannot view your score on the private test set until the competition ends and we publish the final results of the competition.
Q: Can I use external data?
A: You may use data other than the competition data to develop and test your models and submissions. However, any such external data you use for this purpose must be available for use by all other competition participants. Thus, if you use external data, you must make it publicly available and declare it in the competition discussion forum, no later than the date specified in the competition website.
Q: Is teaming allowed?
A: Yes, teaming is allowed. Participants are allowed to form teams. There are no limitations on the number of participants in a team. You may not participate in more than one team. Each team member must be a single individual operating a separate CodaLab account. Team formation requests will not be permitted after the date specified on the competition website. If you are interested in forming a team, kindly review the ‘Competition Teams’ section on CodaLab’s ‘user_teams’ Wiki page.
Team mergers are allowed and can be performed by the team leader. Team merger requests will not be permitted after the date specified on the competition website. In order to merge, the combined team must have a total submission count less than or equal to the maximum allowed as of the merge date. The maximum allowed is the number of submissions per day multiplied by the number of days the competition has been running. The organizers do not provide any assistance regarding team mergers.
Q: What is the maximum number of members on a team?
A: There is no limit to the number of team members.
Q: Does the number of possible submissions per day depend on the number of members in a team?
A: No, each team can submit up to 2 successful submissions per day, regardless of the number of team members. Please note that the CodaLab submission mechanism will not enforce this limit automatically; each team must make sure it adheres to the per-team submission limit (2 entries per day per team during stage 1).
Q: How do you determine the score of my model?
A: The metric used to evaluate the models is the area under the Receiver Operating Characteristic Curve (ROC AUC), as calculated by roc_auc_score in scikit-learn (v 0.23.1). You can read more about this metric here.
Q: How are the winners decided?
A: The winners will be decided based on their score on the private test set, which we will release a week before the competition ends during the second stage of the competition.
Q: How can I contact you?
A: You can e-mail us at: team@mafatchallenge.com or start a thread in the competition forum.
July 15th, 2020, 11:00 AM (GMT+3) – Competition starts!
July 20th, 2020, 6:30 PM (GMT+3) – Competition online meetup broadcasted live on the Israeli Ministry of Defense YouTube channel, watch at https://youtu.be/mFYHInlwOL8?t=109.
September 15th, 2020 – External data posting deadline.
October 1st, 2020 – Team mergers deadline.
October 1st, 2020 – Competition entry deadline.
October 8th, 2020 – Stage 2 (Private) dataset will be published, Stage 1 (Public) dataset labels will be published.
October 15th, 2020, 11:00 AM (GMT+3) – Competition ends.
November 1st, 2020 – Private leaderboard published and winners announced.
Start: July 15, 2020, 8 a.m.
Description: Participants are asked to train their models on the training set and submit their results, predicting the labels of the public test set. Participants are asked to give a probability score for each segment in the provided test set, where humans are classified as 1 and animals as 0. The submission file must be a “.csv” file packed as a “.zip” file. The names of the columns must be “segment_id” and “prediction".
Start: Oct. 8, 2020, 8 a.m.
Description: The private test set includes 248 unlabeled segments. Participants are allowed to re-train their models on the combined training set (including the original training set from stage 1 and the full, labeled, public test set). During stage 2, participants are required to submit up to 2 submissions for final judging, giving the prediction of the segments in the private test set. Teams, please make sure that you submit only 2 submissions as a team. This phase will last until October 15th, 11:00 AM Israel time (GMT + 3).
Oct. 15, 2020, 8 a.m.