
DataLab Cup 2:

Object Detection

Datalab

1
Outline
• Competition Information
• Evaluation metric
– Mean Average Precision (mAP)
• Hints
• Precautions
• Competition Timeline

2
Competition Information
• Object Detection
– In this competition, we will train an object
detection model that localizes and classifies the objects in an image.

3
Competition Information

4
Competition Information

5
Competition Information

6
Evaluation Metric
Mean Average Precision (mAP)

• Intersection over Union (IoU)


– A metric that evaluates how well a predicted
bounding box overlaps with the ground-truth box (a small computation sketch follows this slide).

7
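A minimal IoU sketch, assuming axis-aligned boxes given as (x_min, y_min, x_max, y_max); the slides do not fix a box format, so treat that as an illustrative assumption.

def iou(box_a, box_b):
    # Boxes are (x_min, y_min, x_max, y_max); the format is an assumption.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)  # zero if the boxes do not overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175, about 0.143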
Evaluation Metric
Mean Average Precision (mAP)

8
Evaluation Metric
Mean Average Precision (mAP)

9
Evaluation Metric
Mean Average Precision (mAP)

• Precision x Recall curve


– An object detector of a particular class is
considered good if its precision stays high as recall
increases.
– In other words, as you lower the confidence threshold,
precision should remain high while recall grows (see the sketch after this slide).

10
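An illustrative sketch (the detections below are made up) of how the precision-recall pairs behind the curve can be accumulated: sort the detections of one class by confidence and sweep the threshold from high to low.

import numpy as np

# Hypothetical detections of one class: (confidence, matched to a ground-truth box?)
detections = [(0.95, True), (0.90, True), (0.80, False),
              (0.75, True), (0.60, False), (0.50, True)]
num_ground_truths = 5  # total ground-truth boxes of this class

detections.sort(key=lambda d: d[0], reverse=True)
tp = np.cumsum([d[1] for d in detections])       # cumulative true positives
fp = np.cumsum([not d[1] for d in detections])   # cumulative false positives

precision = tp / (tp + fp)        # stays high for a good detector
recall = tp / num_ground_truths   # grows as the threshold decreases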
Evaluation Metric
Mean Average Precision (mAP)

• Precision x Recall curve

(Plot annotations: as the confidence threshold decreases, both the true positives and the false positives increase.)

11
Evaluation Metric
Mean Average Precision (mAP)

• Average Precision (AP)


– Smooth the Precision-recall curve and calculate
the area under the curve (AUC); a sketch follows this slide.

12
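A sketch of one common way to "smooth" the curve and take the area under it: the all-point interpolation used in the repository linked on slide 16. Whether the competition scorer uses exactly this variant is an assumption.

import numpy as np

def average_precision(precision, recall):
    # Pad so the curve starts at recall 0 and ends at recall 1.
    p = np.concatenate(([0.0], precision, [0.0]))
    r = np.concatenate(([0.0], recall, [1.0]))
    # Smooth: every point takes the maximum precision to its right.
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Area under the resulting step-wise curve.
    idx = np.where(r[1:] != r[:-1])[0]
    return np.sum((r[idx + 1] - r[idx]) * p[idx + 1])

With the precision and recall arrays from the previous sketch, average_precision(precision, recall) gives the AP of that class.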
Evaluation Metric
Mean Average Precision (mAP)

• Average Precision (AP)

13
Evaluation Metric
Mean Average Precision (mAP)

• Mean Average Precision (mAP)


– Calculate the Average Precision for every class and
average them.

14
Evaluation Metric
Mean Average Precision (mAP)

• Mean Average Precision (mAP)


– In this competition, the testing data is divided into 10
groups, and the AP of every class is calculated within each group.
– The resulting per-class, per-group AP values are compared
with those of the ground truth, and the mean squared error
between them is the final score (an illustrative sketch follows this slide).
(Diagram: the row for group 0 and the column for class 2 intersect at the AP of class 2 in group 0; the MSE is computed over all such entries.)
15
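An illustrative sketch of the scoring described above, assuming the per-group, per-class AP values are arranged in a 10 x num_classes table; the array names and the class count of 20 are placeholders, not the organizers' code.

import numpy as np

num_classes = 20  # placeholder class count
ap_submission = np.zeros((10, num_classes))    # AP of each class in each group (your model)
ap_ground_truth = np.zeros((10, num_classes))  # reference AP values

map_per_group = ap_submission.mean(axis=1)              # mAP of each group
score = np.mean((ap_submission - ap_ground_truth) ** 2) # final score: MSE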
Evaluation Metric
Mean Average Precision (mAP)

• Mean Average Precision (mAP)


– For a more detailed explanation of mAP, please see
https://github.com/rafaelpadilla/Object-Detection-Metrics

16
Hints
1. Transfer learning
2. Data augmentation
3. Training strategy
4. Other object detection models

17
Hints
1. Transfer learning
– Training from scratch is nearly impossible for
object detection
– How to load a pre-trained model is already
described in the style-transfer lab
– You can see all the pre-trained models provided
by Keras here:
https://www.tensorflow.org/api_docs/python/tf/keras/applications

18
Hints
1. Transfer learning
– Feel free to replace the feature extractor with
another pre-trained model (a sketch follows this slide)
– Be careful: different models require different
data preprocessing
(Diagram: a pre-trained feature extractor followed by the YOLO-specific detection head.)

19
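A hedged sketch of swapping in a Keras pre-trained backbone as the feature extractor; ResNet50, the 448 x 448 input size, and the 30-channel placeholder head are illustrative assumptions, not the reference design.

import tensorflow as tf

# ImageNet-pre-trained feature extractor (allowed); any tf.keras.applications model works.
backbone = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=(448, 448, 3))
backbone.trainable = False  # optionally freeze it at the start of training

# Placeholder YOLO-specific head: 30 channels, e.g. 2 boxes * 5 values + 20 classes.
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Conv2D(30, 1),
])

# Each application expects its own preprocessing.
images = tf.random.uniform((2, 448, 448, 3), maxval=255.0)
inputs = tf.keras.applications.resnet50.preprocess_input(images)
predictions = model(inputs)  # shape (2, 14, 14, 30)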
Hints
2. Data augmentation
– The dataset we are using in this competition is
the combination of the training and validation sets
from VOC 2007
– It contains only 5012 images in total, and the
labels are highly imbalanced
– Data augmentation not only helps your model
generalize to the testing data but also eases
the training process

20
Hints
2. Data augmentation
– Note that the bounding box coordinates have to
be changed accordingly whenever the image is
transformed (a flip example follows this slide)

21
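A minimal sketch of an augmentation that keeps the boxes consistent with the transformed image, assuming boxes are normalized (x_min, y_min, x_max, y_max); other transforms (crop, scale, translate) need analogous coordinate updates.

import tensorflow as tf

def random_horizontal_flip(image, boxes):
    # boxes: (N, 4) tensor of normalized (x_min, y_min, x_max, y_max).
    if tf.random.uniform(()) < 0.5:
        image = tf.image.flip_left_right(image)
        x_min, y_min, x_max, y_max = tf.unstack(boxes, axis=-1)
        # Mirroring reflects the x coordinates; y stays the same.
        boxes = tf.stack([1.0 - x_max, y_min, 1.0 - x_min, y_max], axis=-1)
    return image, boxes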
Hints
3. Training strategy
– Check for bugs before training for a long time
– Be patient

22
Hints
4. Other object detection models
– Feel free to try other object detection models
– It is OK to read others' code on GitHub, but you
have to implement the model yourself
– You are not allowed to load others' pre-trained
weights that were already trained on an object
detection task

23
Precautions
1. The final score is based only on your ranking on the private
leaderboard (80%) and your report (20%)
2. Training on datasets not provided by us is forbidden
3. Loading a model pre-trained on ImageNet is allowed;
loading a model already trained on an object detection task
is not allowed
4. Plagiarism gets you 0 points
5. Using the ground truth to generate your output gets you 0 points
6. Cloning code from GitHub gets you 0 points

24
Competition Timeline
• Kaggle

• Timeline
– 2023/11/09 (Thu) competition announced
– 2023/11/23 (Thu) 23:59 competition due
– 2023/11/26 (Sun) 23:59 report due
– 2023/11/30 (Thu) top 3 team sharing

25
