Object Detection - Quick Start¶
Object detection is the process of identifying and localizing objects in an image and is an important task in computer vision. Follow this tutorial to learn how to use AutoGluon for object detection.
Tip: If you are new to AutoGluon, review Image Prediction - Quick Start first to learn the basics of the AutoGluon API.
Our goal is to detect motorbikes in images. A tiny dataset is collected from the VOC dataset and contains only the motorbike category. A detection model pretrained on the COCO dataset is then fine-tuned on this small dataset. With the help of AutoGluon, we can automatically try many models with different hyperparameters and return the best one as our final model.
To start, import autogluon.core and ObjectDetector from autogluon.vision:
import autogluon.core as ag
from autogluon.vision import ObjectDetector
Tiny_motorbike Dataset¶
We collect a toy dataset for detecting motorbikes in images. From the VOC dataset, images are randomly selected for training, validation, and testing - 120 images for training, 50 images for validation, and 50 for testing. This tiny dataset follows the same format as VOC.
Using the commands below, we can download this dataset, which is only 23 MB. The unzipped folder is named tiny_motorbike. Conveniently, the task's dataset helper can perform the download and extraction automatically and load the dataset according to the detection format.
url = 'https://autogluon.s3.amazonaws.com/datasets/tiny_motorbike.zip'
dataset_train = ObjectDetector.Dataset.from_voc(url, splits='trainval')
tiny_motorbike/
├── Annotations/
├── ImageSets/
└── JPEGImages/
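The loaded dataset is DataFrame-like, with one row per image and its bounding-box annotations. As a quick sanity check (a minimal sketch assuming this pandas-style interface of ObjectDetector.Dataset), we can peek at a few rows:

# Inspect the loaded annotations: each row pairs an image with its labeled objects.
# Assumes the DataFrame-like interface exposed by ObjectDetector.Dataset.
print(dataset_train.head())
print('number of training images:', len(dataset_train))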
Fit Models by AutoGluon¶
In this section, we demonstrate how to apply AutoGluon to fit our detection models. AutoGluon picks a pretrained detection network and fine-tunes it on our data; in the run below it transfers from ssd_512_resnet50_v1_coco, i.e. an SSD model with a ResNet-50 backbone pretrained on COCO. The best model is the one that obtains the best performance on the validation dataset. You can also try more networks and hyperparameters to create a larger search space.
We fit a detector using AutoGluon as follows. In each experiment (one trial in our search space), we train the model for 5 epochs to keep the tutorial runtime short.
time_limit = 60*30  # at most 0.5 hour
detector = ObjectDetector()
hyperparameters = {'epochs': 5, 'batch_size': 8}
hyperparameter_tune_kwargs = {'num_trials': 2}
detector.fit(dataset_train, time_limit=time_limit, hyperparameters=hyperparameters, hyperparameter_tune_kwargs=hyperparameter_tune_kwargs)
WARNING:gluoncv.auto.tasks.object_detection:The number of requested GPUs is greater than the number of available GPUs.Reduce the number to 1
INFO:gluoncv.auto.tasks.object_detection:Randomly split train_data into train[152]/validation[18] splits.
INFO:gluoncv.auto.tasks.object_detection:Starting fit without HPO
INFO:SSDEstimator:modified configs(<old> != <new>): {
INFO:SSDEstimator:root.gpus (0, 1, 2, 3) != (0,)
INFO:SSDEstimator:root.ssd.data_shape 300 != 512
INFO:SSDEstimator:root.ssd.base_network vgg16_atrous != resnet50_v1
INFO:SSDEstimator:root.dataset_root ~/.mxnet/datasets/ != auto
INFO:SSDEstimator:root.valid.batch_size 16 != 8
INFO:SSDEstimator:root.num_workers 4 != 8
INFO:SSDEstimator:root.train.batch_size 16 != 8
INFO:SSDEstimator:root.train.seed 233 != 326
INFO:SSDEstimator:root.train.epochs 20 != 5
INFO:SSDEstimator:root.dataset voc_tiny != auto
INFO:SSDEstimator:}
INFO:SSDEstimator:Saved config to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/ef54d9cb/.trial_0/config.yaml
INFO:SSDEstimator:Using transfer learning from ssd_512_resnet50_v1_coco, the other network parameters are ignored.
INFO:SSDEstimator:Start training from [Epoch 0]
INFO:SSDEstimator:[Epoch 0] Training cost: 10.151042, CrossEntropy=3.281443, SmoothL1=1.028610
INFO:SSDEstimator:[Epoch 0] Validation:
dog=0.017241379310344827
person=0.747531396193823
bus=0.19941348973607037
bicycle=0.25487012987012986
car=0.602106439893001
cow=0.6363636363636365
motorbike=0.7710154714589239
pottedplant=0.0
chair=0.0
boat=1.0000000000000002
mAP=0.422854194282593
INFO:SSDEstimator:[Epoch 0] Current best map: 0.422854 vs previous 0.000000, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/ef54d9cb/.trial_0/best_checkpoint.pkl
INFO:SSDEstimator:Pickled to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/ef54d9cb/.trial_0/best_checkpoint.pkl
INFO:SSDEstimator:[Epoch 1] Training cost: 9.090432, CrossEntropy=2.535556, SmoothL1=1.139871
INFO:SSDEstimator:[Epoch 1] Validation:
dog=0.0
person=0.7864001231618044
bus=0.6474953617810761
bicycle=0.4117369117369118
car=0.804983479402084
cow=0.37590771210589213
motorbike=0.8416858727679033
pottedplant=0.0
chair=1.0000000000000002
boat=1.0000000000000002
mAP=0.5868209460955672
INFO:SSDEstimator:[Epoch 1] Current best map: 0.586821 vs previous 0.422854, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/ef54d9cb/.trial_0/best_checkpoint.pkl
INFO:SSDEstimator:Pickled to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/ef54d9cb/.trial_0/best_checkpoint.pkl
INFO:SSDEstimator:[Epoch 2] Training cost: 9.053885, CrossEntropy=2.503177, SmoothL1=1.139994
INFO:SSDEstimator:[Epoch 2] Validation:
dog=0.33333333333333326
person=0.7593827969764868
bus=1.0000000000000002
bicycle=0.4954398352456605
car=0.787568115154322
cow=0.6542473919523102
motorbike=0.8693405617377369
pottedplant=0.026704545454545457
chair=1.0000000000000002
boat=1.0000000000000002
mAP=0.6926016579854396
INFO:SSDEstimator:[Epoch 2] Current best map: 0.692602 vs previous 0.586821, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/ef54d9cb/.trial_0/best_checkpoint.pkl
INFO:SSDEstimator:Pickled to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/ef54d9cb/.trial_0/best_checkpoint.pkl
INFO:SSDEstimator:[Epoch 3] Training cost: 9.160626, CrossEntropy=2.320724, SmoothL1=1.049550
INFO:SSDEstimator:[Epoch 3] Validation:
dog=0.029411764705882353
person=0.8075884467739728
bus=0.8545454545454547
bicycle=0.567829457364341
car=0.7564697802005156
cow=0.6818181818181819
motorbike=0.8629850674224324
pottedplant=0.0
chair=1.0000000000000002
boat=1.0000000000000002
mAP=0.656064815283078
INFO:SSDEstimator:[Epoch 4] Training cost: 8.961474, CrossEntropy=2.366092, SmoothL1=1.014402
INFO:SSDEstimator:[Epoch 4] Validation:
dog=0.33333333333333326
person=0.7329560734257754
bus=1.0000000000000002
bicycle=0.5239098300073911
car=0.726373819889452
cow=0.8181818181818181
motorbike=0.8556214914087439
pottedplant=0.0
chair=1.0000000000000002
boat=1.0000000000000002
mAP=0.6990376366246513
INFO:SSDEstimator:[Epoch 4] Current best map: 0.699038 vs previous 0.692602, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/ef54d9cb/.trial_0/best_checkpoint.pkl
INFO:SSDEstimator:Pickled to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/ef54d9cb/.trial_0/best_checkpoint.pkl
INFO:gluoncv.auto.tasks.object_detection:Finished, total runtime is 82.00 s
INFO:gluoncv.auto.tasks.object_detection:{ 'best_config': { 'batch_size': 8,
'dist_ip_addrs': None,
'epochs': 5,
'final_fit': False,
'gpus': [0],
'log_dir': '/var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/ef54d9cb',
'lr': 0.001,
'ngpus_per_trial': 8,
'nthreads_per_trial': 128,
'num_trials': 1,
'num_workers': 8,
'search_strategy': 'random',
'seed': 326,
'time_limits': 1800,
'transfer': 'ssd_512_resnet50_v1_coco',
'wall_clock_tick': 1615351067.0110185},
'total_time': 68.49089241027832,
'train_map': 0.6990376366246513,
'valid_map': 0.6990376366246513}
<autogluon.vision.detector.detector.ObjectDetector at 0x7efc579d92d0>
Note that num_trials=2 above is only used to speed up the tutorial. In normal practice, it is common to only use time_limit and drop num_trials. Also note that hyperparameter tuning defaults to random search. Model-based variants, such as search_strategy='bayesopt' or search_strategy='bayesopt_hyperband', can be a lot more sample-efficient.
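For illustration, a time-budget-only run with a model-based searcher might look like the sketch below. This assumes your AutoGluon version accepts search_strategy via hyperparameter_tune_kwargs (the exact keyword can differ between releases, so check the fit() documentation):

# Sketch only: drop num_trials and let the time budget decide how many trials run;
# switch the searcher from the default random search to Bayesian optimization.
detector_hpo = ObjectDetector()
detector_hpo.fit(dataset_train,
                 time_limit=60*60,
                 hyperparameters={'epochs': 5, 'batch_size': 8},
                 hyperparameter_tune_kwargs={'search_strategy': 'bayesopt'})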
After fitting, AutoGluon automatically returns the best model among all models in the search space; the best_config summary above shows its configuration. To see how well the returned model performs on the test dataset, call detector.evaluate().
dataset_test = ObjectDetector.Dataset.from_voc(url, splits='test')
test_map = detector.evaluate(dataset_test)
print("mAP on test dataset: {}".format(test_map[1][-1]))
tiny_motorbike/
├── Annotations/
├── ImageSets/
└── JPEGImages/
mAP on test dataset: 0.10755951138911266
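evaluate() returns the metric names together with their values, and the indexing above (test_map[1][-1]) picks out the overall mAP, which is the last entry. Assuming that (names, values) structure, a minimal sketch for printing the per-class average precision as well:

# Sketch only: assumes evaluate() returns (metric_names, metric_values),
# with per-class AP followed by the overall 'mAP' as the final entry.
for name, value in zip(test_map[0], test_map[1]):
    print('{}: {:.4f}'.format(name, value))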
Below, we randomly select an image from the test dataset and show the predicted class, box and probability over the original image, stored in the predict_class, predict_rois and predict_score columns, respectively. You can interpret predict_rois as a dict of (xmin, ymin, xmax, ymax) coordinates proportional to the original image size.
image_path = dataset_test.iloc[0]['image']
result = detector.predict(image_path)
print(result)
INFO:numexpr.utils:NumExpr defaulting to 8 threads.
   predict_class  predict_score
0         person       0.994695
1      motorbike       0.939915
2            car       0.824434
3      motorbike       0.167160
4        bicycle       0.111445
..           ...            ...
95        person       0.028716
96        person       0.028395
97        person       0.028312
98           car       0.028189
99        person       0.027940

                                         predict_rois
0   {'xmin': 0.38874635100364685, 'ymin': 0.282121...
1   {'xmin': 0.32917776703834534, 'ymin': 0.421286...
2   {'xmin': 0.0011439260561019182, 'ymin': 0.6437...
3   {'xmin': 0.0025899142492562532, 'ymin': 0.6339...
4   {'xmin': 0.31063953042030334, 'ymin': 0.461097...
..                                                ...
95  {'xmin': 0.7272146940231323, 'ymin': 0.3592517...
96  {'xmin': 0.30756649374961853, 'ymin': 0.166207...
97  {'xmin': 0.9600787162780762, 'ymin': 0.4397867...
98  {'xmin': 0.005186837166547775, 'ymin': 0.49647...
99  {'xmin': 0.7025169730186462, 'ymin': 0.4499764...

[100 rows x 3 columns]
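Because predict_rois is expressed as fractions of the image size, converting a detection to pixel coordinates only needs the image dimensions. A minimal sketch, assuming Pillow is available for reading the image size (any image library would do):

from PIL import Image  # assumption: Pillow is installed

# Scale the top detection's proportional box back to pixel coordinates.
width, height = Image.open(image_path).size
roi = result.iloc[0]['predict_rois']
box_pixels = (roi['xmin'] * width, roi['ymin'] * height,
              roi['xmax'] * width, roi['ymax'] * height)
print('top detection (xmin, ymin, xmax, ymax) in pixels:', box_pixels)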
Prediction with multiple images is permitted:
bulk_result = detector.predict(dataset_test)
print(bulk_result)
     predict_class  predict_score
0           person       0.994695
1        motorbike       0.939915
2              car       0.824434
3        motorbike       0.167160
4          bicycle       0.111445
...            ...            ...
3802        person       0.024518
3803        person       0.024514
3804         chair       0.024256
3805        person       0.024207
3806        person       0.023793

                                           predict_rois
0     {'xmin': 0.38874635100364685, 'ymin': 0.282121...
1     {'xmin': 0.32917776703834534, 'ymin': 0.421286...
2     {'xmin': 0.0011439260561019182, 'ymin': 0.6437...
3     {'xmin': 0.0025899142492562532, 'ymin': 0.6339...
4     {'xmin': 0.31063953042030334, 'ymin': 0.461097...
...                                                 ...
3802  {'xmin': 0.362677663564682, 'ymin': 0.05919848...
3803  {'xmin': 0.3049873411655426, 'ymin': 0.5233206...
3804  {'xmin': 0.4679498076438904, 'ymin': 0.2670428...
3805  {'xmin': 0.2853032350540161, 'ymin': 0.0, 'xma...
3806  {'xmin': 0.537401020526886, 'ymin': 0.54520970...

                                                  image
0     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
1     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
2     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
3     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
4     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
...                                                 ...
3802  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
3803  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
3804  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
3805  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
3806  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...

[3807 rows x 4 columns]
We can also save the trained model and use it later.
savefile = 'detector.ag'
detector.save(savefile)
new_detector = ObjectDetector.load(savefile)
/var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/venv/lib/python3.7/site-packages/mxnet/gluon/block.py:1512: UserWarning: Cannot decide type for the following arguments. Consider providing them as input:
data: None
input_sym_arg_type = in_param.infer_type()[0]
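The loaded detector can be used exactly like the original one, for example to predict on the same test image as above:

# The reloaded detector produces the same kind of prediction DataFrame.
result_from_loaded = new_detector.predict(image_path)
print(result_from_loaded.head())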