Object Detection - Quick Start¶
Object detection is the process of identifying and localizing objects in an image and is an important task in computer vision. Follow this tutorial to learn how to use AutoGluon for object detection.
Tip: If you are new to AutoGluon, review Image Prediction - Quick Start first to learn the basics of the AutoGluon API.
Our goal is to detect motorbikes in images with a YOLOv3 model. A tiny dataset collected from the VOC dataset, containing only the motorbike category, is used for this task. A model pretrained on the COCO dataset is fine-tuned on this small dataset. With the help of AutoGluon, we can automatically try many models with different hyperparameters and return the best one as our final model.
To start, import autogluon.core and ObjectDetector from autogluon.vision:
import autogluon.core as ag
from autogluon.vision import ObjectDetector
Tiny_motorbike Dataset¶
We collect a toy dataset for detecting motorbikes in images. From the VOC dataset, images are randomly selected for training, validation, and testing: 120 images for training, 50 for validation, and 50 for testing. This tiny dataset follows the same format as VOC.
Using the commands below, we can download this dataset, which is only 23 MB. The unzipped folder is named tiny_motorbike. The task's dataset helper performs the download and extraction automatically and loads the dataset according to the detection format.
url = 'https://autogluon.s3.amazonaws.com/datasets/tiny_motorbike.zip'
dataset_train = ObjectDetector.Dataset.from_voc(url, splits='trainval')
tiny_motorbike/
├── Annotations/
├── ImageSets/
└── JPEGImages/
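The loaded training split behaves like a pandas DataFrame, so we can take a quick look at it before fitting. A minimal sketch (the column names, typically image and rois, are an assumption about the loader's format):

# Quick sanity check of the loaded dataset (column names are assumptions about the loader's format)
print(len(dataset_train))    # number of training images
print(dataset_train.head())  # image paths and their bounding-box annotations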
Fit Models by AutoGluon¶
In this section, we demonstrate how to apply AutoGluon to fit our detection models. We use mobilenet as the backbone for the YOLOv3 model. Two different learning rates are used to fine-tune the network. The best model is the one that achieves the best performance on the validation dataset. You can also try more networks and hyperparameters to create a larger search space.
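For instance, a search space over two candidate learning rates could be declared with autogluon.core search spaces. This is an illustrative sketch; the values are not the ones used in the run below:

# Illustrative search space: two candidate learning rates (values are examples, not from this run)
search_space = {
    'lr': ag.Categorical(1e-3, 5e-4),  # two learning rates to try
    'epochs': 5,
    'batch_size': 8,
}
# This dict would be passed to fit() via the hyperparameters argument, e.g.
# detector.fit(dataset_train, hyperparameters=search_space, hyperparameter_tune_kwargs={'num_trials': 2})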
We fit a detector using AutoGluon as follows. In each experiment (one trial in our search space), we train the model for 5 epochs to keep the tutorial runtime manageable.
time_limit = 60*30 # at most 0.5 hour
detector = ObjectDetector()
hyperparameters = {'epochs': 5, 'batch_size': 8}
hyperparameter_tune_kwargs = {'num_trials': 2}
detector.fit(dataset_train, time_limit=time_limit, hyperparameters=hyperparameters, hyperparameter_tune_kwargs=hyperparameter_tune_kwargs)
WARNING:gluoncv.auto.tasks.object_detection:The number of requested GPUs is greater than the number of available GPUs.Reduce the number to 1
INFO:gluoncv.auto.tasks.object_detection:Randomly split train_data into train[155]/validation[15] splits.
INFO:gluoncv.auto.tasks.object_detection:Starting fit without HPO
INFO:SSDEstimator:modified configs(<old> != <new>): {
INFO:SSDEstimator:root.gpus (0, 1, 2, 3) != (0,)
INFO:SSDEstimator:root.dataset voc_tiny != auto
INFO:SSDEstimator:root.valid.batch_size 16 != 8
INFO:SSDEstimator:root.num_workers 4 != 8
INFO:SSDEstimator:root.train.epochs 20 != 5
INFO:SSDEstimator:root.train.seed 233 != 649
INFO:SSDEstimator:root.train.batch_size 16 != 8
INFO:SSDEstimator:root.dataset_root ~/.mxnet/datasets/ != auto
INFO:SSDEstimator:root.ssd.base_network vgg16_atrous != resnet50_v1
INFO:SSDEstimator:root.ssd.data_shape 300 != 512
INFO:SSDEstimator:}
INFO:SSDEstimator:Saved config to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/d15a8966/.trial_0/config.yaml
INFO:SSDEstimator:Using transfer learning from ssd_512_resnet50_v1_coco, the other network parameters are ignored.
INFO:SSDEstimator:Start training from [Epoch 0]
INFO:SSDEstimator:[Epoch 0] Training cost: 10.234645, CrossEntropy=3.587573, SmoothL1=0.992548
INFO:SSDEstimator:[Epoch 0] Validation:
person=0.7191111598816625
chair=0.0
cow=0.4242424242424242
car=0.6399427763755694
bus=0.4703557312252963
pottedplant=0.03248906980250263
boat=1.0000000000000002
dog=0.25000000000000006
motorbike=0.6941025642763334
bicycle=0.09696969696969697
mAP=0.4327213422773485
INFO:SSDEstimator:[Epoch 0] Current best map: 0.432721 vs previous 0.000000, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/d15a8966/.trial_0/best_checkpoint.pkl
INFO:SSDEstimator:Pickled to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/d15a8966/.trial_0/best_checkpoint.pkl
INFO:SSDEstimator:[Epoch 1] Training cost: 9.216348, CrossEntropy=2.771490, SmoothL1=1.246909
INFO:SSDEstimator:[Epoch 1] Validation:
person=0.7141394277265268
chair=0.0
cow=0.6363636363636365
car=0.7333225108225108
bus=0.6363636363636365
pottedplant=0.010570824524312896
boat=1.0000000000000002
dog=1.0000000000000002
motorbike=0.852766748600083
bicycle=0.050156739811912224
mAP=0.5633683524212619
INFO:SSDEstimator:[Epoch 1] Current best map: 0.563368 vs previous 0.432721, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/d15a8966/.trial_0/best_checkpoint.pkl
INFO:SSDEstimator:Pickled to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/d15a8966/.trial_0/best_checkpoint.pkl
INFO:SSDEstimator:[Epoch 2] Training cost: 8.993011, CrossEntropy=2.486901, SmoothL1=1.214838
INFO:SSDEstimator:[Epoch 2] Validation:
person=0.8220565747255918
chair=0.0
cow=1.0000000000000002
car=0.7825914752792223
bus=0.909090909090909
pottedplant=0.023047375160051214
boat=1.0000000000000002
dog=1.0000000000000002
motorbike=0.8662740586298205
bicycle=0.1400243814785323
mAP=0.6543084774364128
INFO:SSDEstimator:[Epoch 2] Current best map: 0.654308 vs previous 0.563368, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/d15a8966/.trial_0/best_checkpoint.pkl
INFO:SSDEstimator:Pickled to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/d15a8966/.trial_0/best_checkpoint.pkl
INFO:SSDEstimator:[Epoch 3] Training cost: 9.618990, CrossEntropy=2.290904, SmoothL1=1.053405
INFO:SSDEstimator:[Epoch 3] Validation:
person=0.8351563063225981
chair=0.0
cow=1.0000000000000002
car=0.8100527161011032
bus=1.0000000000000002
pottedplant=0.009372071227741332
boat=1.0000000000000002
dog=1.0000000000000002
motorbike=0.8790635653047766
bicycle=0.3182826449614109
mAP=0.6851927303917631
INFO:SSDEstimator:[Epoch 3] Current best map: 0.685193 vs previous 0.654308, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/d15a8966/.trial_0/best_checkpoint.pkl
INFO:SSDEstimator:Pickled to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/d15a8966/.trial_0/best_checkpoint.pkl
INFO:SSDEstimator:[Epoch 4] Training cost: 8.878208, CrossEntropy=2.189218, SmoothL1=0.926934
INFO:SSDEstimator:[Epoch 4] Validation:
person=0.8220954104848455
chair=0.33333333333333326
cow=1.0000000000000002
car=0.7320522902052239
bus=1.0000000000000002
pottedplant=0.1188811188811189
boat=1.0000000000000002
dog=1.0000000000000002
motorbike=0.8897252553882458
bicycle=0.6953299226026499
mAP=0.7591417330895418
INFO:SSDEstimator:[Epoch 4] Current best map: 0.759142 vs previous 0.685193, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/d15a8966/.trial_0/best_checkpoint.pkl
INFO:SSDEstimator:Pickled to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/d15a8966/.trial_0/best_checkpoint.pkl
INFO:gluoncv.auto.tasks.object_detection:Finished, total runtime is 84.43 s
INFO:gluoncv.auto.tasks.object_detection:{ 'best_config': { 'batch_size': 8,
'dist_ip_addrs': None,
'epochs': 5,
'final_fit': False,
'gpus': [0],
'log_dir': '/var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/d15a8966',
'lr': 0.001,
'ngpus_per_trial': 8,
'nthreads_per_trial': 128,
'num_trials': 1,
'num_workers': 8,
'search_strategy': 'random',
'seed': 649,
'time_limits': 1800,
'transfer': 'ssd_512_resnet50_v1_coco',
'wall_clock_tick': 1614630986.9654143},
'total_time': 70.4141263961792,
'train_map': 0.7591417330895418,
'valid_map': 0.7591417330895418}
<autogluon.vision.detector.detector.ObjectDetector at 0x7f326af5da90>
Note that num_trials=2 above is only used to speed up the tutorial. In normal practice, it is common to only use time_limit and drop num_trials. Also note that hyperparameter tuning defaults to random search. Model-based variants, such as search_strategy='bayesopt' or search_strategy='bayesopt_hyperband', can be a lot more sample-efficient.
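As a rough sketch of what a longer tuning run might look like (settings are illustrative; the exact place where the search strategy is specified can differ between AutoGluon versions, so consult the ObjectDetector.fit documentation of your release):

# Illustrative settings for a longer tuning run (not the configuration used in this tutorial)
long_time_limit = 60 * 60 * 2            # e.g. 2 hours instead of 30 minutes
tune_kwargs = {
    'num_trials': 8,                      # more trials than the 2 used above
    'search_strategy': 'bayesopt',        # model-based search, as mentioned in the note
}
# detector.fit(dataset_train, time_limit=long_time_limit,
#              hyperparameters={'epochs': 20, 'lr': ag.Categorical(1e-3, 5e-4)},
#              hyperparameter_tune_kwargs=tune_kwargs)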
After fitting, AutoGluon automatically returns the best model among all models in the search space, i.e., the configuration that achieved the highest validation mAP. To see how well the returned model performs on the test dataset, call detector.evaluate().
dataset_test = ObjectDetector.Dataset.from_voc(url, splits='test')
test_map = detector.evaluate(dataset_test)
print("mAP on test dataset: {}".format(test_map[1][-1]))
tiny_motorbike/
├── Annotations/
├── ImageSets/
└── JPEGImages/
mAP on test dataset: 0.20662772445358008
Below, we randomly select an image from the test dataset and show the predicted class, box, and probability over the original image, stored in the predict_class, predict_rois, and predict_score columns, respectively. You can interpret predict_rois as a dict of (xmin, ymin, xmax, ymax) coordinates expressed as fractions of the original image size.
image_path = dataset_test.iloc[0]['image']
result = detector.predict(image_path)
print(result)
INFO:numexpr.utils:NumExpr defaulting to 8 threads.
   predict_class  predict_score
0         person       0.995367
1      motorbike       0.983526
2            car       0.667862
3      motorbike       0.153885
4         person       0.065406
..           ...            ...
95        person       0.022533
96        person       0.022508
97        person       0.022442
98           car       0.022428
99           car       0.022378

                                         predict_rois
0   {'xmin': 0.3976413309574127, 'ymin': 0.2700316...
1   {'xmin': 0.3170107901096344, 'ymin': 0.4040747...
2   {'xmin': 0.00544890109449625, 'ymin': 0.649622...
3   {'xmin': 0.003495187032967806, 'ymin': 0.64832...
4   {'xmin': 0.38507580757141113, 'ymin': 0.353036...
..                                                ...
95  {'xmin': 0.8610648512840271, 'ymin': 0.3698972...
96  {'xmin': 0.6233610510826111, 'ymin': 0.0663688...
97  {'xmin': 0.5761957168579102, 'ymin': 0.7911864...
98  {'xmin': 0.06833697110414505, 'ymin': 0.767751...
99  {'xmin': 0.0017972586210817099, 'ymin': 0.3919...

[100 rows x 3 columns]
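Since the returned coordinates are fractions of the image size, converting the top prediction to pixel coordinates only requires the image dimensions. A minimal sketch, assuming Pillow is installed:

from PIL import Image

# Convert the highest-scoring box from fractional to pixel coordinates
width, height = Image.open(image_path).size
top = result.iloc[0]
roi = top['predict_rois']
box_pixels = (roi['xmin'] * width, roi['ymin'] * height,
              roi['xmax'] * width, roi['ymax'] * height)
print(top['predict_class'], top['predict_score'], box_pixels)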
Prediction on multiple images is also supported:
bulk_result = detector.predict(dataset_test)
print(bulk_result)
     predict_class  predict_score
0           person       0.995367
1        motorbike       0.983526
2              car       0.667862
3        motorbike       0.153885
4           person       0.065406
...            ...            ...
4730        person       0.030338
4731        person       0.030266
4732        person       0.030258
4733        person       0.030160
4734        person       0.029918

                                           predict_rois
0     {'xmin': 0.3976413309574127, 'ymin': 0.2700316...
1     {'xmin': 0.3170107901096344, 'ymin': 0.4040747...
2     {'xmin': 0.00544890109449625, 'ymin': 0.649622...
3     {'xmin': 0.003495187032967806, 'ymin': 0.64832...
4     {'xmin': 0.38507580757141113, 'ymin': 0.353036...
...                                                 ...
4730  {'xmin': 0.11550429463386536, 'ymin': 0.369952...
4731  {'xmin': 0.4281652271747589, 'ymin': 0.5111058...
4732  {'xmin': 0.3011176884174347, 'ymin': 0.1622413...
4733  {'xmin': 0.8681222200393677, 'ymin': 0.7616365...
4734  {'xmin': 0.44481369853019714, 'ymin': 0.769294...

                                                  image
0     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
1     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
2     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
3     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
4     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
...                                                 ...
4730  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
4731  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
4732  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
4733  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
4734  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...

[4735 rows x 4 columns]
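Because the bulk result is a regular DataFrame with an additional image column, standard pandas operations apply. For example, keeping only confident motorbike detections (the score threshold here is arbitrary):

# Keep only motorbike detections above an arbitrary confidence threshold
confident_motorbikes = bulk_result[
    (bulk_result['predict_class'] == 'motorbike') & (bulk_result['predict_score'] > 0.5)
]
print(confident_motorbikes.groupby('image').size())  # number of confident detections per image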
We can also save the trained model and load it again later:
savefile = 'detector.ag'
detector.save(savefile)
new_detector = ObjectDetector.load(savefile)
/var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/venv/lib/python3.7/site-packages/mxnet/gluon/block.py:1512: UserWarning: Cannot decide type for the following arguments. Consider providing them as input:
data: None
input_sym_arg_type = in_param.infer_type()[0]
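The reloaded detector behaves the same as the original one, so it can be used for prediction right away (illustrative):

# Predictions from the reloaded detector match those of the original detector
result_from_loaded = new_detector.predict(image_path)
print(result_from_loaded.head())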