.. _sec_object_detection_quick:

Object Detection - Quick Start
==============================

Object detection is the task of identifying and localizing objects in an image, and it is an important problem in computer vision. Follow this tutorial to learn how to use AutoGluon for object detection.

**Tip**: If you are new to AutoGluon, review :ref:`sec_imgquick` first to learn the basics of the AutoGluon API.

Our goal is to detect motorbikes in images using a YOLOv3 model. A tiny dataset is collected from the VOC dataset and contains only the motorbike category. A model pretrained on the COCO dataset is fine-tuned on this small dataset. With the help of AutoGluon, we are able to try many models with different hyperparameters automatically, and return the best one as our final model.

To start, import ObjectDetector:

.. code:: python

    from autogluon.vision import ObjectDetector

.. parsed-literal::
    :class: output

    /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/venv/lib/python3.7/site-packages/gluoncv/__init__.py:40: UserWarning: Both `mxnet==1.7.0` and `torch==1.9.0+cu102` are installed. You might encounter increased GPU memory footprint if both framework are used at the same time.
      warnings.warn(f'Both `mxnet=={mx.__version__}` and `torch=={torch.__version__}` are installed. '

Tiny\_motorbike Dataset
-----------------------

We collect a toy dataset for detecting motorbikes in images. From the VOC dataset, images are randomly selected for training, validation, and testing: 120 images for training, 50 for validation, and 50 for testing. This tiny dataset follows the same format as VOC.

Using the commands below, we can download this dataset, which is only 23 MB. The unzipped folder is named ``tiny_motorbike``. The task dataset helper performs the download and extraction automatically, and loads the dataset in the detection format.

.. code:: python

    url = 'https://autogluon.s3.amazonaws.com/datasets/tiny_motorbike.zip'
    dataset_train = ObjectDetector.Dataset.from_voc(url, splits='trainval')

.. parsed-literal::
    :class: output

    tiny_motorbike/
    ├── Annotations/
    ├── ImageSets/
    └── JPEGImages/

Fit Models by AutoGluon
-----------------------

In this section, we demonstrate how to apply AutoGluon to fit our detection models. With ``num_trials=2`` below, AutoGluon tries two configurations from its preset search space; in the run shown here these are an SSD with a ResNet-50 backbone and a YOLOv3 model with a DarkNet-53 backbone, both transferred from COCO-pretrained weights (see the training log). The best model is the one that achieves the best performance on the validation dataset. You can also try more networks and hyperparameters to create a larger search space, as in the sketch below.
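For instance, here is a minimal sketch of such a custom search space. It assumes the ``lr`` and ``transfer`` hyperparameter keys and the ``autogluon.core`` search spaces, which are not used elsewhere in this tutorial:

.. code:: python

    import autogluon.core as ag

    # Hypothetical illustration: search over two learning rates and two
    # COCO-pretrained transfer networks instead of relying on the presets.
    # ag.Categorical defines a discrete search space for the HPO searcher.
    custom_hyperparameters = {
        'lr': ag.Categorical(1e-3, 5e-4),
        'transfer': ag.Categorical('ssd_512_resnet50_v1_coco',
                                   'yolo3_darknet53_coco'),
        'epochs': 5,
        'batch_size': 8,
    }
    # This dict would then be passed to fit() via
    # hyperparameters=custom_hyperparameters.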
We ``fit`` a detector using AutoGluon as follows. In each experiment (one trial in our search space), we train the model for 5 epochs to avoid exceeding the tutorial runtime.

.. code:: python

    time_limit = 60*30  # at most 0.5 hour
    detector = ObjectDetector()
    hyperparameters = {'epochs': 5, 'batch_size': 8}
    hyperparameter_tune_kwargs = {'num_trials': 2}
    detector.fit(dataset_train, time_limit=time_limit,
                 hyperparameters=hyperparameters,
                 hyperparameter_tune_kwargs=hyperparameter_tune_kwargs)

.. parsed-literal::
    :class: output

    The number of requested GPUs is greater than the number of available GPUs. Reduce the number to 1
    Randomly split train_data into train[150]/validation[20] splits.
    Starting HPO experiments

.. parsed-literal::
    :class: output

      0%|          | 0/2 [00:00<?, ?it/s]

    modified configs(<old> != <new>): {
    root.num_workers                4 != 8
    root.dataset_root               ~/.mxnet/datasets/ != auto
    root.dataset                    voc_tiny != auto
    root.train.batch_size           16 != 8
    root.train.early_stop_baseline  0.0 != -inf
    root.train.early_stop_max_value 1.0 != inf
    root.train.early_stop_patience  -1 != 10
    root.train.seed                 233 != 715
    root.train.epochs               20 != 5
    root.gpus                       (0, 1, 2, 3) != (0,)
    root.valid.batch_size           16 != 8
    root.ssd.base_network           vgg16_atrous != resnet50_v1
    root.ssd.data_shape             300 != 512
    }
    Saved config to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/8b829587/.trial_0/config.yaml
    Using transfer learning from ssd_512_resnet50_v1_coco, the other network parameters are ignored.
    Start training from [Epoch 0]
    [Epoch 0] Training cost: 8.862641, CrossEntropy=3.547565, SmoothL1=0.999755
    [Epoch 0] Validation: person=0.6550085253092772 cow=nan bus=1.0000000000000002 bicycle=0.5000000000000001 dog=0.0 chair=nan boat=nan motorbike=0.7510034809067803 car=1.0000000000000002 pottedplant=0.0 mAP=0.5580017151737225
    [Epoch 0] Current best map: 0.558002 vs previous 0.000000, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/8b829587/.trial_0/best_checkpoint.pkl
    [Epoch 1] Training cost: 8.162877, CrossEntropy=2.745820, SmoothL1=1.159730
    [Epoch 1] Validation: person=0.7453687408338571 cow=nan bus=1.0000000000000002 bicycle=0.03636363636363636 dog=1.0000000000000002 chair=nan boat=nan motorbike=0.7745454545454544 car=1.0000000000000002 pottedplant=0.0 mAP=0.6508968331061354
    [Epoch 1] Current best map: 0.650897 vs previous 0.558002, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/8b829587/.trial_0/best_checkpoint.pkl
    [Epoch 2] Training cost: 8.205384, CrossEntropy=2.389947, SmoothL1=1.118457
    [Epoch 2] Validation: person=0.8148368996407662 cow=nan bus=1.0000000000000002 bicycle=0.03896103896103896 dog=0.33333333333333326 chair=nan boat=nan motorbike=0.833822091886608 car=1.0000000000000002 pottedplant=0.0 mAP=0.5744219091173923
    [Epoch 3] Training cost: 8.239273, CrossEntropy=2.275549, SmoothL1=0.931290
    [Epoch 3] Validation: person=0.8376956617222866 cow=nan bus=1.0000000000000002 bicycle=0.0 dog=1.0000000000000002 chair=nan boat=nan motorbike=0.7817730838067444 car=1.0000000000000002 pottedplant=0.0 mAP=0.6599241065041473
    [Epoch 3] Current best map: 0.659924 vs previous 0.650897, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/8b829587/.trial_0/best_checkpoint.pkl
    [Epoch 4] Training cost: 7.828332, CrossEntropy=2.373933, SmoothL1=1.050903
    [Epoch 4] Validation: person=0.7006795973767732 cow=nan bus=1.0000000000000002 bicycle=0.028708133971291867 dog=0.0 chair=nan boat=nan motorbike=0.7874027825102737 car=0.25000000000000006 pottedplant=0.0 mAP=0.3952557876940484
    Applying the state from the best checkpoint...
    modified configs(<old> != <new>): {
    root.num_workers                4 != 8
    root.dataset_root               ~/.mxnet/datasets/ != auto
    root.dataset                    voc_tiny != auto
    root.train.epochs               20 != 5
    root.train.batch_size           16 != 8
    root.train.early_stop_baseline  0.0 != -inf
    root.train.early_stop_max_value 1.0 != inf
    root.train.early_stop_patience  -1 != 10
    root.train.seed                 233 != 715
    root.gpus                       (0, 1, 2, 3) != (0,)
    root.valid.batch_size           16 != 8
    }
    Saved config to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/8b829587/.trial_1/config.yaml
    Using transfer learning from yolo3_darknet53_coco, the other network parameters are ignored.
    Start training from [Epoch 0]
    [Epoch 0] Training cost: 13.318, ObjLoss=8.721, BoxCenterLoss=7.626, BoxScaleLoss=2.039, ClassLoss=4.037
    [Epoch 0] Validation: person=0.544292643673739 cow=nan bus=1.0000000000000002 bicycle=1.0000000000000002 dog=0.0 chair=nan boat=nan motorbike=0.7931228500146169 car=0.5000000000000001 pottedplant=0.0 mAP=0.548202213384051
    [Epoch 0] Current best map: 0.548202 vs previous 0.000000, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/8b829587/.trial_1/best_checkpoint.pkl
    [Epoch 1] Training cost: 8.208, ObjLoss=8.763, BoxCenterLoss=7.635, BoxScaleLoss=2.637, ClassLoss=3.511
    [Epoch 1] Validation: person=0.5122707801569751 cow=nan bus=1.0000000000000002 bicycle=0.6666666666666665 dog=0.5000000000000001 chair=nan boat=nan motorbike=0.817133520560294 car=0.5000000000000001 pottedplant=0.0 mAP=0.570867281054848
    [Epoch 1] Current best map: 0.570867 vs previous 0.548202, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/8b829587/.trial_1/best_checkpoint.pkl
    [Epoch 2] Training cost: 11.691, ObjLoss=9.229, BoxCenterLoss=7.855, BoxScaleLoss=2.943, ClassLoss=3.415
    [Epoch 2] Validation: person=0.4699096225412015 cow=nan bus=1.0000000000000002 bicycle=0.4000000000000001 dog=0.0 chair=nan boat=nan motorbike=0.6385327664559888 car=1.0000000000000002 pottedplant=0.0 mAP=0.5012060555710273
    [Epoch 3] Training cost: 14.954, ObjLoss=9.605, BoxCenterLoss=7.870, BoxScaleLoss=3.045, ClassLoss=3.250
    [Epoch 3] Validation: person=0.671712976945535 cow=nan bus=1.0000000000000002 bicycle=1.0000000000000002 dog=0.0 chair=nan boat=nan motorbike=0.7820218996689585 car=0.5000000000000001 pottedplant=0.0 mAP=0.5648192680877848
    [Epoch 4] Training cost: 12.014, ObjLoss=9.704, BoxCenterLoss=7.974, BoxScaleLoss=3.082, ClassLoss=3.045
    [Epoch 4] Validation: person=0.8149235857310393 cow=nan bus=1.0000000000000002 bicycle=0.05454545454545456 dog=0.0 chair=nan boat=nan motorbike=0.6814861275088547 car=1.0000000000000002 pottedplant=0.0 mAP=0.5072793096836213
    Applying the state from the best checkpoint...
    modified configs(<old> != <new>): {
    root.num_workers                4 != 8
    root.dataset_root               ~/.mxnet/datasets/ != auto
    root.dataset                    voc_tiny != auto
    root.train.epochs               20 != 5
    root.train.batch_size           16 != 8
    root.train.early_stop_baseline  0.0 != -inf
    root.train.early_stop_max_value 1.0 != inf
    root.train.early_stop_patience  -1 != 10
    root.train.seed                 233 != 715
    root.gpus                       (0, 1, 2, 3) != (0,)
    root.valid.batch_size           16 != 8
    }
    Saved config to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/8b829587/.trial_0/config.yaml
    Using transfer learning from yolo3_darknet53_coco, the other network parameters are ignored.
    Start training from [Epoch 0]
    [Epoch 0] Training cost: 11.484, ObjLoss=10.139, BoxCenterLoss=7.506, BoxScaleLoss=2.054, ClassLoss=4.329
    [Epoch 0] Validation: person=0.5953604135422318 cow=nan bus=1.0000000000000002 bicycle=1.0000000000000002 dog=1.0000000000000002 chair=nan boat=nan motorbike=0.6650385833096322 car=0.25000000000000006 pottedplant=0.0 mAP=0.6443427138359806
    [Epoch 0] Current best map: 0.644343 vs previous 0.000000, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/8b829587/.trial_0/best_checkpoint.pkl
    [Epoch 1] Training cost: 8.047, ObjLoss=9.730, BoxCenterLoss=7.472, BoxScaleLoss=2.511, ClassLoss=3.714
    [Epoch 1] Validation: person=0.4387786531230103 cow=nan bus=1.0000000000000002 bicycle=0.5454545454545455 dog=0.0 chair=nan boat=nan motorbike=0.7694829244829245 car=0.09090909090909091 pottedplant=0.0 mAP=0.40637503056708163
    [Epoch 2] Training cost: 11.591, ObjLoss=10.031, BoxCenterLoss=7.653, BoxScaleLoss=3.022, ClassLoss=3.371
    [Epoch 2] Validation: person=0.7086336371095728 cow=nan bus=1.0000000000000002 bicycle=0.5454545454545455 dog=1.0000000000000002 chair=nan boat=nan motorbike=0.7479565784745732 car=0.0 pottedplant=0.0 mAP=0.5717206801483846
    [Epoch 3] Training cost: 15.022, ObjLoss=10.499, BoxCenterLoss=7.891, BoxScaleLoss=3.236, ClassLoss=3.188
    [Epoch 3] Validation: person=0.6266646963498219 cow=nan bus=1.0000000000000002 bicycle=1.0000000000000002 dog=0.0 chair=nan boat=nan motorbike=0.8665926262630411 car=0.33333333333333326 pottedplant=0.0 mAP=0.5466558079923137
    [Epoch 4] Training cost: 11.700, ObjLoss=10.314, BoxCenterLoss=7.912, BoxScaleLoss=3.251, ClassLoss=2.969
    [Epoch 4] Validation: person=0.5068840579710145 cow=nan bus=1.0000000000000002 bicycle=1.0000000000000002 dog=0.0 chair=nan boat=nan motorbike=0.8322394451426711 car=0.5000000000000001 pottedplant=0.0 mAP=0.5484462147305266
    Applying the state from the best checkpoint...
    Finished, total runtime is 234.83 s
    { 'best_config': { 'dataset': 'auto',
                       'dataset_root': 'auto',
                       'estimator': <class 'gluoncv.auto.estimators.yolo.yolo.YOLOv3Estimator'>,
                       'gpus': [0],
                       'horovod': False,
                       'num_workers': 8,
                       'resume': '',
                       'save_interval': 10,
                       'save_prefix': '',
                       'train': { 'batch_size': 8,
                                  'early_stop_baseline': -inf,
                                  'early_stop_max_value': inf,
                                  'early_stop_min_delta': 0.001,
                                  'early_stop_patience': 10,
                                  'epochs': 5,
                                  'label_smooth': False,
                                  'log_interval': 100,
                                  'lr': 0.001,
                                  'lr_decay': 0.1,
                                  'lr_decay_epoch': (160, 180),
                                  'lr_decay_period': 0,
                                  'lr_mode': 'step',
                                  'mixup': False,
                                  'momentum': 0.9,
                                  'no_mixup_epochs': 20,
                                  'no_wd': False,
                                  'num_samples': -1,
                                  'seed': 715,
                                  'start_epoch': 0,
                                  'warmup_epochs': 0,
                                  'warmup_lr': 0.0,
                                  'wd': 0.0005},
                       'valid': { 'batch_size': 8,
                                  'iou_thresh': 0.5,
                                  'metric': 'voc07',
                                  'val_interval': 1},
                       'yolo3': { 'amp': False,
                                  'anchors': ( [10, 13, 16, 30, 33, 23],
                                               [30, 61, 62, 45, 59, 119],
                                               [116, 90, 156, 198, 373, 326]),
                                  'base_network': 'darknet53',
                                  'data_shape': 416,
                                  'filters': (512, 256, 128),
                                  'nms_thresh': 0.45,
                                  'nms_topk': 400,
                                  'no_random_shape': False,
                                  'strides': (8, 16, 32),
                                  'syncbn': False,
                                  'transfer': 'yolo3_darknet53_coco'}},
      'total_time': 234.83203172683716,
      'train_map': 0.5028437862370443,
      'valid_map': 0.6443427138359806}

Note that ``num_trials=2`` above is only used to speed up the tutorial. In normal practice, it is common to only use ``time_limit`` and drop ``num_trials``. Also note that hyperparameter tuning defaults to random search; model-based variants, such as ``searcher='bayesopt'`` in ``hyperparameter_tune_kwargs``, can be much more sample-efficient.
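As a sketch (reusing ``dataset_train`` from above), switching to Bayesian optimization only requires changing ``hyperparameter_tune_kwargs``:

.. code:: python

    # Sketch: rely on the time budget plus a model-based searcher instead of
    # a fixed number of random trials.
    detector_bo = ObjectDetector()
    detector_bo.fit(dataset_train,
                    time_limit=60*30,
                    hyperparameters={'epochs': 5, 'batch_size': 8},
                    hyperparameter_tune_kwargs={'searcher': 'bayesopt'})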
After fitting, AutoGluon automatically returns the best model among all models in the search space. From the output above, we know the best model is the YOLOv3 model fine-tuned from ``yolo3_darknet53_coco``. To see how well the returned model performs on the test dataset, call ``detector.evaluate()``.

.. code:: python

    dataset_test = ObjectDetector.Dataset.from_voc(url, splits='test')

    test_map = detector.evaluate(dataset_test)
    print("mAP on test dataset: {}".format(test_map[1][-1]))

.. parsed-literal::
    :class: output

    tiny_motorbike/
    ├── Annotations/
    ├── ImageSets/
    └── JPEGImages/

    mAP on test dataset: 0.1456443656975456

Below, we randomly select an image from the test dataset and show the predicted class, box, and probability over the original image, stored in the ``predict_class``, ``predict_rois``, and ``predict_score`` columns, respectively. You can interpret ``predict_rois`` as a dict of (``xmin``, ``ymin``, ``xmax``, ``ymax``) coordinates proportional to the original image size; a sketch that converts them back to pixel coordinates follows the output below.

.. code:: python

    image_path = dataset_test.iloc[0]['image']
    result = detector.predict(image_path)
    print(result)

.. parsed-literal::
    :class: output

       predict_class  predict_score  \
    0          person       0.670992
    1       motorbike       0.548376
    2       motorbike       0.465958
    3          person       0.297391
    4          person       0.263531
    5          person       0.168833
    6          person       0.101813
    7       motorbike       0.080886
    8       motorbike       0.073280
    9             car       0.073177
    10      motorbike       0.048323
    11      motorbike       0.047944
    12      motorbike       0.033986
    13    pottedplant       0.022304
    14      motorbike       0.019886
    15    pottedplant       0.018582
    16            car       0.018488
    17      motorbike       0.017762
    18         person       0.013835
    19    pottedplant       0.013085
    20            dog       0.012839
    21      motorbike       0.012794
    22         person       0.012009
    23      motorbike       0.011984
    24        bicycle       0.011897
    25         person       0.010887
    26         person       0.010562
    27         person       0.010405

                                             predict_rois
    0   {'xmin': 0.3782995939254761, 'ymin': 0.2945322...
    1   {'xmin': 0.0, 'ymin': 0.6371848583221436, 'xma...
    2   {'xmin': 0.31195250153541565, 'ymin': 0.452559...
    3   {'xmin': 0.6325993537902832, 'ymin': 0.0435023...
    4   {'xmin': 0.7543973326683044, 'ymin': 0.0450115...
    5   {'xmin': 0.8843181729316711, 'ymin': 0.0114150...
    6   {'xmin': 0.5207630395889282, 'ymin': 0.0280970...
    7   {'xmin': 0.0361197255551815, 'ymin': 0.4919169...
    8   {'xmin': 0.38730472326278687, 'ymin': 0.318395...
    9   {'xmin': 0.0361197255551815, 'ymin': 0.4919169...
    10  {'xmin': 0.7543973326683044, 'ymin': 0.0450115...
    11  {'xmin': 0.6325993537902832, 'ymin': 0.0435023...
    12  {'xmin': 0.0, 'ymin': 0.0, 'xmax': 1.0, 'ymax'...
    13  {'xmin': 0.0, 'ymin': 0.6371848583221436, 'xma...
    14  {'xmin': 0.5207630395889282, 'ymin': 0.0280970...
    15  {'xmin': 0.3782995939254761, 'ymin': 0.2945322...
    16  {'xmin': 0.0, 'ymin': 0.6519644856452942, 'xma...
    17  {'xmin': 0.8843181729316711, 'ymin': 0.0114150...
    18  {'xmin': 0.6730436086654663, 'ymin': 0.0291886...
    19  {'xmin': 0.6325993537902832, 'ymin': 0.0435023...
    20  {'xmin': 0.31195250153541565, 'ymin': 0.452559...
    21  {'xmin': 0.011701270937919617, 'ymin': 0.03621...
    22  {'xmin': 0.5618847012519836, 'ymin': 0.0097288...
    23  {'xmin': 0.7213250398635864, 'ymin': 0.3840133...
    24  {'xmin': 0.31195250153541565, 'ymin': 0.452559...
    25  {'xmin': 0.3077634274959564, 'ymin': 0.3562027...
    26  {'xmin': 0.41763123869895935, 'ymin': 0.272201...
    27  {'xmin': 0.45487532019615173, 'ymin': 0.013570...
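Since the coordinates are proportional, multiplying by the image width and height recovers pixel coordinates. Here is a minimal sketch using Pillow (``PIL``, not imported elsewhere in this tutorial) for the highest-scoring detection:

.. code:: python

    from PIL import Image

    # Scale the proportional box of the top prediction back to pixel units.
    img = Image.open(image_path)
    width, height = img.size
    best = result.iloc[0]
    roi = best['predict_rois']
    pixel_box = (roi['xmin'] * width, roi['ymin'] * height,
                 roi['xmax'] * width, roi['ymax'] * height)
    print(best['predict_class'], best['predict_score'], pixel_box)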
Prediction with multiple images is permitted:

.. code:: python

    bulk_result = detector.predict(dataset_test)
    print(bulk_result)

.. parsed-literal::
    :class: output

         predict_class  predict_score  \
    0           person       0.670992
    1        motorbike       0.548376
    2        motorbike       0.465958
    3           person       0.297391
    4           person       0.263531
    ...            ...            ...
    1828     motorbike       0.033558
    1829        person       0.031899
    1830        person       0.017892
    1831        person       0.012938
    1832     motorbike       0.012641

                                               predict_rois  \
    0     {'xmin': 0.3782995939254761, 'ymin': 0.2945322...
    1     {'xmin': 0.0, 'ymin': 0.6371848583221436, 'xma...
    2     {'xmin': 0.31195250153541565, 'ymin': 0.452559...
    3     {'xmin': 0.6325993537902832, 'ymin': 0.0435023...
    4     {'xmin': 0.7543973326683044, 'ymin': 0.0450115...
    ...                                                 ...
    1828  {'xmin': 0.27905958890914917, 'ymin': 0.137294...
    1829  {'xmin': 0.29887059330940247, 'ymin': 0.221315...
    1830  {'xmin': 0.295794278383255, 'ymin': 0.05811200...
    1831  {'xmin': 0.018482832238078117, 'ymin': 0.46835...
    1832  {'xmin': 0.018482832238078117, 'ymin': 0.46835...

                                                      image
    0     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    1     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    2     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    3     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    4     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    ...                                                 ...
    1828  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    1829  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    1830  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    1831  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    1832  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...

    [1833 rows x 4 columns]

We can also save the trained model and use it later.

.. code:: python

    savefile = 'detector.ag'
    detector.save(savefile)
    new_detector = ObjectDetector.load(savefile)
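The loaded detector should behave exactly like the one it was saved from; as a quick sanity check (reusing ``image_path`` from the prediction step above), we can compare its predictions with the earlier result:

.. code:: python

    # The reloaded detector is expected to reproduce the earlier predictions.
    result_loaded = new_detector.predict(image_path)
    print(result_loaded.head())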