.. _sec_automm_detection_quick_start_coco:
AutoMM Detection - Quick Start on a Tiny COCO Format Dataset
============================================================
In this section, our goal is to quickly finetune a pretrained model on a
small dataset in COCO format and evaluate it on the test set. Both the
training and test sets are in COCO format. See
:ref:`sec_automm_detection_convert_to_coco` for how to convert other
datasets to COCO format.
Setting up the imports
~~~~~~~~~~~~~~~~~~~~~~
To start, let’s import MultiModalPredictor:
.. code:: python
from autogluon.multimodal import MultiModalPredictor
Make sure ``mmcv-full`` and ``mmdet`` are installed:
.. code:: python
!mim install mmcv-full
!pip install mmdet
.. parsed-literal::
:class: output
Looking in links: https://download.openmmlab.com/mmcv/dist/cu102/torch1.12.0/index.html
Requirement already satisfied: mmcv-full in /home/ci/opt/venv/lib/python3.8/site-packages (1.7.1)
Requirement already satisfied: packaging in /home/ci/opt/venv/lib/python3.8/site-packages (from mmcv-full) (23.0)
Requirement already satisfied: opencv-python>=3 in /home/ci/opt/venv/lib/python3.8/site-packages (from mmcv-full) (4.7.0.68)
Requirement already satisfied: pyyaml in /home/ci/opt/venv/lib/python3.8/site-packages (from mmcv-full) (5.4.1)
Requirement already satisfied: Pillow in /home/ci/opt/venv/lib/python3.8/site-packages (from mmcv-full) (9.4.0)
Requirement already satisfied: yapf in /home/ci/opt/venv/lib/python3.8/site-packages (from mmcv-full) (0.32.0)
Requirement already satisfied: addict in /home/ci/opt/venv/lib/python3.8/site-packages (from mmcv-full) (2.4.0)
Requirement already satisfied: numpy in /home/ci/opt/venv/lib/python3.8/site-packages (from mmcv-full) (1.22.4)
Requirement already satisfied: mmdet in /home/ci/opt/venv/lib/python3.8/site-packages (2.27.0)
Requirement already satisfied: six in /home/ci/opt/venv/lib/python3.8/site-packages (from mmdet) (1.16.0)
Requirement already satisfied: matplotlib in /home/ci/opt/venv/lib/python3.8/site-packages (from mmdet) (3.6.2)
Requirement already satisfied: pycocotools in /home/ci/opt/venv/lib/python3.8/site-packages (from mmdet) (2.0.6)
Requirement already satisfied: numpy in /home/ci/opt/venv/lib/python3.8/site-packages (from mmdet) (1.22.4)
Requirement already satisfied: scipy in /home/ci/opt/venv/lib/python3.8/site-packages (from mmdet) (1.8.1)
Requirement already satisfied: terminaltables in /home/ci/opt/venv/lib/python3.8/site-packages (from mmdet) (3.1.10)
Requirement already satisfied: python-dateutil>=2.7 in /home/ci/opt/venv/lib/python3.8/site-packages (from matplotlib->mmdet) (2.8.2)
Requirement already satisfied: kiwisolver>=1.0.1 in /home/ci/opt/venv/lib/python3.8/site-packages (from matplotlib->mmdet) (1.4.4)
Requirement already satisfied: fonttools>=4.22.0 in /home/ci/opt/venv/lib/python3.8/site-packages (from matplotlib->mmdet) (4.38.0)
Requirement already satisfied: pyparsing>=2.2.1 in /home/ci/opt/venv/lib/python3.8/site-packages (from matplotlib->mmdet) (3.0.9)
Requirement already satisfied: pillow>=6.2.0 in /home/ci/opt/venv/lib/python3.8/site-packages (from matplotlib->mmdet) (9.4.0)
Requirement already satisfied: packaging>=20.0 in /home/ci/opt/venv/lib/python3.8/site-packages (from matplotlib->mmdet) (23.0)
Requirement already satisfied: contourpy>=1.0.1 in /home/ci/opt/venv/lib/python3.8/site-packages (from matplotlib->mmdet) (1.0.6)
Requirement already satisfied: cycler>=0.10 in /home/ci/opt/venv/lib/python3.8/site-packages (from matplotlib->mmdet) (0.11.0)
We also import some other packages that will be used in this tutorial:
.. code:: python
import os
import time
from autogluon.core.utils.loaders import load_zip
Downloading Data
~~~~~~~~~~~~~~~~
We have the sample dataset ready in the cloud. Let’s download it:
.. code:: python
zip_file = "https://automl-mm-bench.s3.amazonaws.com/object_detection_dataset/tiny_motorbike_coco.zip"
download_dir = "./tiny_motorbike_coco"
load_zip.unzip(zip_file, unzip_dir=download_dir)
data_dir = os.path.join(download_dir, "tiny_motorbike")
train_path = os.path.join(data_dir, "Annotations", "trainval_cocoformat.json")
test_path = os.path.join(data_dir, "Annotations", "test_cocoformat.json")
.. parsed-literal::
:class: output
Downloading ./tiny_motorbike_coco/file.zip from https://automl-mm-bench.s3.amazonaws.com/object_detection_dataset/tiny_motorbike_coco.zip...
.. parsed-literal::
:class: output
100%|██████████| 21.8M/21.8M [00:00<00:00, 42.4MiB/s]
When using a dataset in COCO format, the input is the JSON annotation
file of the dataset split. In this example, ``trainval_cocoformat.json``
is the annotation file of the train-and-validate split, and
``test_cocoformat.json`` is the annotation file of the test split.
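For reference, a COCO annotation file is a single JSON object whose
top-level keys include ``images``, ``annotations``, and ``categories``.
Here is a minimal, illustrative sketch for peeking at one (the key names
follow the COCO specification; the snippet itself is not part of the
tutorial's pipeline):

.. code:: python

    import json

    # Inspect the structure of the train-and-validate annotation file.
    with open(train_path) as f:
        coco = json.load(f)

    # Top-level keys: typically "images", "annotations", "categories"
    # (plus optional "info" and "licenses").
    print(list(coco.keys()))
    print(len(coco["images"]), "images,", len(coco["annotations"]), "annotations")

    # Each annotation stores its box as [x, y, width, height] in COCO format.
    print(coco["categories"][:3])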
Creating the MultiModalPredictor
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We select YOLOv3 with MobileNetV2 as the backbone and a 320x320 input
resolution, pretrained on the COCO dataset. With this setting, finetuning
and inference are fast, and the model is easy to deploy. We also use all
available GPUs (if any):
.. code:: python
checkpoint_name = "yolov3_mobilenetv2_320_300e_coco"
num_gpus = -1 # use all GPUs
We create the MultiModalPredictor with the selected checkpoint name and
number of GPUs. We need to set the problem_type to
``"object_detection"``, and also provide a ``sample_data_path`` for the
predictor to infer the categories of the dataset. Here we provide the
``train_path``; any other split of this dataset works as well.
We also provide a ``path`` to save the predictor. If ``path`` is not
specified, the predictor will be saved to an automatically generated
directory with a timestamp under ``AutogluonModels``.
.. code:: python
# Init predictor
import uuid
model_path = f"./tmp/{uuid.uuid4().hex}-quick_start_tutorial_temp_save"
predictor = MultiModalPredictor(
hyperparameters={
"model.mmdet_image.checkpoint_name": checkpoint_name,
"env.num_gpus": num_gpus,
},
problem_type="object_detection",
sample_data_path=train_path,
path=model_path,
)
.. parsed-literal::
:class: output
processing yolov3_mobilenetv2_320_300e_coco...
.. parsed-literal::
:class: output
Successfully downloaded yolov3_mobilenetv2_320_300e_coco_20210719_215349-d18dff72.pth to /home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start
Successfully dumped yolov3_mobilenetv2_320_300e_coco.py to /home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start
load checkpoint from local path: yolov3_mobilenetv2_320_300e_coco_20210719_215349-d18dff72.pth
The model and loaded state dict do not match exactly
size mismatch for bbox_head.convs_pred.0.weight: copying a param with shape torch.Size([255, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([45, 96, 1, 1]).
size mismatch for bbox_head.convs_pred.0.bias: copying a param with shape torch.Size([255]) from checkpoint, the shape in current model is torch.Size([45]).
size mismatch for bbox_head.convs_pred.1.weight: copying a param with shape torch.Size([255, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([45, 96, 1, 1]).
size mismatch for bbox_head.convs_pred.1.bias: copying a param with shape torch.Size([255]) from checkpoint, the shape in current model is torch.Size([45]).
size mismatch for bbox_head.convs_pred.2.weight: copying a param with shape torch.Size([255, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([45, 96, 1, 1]).
size mismatch for bbox_head.convs_pred.2.bias: copying a param with shape torch.Size([255]) from checkpoint, the shape in current model is torch.Size([45]).
Finetuning the Model
~~~~~~~~~~~~~~~~~~~~
We set the learning rate to ``2e-4``. Note that we use a two-stage
learning rate option during finetuning by default, in which the model
head gets a 100x learning rate. Using a two-stage learning rate with a
high learning rate only on the head layers makes the model converge
faster during finetuning. It usually gives better performance as well,
especially on small datasets with hundreds or thousands of images. We
also set the number of epochs to 30 and the per-GPU batch size to 32,
and time the fit process to get a sense of its speed. We ran it on a
g4.2xlarge EC2 machine on AWS, and part of the command output is shown
below:
.. code:: python
start = time.time()
# Fit
predictor.fit(
train_path,
hyperparameters={
"optimization.learning_rate": 2e-4, # we use two stage and detection head has 100x lr
"optimization.max_epochs": 30,
"env.per_gpu_batch_size": 32, # decrease it when model is large
},
)
train_end = time.time()
.. parsed-literal::
:class: output
Global seed set to 123
.. parsed-literal::
:class: output
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
.. parsed-literal::
:class: output
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
-----------------------------------------------------------------------
0 | model | MMDetAutoModelForObjectDetection | 3.7 M
1 | validation_metric | MeanMetric | 0
-----------------------------------------------------------------------
3.7 M Trainable params
0 Non-trainable params
3.7 M Total params
14.706 Total estimated model params size (MB)
/home/ci/opt/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py:1892: PossibleUserWarning: The number of training batches (5) is smaller than the logging interval Trainer(log_every_n_steps=10). Set a lower value for log_every_n_steps if you want to see logs for the training epoch.
rank_zero_warn(
Epoch 0, global step 1: 'val_direct_loss' reached 35882.38281 (best 35882.38281), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/f34d6a4ecb49466d8f378650244951d1-quick_start_tutorial_temp_save/epoch=0-step=1.ckpt' as top 1
Epoch 1, global step 2: 'val_direct_loss' reached 13549.84570 (best 13549.84570), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/f34d6a4ecb49466d8f378650244951d1-quick_start_tutorial_temp_save/epoch=1-step=2.ckpt' as top 1
Epoch 1, global step 3: 'val_direct_loss' reached 5481.14160 (best 5481.14160), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/f34d6a4ecb49466d8f378650244951d1-quick_start_tutorial_temp_save/epoch=1-step=3.ckpt' as top 1
Epoch 2, global step 4: 'val_direct_loss' reached 2577.77197 (best 2577.77197), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/f34d6a4ecb49466d8f378650244951d1-quick_start_tutorial_temp_save/epoch=2-step=4.ckpt' as top 1
Epoch 2, global step 5: 'val_direct_loss' reached 1464.45410 (best 1464.45410), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/f34d6a4ecb49466d8f378650244951d1-quick_start_tutorial_temp_save/epoch=2-step=5.ckpt' as top 1
Epoch 3, global step 6: 'val_direct_loss' reached 1323.19238 (best 1323.19238), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/f34d6a4ecb49466d8f378650244951d1-quick_start_tutorial_temp_save/epoch=3-step=6.ckpt' as top 1
Epoch 3, global step 7: 'val_direct_loss' reached 1015.11023 (best 1015.11023), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/f34d6a4ecb49466d8f378650244951d1-quick_start_tutorial_temp_save/epoch=3-step=7.ckpt' as top 1
Epoch 4, global step 8: 'val_direct_loss' was not in top 1
Epoch 4, global step 9: 'val_direct_loss' reached 1008.00885 (best 1008.00885), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/f34d6a4ecb49466d8f378650244951d1-quick_start_tutorial_temp_save/epoch=4-step=9.ckpt' as top 1
Epoch 5, global step 10: 'val_direct_loss' was not in top 1
Epoch 5, global step 11: 'val_direct_loss' was not in top 1
Epoch 6, global step 12: 'val_direct_loss' reached 951.66266 (best 951.66266), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/f34d6a4ecb49466d8f378650244951d1-quick_start_tutorial_temp_save/epoch=6-step=12.ckpt' as top 1
Epoch 6, global step 13: 'val_direct_loss' reached 946.80963 (best 946.80963), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/f34d6a4ecb49466d8f378650244951d1-quick_start_tutorial_temp_save/epoch=6-step=13.ckpt' as top 1
Epoch 7, global step 14: 'val_direct_loss' reached 917.28357 (best 917.28357), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/f34d6a4ecb49466d8f378650244951d1-quick_start_tutorial_temp_save/epoch=7-step=14.ckpt' as top 1
Epoch 7, global step 15: 'val_direct_loss' was not in top 1
Epoch 8, global step 16: 'val_direct_loss' was not in top 1
Epoch 8, global step 17: 'val_direct_loss' reached 897.77277 (best 897.77277), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/f34d6a4ecb49466d8f378650244951d1-quick_start_tutorial_temp_save/epoch=8-step=17.ckpt' as top 1
Epoch 9, global step 18: 'val_direct_loss' reached 843.34253 (best 843.34253), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/f34d6a4ecb49466d8f378650244951d1-quick_start_tutorial_temp_save/epoch=9-step=18.ckpt' as top 1
Epoch 9, global step 19: 'val_direct_loss' was not in top 1
Epoch 10, global step 20: 'val_direct_loss' was not in top 1
Epoch 10, global step 21: 'val_direct_loss' was not in top 1
Epoch 11, global step 22: 'val_direct_loss' reached 801.76996 (best 801.76996), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/f34d6a4ecb49466d8f378650244951d1-quick_start_tutorial_temp_save/epoch=11-step=22.ckpt' as top 1
Epoch 11, global step 23: 'val_direct_loss' reached 761.23492 (best 761.23492), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/f34d6a4ecb49466d8f378650244951d1-quick_start_tutorial_temp_save/epoch=11-step=23.ckpt' as top 1
Epoch 12, global step 24: 'val_direct_loss' was not in top 1
Epoch 12, global step 25: 'val_direct_loss' reached 754.84937 (best 754.84937), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/f34d6a4ecb49466d8f378650244951d1-quick_start_tutorial_temp_save/epoch=12-step=25.ckpt' as top 1
Epoch 13, global step 26: 'val_direct_loss' was not in top 1
Epoch 13, global step 27: 'val_direct_loss' reached 633.62360 (best 633.62360), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/f34d6a4ecb49466d8f378650244951d1-quick_start_tutorial_temp_save/epoch=13-step=27.ckpt' as top 1
Epoch 14, global step 28: 'val_direct_loss' was not in top 1
Epoch 14, global step 29: 'val_direct_loss' was not in top 1
Epoch 15, global step 30: 'val_direct_loss' was not in top 1
Epoch 15, global step 31: 'val_direct_loss' was not in top 1
Epoch 16, global step 32: 'val_direct_loss' was not in top 1
Epoch 16, global step 33: 'val_direct_loss' was not in top 1
Epoch 17, global step 34: 'val_direct_loss' was not in top 1
Epoch 17, global step 35: 'val_direct_loss' was not in top 1
Epoch 18, global step 36: 'val_direct_loss' was not in top 1
Epoch 18, global step 37: 'val_direct_loss' was not in top 1
Epoch 19, global step 38: 'val_direct_loss' was not in top 1
Epoch 19, global step 39: 'val_direct_loss' was not in top 1
Epoch 20, global step 40: 'val_direct_loss' was not in top 1
Epoch 20, global step 41: 'val_direct_loss' was not in top 1
Epoch 21, global step 42: 'val_direct_loss' was not in top 1
Epoch 21, global step 43: 'val_direct_loss' was not in top 1
Epoch 22, global step 44: 'val_direct_loss' was not in top 1
Epoch 22, global step 45: 'val_direct_loss' was not in top 1
Epoch 23, global step 46: 'val_direct_loss' was not in top 1
Epoch 23, global step 47: 'val_direct_loss' reached 608.86792 (best 608.86792), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/f34d6a4ecb49466d8f378650244951d1-quick_start_tutorial_temp_save/epoch=23-step=47.ckpt' as top 1
Epoch 24, global step 48: 'val_direct_loss' reached 588.96997 (best 588.96997), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/f34d6a4ecb49466d8f378650244951d1-quick_start_tutorial_temp_save/epoch=24-step=48.ckpt' as top 1
Epoch 24, global step 49: 'val_direct_loss' was not in top 1
Epoch 25, global step 50: 'val_direct_loss' was not in top 1
Epoch 25, global step 51: 'val_direct_loss' was not in top 1
Epoch 26, global step 52: 'val_direct_loss' was not in top 1
Epoch 26, global step 53: 'val_direct_loss' was not in top 1
Epoch 27, global step 54: 'val_direct_loss' was not in top 1
Epoch 27, global step 55: 'val_direct_loss' was not in top 1
Epoch 28, global step 56: 'val_direct_loss' was not in top 1
Epoch 28, global step 57: 'val_direct_loss' was not in top 1
Epoch 29, global step 58: 'val_direct_loss' was not in top 1
Epoch 29, global step 59: 'val_direct_loss' was not in top 1
`Trainer.fit` stopped: `max_epochs=30` reached.
Notice that at the end of each progress line, if the checkpoint at the
current stage is saved, it prints the model's save path. In this
example, it's the ``model_path`` directory we specified when creating
the predictor.
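If you are ever unsure where a predictor was saved, you can read the
save path back from the predictor itself. A minimal sketch, assuming the
public ``path`` property that recent ``MultiModalPredictor`` releases
expose:

.. code:: python

    # Print the directory holding the predictor's checkpoints and configs.
    # For this run, it is the `model_path` we passed at construction time.
    print(predictor.path)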
Print out the elapsed time, and we can see that it's fast!
.. code:: python
print("This finetuning takes %.2f seconds." % (train_end - start))
.. parsed-literal::
:class: output
This finetuning takes 145.38 seconds.
Evaluation
~~~~~~~~~~
To evaluate the model we just trained, run the following code.
The evaluation results are shown in the command line output. The first
line is mAP in the COCO standard, and the second line is mAP in the VOC
standard (or mAP50). For more details about these metrics, see `COCO's
evaluation guideline <https://cocodataset.org/#detection-eval>`__. Note
that to present a fast finetuning we only train for 30 epochs; you could
get a better result on this dataset by simply increasing the number of
epochs.
.. code:: python
predictor.evaluate(test_path)
eval_end = time.time()
.. parsed-literal::
:class: output
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
.. parsed-literal::
:class: output
WARNING:automm:A new predictor save path is created.This is to prevent you to overwrite previous predictor saved here.You could check current save path at predictor._save_path.If you still want to use this path, set resume=True
.. parsed-literal::
:class: output
saving file at /home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/AutogluonModels/ag-20230111_022717/object_detection_result_cache.json
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.01s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=0.19s).
Accumulating evaluation results...
DONE (t=0.06s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.099
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.307
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.032
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.012
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.030
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.275
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.091
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.141
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.158
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.055
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.116
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.363
Print out the evaluation time:
.. code:: python
print("The evaluation takes %.2f seconds." % (eval_end - train_end))
.. parsed-literal::
:class: output
The evaluation takes 1.43 seconds.
We can load a new predictor from the previous save path, and we can also
reset the number of GPUs to use if not all the devices are available:
.. code:: python
# Load and reset num_gpus
new_predictor = MultiModalPredictor.load(model_path)
new_predictor.set_num_gpus(1)
.. parsed-literal::
:class: output
processing yolov3_mobilenetv2_320_300e_coco...
yolov3_mobilenetv2_320_300e_coco_20210719_215349-d18dff72.pth exists in /home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start
Successfully dumped yolov3_mobilenetv2_320_300e_coco.py to /home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start
Evaluating the new predictor gives us exactly the same result:
.. code:: python
# Evaluate new predictor
new_predictor.evaluate(test_path)
.. parsed-literal::
:class: output
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
.. parsed-literal::
:class: output
WARNING:automm:A new predictor save path is created.This is to prevent you to overwrite previous predictor saved here.You could check current save path at predictor._save_path.If you still want to use this path, set resume=True
.. parsed-literal::
:class: output
saving file at /home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/AutogluonModels/ag-20230111_022722/object_detection_result_cache.json
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.01s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=0.19s).
Accumulating evaluation results...
DONE (t=0.06s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.099
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.307
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.032
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.012
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.030
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.275
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.091
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.141
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.158
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.055
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.116
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.363
.. parsed-literal::
:class: output
{'map': 0.0985760739835149}
If we set the validation metric to ``"map"`` (mean Average Precision)
and max epochs to ``50``, the predictor achieves better performance with
the same pretrained model (YOLOv3). We trained it offline and uploaded
it to S3. To load it and check the result:
.. code:: python
# Load Trained Predictor from S3
zip_file = "https://automl-mm-bench.s3.amazonaws.com/object_detection/quick_start/AP50_433.zip"
download_dir = "./AP50_433"
load_zip.unzip(zip_file, unzip_dir=download_dir)
better_predictor = MultiModalPredictor.load("./AP50_433/quick_start_tutorial_temp_save")
better_predictor.set_num_gpus(1)
# Evaluate new predictor
better_predictor.evaluate(test_path)
.. parsed-literal::
:class: output
Downloading ./AP50_433/file.zip from https://automl-mm-bench.s3.amazonaws.com/object_detection/quick_start/AP50_433.zip...
.. parsed-literal::
:class: output
100%|██████████| 27.8M/27.8M [00:00<00:00, 50.6MiB/s]
/home/ci/opt/venv/lib/python3.8/site-packages/sklearn/base.py:329: UserWarning: Trying to unpickle estimator LabelEncoder from version 1.0.2 when using version 1.1.1. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to:
https://scikit-learn.org/stable/model_persistence.html#security-maintainability-limitations
warnings.warn(
/home/ci/opt/venv/lib/python3.8/site-packages/sklearn/base.py:329: UserWarning: Trying to unpickle estimator StandardScaler from version 1.0.2 when using version 1.1.1. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to:
https://scikit-learn.org/stable/model_persistence.html#security-maintainability-limitations
warnings.warn(
.. parsed-literal::
:class: output
processing yolov3_mobilenetv2_320_300e_coco...
yolov3_mobilenetv2_320_300e_coco_20210719_215349-d18dff72.pth exists in /home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start
Successfully dumped yolov3_mobilenetv2_320_300e_coco.py to /home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
.. parsed-literal::
:class: output
WARNING:automm:A new predictor save path is created.This is to prevent you to overwrite previous predictor saved here.You could check current save path at predictor._save_path.If you still want to use this path, set resume=True
.. parsed-literal::
:class: output
saving file at /home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/AutogluonModels/ag-20230111_022727/object_detection_result_cache.json
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.01s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=0.17s).
Accumulating evaluation results...
DONE (t=0.06s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.195
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.433
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.135
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.036
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.206
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.450
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.158
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.231
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.244
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.138
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.295
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.508
.. parsed-literal::
:class: output
{'map': 0.19495386487978572}
To learn how to set these hyperparameters and finetune the model for
higher performance, see :ref:`sec_automm_detection_high_ft_coco`.
Inference
~~~~~~~~~
Now that we have gone through model setup, finetuning, and evaluation,
this section covers inference. Specifically, we lay out the steps for
using the model to make predictions and visualize the results.
To run inference on the entire test set, run:
.. code:: python
pred = predictor.predict(test_path)
print(pred)
.. parsed-literal::
:class: output
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
image \
0 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
1 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
2 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
3 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
4 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
5 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
6 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
7 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
8 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
9 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
10 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
11 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
12 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
13 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
14 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
15 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
16 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
17 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
18 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
19 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
20 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
21 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
22 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
23 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
24 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
25 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
26 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
27 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
28 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
29 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
30 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
31 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
32 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
33 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
34 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
35 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
36 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
37 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
38 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
39 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
40 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
41 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
42 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
43 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
44 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
45 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
46 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
47 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
48 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
49 ./tiny_motorbike_coco/tiny_motorbike/Annotatio...
bboxes
0 [{'class': 'bicycle', 'bbox': [359.32632, 145....
1 [{'class': 'bicycle', 'bbox': [416.17175, 243....
2 [{'class': 'car', 'bbox': [282.80597, 57.05780...
3 [{'class': 'bicycle', 'bbox': [26.393429, 41.7...
4 [{'class': 'bicycle', 'bbox': [177.525, 176.06...
5 [{'class': 'bicycle', 'bbox': [175.06155, 81.5...
6 [{'class': 'car', 'bbox': [10.9417305, 68.1893...
7 [{'class': 'bicycle', 'bbox': [195.68996, 123....
8 [{'class': 'bicycle', 'bbox': [135.7765, 34.28...
9 [{'class': 'bicycle', 'bbox': [368.55136, 63.4...
10 [{'class': 'bicycle', 'bbox': [404.77032, 119....
11 [{'class': 'bicycle', 'bbox': [419.4969, 240.0...
12 [{'class': 'car', 'bbox': [239.71417, 35.63029...
13 [{'class': 'motorbike', 'bbox': [82.40418, 26....
14 [{'class': 'car', 'bbox': [228.73302, 0.935615...
15 [{'class': 'bicycle', 'bbox': [17.297007, 156....
16 [{'class': 'bicycle', 'bbox': [441.4951, 70.87...
17 [{'class': 'bicycle', 'bbox': [44.708603, 272....
18 [{'class': 'car', 'bbox': [90.37053, 64.51949,...
19 [{'class': 'bicycle', 'bbox': [152.41766, 82.7...
20 [{'class': 'bicycle', 'bbox': [81.25212, 214.0...
21 [{'class': 'motorbike', 'bbox': [25.143728, 13...
22 [{'class': 'bicycle', 'bbox': [225.83177, 183....
23 [{'class': 'bicycle', 'bbox': [347.2718, -5.37...
24 [{'class': 'bicycle', 'bbox': [197.2964, 24.43...
25 [{'class': 'bicycle', 'bbox': [412.64648, 157....
26 [{'class': 'bicycle', 'bbox': [451.42322, -0.3...
27 [{'class': 'car', 'bbox': [48.052456, -47.9774...
28 [{'class': 'car', 'bbox': [141.56615, -8.87004...
29 [{'class': 'bicycle', 'bbox': [39.597195, 298....
30 [{'class': 'bicycle', 'bbox': [29.637867, 35.7...
31 [{'class': 'bicycle', 'bbox': [26.883385, 94.5...
32 [{'class': 'motorbike', 'bbox': [145.98544, 13...
33 [{'class': 'bicycle', 'bbox': [378.38028, 55.9...
34 [{'class': 'bicycle', 'bbox': [21.606205, 25.0...
35 [{'class': 'bicycle', 'bbox': [333.2721, 43.92...
36 [{'class': 'motorbike', 'bbox': [33.2473, 236....
37 [{'class': 'bicycle', 'bbox': [427.23245, 249....
38 [{'class': 'motorbike', 'bbox': [-84.20399, 38...
39 [{'class': 'bicycle', 'bbox': [23.019634, 430....
40 [{'class': 'bicycle', 'bbox': [215.11577, 192....
41 [{'class': 'bicycle', 'bbox': [369.30945, -7.0...
42 [{'class': 'bicycle', 'bbox': [38.057922, 106....
43 [{'class': 'car', 'bbox': [152.88936, 38.80680...
44 [{'class': 'motorbike', 'bbox': [84.22375, 127...
45 [{'class': 'motorbike', 'bbox': [211.72049, 10...
46 [{'class': 'bicycle', 'bbox': [353.5749, 79.77...
47 [{'class': 'car', 'bbox': [177.35545, 29.03441...
48 [{'class': 'bicycle', 'bbox': [222.0296, 0.975...
49 [{'class': 'car', 'bbox': [14.13424, 86.834114...
The output ``pred`` is a ``pandas`` ``DataFrame`` with two columns,
``image`` and ``bboxes``.
In ``image``, each row contains the image path.
In ``bboxes``, each row is a list of dictionaries, each representing one
bounding box:
``{"class": <class_name>, "bbox": [x1, y1, x2, y2], "score": <confidence_score>}``
Note that, by default, ``predictor.predict`` does not save the
detection results to a file.
To run inference and save the results, run the following:
.. code:: python
pred = better_predictor.predict(test_path, save_results=True)
.. parsed-literal::
:class: output
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
.. parsed-literal::
:class: output
WARNING:automm:A new predictor save path is created.This is to prevent you to overwrite previous predictor saved here.You could check current save path at predictor._save_path.If you still want to use this path, set resume=True
.. parsed-literal::
:class: output
Saved detection results to /home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/AutogluonModels/ag-20230111_022730/result.txt
Here, we save ``pred`` into a ``.txt`` file, which follows exactly the
same layout as ``pred``. You can use a predictor initialized in any way
(e.g., a finetuned predictor, a predictor with a pretrained model,
etc.). Here, we demonstrate using the ``better_predictor`` loaded
previously.
Visualizing Results
~~~~~~~~~~~~~~~~~~~
To run visualizations, ensure that you have ``opencv`` installed. If you
haven’t already, install ``opencv`` by running
.. code:: python
!pip install opencv-python
.. parsed-literal::
:class: output
Requirement already satisfied: opencv-python in /home/ci/opt/venv/lib/python3.8/site-packages (4.7.0.68)
Requirement already satisfied: numpy>=1.17.0 in /home/ci/opt/venv/lib/python3.8/site-packages (from opencv-python) (1.22.4)
To visualize the detection bounding boxes, run the following:
.. code:: python
from autogluon.multimodal.utils import Visualizer
conf_threshold = 0.4 # Specify a confidence threshold to filter out unwanted boxes
image_result = pred.iloc[30]
img_path = image_result.image # Select an image to visualize
visualizer = Visualizer(img_path) # Initialize the Visualizer
out = visualizer.draw_instance_predictions(image_result, conf_threshold=conf_threshold) # Draw detections
visualized = out.get_image() # Get the visualized image
from PIL import Image
from IPython.display import display
img = Image.fromarray(visualized, 'RGB')
display(img)
.. figure:: output_quick_start_coco_f6564b_33_0.png
Testing on Your Own Image
~~~~~~~~~~~~~~~~~~~~~~~~~
You can also download an image and run inference on that single image.
The following is an example:
Download the example image:
.. code:: python
from autogluon.multimodal import download
image_url = "https://raw.githubusercontent.com/dmlc/web-data/master/gluoncv/detection/street_small.jpg"
test_image = download(image_url)
.. parsed-literal::
:class: output
Downloading street_small.jpg from https://raw.githubusercontent.com/dmlc/web-data/master/gluoncv/detection/street_small.jpg...
Run inference:
.. code:: python
pred_test_image = better_predictor.predict({"image": [test_image]})
print(pred_test_image)
.. parsed-literal::
:class: output
image bboxes
0 street_small.jpg [{'class': 'bicycle', 'bbox': [235.36739, 216....
Other Examples
~~~~~~~~~~~~~~
You may go to `AutoMM Examples
<https://github.com/autogluon/autogluon/tree/master/examples/automm>`__
to explore more examples of using AutoMM.
Customization
~~~~~~~~~~~~~
To learn how to customize AutoMM, please refer to
:ref:`sec_automm_customization`.
Citation
~~~~~~~~
::
@misc{redmon2018yolov3,
title={YOLOv3: An Incremental Improvement},
author={Joseph Redmon and Ali Farhadi},
year={2018},
eprint={1804.02767},
archivePrefix={arXiv},
primaryClass={cs.CV}
}