.. _sec_automm_detection_quick_start_coco:

AutoMM Detection - Quick Start on a Tiny COCO Format Dataset
=============================================================

In this section, our goal is to quickly finetune a pretrained model on a small dataset in COCO
format, and evaluate it on the test set. Both the training and test sets are in COCO format.
See :ref:`sec_automm_detection_convert_to_coco` for how to convert other datasets to COCO format.

Setting up the imports
~~~~~~~~~~~~~~~~~~~~~~

To start, let's import MultiModalPredictor:

.. code:: python

    from autogluon.multimodal import MultiModalPredictor

Make sure ``mmcv-full`` and ``mmdet`` are installed:

.. code:: python

    !mim install mmcv-full
    !pip install mmdet

.. parsed-literal::
    :class: output

    Looking in links: https://download.openmmlab.com/mmcv/dist/cu102/torch1.12.0/index.html
    Requirement already satisfied: mmcv-full in /home/ci/opt/venv/lib/python3.8/site-packages (1.7.0)
    Requirement already satisfied: packaging in /home/ci/opt/venv/lib/python3.8/site-packages (from mmcv-full) (22.0)
    Requirement already satisfied: pyyaml in /home/ci/opt/venv/lib/python3.8/site-packages (from mmcv-full) (5.4.1)
    Requirement already satisfied: opencv-python>=3 in /home/ci/opt/venv/lib/python3.8/site-packages (from mmcv-full) (4.6.0.66)
    Requirement already satisfied: numpy in /home/ci/opt/venv/lib/python3.8/site-packages (from mmcv-full) (1.22.4)
    Requirement already satisfied: yapf in /home/ci/opt/venv/lib/python3.8/site-packages (from mmcv-full) (0.32.0)
    Requirement already satisfied: Pillow in /home/ci/opt/venv/lib/python3.8/site-packages (from mmcv-full) (9.3.0)
    Requirement already satisfied: addict in /home/ci/opt/venv/lib/python3.8/site-packages (from mmcv-full) (2.4.0)
    Requirement already satisfied: mmdet in /home/ci/opt/venv/lib/python3.8/site-packages (2.26.0)
    Requirement already satisfied: six in /home/ci/opt/venv/lib/python3.8/site-packages (from mmdet) (1.16.0)
    Requirement already satisfied: terminaltables in /home/ci/opt/venv/lib/python3.8/site-packages (from mmdet) (3.1.10)
    Requirement already satisfied: pycocotools in /home/ci/opt/venv/lib/python3.8/site-packages (from mmdet) (2.0.6)
    Requirement already satisfied: scipy in /home/ci/opt/venv/lib/python3.8/site-packages (from mmdet) (1.8.1)
    Requirement already satisfied: matplotlib in /home/ci/opt/venv/lib/python3.8/site-packages (from mmdet) (3.6.2)
    Requirement already satisfied: numpy in /home/ci/opt/venv/lib/python3.8/site-packages (from mmdet) (1.22.4)
    Requirement already satisfied: contourpy>=1.0.1 in /home/ci/opt/venv/lib/python3.8/site-packages (from matplotlib->mmdet) (1.0.6)
    Requirement already satisfied: python-dateutil>=2.7 in /home/ci/opt/venv/lib/python3.8/site-packages (from matplotlib->mmdet) (2.8.2)
    Requirement already satisfied: cycler>=0.10 in /home/ci/opt/venv/lib/python3.8/site-packages (from matplotlib->mmdet) (0.11.0)
    Requirement already satisfied: pyparsing>=2.2.1 in /home/ci/opt/venv/lib/python3.8/site-packages (from matplotlib->mmdet) (3.0.9)
    Requirement already satisfied: pillow>=6.2.0 in /home/ci/opt/venv/lib/python3.8/site-packages (from matplotlib->mmdet) (9.3.0)
    Requirement already satisfied: packaging>=20.0 in /home/ci/opt/venv/lib/python3.8/site-packages (from matplotlib->mmdet) (22.0)
    Requirement already satisfied: kiwisolver>=1.0.1 in /home/ci/opt/venv/lib/python3.8/site-packages (from matplotlib->mmdet) (1.4.4)
    Requirement already satisfied: fonttools>=4.22.0 in /home/ci/opt/venv/lib/python3.8/site-packages (from matplotlib->mmdet) (4.38.0)
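If you want to sanity-check the installation before moving on, importing both packages and
printing their versions is enough. This is a minimal check; the versions on your machine may
differ from the build log above:

.. code:: python

    import mmcv
    import mmdet

    # The build log above shows mmcv-full 1.7.0 and mmdet 2.26.0;
    # any compatible versions should work for this tutorial.
    print(mmcv.__version__)
    print(mmdet.__version__)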
We also import some other packages that will be used in this tutorial:

.. code:: python

    import os
    import time

    from autogluon.core.utils.loaders import load_zip

Downloading Data
~~~~~~~~~~~~~~~~

We have the sample dataset ready in the cloud. Let's download it:

.. code:: python

    zip_file = "https://automl-mm-bench.s3.amazonaws.com/object_detection_dataset/tiny_motorbike_coco.zip"
    download_dir = "./tiny_motorbike_coco"

    load_zip.unzip(zip_file, unzip_dir=download_dir)
    data_dir = os.path.join(download_dir, "tiny_motorbike")
    train_path = os.path.join(data_dir, "Annotations", "trainval_cocoformat.json")
    test_path = os.path.join(data_dir, "Annotations", "test_cocoformat.json")

.. parsed-literal::
    :class: output

    Downloading ./tiny_motorbike_coco/file.zip from https://automl-mm-bench.s3.amazonaws.com/object_detection_dataset/tiny_motorbike_coco.zip...

.. parsed-literal::
    :class: output

    100%|██████████| 21.8M/21.8M [00:00<00:00, 59.3MiB/s]

When using a dataset in COCO format, the input is the JSON annotation file of the dataset split.
In this example, ``trainval_cocoformat.json`` is the annotation file of the train-and-validate
split, and ``test_cocoformat.json`` is the annotation file of the test split.
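To get a feel for what such an annotation file contains, you can peek at its top-level structure.
The sketch below relies only on the standard COCO layout, which stores ``images``,
``annotations``, and ``categories`` lists:

.. code:: python

    import json

    # Load the training annotation file and inspect its top-level structure.
    with open(train_path) as f:
        annotation = json.load(f)

    print("number of images:", len(annotation["images"]))
    print("number of boxes:", len(annotation["annotations"]))
    print("categories:", [c["name"] for c in annotation["categories"]])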
Creating the MultiModalPredictor
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We select YOLOv3 with a MobileNetV2 backbone and a 320x320 input resolution, pretrained on the
COCO dataset. With this setting, the model is fast to finetune and run inference with, and easy
to deploy. We use all the GPUs (if any):

.. code:: python

    checkpoint_name = "yolov3_mobilenetv2_320_300e_coco"
    num_gpus = -1  # use all GPUs

We create the MultiModalPredictor with the selected checkpoint name and number of GPUs. We need
to specify the problem_type as ``"object_detection"``, and also provide a ``sample_data_path``
for the predictor to infer the categories of the dataset. Here we provide the ``train_path``,
but any other split of this dataset works as well. We also provide a ``path`` to save the
predictor; if ``path`` is not specified, the predictor will be saved to an automatically
generated directory with a timestamp under ``AutogluonModels``.

.. code:: python

    # Init predictor
    import uuid

    model_path = f"./tmp/{uuid.uuid4().hex}-quick_start_tutorial_temp_save"

    predictor = MultiModalPredictor(
        hyperparameters={
            "model.mmdet_image.checkpoint_name": checkpoint_name,
            "env.num_gpus": num_gpus,
        },
        problem_type="object_detection",
        sample_data_path=train_path,
        path=model_path,
    )

.. parsed-literal::
    :class: output

    /home/ci/autogluon/multimodal/src/autogluon/multimodal/predictor.py:436: UserWarning: Running object detection. Make sure that you have installed mmdet and mmcv-full, by running 'mim install mmcv-full' and 'pip install mmdet'
      warnings.warn(

.. parsed-literal::
    :class: output

    processing yolov3_mobilenetv2_320_300e_coco...

.. parsed-literal::
    :class: output

    Successfully downloaded yolov3_mobilenetv2_320_300e_coco_20210719_215349-d18dff72.pth to /home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start
    Successfully dumped yolov3_mobilenetv2_320_300e_coco.py to /home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start
    load checkpoint from local path: yolov3_mobilenetv2_320_300e_coco_20210719_215349-d18dff72.pth
    The model and loaded state dict do not match exactly

    size mismatch for bbox_head.convs_pred.0.weight: copying a param with shape torch.Size([255, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([45, 96, 1, 1]).
    size mismatch for bbox_head.convs_pred.0.bias: copying a param with shape torch.Size([255]) from checkpoint, the shape in current model is torch.Size([45]).
    size mismatch for bbox_head.convs_pred.1.weight: copying a param with shape torch.Size([255, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([45, 96, 1, 1]).
    size mismatch for bbox_head.convs_pred.1.bias: copying a param with shape torch.Size([255]) from checkpoint, the shape in current model is torch.Size([45]).
    size mismatch for bbox_head.convs_pred.2.weight: copying a param with shape torch.Size([255, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([45, 96, 1, 1]).
    size mismatch for bbox_head.convs_pred.2.bias: copying a param with shape torch.Size([255]) from checkpoint, the shape in current model is torch.Size([45]).

These size-mismatch messages are expected: the prediction head of the COCO-pretrained checkpoint
covers the 80 COCO classes (255 = 3 anchors x (80 classes + 5)), while our head is re-initialized
to match the categories of this dataset (45 = 3 anchors x (10 classes + 5)), so those weights
cannot be copied over. Everything else loads from the pretrained checkpoint.

Finetuning the Model
~~~~~~~~~~~~~~~~~~~~

We set the learning rate to ``2e-4``. Note that we use a two-stage learning rate option during
finetuning by default, in which the model head has a 100x learning rate. Using a two-stage
learning rate with a high learning rate only on the head layers makes the model converge faster
during finetuning. It usually gives better performance as well, especially on small datasets
with hundreds or thousands of images.
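Conceptually, the two-stage option corresponds to an optimizer with separate parameter groups
for the backbone and the head. The toy PyTorch snippet below is only an illustration of the
idea, not AutoGluon's actual optimizer setup:

.. code:: python

    import torch

    # Toy stand-ins for a pretrained backbone and a freshly initialized head.
    backbone = torch.nn.Linear(8, 8)
    head = torch.nn.Linear(8, 4)

    base_lr = 2e-4
    optimizer = torch.optim.SGD(
        [
            {"params": backbone.parameters()},                   # uses the base lr
            {"params": head.parameters(), "lr": base_lr * 100},  # head gets 100x lr
        ],
        lr=base_lr,
    )

    # The randomly initialized head can take large steps while the
    # pretrained backbone is only gently adjusted.
    for group in optimizer.param_groups:
        print(group["lr"])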
We also set ``max_epochs`` to 30 and ``per_gpu_batch_size`` to 32, and we time the fit process
to better understand the training speed. We ran this on a g4.2xlarge EC2 machine on AWS, and
part of the command outputs are shown below:

.. code:: python

    start = time.time()

    # Fit
    predictor.fit(
        train_path,
        hyperparameters={
            "optimization.learning_rate": 2e-4,  # we use a two-stage lr; the detection head has 100x lr
            "optimization.max_epochs": 30,
            "env.per_gpu_batch_size": 32,  # decrease it when the model is large
        },
    )

    train_end = time.time()

.. parsed-literal::
    :class: output

    Global seed set to 123

.. parsed-literal::
    :class: output

    loading annotations into memory...
    Done (t=0.00s)
    creating index...
    index created!

.. parsed-literal::
    :class: output

    GPU available: True (cuda), used: True
    TPU available: False, using: 0 TPU cores
    IPU available: False, using: 0 IPUs
    HPU available: False, using: 0 HPUs
    LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

      | Name              | Type                             | Params
    -----------------------------------------------------------------------
    0 | model             | MMDetAutoModelForObjectDetection | 3.7 M
    1 | validation_metric | MeanMetric                       | 0
    -----------------------------------------------------------------------
    3.7 M     Trainable params
    0         Non-trainable params
    3.7 M     Total params
    14.706    Total estimated model params size (MB)
    /home/ci/opt/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py:1892: PossibleUserWarning: The number of training batches (5) is smaller than the logging interval Trainer(log_every_n_steps=10). Set a lower value for log_every_n_steps if you want to see logs for the training epoch.
      rank_zero_warn(
    Epoch 0, global step 1: 'val_direct_loss' reached 28263.55859 (best 28263.55859), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/a2cfefff684247a5b1890518015e5635-quick_start_tutorial_temp_save/epoch=0-step=1.ckpt' as top 1
    Epoch 1, global step 2: 'val_direct_loss' reached 10605.58398 (best 10605.58398), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/a2cfefff684247a5b1890518015e5635-quick_start_tutorial_temp_save/epoch=1-step=2.ckpt' as top 1
    Epoch 1, global step 3: 'val_direct_loss' reached 4444.50098 (best 4444.50098), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/a2cfefff684247a5b1890518015e5635-quick_start_tutorial_temp_save/epoch=1-step=3.ckpt' as top 1
    Epoch 2, global step 4: 'val_direct_loss' reached 2138.37476 (best 2138.37476), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/a2cfefff684247a5b1890518015e5635-quick_start_tutorial_temp_save/epoch=2-step=4.ckpt' as top 1
    Epoch 2, global step 5: 'val_direct_loss' reached 1337.25488 (best 1337.25488), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/a2cfefff684247a5b1890518015e5635-quick_start_tutorial_temp_save/epoch=2-step=5.ckpt' as top 1
    Epoch 3, global step 6: 'val_direct_loss' reached 1239.52478 (best 1239.52478), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/a2cfefff684247a5b1890518015e5635-quick_start_tutorial_temp_save/epoch=3-step=6.ckpt' as top 1
    Epoch 3, global step 7: 'val_direct_loss' reached 971.26068 (best 971.26068), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/a2cfefff684247a5b1890518015e5635-quick_start_tutorial_temp_save/epoch=3-step=7.ckpt' as top 1
    Epoch 4, global step 8: 'val_direct_loss' was not in top 1
    Epoch 4, global step 9: 'val_direct_loss' reached 929.80939 (best 929.80939), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/a2cfefff684247a5b1890518015e5635-quick_start_tutorial_temp_save/epoch=4-step=9.ckpt' as top 1
    Epoch 5, global step 10: 'val_direct_loss' was not in top 1
    Epoch 5, global step 11: 'val_direct_loss' was not in top 1
    Epoch 6, global step 12: 'val_direct_loss' reached 919.20398 (best 919.20398), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/a2cfefff684247a5b1890518015e5635-quick_start_tutorial_temp_save/epoch=6-step=12.ckpt' as top 1
    Epoch 6, global step 13: 'val_direct_loss' was not in top 1
    Epoch 7, global step 14: 'val_direct_loss' reached 907.16553 (best 907.16553), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/a2cfefff684247a5b1890518015e5635-quick_start_tutorial_temp_save/epoch=7-step=14.ckpt' as top 1
    Epoch 7, global step 15: 'val_direct_loss' was not in top 1
    Epoch 8, global step 16: 'val_direct_loss' was not in top 1
    Epoch 8, global step 17: 'val_direct_loss' reached 873.87000 (best 873.87000), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/a2cfefff684247a5b1890518015e5635-quick_start_tutorial_temp_save/epoch=8-step=17.ckpt' as top 1
    Epoch 9, global step 18: 'val_direct_loss' reached 809.06348 (best 809.06348), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/a2cfefff684247a5b1890518015e5635-quick_start_tutorial_temp_save/epoch=9-step=18.ckpt' as top 1
    Epoch 9, global step 19: 'val_direct_loss' was not in top 1
    Epoch 10, global step 20: 'val_direct_loss' was not in top 1
    Epoch 10, global step 21: 'val_direct_loss' was not in top 1
    Epoch 11, global step 22: 'val_direct_loss' reached 790.93079 (best 790.93079), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/a2cfefff684247a5b1890518015e5635-quick_start_tutorial_temp_save/epoch=11-step=22.ckpt' as top 1
    Epoch 11, global step 23: 'val_direct_loss' reached 757.35474 (best 757.35474), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/a2cfefff684247a5b1890518015e5635-quick_start_tutorial_temp_save/epoch=11-step=23.ckpt' as top 1
    Epoch 12, global step 24: 'val_direct_loss' was not in top 1
    Epoch 12, global step 25: 'val_direct_loss' reached 732.31500 (best 732.31500), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/a2cfefff684247a5b1890518015e5635-quick_start_tutorial_temp_save/epoch=12-step=25.ckpt' as top 1
    Epoch 13, global step 26: 'val_direct_loss' was not in top 1
    Epoch 13, global step 27: 'val_direct_loss' reached 626.65387 (best 626.65387), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/a2cfefff684247a5b1890518015e5635-quick_start_tutorial_temp_save/epoch=13-step=27.ckpt' as top 1
    Epoch 14, global step 28: 'val_direct_loss' was not in top 1
    Epoch 14, global step 29: 'val_direct_loss' reached 626.18237 (best 626.18237), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/a2cfefff684247a5b1890518015e5635-quick_start_tutorial_temp_save/epoch=14-step=29.ckpt' as top 1
    Epoch 15, global step 30: 'val_direct_loss' was not in top 1
    Epoch 15, global step 31: 'val_direct_loss' was not in top 1
    Epoch 16, global step 32: 'val_direct_loss' was not in top 1
    Epoch 16, global step 33: 'val_direct_loss' was not in top 1
    Epoch 17, global step 34: 'val_direct_loss' was not in top 1
    Epoch 17, global step 35: 'val_direct_loss' was not in top 1
    Epoch 18, global step 36: 'val_direct_loss' was not in top 1
    Epoch 18, global step 37: 'val_direct_loss' was not in top 1
    Epoch 19, global step 38: 'val_direct_loss' was not in top 1
    Epoch 19, global step 39: 'val_direct_loss' was not in top 1
    Epoch 20, global step 40: 'val_direct_loss' was not in top 1
    Epoch 20, global step 41: 'val_direct_loss' was not in top 1
    Epoch 21, global step 42: 'val_direct_loss' was not in top 1
    Epoch 21, global step 43: 'val_direct_loss' was not in top 1
    Epoch 22, global step 44: 'val_direct_loss' was not in top 1
    Epoch 22, global step 45: 'val_direct_loss' was not in top 1
    Epoch 23, global step 46: 'val_direct_loss' was not in top 1
    Epoch 23, global step 47: 'val_direct_loss' reached 600.72766 (best 600.72766), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/a2cfefff684247a5b1890518015e5635-quick_start_tutorial_temp_save/epoch=23-step=47.ckpt' as top 1
    Epoch 24, global step 48: 'val_direct_loss' reached 568.21265 (best 568.21265), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/object_detection/quick_start/tmp/a2cfefff684247a5b1890518015e5635-quick_start_tutorial_temp_save/epoch=24-step=48.ckpt' as top 1
    Epoch 24, global step 49: 'val_direct_loss' was not in top 1
    Epoch 25, global step 50: 'val_direct_loss' was not in top 1
    Epoch 25, global step 51: 'val_direct_loss' was not in top 1
    Epoch 26, global step 52: 'val_direct_loss' was not in top 1
    Epoch 26, global step 53: 'val_direct_loss' was not in top 1
    Epoch 27, global step 54: 'val_direct_loss' was not in top 1
    Epoch 27, global step 55: 'val_direct_loss' was not in top 1
    Epoch 28, global step 56: 'val_direct_loss' was not in top 1
    Epoch 28, global step 57: 'val_direct_loss' was not in top 1
    Epoch 29, global step 58: 'val_direct_loss' was not in top 1
    Epoch 29, global step 59: 'val_direct_loss' was not in top 1
    `Trainer.fit` stopped: `max_epochs=30` reached.

Notice that at the end of each progress bar, if the checkpoint at the current stage is saved,
the model's save path is printed. In this example, it is the ``model_path`` directory we
specified when creating the predictor.

Print out the time, and we can see that it's fast!

.. code:: python

    print("This finetuning takes %.2f seconds." % (train_end - start))

.. parsed-literal::
    :class: output

    This finetuning takes 141.92 seconds.

Evaluation
~~~~~~~~~~

To evaluate the model we just trained, run the following code. The evaluation results are shown
in the command line output: the first line is the mAP in the COCO standard, and the second line
is the mAP in the VOC standard (or mAP50). For more details about these metrics, see `COCO's
evaluation guideline
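For reference, the evaluation call itself is a one-liner; a minimal sketch (``evaluate`` takes
the annotation file of the split to score) looks like this:

.. code:: python

    # Evaluate on the test split; the mAP metrics are printed
    # to the command line output as described above.
    predictor.evaluate(test_path)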