.. _sec_imgadvanced:

Image Prediction - Search Space and Hyperparameter Optimization (HPO)
======================================================================

While the :ref:`sec_imgquick` introduced basic usage of AutoGluon ``fit``, ``evaluate``, and ``predict`` with default configurations, this tutorial dives into the various options that you can specify for more advanced control over the fitting process. These options include:

- Defining the search space of various hyperparameter values for the training of neural networks
- Specifying how to search through your chosen hyperparameter space
- Specifying how to schedule jobs to train a network under a particular hyperparameter configuration

The advanced functionalities of AutoGluon enable you to use your external knowledge about your particular prediction problem and computing resources to guide the training process. Used properly, you may be able to achieve superior performance within less training time.

**Tip**: If you are new to AutoGluon, review :ref:`sec_imgquick` to learn the basics of the AutoGluon API.

Since our task is to classify images, we will use AutoGluon to produce an `ImagePredictor <../../api/autogluon.predictor.html#autogluon.vision.ImagePredictor>`__:

.. code:: python

    import autogluon.core as ag
    from autogluon.vision import ImagePredictor

Create AutoGluon Dataset
------------------------

Let's first create the dataset using the same subset of the ``Shopee-IET`` dataset as in the :ref:`sec_imgquick` tutorial. Recall that because the original data contains no validation split, a 90/10 train/validation split is automatically performed when ``fit`` is called with ``train_data``.

.. code:: python

    train_data, _, test_data = ImagePredictor.Dataset.from_folders('https://autogluon.s3.amazonaws.com/datasets/shopee-iet.zip')

.. parsed-literal::
    :class: output

    data/
    ├── test/
    └── train/

Specify which Networks to Try
-----------------------------

We start by specifying the pretrained neural network candidates. Given such a list, AutoGluon tries to train different networks from this list to identify the best-performing candidate. This is an example of a :class:`autogluon.core.space.Categorical` search space, in which there are a limited number of values to choose from.

.. code:: python

    model = ag.Categorical('resnet18_v1b', 'mobilenetv3_small')

    # you may choose from 70+ available models in the model zoo provided by GluonCV:
    model_list = ImagePredictor.list_models()

Specify the training hyper-parameters
-------------------------------------

Similarly, we can manually specify crucial hyper-parameters, either as fixed values or as search spaces (``autogluon.core.space``).

.. code:: python

    batch_size = 8
    lr = ag.Categorical(1e-2, 1e-3)

Search Algorithms
-----------------

In AutoGluon, ``autogluon.core.searcher`` supports different search strategies for both hyperparameter optimization and architecture search. Beyond simply specifying the space of hyperparameter configurations to search over, you can also tell AutoGluon what strategy it should employ to actually search through this space. This process of finding good hyperparameters from a given search space is commonly referred to as *hyperparameter optimization* (HPO) or *hyperparameter tuning*. ``autogluon.core.scheduler`` orchestrates how individual training jobs are scheduled. We currently support FIFO (standard) and Hyperband scheduling, along with search by random sampling or Bayesian optimization. These basic techniques are rendered surprisingly powerful by AutoGluon's support of asynchronous parallel execution.
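Before looking at concrete searchers, note that the spaces they search over are not restricted to the ``Categorical`` type used above: ``autogluon.core.space`` also provides continuous and integer-valued spaces. Below is a minimal sketch of these space types; the variable names and the particular ranges are illustrative only, not tuned recommendations.

.. code:: python

    import autogluon.core as ag

    # Continuous range, sampled on a log scale -- a natural choice for learning rates.
    lr_space = ag.Real(1e-4, 1e-2, log=True)

    # Integer-valued range, e.g. for the number of training epochs.
    epochs_space = ag.Int(2, 8)

    # Finite set of choices, as used for the model candidates above.
    batch_size_space = ag.Categorical(8, 16, 32)

Any of these space objects can be used as a value in the ``hyperparameters`` dict below, in place of a fixed value.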
Bayesian Optimization
~~~~~~~~~~~~~~~~~~~~~

Here is an example of Bayesian optimization using :class:`autogluon.core.searcher.GPFIFOSearcher`. Bayesian optimization fits a probabilistic *surrogate model* to estimate the function that maps each hyperparameter configuration to the resulting performance of a model trained under that configuration. Our implementation uses a Gaussian process surrogate model with expected improvement as the acquisition function, and has been developed specifically to support asynchronous parallel evaluations.

.. code:: python

    hyperparameters = {'model': model, 'batch_size': batch_size, 'lr': lr, 'epochs': 2}
    predictor = ImagePredictor()
    predictor.fit(train_data, search_strategy='bayesopt', time_limit=60*10,
                  hyperparameters=hyperparameters,
                  hyperparameter_tune_kwargs={'num_trials': 2})
    print('Top-1 val acc: %.3f' % predictor.fit_summary()['valid_acc'])

.. parsed-literal::
    :class: output

    INFO:root:Reset labels to [0, 1, 2, 3]
    WARNING:gluoncv.auto.tasks.image_classification:The number of requested GPUs is greater than the number of available GPUs.Reduce the number to 1
    INFO:gluoncv.auto.tasks.image_classification:Randomly split train_data into train[721]/validation[79] splits.
    INFO:gluoncv.auto.tasks.image_classification:Starting HPO experiments

.. parsed-literal::
    :class: output

      0%|          | 0/2 [00:00<?, ?it/s]
    INFO:ImageClassificationEstimator:modified configs(<old> != <new>): {
    INFO:ImageClassificationEstimator:root.valid.batch_size       128 != 8
    INFO:ImageClassificationEstimator:root.valid.num_workers      4 != 8
    INFO:ImageClassificationEstimator:root.train.epochs           10 != 2
    INFO:ImageClassificationEstimator:root.train.num_training_samples 1281167 != -1
    INFO:ImageClassificationEstimator:root.train.early_stop_patience -1 != 10
    INFO:ImageClassificationEstimator:root.train.early_stop_max_value 1.0 != inf
    INFO:ImageClassificationEstimator:root.train.data_dir         ~/.mxnet/datasets/imagenet != auto
    INFO:ImageClassificationEstimator:root.train.early_stop_baseline 0.0 != -inf
    INFO:ImageClassificationEstimator:root.train.rec_val          ~/.mxnet/datasets/imagenet/rec/val.rec != auto
    INFO:ImageClassificationEstimator:root.train.rec_train        ~/.mxnet/datasets/imagenet/rec/train.rec != auto
    INFO:ImageClassificationEstimator:root.train.rec_train_idx    ~/.mxnet/datasets/imagenet/rec/train.idx != auto
    INFO:ImageClassificationEstimator:root.train.num_workers      4 != 8
    INFO:ImageClassificationEstimator:root.train.batch_size       128 != 8
    INFO:ImageClassificationEstimator:root.train.rec_val_idx      ~/.mxnet/datasets/imagenet/rec/val.idx != auto
    INFO:ImageClassificationEstimator:root.train.lr               0.1 != 0.01
    INFO:ImageClassificationEstimator:root.img_cls.model          resnet50_v1 != resnet18_v1b
    INFO:ImageClassificationEstimator:}
    INFO:ImageClassificationEstimator:Saved config to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-image-classification-v3/docs/_build/eval/tutorials/image_prediction/2795733c/.trial_0/config.yaml
    INFO:ImageClassificationEstimator:Start training from [Epoch 0]
    INFO:ImageClassificationEstimator:Epoch[0] Batch [49] Speed: 222.918614 samples/sec accuracy=0.397500 lr=0.010000
    INFO:ImageClassificationEstimator:Epoch[0] Batch [99] Speed: 264.451630 samples/sec accuracy=0.501250 lr=0.010000
    INFO:ImageClassificationEstimator:[Epoch 0] training: accuracy=0.501250
    INFO:ImageClassificationEstimator:[Epoch 0] speed: 239 samples/sec time cost: 4.811845
    INFO:ImageClassificationEstimator:[Epoch 0] validation: top1=0.717500 top5=1.000000
    INFO:ImageClassificationEstimator:[Epoch 0] Current best top-1: 0.717500 vs previous 0.000000, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-image-classification-v3/docs/_build/eval/tutorials/image_prediction/2795733c/.trial_0/best_checkpoint.pkl
    INFO:ImageClassificationEstimator:Epoch[1] Batch [49] Speed: 249.347932 samples/sec accuracy=0.600000 lr=0.010000
    INFO:ImageClassificationEstimator:Epoch[1] Batch [99] Speed: 262.866884 samples/sec accuracy=0.631250 lr=0.010000
    INFO:ImageClassificationEstimator:[Epoch 1] training: accuracy=0.631250
    INFO:ImageClassificationEstimator:[Epoch 1] speed: 253 samples/sec time cost: 4.607433
    INFO:ImageClassificationEstimator:[Epoch 1] validation: top1=0.812500 top5=1.000000
    INFO:ImageClassificationEstimator:[Epoch 1] Current best top-1: 0.812500 vs previous 0.717500, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-image-classification-v3/docs/_build/eval/tutorials/image_prediction/2795733c/.trial_0/best_checkpoint.pkl
    INFO:ImageClassificationEstimator:Applying the state from the best checkpoint...
    ...
    INFO:gluoncv.auto.tasks.image_classification:Finished, total runtime is 47.59 s
    INFO:gluoncv.auto.tasks.image_classification:{ 'best_config': { 'estimator': <class 'gluoncv.auto.estimators.image_classification.image_classification.ImageClassificationEstimator'>,
                      'gpus': [0],
                      'img_cls': { 'batch_norm': False,
                                   'last_gamma': False,
                                   'model': 'resnet18_v1b',
                                   'use_gn': False,
                                   'use_pretrained': True,
                                   'use_se': False},
                      'train': { 'batch_size': 8,
                                 'crop_ratio': 0.875,
                                 'data_dir': 'auto',
                                 'dtype': 'float32',
                                 'early_stop_baseline': -inf,
                                 'early_stop_max_value': inf,
                                 'early_stop_min_delta': 0.001,
                                 'early_stop_patience': 10,
                                 'epochs': 2,
                                 'hard_weight': 0.5,
                                 'input_size': 224,
                                 'label_smoothing': False,
                                 'log_interval': 50,
                                 'lr': 0.01,
                                 'lr_decay': 0.1,
                                 'lr_decay_epoch': '40, 60',
                                 'lr_decay_period': 0,
                                 'lr_mode': 'step',
                                 'mixup': False,
                                 'mixup_alpha': 0.2,
                                 'mixup_off_epoch': 0,
                                 'mode': '',
                                 'momentum': 0.9,
                                 'no_wd': False,
                                 'num_training_samples': -1,
                                 'num_workers': 8,
                                 'output_lr_mult': 0.1,
                                 'pretrained_base': True,
                                 'rec_train': 'auto',
                                 'rec_train_idx': 'auto',
                                 'rec_val': 'auto',
                                 'rec_val_idx': 'auto',
                                 'resume_epoch': 0,
                                 'start_epoch': 0,
                                 'teacher': None,
                                 'temperature': 20,
                                 'transfer_lr_mult': 0.01,
                                 'use_rec': False,
                                 'warmup_epochs': 0,
                                 'warmup_lr': 0.0,
                                 'wd': 0.0001},
                      'valid': {'batch_size': 8, 'num_workers': 8}},
      'total_time': 47.58576416969299,
      'train_acc': 0.62625,
      'valid_acc': 0.82}

.. parsed-literal::
    :class: output

    Top-1 val acc: 0.820
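The ``fit_summary()`` call used above to read ``valid_acc`` returns a dictionary that also carries the other fields printed in the final log summary. A quick sketch of inspecting them (field names as they appear in the output above):

.. code:: python

    summary = predictor.fit_summary()

    # Fields shown in the printed summary above:
    print(summary['train_acc'])    # training accuracy of the best trial
    print(summary['valid_acc'])    # validation accuracy of the best trial
    print(summary['total_time'])   # total HPO runtime in seconds
    print(summary['best_config'])  # full configuration of the winning trial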
The BO searcher can be configured via ``search_options``; see :class:`autogluon.core.searcher.GPFIFOSearcher` for details.

Load the test dataset and evaluate:

.. code:: python

    top1, top5 = predictor.evaluate(test_data)
    print('Test acc on hold-out data:', top1)

.. parsed-literal::
    :class: output

    Test acc on hold-out data: 0.7375

Note that ``num_trials=2`` above is used only to speed up the tutorial. In normal practice, it is common to use only ``time_limit`` and drop ``num_trials``.

Hyperband Early Stopping
~~~~~~~~~~~~~~~~~~~~~~~~

AutoGluon currently supports scheduling trials in serial order as well as with early stopping (e.g., if a model's performance early in training already looks poor, the trial may be terminated early to free up resources). Here is an example of using the early stopping scheduler :class:`autogluon.core.scheduler.HyperbandScheduler`. ``scheduler_options`` is used to configure the scheduler. In this example, we run Hyperband with a single bracket, where stop/go decisions are made after 1 and 2 epochs (``grace_period`` and ``grace_period * reduction_factor``, respectively):

.. code:: python

    hyperparameters.update({
        'search_strategy': 'hyperband',
        'grace_period': 1
    })

The ``fit``, ``evaluate`` and ``predict`` processes are exactly the same, so we will skip training here to save some time.

Bayesian Optimization and Hyperband
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

While Hyperband scheduling is normally driven by a random searcher, AutoGluon also provides Hyperband combined with Bayesian optimization. Tuning expensive deep learning models typically works best with this combination.

.. code:: python

    hyperparameters.update({
        'search_strategy': 'bayesopt_hyperband',
        'grace_period': 1
    })

The ``fit`` call itself follows the same pattern as before; a sketch is shown below. For a comparison of the different search algorithms and scheduling strategies, see :ref:`course_alg`.
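For completeness, here is what the (skipped) training run would look like under either of the two configurations above. This is a minimal sketch that simply mirrors the Bayesian optimization example; the ``time_limit`` and ``num_trials`` values are illustrative, not recommendations.

.. code:: python

    # Sketch only: same call pattern as in the Bayesian optimization example;
    # 'search_strategy' and 'grace_period' now travel inside `hyperparameters`.
    predictor = ImagePredictor()
    predictor.fit(train_data, time_limit=60*10,
                  hyperparameters=hyperparameters,
                  hyperparameter_tune_kwargs={'num_trials': 2})
    top1, top5 = predictor.evaluate(test_data)
    print('Test acc on hold-out data:', top1)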
For more options available in ``fit``, see :class:`autogluon.vision.ImagePredictor`.