Image Prediction - Search Space and Hyperparameter Optimization (HPO)¶
While Image Prediction - Quick Start introduced basic usage of AutoGluon's fit, evaluate, and predict with default configurations, this tutorial dives into the various options that you can specify for more advanced control over the fitting process.
These options include:
- Defining the search space of various hyperparameter values for the training of neural networks
- Specifying how to search through your chosen hyperparameter space
- Specifying how to schedule jobs to train a network under a particular hyperparameter configuration
The advanced functionalities of AutoGluon enable you to use your external knowledge about your particular prediction problem and computing resources to guide the training process. If properly used, you may be able to achieve superior performance in less training time.
Tip: If you are new to AutoGluon, review Image Prediction - Quick Start to learn the basics of the AutoGluon API.
Since our task is to classify images, we will use AutoGluon to produce an ImagePredictor:
import autogluon.core as ag
from autogluon.vision import ImagePredictor, ImageDataset
Create AutoGluon Dataset¶
Let's first create the dataset using the same subset of the Shopee-IET dataset as the Image Prediction - Quick Start tutorial. Recall that since there's no validation split in the original data, a 90/10 train/validation split is automatically performed when fit is called with train_data.
train_data, _, test_data = ImageDataset.from_folders('https://autogluon.s3.amazonaws.com/datasets/shopee-iet.zip')
data/
├── test/
└── train/
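The returned ImageDataset objects are pandas DataFrame subclasses, so you can inspect the loaded data with standard DataFrame tools. A minimal sketch (assuming the usual 'image'/'label' columns):
# ImageDataset subclasses pandas.DataFrame: each row holds an image path
# and an integer class label, so standard inspection methods work
print(train_data.head())
print('train/test sizes:', len(train_data), len(test_data))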
Specify which Networks to Try¶
We start by specifying the pretrained neural network candidates. Given such a list, AutoGluon tries to train different networks from this list to identify the best-performing candidate. This is an example of an autogluon.core.space.Categorical search space, in which there are a limited number of values to choose from.
model = ag.Categorical('resnet18_v1b', 'mobilenetv3_small')
# you may choose from 70+ available models in the model zoo provided by GluonCV:
model_list = ImagePredictor.list_models()
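For instance, you could peek at the returned names and build a wider Categorical space from any subset of the zoo; a minimal sketch (the slice below is purely illustrative):
# peek at a few of the available model names
print(model_list[:5])
# a wider (illustrative) search space built from the zoo listing
wide_model_space = ag.Categorical(*model_list[:3])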
Specify the Training Hyperparameters¶
Similarly, we can manually specify many crucial hyperparameters, either with a specific value or as a search space (autogluon.core.space).
batch_size = 8
lr = ag.Categorical(1e-2, 1e-3)
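Beyond Categorical, autogluon.core.space also provides continuous and integer-valued spaces. As a sketch, the learning rate could instead be searched over a log-scaled continuous range (the bounds below are illustrative, not recommendations):
# continuous search space, sampled on a log scale between the bounds
lr_space = ag.Real(1e-4, 1e-2, log=True)
# integer-valued search space, e.g. for the number of epochs
epochs_space = ag.Int(2, 10)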
Search Algorithms¶
In AutoGluon, autogluon.core.searcher supports different search strategies for both hyperparameter optimization and architecture search. Beyond simply specifying the space of hyperparameter configurations to search over, you can also tell AutoGluon what strategy it should employ to actually search through this space. This process of finding good hyperparameters from a given search space is commonly referred to as hyperparameter optimization (HPO) or hyperparameter tuning. autogluon.core.scheduler orchestrates how individual training jobs are scheduled. We currently support FIFO (standard) and Hyperband scheduling, along with search by random sampling or Bayesian optimization. These basic techniques are rendered surprisingly powerful by AutoGluon's support of asynchronous parallel execution.
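Both the searcher and the trial budget are selected through the hyperparameter_tune_kwargs argument of fit. As a sketch, random search could be requested instead of Bayesian optimization (treat the 'random' searcher name as an assumption, by analogy with the 'bayesopt' string used below):
# illustrative only: choose the search strategy via hyperparameter_tune_kwargs
hpo_kwargs = {
    'searcher': 'random',  # random sampling over the search space (assumed name)
    'num_trials': 4,       # evaluate at most 4 hyperparameter configurations
}
# predictor.fit(train_data, hyperparameters=hyperparameters,
#               hyperparameter_tune_kwargs=hpo_kwargs)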
Bayesian Optimization¶
Here is an example of Bayesian optimization using autogluon.core.searcher.GPFIFOSearcher.
Bayesian optimization fits a probabilistic surrogate model to estimate the function that relates each hyperparameter configuration to the resulting performance of a model trained under that configuration. Our implementation uses a Gaussian process surrogate model with expected improvement as the acquisition function, and has been developed specifically to support asynchronous parallel evaluations.
hyperparameters={'model': model, 'batch_size': batch_size, 'lr': lr, 'epochs': 2}
predictor = ImagePredictor()
predictor.fit(train_data, time_limit=60*10, hyperparameters=hyperparameters,
hyperparameter_tune_kwargs={'searcher': 'bayesopt', 'num_trials': 2})
print('Top-1 val acc: %.3f' % predictor.fit_summary()['valid_acc'])
Reset labels to [0, 1, 2, 3]
Randomly split train_data into train[720]/validation[80] splits.
The number of requested GPUs is greater than the number of available GPUs.Reduce the number to 1
Starting HPO experiments
0%| | 0/2 [00:00<?, ?it/s]
modified configs(<old> != <new>): {
root.train.early_stop_baseline 0.0 != -inf
root.train.epochs 10 != 2
root.train.lr 0.1 != 0.01
root.train.num_workers 4 != 8
root.train.batch_size 128 != 8
root.train.data_dir ~/.mxnet/datasets/imagenet != auto
root.train.rec_val_idx ~/.mxnet/datasets/imagenet/rec/val.idx != auto
root.train.early_stop_max_value 1.0 != inf
root.train.early_stop_patience -1 != 10
root.train.rec_train_idx ~/.mxnet/datasets/imagenet/rec/train.idx != auto
root.train.rec_train ~/.mxnet/datasets/imagenet/rec/train.rec != auto
root.train.num_training_samples 1281167 != -1
root.train.rec_val ~/.mxnet/datasets/imagenet/rec/val.rec != auto
root.valid.num_workers 4 != 8
root.valid.batch_size 128 != 8
root.img_cls.model resnet50_v1 != resnet18_v1b
}
Saved config to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-image-classification-v3/docs/_build/eval/tutorials/image_prediction/84191922/.trial_0/config.yaml
Start training from [Epoch 0]
Epoch[0] Batch [49] Speed: 228.703945 samples/sec accuracy=0.380000 lr=0.010000
[Epoch 0] training: accuracy=0.472222
[Epoch 0] speed: 241 samples/sec time cost: 2.948620
[Epoch 0] validation: top1=0.737500 top5=1.000000
[Epoch 0] Current best top-1: 0.737500 vs previous -inf, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-image-classification-v3/docs/_build/eval/tutorials/image_prediction/84191922/.trial_0/best_checkpoint.pkl
Epoch[1] Batch [49] Speed: 248.309550 samples/sec accuracy=0.612500 lr=0.010000
[Epoch 1] training: accuracy=0.618056
[Epoch 1] speed: 254 samples/sec time cost: 2.798737
[Epoch 1] validation: top1=0.787500 top5=1.000000
[Epoch 1] Current best top-1: 0.787500 vs previous 0.737500, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-image-classification-v3/docs/_build/eval/tutorials/image_prediction/84191922/.trial_0/best_checkpoint.pkl
Applying the state from the best checkpoint...
modified configs(<old> != <new>): {
root.train.early_stop_baseline 0.0 != -inf
root.train.epochs 10 != 2
root.train.lr 0.1 != 0.01
root.train.num_workers 4 != 8
root.train.batch_size 128 != 8
root.train.data_dir ~/.mxnet/datasets/imagenet != auto
root.train.rec_val_idx ~/.mxnet/datasets/imagenet/rec/val.idx != auto
root.train.early_stop_max_value 1.0 != inf
root.train.early_stop_patience -1 != 10
root.train.rec_train_idx ~/.mxnet/datasets/imagenet/rec/train.idx != auto
root.train.rec_train ~/.mxnet/datasets/imagenet/rec/train.rec != auto
root.train.num_training_samples 1281167 != -1
root.train.rec_val ~/.mxnet/datasets/imagenet/rec/val.rec != auto
root.valid.num_workers 4 != 8
root.valid.batch_size 128 != 8
root.img_cls.model resnet50_v1 != mobilenetv3_small
}
Saved config to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-image-classification-v3/docs/_build/eval/tutorials/image_prediction/84191922/.trial_1/config.yaml
Start training from [Epoch 0]
Epoch[0] Batch [49] Speed: 134.264785 samples/sec accuracy=0.322500 lr=0.010000
[Epoch 0] training: accuracy=0.387500
[Epoch 0] speed: 139 samples/sec time cost: 5.089346
[Epoch 0] validation: top1=0.587500 top5=1.000000
[Epoch 0] Current best top-1: 0.587500 vs previous -inf, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-image-classification-v3/docs/_build/eval/tutorials/image_prediction/84191922/.trial_1/best_checkpoint.pkl
Epoch[1] Batch [49] Speed: 143.545277 samples/sec accuracy=0.507500 lr=0.010000
[Epoch 1] training: accuracy=0.544444
[Epoch 1] speed: 144 samples/sec time cost: 4.917585
[Epoch 1] validation: top1=0.762500 top5=1.000000
[Epoch 1] Current best top-1: 0.762500 vs previous 0.587500, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-image-classification-v3/docs/_build/eval/tutorials/image_prediction/84191922/.trial_1/best_checkpoint.pkl
Applying the state from the best checkpoint...
modified configs(<old> != <new>): {
root.train.early_stop_baseline 0.0 != -inf
root.train.epochs 10 != 2
root.train.lr 0.1 != 0.01
root.train.num_workers 4 != 8
root.train.batch_size 128 != 8
root.train.data_dir ~/.mxnet/datasets/imagenet != auto
root.train.rec_val_idx ~/.mxnet/datasets/imagenet/rec/val.idx != auto
root.train.early_stop_max_value 1.0 != inf
root.train.early_stop_patience -1 != 10
root.train.rec_train_idx ~/.mxnet/datasets/imagenet/rec/train.idx != auto
root.train.rec_train ~/.mxnet/datasets/imagenet/rec/train.rec != auto
root.train.num_training_samples 1281167 != -1
root.train.rec_val ~/.mxnet/datasets/imagenet/rec/val.rec != auto
root.valid.num_workers 4 != 8
root.valid.batch_size 128 != 8
root.img_cls.model resnet50_v1 != resnet18_v1b
}
Saved config to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-image-classification-v3/docs/_build/eval/tutorials/image_prediction/84191922/.trial_0/config.yaml
Start training from [Epoch 0]
Epoch[0] Batch [49] Speed: 228.555268 samples/sec accuracy=0.337500 lr=0.010000
[Epoch 0] training: accuracy=0.423611
[Epoch 0] speed: 240 samples/sec time cost: 2.956346
[Epoch 0] validation: top1=0.700000 top5=1.000000
[Epoch 0] Current best top-1: 0.700000 vs previous -inf, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-image-classification-v3/docs/_build/eval/tutorials/image_prediction/84191922/.trial_0/best_checkpoint.pkl
Epoch[1] Batch [49] Speed: 248.315908 samples/sec accuracy=0.610000 lr=0.010000
[Epoch 1] training: accuracy=0.640278
[Epoch 1] speed: 252 samples/sec time cost: 2.816216
[Epoch 1] validation: top1=0.825000 top5=1.000000
[Epoch 1] Current best top-1: 0.825000 vs previous 0.700000, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-image-classification-v3/docs/_build/eval/tutorials/image_prediction/84191922/.trial_0/best_checkpoint.pkl
Applying the state from the best checkpoint...
Finished, total runtime is 35.26 s
{ 'best_config': { 'estimator': <class 'gluoncv.auto.estimators.image_classification.image_classification.ImageClassificationEstimator'>,
'gpus': [0],
'img_cls': { 'batch_norm': False,
'last_gamma': False,
'model': 'resnet18_v1b',
'use_gn': False,
'use_pretrained': True,
'use_se': False},
'train': { 'batch_size': 8,
'crop_ratio': 0.875,
'data_dir': 'auto',
'dtype': 'float32',
'early_stop_baseline': -inf,
'early_stop_max_value': inf,
'early_stop_min_delta': 0.001,
'early_stop_patience': 10,
'epochs': 2,
'hard_weight': 0.5,
'input_size': 224,
'label_smoothing': False,
'log_interval': 50,
'lr': 0.01,
'lr_decay': 0.1,
'lr_decay_epoch': '40, 60',
'lr_decay_period': 0,
'lr_mode': 'step',
'mixup': False,
'mixup_alpha': 0.2,
'mixup_off_epoch': 0,
'mode': '',
'momentum': 0.9,
'no_wd': False,
'num_training_samples': -1,
'num_workers': 8,
'output_lr_mult': 0.1,
'pretrained_base': True,
'rec_train': 'auto',
'rec_train_idx': 'auto',
'rec_val': 'auto',
'rec_val_idx': 'auto',
'resume_epoch': 0,
'start_epoch': 0,
'teacher': None,
'temperature': 20,
'transfer_lr_mult': 0.01,
'use_rec': False,
'warmup_epochs': 0,
'warmup_lr': 0.0,
'wd': 0.0001},
'valid': {'batch_size': 8, 'num_workers': 8}},
'total_time': 35.256065368652344,
'train_acc': 0.6402777777777777,
'valid_acc': 0.825}
Top-1 val acc: 0.825
The BO searcher can be configured via search_options; see autogluon.core.searcher.GPFIFOSearcher for details. Load the test dataset and evaluate:
results = predictor.evaluate(test_data)
print('Test acc on hold-out data:', results)
Test acc on hold-out data: {'top1': 0.7625, 'top5': 1.0}
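Beyond aggregate accuracy, the fitted predictor also produces per-image predictions, as in the Quick Start tutorial; a minimal sketch:
# per-image class predictions on the held-out data (returns a pandas object)
pred = predictor.predict(test_data)
print(pred.head())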
Note that num_trials=2 above is only used to speed up the tutorial. In normal practice, it is common to only use time_limit and drop num_trials, as sketched below.
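As a sketch of that more typical usage, reusing the hyperparameters dict from above (the one-hour budget is illustrative):
# let the time budget, rather than a trial count, bound the search
predictor_long = ImagePredictor()
predictor_long.fit(train_data, time_limit=60*60,
                   hyperparameters=hyperparameters,
                   hyperparameter_tune_kwargs={'searcher': 'bayesopt'})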