Image Prediction - Search Space and Hyperparameter Optimization (HPO)¶
While the Image Prediction - Quick Start tutorial introduced basic usage of AutoGluon's fit, evaluate, and predict with default configurations, this tutorial dives into the various options that you can specify for more advanced control over the fitting process.
These options include:
- Defining the search space of various hyperparameter values for the training of neural networks
- Specifying how to search through your chosen hyperparameter space
- Specifying how to schedule jobs to train a network under a particular hyperparameter configuration
The advanced functionalities of AutoGluon enable you to use your external knowledge about your particular prediction problem and computing resources to guide the training process. Used properly, they may help you achieve superior performance in less training time.
Tip: If you are new to AutoGluon, review Image Prediction - Quick Start to learn the basics of the AutoGluon API.
Since our task is to classify images, we will use AutoGluon to produce an ImagePredictor:
import autogluon.core as ag
from autogluon.vision import ImagePredictor, ImageDataset
Create AutoGluon Dataset¶
Let’s first create the dataset using the same subset of the Shopee-IET dataset as in the Image Prediction - Quick Start tutorial. Recall that because there is no validation split in the original data, a 90/10 train/validation split is automatically performed when fit is called with train_data.
train_data, _, test_data = ImageDataset.from_folders('https://autogluon.s3.amazonaws.com/datasets/shopee-iet.zip')
data/
├── test/
└── train/
Specify which Networks to Try¶
We start by specifying the pretrained neural network candidates. Given such a list, AutoGluon tries to train different networks from this list to identify the best-performing candidate. This is an example of an autogluon.core.space.Categorical search space, in which there is a limited number of values to choose from.
model = ag.Categorical('resnet18_v1b', 'mobilenetv3_small')
# you may choose from 70+ available models in the model zoo provided by GluonCV:
model_list = ImagePredictor.list_models()
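If you want a larger candidate pool, the search space can be built directly from the model zoo listing. A minimal sketch, assuming the zoo contains model names that start with 'resnet' (adjust the filter to taste):
# build a Categorical space from a filtered subset of the model zoo
# (assumption: some zoo names start with 'resnet')
resnet_candidates = [name for name in model_list if name.startswith('resnet')]
larger_model_space = ag.Categorical(*resnet_candidates)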
Specify the Training Hyper-parameters¶
Similarly, we can manually specify many crucial hyper-parameters, either as fixed values or as search spaces (autogluon.core.space).
batch_size = 8
lr = ag.Categorical(1e-2, 1e-3)
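Besides Categorical, autogluon.core.space also provides continuous and integer ranges. A minimal sketch (the ranges below are illustrative assumptions, not tuned recommendations):
# log-uniform continuous range for the learning rate (illustrative bounds)
lr_space = ag.Real(1e-4, 1e-2, log=True)
# integer range for the batch size (illustrative bounds)
batch_size_space = ag.Int(8, 32)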
Search Algorithms¶
In AutoGluon, autogluon.core.searcher supports different search strategies for both hyperparameter optimization and architecture search. Beyond simply specifying the space of hyperparameter configurations to search over, you can also tell AutoGluon what strategy it should employ to actually search through this space. This process of finding good hyperparameters from a given search space is commonly referred to as hyperparameter optimization (HPO) or hyperparameter tuning. autogluon.core.scheduler orchestrates how individual training jobs are scheduled. We currently support random search.
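The strategy is selected when you call fit. A minimal sketch, assuming the 'searcher' key is accepted inside hyperparameter_tune_kwargs:
# request random search explicitly (assumption: 'searcher' is a valid key
# of hyperparameter_tune_kwargs and 'random' names the random-search strategy)
hyperparameter_tune_kwargs = {'num_trials': 2, 'searcher': 'random'}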
Random Search¶
Here is an example of random search using autogluon.core.searcher.LocalRandomSearcher.
hyperparameters = {'model': model, 'batch_size': batch_size, 'lr': lr, 'epochs': 2}
predictor = ImagePredictor()
predictor.fit(train_data, time_limit=60*10, hyperparameters=hyperparameters,
              hyperparameter_tune_kwargs={'num_trials': 2})
print('Top-1 val acc: %.3f' % predictor.fit_summary()['valid_acc'])
Reset labels to [0, 1, 2, 3]
Randomly split train_data into train[720]/validation[80] splits.
The number of requested GPUs is greater than the number of available GPUs. Reduce the number to 1
Starting HPO experiments
0%| | 0/2 [00:00<?, ?it/s]
=============================================================================
WARNING: Using MXNet models in ImagePredictor is deprecated as of v0.4.0 and may contain various bugs and issues!
In v0.5.0, ImagePredictor will no longer support training MXNet models. Please consider switching to specifying Torch models instead.
Users should ensure they update their code that depends on ImagePredictor when upgrading to future AutoGluon releases.
For more information, refer to this GitHub issue: https://github.com/awslabs/autogluon/issues/1560
=============================================================================
modified configs(<old> != <new>): {
root.train.batch_size 128 != 8
root.train.num_training_samples 1281167 != -1
root.train.early_stop_baseline 0.0 != -inf
root.train.rec_val ~/.mxnet/datasets/imagenet/rec/val.rec != auto
root.train.rec_train_idx ~/.mxnet/datasets/imagenet/rec/train.idx != auto
root.train.early_stop_max_value 1.0 != inf
root.train.data_dir ~/.mxnet/datasets/imagenet != auto
root.train.rec_val_idx ~/.mxnet/datasets/imagenet/rec/val.idx != auto
root.train.lr 0.1 != 0.01
root.train.num_workers 4 != 8
root.train.epochs 10 != 2
root.train.rec_train ~/.mxnet/datasets/imagenet/rec/train.rec != auto
root.train.early_stop_patience -1 != 10
root.valid.num_workers 4 != 8
root.valid.batch_size 128 != 8
root.img_cls.model resnet50_v1 != resnet18_v1b
}
Saved config to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-image-classification-v3/docs/_build/eval/tutorials/image_prediction/8b5b94dc/.trial_0/config.yaml
No gpu detected, fallback to cpu. You can ignore this warning if this is intended.
Start training from [Epoch 0]
Epoch[0] Batch [49] Speed: 27.895751 samples/sec accuracy=0.345000 lr=0.010000
[Epoch 0] training: accuracy=0.426389
[Epoch 0] speed: 28 samples/sec time cost: 25.279570
[Epoch 0] validation: top1=0.637500 top5=1.000000
[Epoch 0] Current best top-1: 0.637500 vs previous -inf, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-image-classification-v3/docs/_build/eval/tutorials/image_prediction/8b5b94dc/.trial_0/best_checkpoint.pkl
Epoch[1] Batch [49] Speed: 28.425455 samples/sec accuracy=0.655000 lr=0.010000
[Epoch 1] training: accuracy=0.644444
[Epoch 1] speed: 28 samples/sec time cost: 24.994214
[Epoch 1] validation: top1=0.825000 top5=1.000000
[Epoch 1] Current best top-1: 0.825000 vs previous 0.637500, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-image-classification-v3/docs/_build/eval/tutorials/image_prediction/8b5b94dc/.trial_0/best_checkpoint.pkl
Applying the state from the best checkpoint...
=============================================================================
WARNING: Using MXNet models in ImagePredictor is deprecated as of v0.4.0 and may contain various bugs and issues!
In v0.5.0, ImagePredictor will no longer support training MXNet models. Please consider switching to specifying Torch models instead.
Users should ensure they update their code that depends on ImagePredictor when upgrading to future AutoGluon releases.
For more information, refer to this GitHub issue: https://github.com/awslabs/autogluon/issues/1560
=============================================================================
modified configs(<old> != <new>): {
root.train.batch_size 128 != 8
root.train.num_training_samples 1281167 != -1
root.train.early_stop_baseline 0.0 != -inf
root.train.rec_val ~/.mxnet/datasets/imagenet/rec/val.rec != auto
root.train.rec_train_idx ~/.mxnet/datasets/imagenet/rec/train.idx != auto
root.train.early_stop_max_value 1.0 != inf
root.train.data_dir ~/.mxnet/datasets/imagenet != auto
root.train.rec_val_idx ~/.mxnet/datasets/imagenet/rec/val.idx != auto
root.train.lr 0.1 != 0.001
root.train.num_workers 4 != 8
root.train.epochs 10 != 2
root.train.rec_train ~/.mxnet/datasets/imagenet/rec/train.rec != auto
root.train.early_stop_patience -1 != 10
root.valid.num_workers 4 != 8
root.valid.batch_size 128 != 8
root.img_cls.model resnet50_v1 != resnet18_v1b
}
Saved config to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-image-classification-v3/docs/_build/eval/tutorials/image_prediction/8b5b94dc/.trial_1/config.yaml
No gpu detected, fallback to cpu. You can ignore this warning if this is intended.
Start training from [Epoch 0]
Epoch[0] Batch [49] Speed: 28.409658 samples/sec accuracy=0.245000 lr=0.001000
[Epoch 0] training: accuracy=0.258333
[Epoch 0] speed: 28 samples/sec time cost: 24.727135
[Epoch 0] validation: top1=0.412500 top5=1.000000
[Epoch 0] Current best top-1: 0.412500 vs previous -inf, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-image-classification-v3/docs/_build/eval/tutorials/image_prediction/8b5b94dc/.trial_1/best_checkpoint.pkl
Epoch[1] Batch [49] Speed: 29.032407 samples/sec accuracy=0.317500 lr=0.001000
[Epoch 1] training: accuracy=0.345833
[Epoch 1] speed: 29 samples/sec time cost: 24.420626
[Epoch 1] validation: top1=0.512500 top5=1.000000
[Epoch 1] Current best top-1: 0.512500 vs previous 0.412500, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-image-classification-v3/docs/_build/eval/tutorials/image_prediction/8b5b94dc/.trial_1/best_checkpoint.pkl
Applying the state from the best checkpoint...
=============================================================================
WARNING: Using MXNet models in ImagePredictor is deprecated as of v0.4.0 and may contain various bugs and issues!
In v0.5.0, ImagePredictor will no longer support training MXNet models. Please consider switching to specifying Torch models instead.
Users should ensure they update their code that depends on ImagePredictor when upgrading to future AutoGluon releases.
For more information, refer to this GitHub issue: https://github.com/awslabs/autogluon/issues/1560
=============================================================================
Finished, total runtime is 107.88 s
{ 'best_config': { 'estimator': <class 'gluoncv.auto.estimators.image_classification.image_classification.ImageClassificationEstimator'>,
'gpus': [0],
'img_cls': { 'batch_norm': False,
'last_gamma': False,
'model': 'resnet18_v1b',
'use_gn': False,
'use_pretrained': True,
'use_se': False},
'train': { 'batch_size': 8,
'crop_ratio': 0.875,
'data_dir': 'auto',
'dtype': 'float32',
'early_stop_baseline': -inf,
'early_stop_max_value': inf,
'early_stop_min_delta': 0.001,
'early_stop_patience': 10,
'epochs': 2,
'hard_weight': 0.5,
'input_size': 224,
'label_smoothing': False,
'log_interval': 50,
'lr': 0.01,
'lr_decay': 0.1,
'lr_decay_epoch': '40, 60',
'lr_decay_period': 0,
'lr_mode': 'step',
'mixup': False,
'mixup_alpha': 0.2,
'mixup_off_epoch': 0,
'mode': '',
'momentum': 0.9,
'no_wd': False,
'num_training_samples': -1,
'num_workers': 8,
'output_lr_mult': 0.1,
'pretrained_base': True,
'rec_train': 'auto',
'rec_train_idx': 'auto',
'rec_val': 'auto',
'rec_val_idx': 'auto',
'resume_epoch': 0,
'start_epoch': 0,
'teacher': None,
'temperature': 20,
'transfer_lr_mult': 0.01,
'use_rec': False,
'warmup_epochs': 0,
'warmup_lr': 0.0,
'wd': 0.0001},
'valid': {'batch_size': 8, 'num_workers': 8}},
'total_time': 107.87760162353516,
'train_acc': 0.3458333333333333,
'valid_acc': 0.5125}
Top-1 val acc: 0.512
Evaluate on the hold-out test dataset (loaded earlier alongside train_data):
results = predictor.evaluate(test_data)
print('Test acc on hold-out data:', results)
Test acc on hold-out data: {'top1': 0.7625, 'top5': 1.0}
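The tuned predictor can then be used for inference just as in the Quick Start tutorial. A minimal sketch:
# predict class labels for the hold-out images using the best found model
pred = predictor.predict(test_data)
print(pred.head())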
Note that num_trials=2 above is only used to speed up the tutorial. In normal practice, it is common to only use time_limit and drop num_trials, as in the sketch below.
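A sketch of such a budget-driven run (the one-hour budget and epoch count are illustrative assumptions, not recommendations):
# rely entirely on the time budget: no num_trials cap, HPO stops when time runs out
predictor = ImagePredictor()
predictor.fit(train_data, time_limit=3600,
              hyperparameters={'model': model, 'batch_size': batch_size,
                               'lr': lr, 'epochs': 15})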