Getting started with Advanced HPO Algorithms¶
This tutorial provides a complete example of how to use AutoGluon’s state-of-the-art hyperparameter optimization (HPO) algorithms to tune a Multi-Layer Perceptron (MLP), the most basic type of neural network.
Loading libraries¶
# Basic utils for folder manipulations etc
import time
import multiprocessing # to count the number of CPUs available
# External tools to load and process data
import numpy as np
import pandas as pd
# MXNet (NeuralNets)
import mxnet as mx
from mxnet import gluon, autograd
from mxnet.gluon import nn
# AutoGluon and HPO tools
import autogluon.core as ag
from autogluon.mxnet.utils import load_and_split_openml_data
Check the version of MXNet; any version >= 1.5 should be fine.
mx.__version__
'1.7.0'
You can also check the version of AutoGluon (and the specific commit) to make sure it matches what you expect.
import autogluon.core.version
ag.version.__version__
'0.2.0b20210429'
Hyperparameter Optimization of a 2-layer MLP¶
Setting up the context¶
Here we declare a few “environment variables” that set the context for what we are doing:
OPENML_TASK_ID = 6 # describes the problem we will tackle
RATIO_TRAIN_VALID = 0.33 # split of the training data used for validation
RESOURCE_ATTR_NAME = 'epoch' # how do we measure resources (will become clearer further)
REWARD_ATTR_NAME = 'objective' # how do we measure performance (will become clearer further)
NUM_CPUS = multiprocessing.cpu_count()
Preparing the data¶
We will use a multi-way classification task from OpenML. Data preparation includes (a rough sketch of these steps with scikit-learn follows below):
Imputing missing values, using the ‘mean’ strategy of sklearn.impute.SimpleImputer
Splitting the training set into training and validation parts
Standardizing the inputs to mean 0, variance 1
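This is a minimal sketch of the three preparation steps with scikit-learn, assuming raw features X and labels y are already loaded as NumPy arrays (the helper load_and_split_openml_data used below performs the equivalent work internally, so this cell is purely illustrative):

from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# 1) Impute missing values with the per-column mean
X = SimpleImputer(strategy='mean').fit_transform(X)
# 2) Hold out a validation split (fraction RATIO_TRAIN_VALID of the data)
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=RATIO_TRAIN_VALID, random_state=0)
# 3) Standardize inputs to mean 0, variance 1 (fit the scaler on training data only)
scaler = StandardScaler().fit(X_train)
X_train, X_valid = scaler.transform(X_train), scaler.transform(X_valid)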
X_train, X_valid, y_train, y_valid, n_classes = load_and_split_openml_data(
OPENML_TASK_ID, RATIO_TRAIN_VALID, download_from_openml=False)
n_classes
26
The problem has 26 classes.
Declaring a model specifying a hyperparameter space with AutoGluon¶
Two-layer MLP where we optimize over:
the number of units on the first layer
the number of units on the second layer
the dropout rate after each layer
the learning rate
the scaling
The @ag.args decorator allows us to specify the space we will optimize over; this matches the ConfigSpace syntax.
The body of the function run_mlp_openml is pretty simple:
it reads the hyperparameters given via the decorator
it defines a 2-layer MLP with dropout
it declares a trainer with the ‘adam’ optimizer and a provided learning rate
it trains the NN for a number of epochs (most of that is boilerplate code from mxnet)
the reporter at the end is used to keep track of the training history in the hyperparameter optimization
Note: The number of epochs and the hyperparameter space are reduced to make for a shorter experiment.
@ag.args(n_units_1=ag.space.Int(lower=16, upper=128),
         n_units_2=ag.space.Int(lower=16, upper=128),
         dropout_1=ag.space.Real(lower=0, upper=.75),
         dropout_2=ag.space.Real(lower=0, upper=.75),
         learning_rate=ag.space.Real(lower=1e-6, upper=1, log=True),
         batch_size=ag.space.Int(lower=8, upper=128),
         scale_1=ag.space.Real(lower=0.001, upper=10, log=True),
         scale_2=ag.space.Real(lower=0.001, upper=10, log=True),
         epochs=9)
def run_mlp_openml(args, reporter, **kwargs):
    # Time stamp for elapsed_time
    ts_start = time.time()
    # Unwrap hyperparameters
    n_units_1 = args.n_units_1
    n_units_2 = args.n_units_2
    dropout_1 = args.dropout_1
    dropout_2 = args.dropout_2
    scale_1 = args.scale_1
    scale_2 = args.scale_2
    batch_size = args.batch_size
    learning_rate = args.learning_rate

    ctx = mx.cpu()
    net = nn.Sequential()
    with net.name_scope():
        # Layer 1
        net.add(nn.Dense(n_units_1, activation='relu',
                         weight_initializer=mx.initializer.Uniform(scale=scale_1)))
        # Dropout
        net.add(gluon.nn.Dropout(dropout_1))
        # Layer 2
        net.add(nn.Dense(n_units_2, activation='relu',
                         weight_initializer=mx.initializer.Uniform(scale=scale_2)))
        # Dropout
        net.add(gluon.nn.Dropout(dropout_2))
        # Output
        net.add(nn.Dense(n_classes))
    net.initialize(ctx=ctx)

    trainer = gluon.Trainer(net.collect_params(), 'adam',
                            {'learning_rate': learning_rate})

    for epoch in range(args.epochs):
        ts_epoch = time.time()

        train_iter = mx.io.NDArrayIter(
            data={'data': X_train},
            label={'label': y_train},
            batch_size=batch_size,
            shuffle=True)
        valid_iter = mx.io.NDArrayIter(
            data={'data': X_valid},
            label={'label': y_valid},
            batch_size=batch_size,
            shuffle=False)

        metric = mx.metric.Accuracy()
        loss = gluon.loss.SoftmaxCrossEntropyLoss()

        for batch in train_iter:
            data = batch.data[0].as_in_context(ctx)
            label = batch.label[0].as_in_context(ctx)
            with autograd.record():
                output = net(data)
                L = loss(output, label)
            L.backward()
            trainer.step(data.shape[0])
            metric.update([label], [output])

        name, train_acc = metric.get()

        metric = mx.metric.Accuracy()
        for batch in valid_iter:
            data = batch.data[0].as_in_context(ctx)
            label = batch.label[0].as_in_context(ctx)
            output = net(data)
            metric.update([label], [output])

        name, val_acc = metric.get()

        print('Epoch %d ; Time: %f ; Training: %s=%f ; Validation: %s=%f' % (
            epoch + 1, time.time() - ts_start, name, train_acc, name, val_acc))

        ts_now = time.time()
        eval_time = ts_now - ts_epoch
        elapsed_time = ts_now - ts_start

        # The resource reported back (as 'epoch') is the number of epochs
        # done, starting at 1
        reporter(
            epoch=epoch + 1,
            objective=float(val_acc),
            eval_time=eval_time,
            time_step=ts_now,
            elapsed_time=elapsed_time)
Note: The annotation epochs=9 specifies the maximum number of epochs for training. It becomes available as args.epochs. Importantly, it is also processed by HyperbandScheduler below in order to set its max_t attribute.

Recommendation: Whenever writing training code to be passed as train_fn to a scheduler, if this training code reports a resource (or time) attribute, the corresponding maximum resource value should be included in train_fn.args:

If the resource attribute (time_attr of the scheduler) in train_fn is epoch, make sure to include epochs=XYZ in the annotation. This allows the scheduler to read max_t from train_fn.args.epochs. This case corresponds to our example here.
If the resource attribute is something other than epoch, you can instead include the annotation max_t=XYZ, which allows the scheduler to read max_t from train_fn.args.max_t.

Annotating the training function with the correct value for max_t simplifies scheduler creation (since max_t does not have to be passed) and avoids inconsistencies between train_fn and the scheduler. A short sketch of the two annotation styles is shown below.
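To illustrate the two annotation styles, here is a hedged sketch with toy training functions (the function names, their reduced hyperparameter spaces, and the dummy objective values are hypothetical stand-ins; the imports from the top of this tutorial are assumed):

# Case 1: the resource reported to the scheduler is 'epoch'.
# Annotate with epochs=..., so the scheduler reads max_t from train_fn.args.epochs.
@ag.args(learning_rate=ag.space.Real(lower=1e-6, upper=1, log=True),
         epochs=9)
def train_fn_epochs(args, reporter):
    for epoch in range(args.epochs):
        # dummy objective; a real train_fn would report validation performance
        reporter(epoch=epoch + 1, objective=float(epoch) / args.epochs)

# Case 2: the resource attribute is something other than 'epoch'.
# Annotate with max_t=..., so the scheduler reads max_t from train_fn.args.max_t.
@ag.args(learning_rate=ag.space.Real(lower=1e-6, upper=1, log=True),
         max_t=9)
def train_fn_max_t(args, reporter):
    for resource in range(args.max_t):
        reporter(resource=resource + 1, objective=float(resource) / args.max_t)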
Running the Hyperparameter Optimization¶
You can use the following schedulers:
FIFO (fifo)
Hyperband (either the stopping (hbs) or promotion (hbp) variant)
And the following searchers:
Random search (random)
Gaussian process based Bayesian optimization (bayesopt)
SkOpt Bayesian optimization (skopt; only with the FIFO scheduler)
Note that the method known as (asynchronous) Hyperband uses random search. Combining Hyperband scheduling with the bayesopt searcher uses a novel method called asynchronous BOHB.
Pick the combination you are interested in (running the full experiment takes around 120 seconds, see the time_out parameter); running everything with multiple repetitions can take a fair bit of time. In real life, you will want to choose a larger time_out in order to obtain good performance.
SCHEDULER = "hbs"
SEARCHER = "bayesopt"
def compute_error(df):
    return 1.0 - df["objective"]

def compute_runtime(df, start_timestamp):
    return df["time_step"] - start_timestamp

def process_training_history(task_dicts, start_timestamp,
                             runtime_fn=compute_runtime,
                             error_fn=compute_error):
    task_dfs = []
    for task_id in task_dicts:
        task_df = pd.DataFrame(task_dicts[task_id])
        task_df = task_df.assign(task_id=task_id,
                                 runtime=runtime_fn(task_df, start_timestamp),
                                 error=error_fn(task_df),
                                 target_epoch=task_df["epoch"].iloc[-1])
        task_dfs.append(task_df)

    result = pd.concat(task_dfs, axis="index", ignore_index=True, sort=True)

    # re-order by runtime
    result = result.sort_values(by="runtime")

    # calculate incumbent best -- the cumulative minimum of the error
    result = result.assign(best=result["error"].cummin())

    return result
resources = dict(num_cpus=NUM_CPUS, num_gpus=0)
search_options = {
    'num_init_random': 2,
    'debug_log': True}

if SCHEDULER == 'fifo':
    myscheduler = ag.scheduler.FIFOScheduler(
        run_mlp_openml,
        resource=resources,
        searcher=SEARCHER,
        search_options=search_options,
        time_out=120,
        time_attr=RESOURCE_ATTR_NAME,
        reward_attr=REWARD_ATTR_NAME)
else:
    # This setup uses rung levels at 1, 3, 9 epochs. We just use a single
    # bracket, so this is in fact successive halving (Hyperband would use
    # more than 1 bracket).
    # Also note that since we do not use the max_t argument of
    # HyperbandScheduler, this value is obtained from train_fn.args.epochs.
    sch_type = 'stopping' if SCHEDULER == 'hbs' else 'promotion'
    myscheduler = ag.scheduler.HyperbandScheduler(
        run_mlp_openml,
        resource=resources,
        searcher=SEARCHER,
        search_options=search_options,
        time_out=120,
        time_attr=RESOURCE_ATTR_NAME,
        reward_attr=REWARD_ATTR_NAME,
        type=sch_type,
        grace_period=1,
        reduction_factor=3,
        brackets=1)

# run tasks
myscheduler.run()
myscheduler.join_jobs()

results_df = process_training_history(
    myscheduler.training_history.copy(),
    start_timestamp=myscheduler._start_time)
The meaning of 'time_out' has changed. Previously, jobs started before
'time_out' were allowed to continue until stopped by other means. Now,
we stop jobs once 'time_out' is passed (at the next metric reporting).
If you like to keep the old behaviour, use
'stop_jobs_after_time_out=False'
Epoch 1 ; Time: 0.551913 ; Training: accuracy=0.260079 ; Validation: accuracy=0.531250
Epoch 2 ; Time: 1.010221 ; Training: accuracy=0.496365 ; Validation: accuracy=0.655247
Epoch 3 ; Time: 1.441771 ; Training: accuracy=0.559650 ; Validation: accuracy=0.694686
Epoch 4 ; Time: 1.871930 ; Training: accuracy=0.588896 ; Validation: accuracy=0.711063
Epoch 5 ; Time: 2.303676 ; Training: accuracy=0.609385 ; Validation: accuracy=0.726939
Epoch 6 ; Time: 2.731652 ; Training: accuracy=0.628139 ; Validation: accuracy=0.745321
Epoch 7 ; Time: 3.167528 ; Training: accuracy=0.641193 ; Validation: accuracy=0.750501
Epoch 8 ; Time: 3.618814 ; Training: accuracy=0.653751 ; Validation: accuracy=0.763202
Epoch 9 ; Time: 4.053248 ; Training: accuracy=0.665482 ; Validation: accuracy=0.766043
Epoch 1 ; Time: 0.492952 ; Training: accuracy=0.066969 ; Validation: accuracy=0.135966
Epoch 1 ; Time: 0.407260 ; Training: accuracy=0.207723 ; Validation: accuracy=0.468099
Epoch 2 ; Time: 0.752409 ; Training: accuracy=0.388241 ; Validation: accuracy=0.577211
Epoch 3 ; Time: 1.091997 ; Training: accuracy=0.446209 ; Validation: accuracy=0.640846
Epoch 1 ; Time: 0.642028 ; Training: accuracy=0.392118 ; Validation: accuracy=0.623620
Epoch 2 ; Time: 1.125357 ; Training: accuracy=0.483358 ; Validation: accuracy=0.621445
Epoch 3 ; Time: 1.600750 ; Training: accuracy=0.525170 ; Validation: accuracy=0.631315
Epoch 1 ; Time: 0.504050 ; Training: accuracy=0.059619 ; Validation: accuracy=0.109844
Epoch 1 ; Time: 3.790277 ; Training: accuracy=0.091097 ; Validation: accuracy=0.183546
Epoch 1 ; Time: 3.844794 ; Training: accuracy=0.036472 ; Validation: accuracy=0.037180
Epoch 1 ; Time: 0.413669 ; Training: accuracy=0.375920 ; Validation: accuracy=0.665834
Epoch 2 ; Time: 0.757706 ; Training: accuracy=0.508145 ; Validation: accuracy=0.720473
Epoch 3 ; Time: 1.116523 ; Training: accuracy=0.541636 ; Validation: accuracy=0.740130
Epoch 4 ; Time: 1.453819 ; Training: accuracy=0.562061 ; Validation: accuracy=0.752457
Epoch 5 ; Time: 1.791678 ; Training: accuracy=0.581245 ; Validation: accuracy=0.759787
Epoch 6 ; Time: 2.133008 ; Training: accuracy=0.591086 ; Validation: accuracy=0.746960
Epoch 7 ; Time: 2.474904 ; Training: accuracy=0.602167 ; Validation: accuracy=0.774779
Epoch 8 ; Time: 2.816282 ; Training: accuracy=0.618457 ; Validation: accuracy=0.764951
Epoch 9 ; Time: 3.153015 ; Training: accuracy=0.617547 ; Validation: accuracy=0.779777
Epoch 1 ; Time: 0.478136 ; Training: accuracy=0.040683 ; Validation: accuracy=0.038423
Epoch 1 ; Time: 0.535245 ; Training: accuracy=0.287879 ; Validation: accuracy=0.597138
Epoch 2 ; Time: 0.979881 ; Training: accuracy=0.516145 ; Validation: accuracy=0.700000
Epoch 3 ; Time: 1.485907 ; Training: accuracy=0.574764 ; Validation: accuracy=0.757912
Epoch 4 ; Time: 1.924792 ; Training: accuracy=0.611028 ; Validation: accuracy=0.777609
Epoch 5 ; Time: 2.367511 ; Training: accuracy=0.637440 ; Validation: accuracy=0.796970
Epoch 6 ; Time: 2.818012 ; Training: accuracy=0.659298 ; Validation: accuracy=0.811616
Epoch 7 ; Time: 3.259929 ; Training: accuracy=0.677430 ; Validation: accuracy=0.824579
Epoch 8 ; Time: 3.719176 ; Training: accuracy=0.692582 ; Validation: accuracy=0.835017
Epoch 9 ; Time: 4.163785 ; Training: accuracy=0.702765 ; Validation: accuracy=0.846465
Epoch 1 ; Time: 0.415025 ; Training: accuracy=0.193786 ; Validation: accuracy=0.508235
Epoch 1 ; Time: 0.426751 ; Training: accuracy=0.393377 ; Validation: accuracy=0.707333
Epoch 2 ; Time: 0.811320 ; Training: accuracy=0.585927 ; Validation: accuracy=0.759667
Epoch 3 ; Time: 1.178658 ; Training: accuracy=0.640232 ; Validation: accuracy=0.779667
Epoch 4 ; Time: 1.674079 ; Training: accuracy=0.652152 ; Validation: accuracy=0.798000
Epoch 5 ; Time: 2.055665 ; Training: accuracy=0.673179 ; Validation: accuracy=0.805833
Epoch 6 ; Time: 2.453770 ; Training: accuracy=0.683692 ; Validation: accuracy=0.824000
Epoch 7 ; Time: 2.855500 ; Training: accuracy=0.701656 ; Validation: accuracy=0.824667
Epoch 8 ; Time: 3.247274 ; Training: accuracy=0.707285 ; Validation: accuracy=0.838833
Epoch 9 ; Time: 3.640722 ; Training: accuracy=0.702815 ; Validation: accuracy=0.838333
Epoch 1 ; Time: 0.369572 ; Training: accuracy=0.361873 ; Validation: accuracy=0.523308
Epoch 1 ; Time: 0.473391 ; Training: accuracy=0.655592 ; Validation: accuracy=0.804829
Epoch 2 ; Time: 0.945737 ; Training: accuracy=0.801988 ; Validation: accuracy=0.868712
Epoch 3 ; Time: 1.362050 ; Training: accuracy=0.835708 ; Validation: accuracy=0.892186
Epoch 4 ; Time: 1.775351 ; Training: accuracy=0.852775 ; Validation: accuracy=0.890845
Epoch 5 ; Time: 2.187504 ; Training: accuracy=0.864374 ; Validation: accuracy=0.896881
Epoch 6 ; Time: 2.604181 ; Training: accuracy=0.875559 ; Validation: accuracy=0.908954
Epoch 7 ; Time: 3.023310 ; Training: accuracy=0.883430 ; Validation: accuracy=0.910127
Epoch 8 ; Time: 3.605531 ; Training: accuracy=0.889312 ; Validation: accuracy=0.924547
Epoch 9 ; Time: 4.072795 ; Training: accuracy=0.893621 ; Validation: accuracy=0.919852
Epoch 1 ; Time: 0.482494 ; Training: accuracy=0.536816 ; Validation: accuracy=0.731846
Epoch 2 ; Time: 0.910899 ; Training: accuracy=0.687313 ; Validation: accuracy=0.766896
Epoch 3 ; Time: 1.335477 ; Training: accuracy=0.709867 ; Validation: accuracy=0.784840
Epoch 4 ; Time: 1.757668 ; Training: accuracy=0.727612 ; Validation: accuracy=0.783331
Epoch 5 ; Time: 2.183848 ; Training: accuracy=0.743449 ; Validation: accuracy=0.797082
Epoch 6 ; Time: 2.621294 ; Training: accuracy=0.747927 ; Validation: accuracy=0.795740
Epoch 7 ; Time: 3.045213 ; Training: accuracy=0.755970 ; Validation: accuracy=0.803958
Epoch 8 ; Time: 3.471366 ; Training: accuracy=0.764511 ; Validation: accuracy=0.813517
Epoch 9 ; Time: 3.891623 ; Training: accuracy=0.767828 ; Validation: accuracy=0.803287
Epoch 1 ; Time: 0.539090 ; Training: accuracy=0.233784 ; Validation: accuracy=0.553291
Epoch 1 ; Time: 0.519054 ; Training: accuracy=0.403108 ; Validation: accuracy=0.650538
Epoch 2 ; Time: 0.971895 ; Training: accuracy=0.559524 ; Validation: accuracy=0.680444
Epoch 3 ; Time: 1.417608 ; Training: accuracy=0.591849 ; Validation: accuracy=0.725134
Epoch 1 ; Time: 0.435248 ; Training: accuracy=0.622195 ; Validation: accuracy=0.785167
Epoch 2 ; Time: 0.805448 ; Training: accuracy=0.827578 ; Validation: accuracy=0.853500
Epoch 3 ; Time: 1.172561 ; Training: accuracy=0.873872 ; Validation: accuracy=0.886833
Epoch 4 ; Time: 1.544706 ; Training: accuracy=0.905342 ; Validation: accuracy=0.888000
Epoch 5 ; Time: 1.954413 ; Training: accuracy=0.918923 ; Validation: accuracy=0.913833
Epoch 6 ; Time: 2.322272 ; Training: accuracy=0.931843 ; Validation: accuracy=0.918333
Epoch 7 ; Time: 2.685062 ; Training: accuracy=0.943106 ; Validation: accuracy=0.925333
Epoch 8 ; Time: 3.128999 ; Training: accuracy=0.948323 ; Validation: accuracy=0.923833
Epoch 9 ; Time: 3.495181 ; Training: accuracy=0.952464 ; Validation: accuracy=0.921667
Epoch 1 ; Time: 0.389614 ; Training: accuracy=0.711310 ; Validation: accuracy=0.798942
Epoch 2 ; Time: 0.646255 ; Training: accuracy=0.834077 ; Validation: accuracy=0.865906
Epoch 3 ; Time: 0.899171 ; Training: accuracy=0.880291 ; Validation: accuracy=0.876157
Epoch 4 ; Time: 1.162825 ; Training: accuracy=0.895503 ; Validation: accuracy=0.883929
Epoch 5 ; Time: 1.411408 ; Training: accuracy=0.905837 ; Validation: accuracy=0.893849
Epoch 6 ; Time: 1.661289 ; Training: accuracy=0.907821 ; Validation: accuracy=0.897321
Epoch 7 ; Time: 1.946922 ; Training: accuracy=0.915675 ; Validation: accuracy=0.900132
Epoch 8 ; Time: 2.267820 ; Training: accuracy=0.918403 ; Validation: accuracy=0.891204
Epoch 9 ; Time: 2.539390 ; Training: accuracy=0.909970 ; Validation: accuracy=0.899802
Epoch 1 ; Time: 0.458344 ; Training: accuracy=0.704348 ; Validation: accuracy=0.821256
Epoch 2 ; Time: 0.858968 ; Training: accuracy=0.867743 ; Validation: accuracy=0.877894
Epoch 3 ; Time: 1.263540 ; Training: accuracy=0.900952 ; Validation: accuracy=0.915376
Epoch 4 ; Time: 1.680543 ; Training: accuracy=0.923975 ; Validation: accuracy=0.898051
Epoch 5 ; Time: 2.094674 ; Training: accuracy=0.929275 ; Validation: accuracy=0.906380
Epoch 6 ; Time: 2.491814 ; Training: accuracy=0.945010 ; Validation: accuracy=0.919207
Epoch 7 ; Time: 2.889010 ; Training: accuracy=0.952050 ; Validation: accuracy=0.934533
Epoch 8 ; Time: 3.284588 ; Training: accuracy=0.957350 ; Validation: accuracy=0.927869
Epoch 9 ; Time: 3.680094 ; Training: accuracy=0.950228 ; Validation: accuracy=0.927703
Epoch 1 ; Time: 0.346220 ; Training: accuracy=0.487318 ; Validation: accuracy=0.708554
Epoch 2 ; Time: 0.641563 ; Training: accuracy=0.757046 ; Validation: accuracy=0.806863
Epoch 3 ; Time: 0.942064 ; Training: accuracy=0.827752 ; Validation: accuracy=0.850796
Epoch 4 ; Time: 1.215577 ; Training: accuracy=0.866462 ; Validation: accuracy=0.874834
Epoch 5 ; Time: 1.569674 ; Training: accuracy=0.889920 ; Validation: accuracy=0.887765
Epoch 6 ; Time: 1.852072 ; Training: accuracy=0.903432 ; Validation: accuracy=0.888097
Epoch 7 ; Time: 2.145360 ; Training: accuracy=0.917523 ; Validation: accuracy=0.912798
Epoch 8 ; Time: 2.421490 ; Training: accuracy=0.929460 ; Validation: accuracy=0.917938
Epoch 9 ; Time: 2.690420 ; Training: accuracy=0.928548 ; Validation: accuracy=0.904509
Epoch 1 ; Time: 0.338700 ; Training: accuracy=0.221284 ; Validation: accuracy=0.448120
Epoch 1 ; Time: 0.302172 ; Training: accuracy=0.287171 ; Validation: accuracy=0.536403
Epoch 1 ; Time: 0.690578 ; Training: accuracy=0.389091 ; Validation: accuracy=0.593603
Epoch 1 ; Time: 0.636078 ; Training: accuracy=0.038793 ; Validation: accuracy=0.036789
Epoch 1 ; Time: 3.832486 ; Training: accuracy=0.591346 ; Validation: accuracy=0.748149
Epoch 2 ; Time: 7.810733 ; Training: accuracy=0.798326 ; Validation: accuracy=0.835128
Epoch 3 ; Time: 11.792056 ; Training: accuracy=0.858256 ; Validation: accuracy=0.869953
Epoch 4 ; Time: 15.833877 ; Training: accuracy=0.886025 ; Validation: accuracy=0.878701
Epoch 5 ; Time: 19.821762 ; Training: accuracy=0.904758 ; Validation: accuracy=0.900740
Epoch 6 ; Time: 23.823801 ; Training: accuracy=0.916197 ; Validation: accuracy=0.909993
Epoch 7 ; Time: 28.159303 ; Training: accuracy=0.927387 ; Validation: accuracy=0.922779
Epoch 8 ; Time: 32.123492 ; Training: accuracy=0.934019 ; Validation: accuracy=0.923620
Epoch 9 ; Time: 36.094179 ; Training: accuracy=0.937997 ; Validation: accuracy=0.916386
Analysing the results¶
The training history is stored in results_df; the main fields are the runtime and 'best' (the objective).
Note: You will get slightly different curves for different pairs of scheduler/searcher; the time_out here is a bit too short to really see the differences in a significant way (it would be better to set it to >1000s). Generally speaking though, Hyperband stopping / promotion combined with a model-based searcher will tend to significantly outperform other combinations given enough time.
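Apart from the curve below, you may want to look at the best configuration found during the search. If your AutoGluon version exposes them (treat this as an assumption; check the scheduler documentation for your release), the scheduler accessors give a quick summary:

# Assumed accessors on the scheduler; skip this cell if they are unavailable in your version
print('Best objective (validation accuracy): {:.4f}'.format(myscheduler.get_best_reward()))
print('Best hyperparameter configuration:')
print(myscheduler.get_best_config())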
results_df.head()
|   | bracket | elapsed_time | epoch | error | eval_time | objective | runtime | target_epoch | task_id | terminated | time_step | best |
|---|---------|--------------|-------|-------|-----------|-----------|---------|--------------|---------|------------|-----------|------|
| 0 | 0 | 0.554390 | 1 | 0.468750 | 0.548989 | 0.531250 | 1.053647 | 9 | 0 | NaN | 1.619658e+09 | 0.468750 |
| 1 | 0 | 1.011912 | 2 | 0.344753 | 0.452909 | 0.655247 | 1.511168 | 9 | 0 | NaN | 1.619658e+09 | 0.344753 |
| 2 | 0 | 1.443527 | 3 | 0.305314 | 0.429090 | 0.694686 | 1.942784 | 9 | 0 | NaN | 1.619658e+09 | 0.305314 |
| 3 | 0 | 1.873573 | 4 | 0.288937 | 0.427487 | 0.711063 | 2.372830 | 9 | 0 | NaN | 1.619658e+09 | 0.288937 |
| 4 | 0 | 2.305264 | 5 | 0.273061 | 0.429003 | 0.726939 | 2.804521 | 9 | 0 | NaN | 1.619658e+09 | 0.273061 |
import matplotlib.pyplot as plt
plt.figure(figsize=(12, 8))
runtime = results_df['runtime'].values
objective = results_df['best'].values
plt.plot(runtime, objective, lw=2)
plt.xticks(fontsize=12)
plt.xlim(0, 120)
plt.ylim(0, 0.5)
plt.yticks(fontsize=12)
plt.xlabel("Runtime [s]", fontsize=14)
plt.ylabel("Objective", fontsize=14)
Text(0, 0.5, 'Objective')
Diving Deeper¶
Now you are ready to try HPO on your own machine learning models (if you use PyTorch, have a look at Tune PyTorch Model on MNIST). While AutoGluon comes with well-chosen defaults, it can pay off to tune it to your specific needs. Here are some tips which may come in handy.
Logging the Search Progress¶
First, it is in general a good idea to switch on debug_log, which outputs useful information about the search progress. This is already done in the example above.
The outputs show which configurations are chosen, stopped, or promoted. For BO and BOHB, a range of information is displayed for every get_config decision. This log output is very useful in order to figure out what is going on during the search.
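For reference, debug_log is enabled through the search_options dictionary passed to the scheduler, exactly as in the example above:

# Passed as search_options=... when creating the FIFO or Hyperband scheduler
search_options = {
    'num_init_random': 2,   # number of initial random configurations before BO kicks in
    'debug_log': True,      # log which configurations are chosen, stopped, or promoted
}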
Configuring HyperbandScheduler¶
The most important knobs to turn with HyperbandScheduler are max_t, grace_period, reduction_factor, brackets, and type. The first three determine the rung levels at which stopping or promotion decisions are being made.

The maximum resource level max_t (usually, resource equates to epochs, so max_t is the maximum number of training epochs) is typically hardcoded in the train_fn passed to the scheduler (this is run_mlp_openml in the example above). As already noted above, the value is best fixed in the ag.args decorator as epochs=XYZ; it can then be accessed as args.epochs in the train_fn code. If this is done, you do not have to pass max_t when creating the scheduler.

grace_period and reduction_factor determine the rung levels, which are grace_period, grace_period * reduction_factor, grace_period * (reduction_factor ** 2), etc. (see the small sketch after this list). All rung levels must be less than or equal to max_t. It is recommended to make max_t equal to the largest rung level. For example, if grace_period = 1 and reduction_factor = 3, it is in general recommended to use max_t = 9, max_t = 27, or max_t = 81. Choosing a max_t value “off the grid” works against the successive halving principle that the total resources spent in a rung should be roughly equal between rungs. If, in the example above, you set max_t = 10, about a third of the configurations reaching 9 epochs are allowed to proceed, but only for one more epoch.

With reduction_factor, you tune the extent to which successive halving filtering is applied. The larger this integer, the fewer configurations make it to higher numbers of epochs. Values 2, 3, 4 are commonly used.

grace_period should be set to the smallest resource (number of epochs) for which you expect any meaningful differentiation between configurations. While grace_period = 1 should always be explored, it may be too low for any meaningful stopping decisions to be made at the first rung.

brackets sets the maximum number of brackets in Hyperband (make sure to study the Hyperband paper or follow-ups for details). For brackets = 1, you are running successive halving (a single bracket). Higher brackets have larger effective grace_period values (so runs are not stopped until later), yet are also chosen with less probability. We recommend to always consider successive halving (brackets = 1) in a comparison.

Finally, with type (values stopping, promotion) you are choosing between different ways of extending successive halving scheduling to the asynchronous case. The default stopping is simpler and seems to perform well, but promotion is more careful about promoting configurations to higher resource levels, which can work better in some cases.
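To make the rung-level arithmetic concrete, here is a small sketch (the helper function is hypothetical and only mirrors the rule described above; the actual HyperbandScheduler computes its rung levels internally):

def rung_levels(grace_period, reduction_factor, max_t):
    # Rung levels are grace_period * reduction_factor ** k, capped by max_t
    levels, level = [], grace_period
    while level < max_t:
        levels.append(level)
        level *= reduction_factor
    return levels + [max_t]

print(rung_levels(grace_period=1, reduction_factor=3, max_t=9))   # [1, 3, 9] -- max_t on the grid
print(rung_levels(grace_period=1, reduction_factor=3, max_t=10))  # [1, 3, 9, 10] -- 'off the grid'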
Asynchronous BOHB¶
Finally, here are some ideas for tuning asynchronous BOHB, apart from tuning its HyperbandScheduler component. You need to pass these options in search_options.

We support a range of different surrogate models over the criterion functions across resource levels. All of them are jointly dependent Gaussian process models, meaning that data collected at all resource levels are modelled together. The surrogate model is selected by gp_resource_kernel; values are matern52, matern52-res-warp, exp-decay-sum, exp-decay-combined, exp-decay-delta1. These are variants of either a joint Matern 5/2 kernel over configuration and resource, or the exponential decay model. Details about the latter can be found here.

Fitting a Gaussian process surrogate model to data incurs a cost which scales cubically with the number of datapoints. When applied to expensive deep learning workloads, even multi-fidelity asynchronous BOHB rarely runs up more than 100 observations or so (across all rung levels and brackets), and the GP computations are subdominant. However, if you apply it to a cheaper train_fn and find yourself beyond 2000 total evaluations, the cost of GP fitting can become painful. In such a situation, you can explore the options opt_skip_period and opt_skip_num_max_resource. The basic idea is as follows. By far the most expensive part of a get_config call (picking the next configuration) is the refitting of the GP model to past data (this entails re-optimizing the hyperparameters of the surrogate model itself). These options allow you to skip this expensive step for most get_config calls, after some initial period. Check the docstrings for details about these options. If you find yourself in such a situation and gain experience with these skipping features, make sure to contact the AutoGluon developers -- we would love to learn about your use case.
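As an illustration, these options are passed via search_options when creating the scheduler; a minimal hedged sketch follows (the particular values are placeholder assumptions, not recommendations -- check the searcher docstrings for the exact semantics and defaults):

search_options = {
    'num_init_random': 2,
    'debug_log': True,
    # Surrogate model: joint GP kernel over configuration and resource level
    'gp_resource_kernel': 'matern52-res-warp',
    # Skip the expensive GP refit for most get_config calls after an initial period
    # (placeholder value; see the docstring for opt_skip_num_max_resource as well)
    'opt_skip_period': 3,
}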