.. _sec_custom_advancedhpo:
Getting started with Advanced HPO Algorithms
============================================
This tutorial provides a complete example of how to use AutoGluon's
state-of-the-art hyperparameter optimization (HPO) algorithms to tune a
simple Multi-Layer Perceptron (MLP), the most basic type of neural
network.
Loading libraries
-----------------
.. code:: python
# Basic utils for folder manipulations etc
import time
import multiprocessing # to count the number of CPUs available
# External tools to load and process data
import numpy as np
import pandas as pd
# MXNet (NeuralNets)
import mxnet as mx
from mxnet import gluon, autograd
from mxnet.gluon import nn
# AutoGluon and HPO tools
import autogluon.core as ag
from autogluon.mxnet.utils import load_and_split_openml_data
Check the version of MXNet; any version >= 1.5 should be fine.
.. code:: python
mx.__version__
.. parsed-literal::
:class: output
'1.7.0'
You can also check the version of AutoGluon (including the specific
build) to make sure it matches what you want.
.. code:: python
import autogluon.core.version
ag.version.__version__
.. parsed-literal::
:class: output
'0.3.1b20210831'
Hyperparameter Optimization of a 2-layer MLP
--------------------------------------------
Setting up the context
~~~~~~~~~~~~~~~~~~~~~~
Here we declare a few "environment variables" setting the context for
what we are doing:
.. code:: python
OPENML_TASK_ID = 6 # describes the problem we will tackle
RATIO_TRAIN_VALID = 0.33 # split of the training data used for validation
RESOURCE_ATTR_NAME = 'epoch' # how do we measure resources (will become clearer further)
REWARD_ATTR_NAME = 'objective' # how do we measure performance (will become clearer further)
NUM_CPUS = multiprocessing.cpu_count()
Preparing the data
~~~~~~~~~~~~~~~~~~
We will use a multi-way classification task from OpenML. Data
preparation includes:

- Imputing missing values, using the 'mean' strategy of ``sklearn.impute.SimpleImputer``
- Splitting the training set into training and validation parts
- Standardizing the inputs to mean 0, variance 1
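These steps are all handled by the ``load_and_split_openml_data`` helper
used below. As a rough sketch of what they amount to (this is not the
actual implementation of the helper, which also downloads the data from
OpenML and may differ in details), one could write:

.. code:: python

    # Minimal sketch of the preprocessing steps above, using scikit-learn.
    # NOT the actual implementation of load_and_split_openml_data.
    import numpy as np
    from sklearn.impute import SimpleImputer
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler

    def impute_split_standardize(X, y, valid_ratio):
        # Impute missing values with the per-column mean
        X = SimpleImputer(strategy='mean').fit_transform(X)
        # Split off a validation set
        X_train, X_valid, y_train, y_valid = train_test_split(
            X, y, test_size=valid_ratio, random_state=0)
        # Standardize to mean 0, variance 1 (statistics from the training split)
        scaler = StandardScaler().fit(X_train)
        return (scaler.transform(X_train), scaler.transform(X_valid),
                y_train, y_valid, len(np.unique(y)))

In the tutorial we simply call the helper: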
.. code:: python
X_train, X_valid, y_train, y_valid, n_classes = load_and_split_openml_data(
OPENML_TASK_ID, RATIO_TRAIN_VALID, download_from_openml=False)
n_classes
.. parsed-literal::
:class: output
100%|██████████| 704/704 [00:00<00:00, 59814.24KB/s]
100%|██████████| 2521/2521 [00:00<00:00, 39975.50KB/s]
3KB [00:00, 3989.51KB/s]
8KB [00:00, 6554.88KB/s]
15KB [00:00, 9802.83KB/s]
2998KB [00:00, 38889.00KB/s]
881KB [00:00, 66170.90KB/s]
3KB [00:00, 4030.40KB/s]
.. parsed-literal::
:class: output
26
The problem has 26 classes.
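Before moving on, it can be worth a quick sanity check of the prepared
arrays (the exact shapes depend on the OpenML task and the split ratio):

.. code:: python

    # Quick sanity check of the prepared train/validation splits
    print(X_train.shape, y_train.shape)
    print(X_valid.shape, y_valid.shape)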
Declaring a model and specifying a hyperparameter space with AutoGluon
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A two-layer MLP where we optimize over:

- the number of units in the first layer
- the number of units in the second layer
- the dropout rate after each layer
- the learning rate
- the batch size
- the scale of the (uniform) weight initializer of each layer

The ``@ag.args`` decorator allows us to specify the space we will
optimize over; this matches the ``ConfigSpace`` syntax.
The body of the function ``run_mlp_openml`` is pretty simple:

- it reads the hyperparameters given via the decorator
- it defines a 2-layer MLP with dropout
- it declares a trainer using the 'adam' optimizer and the provided learning rate
- it trains the network for a number of epochs (most of that is boilerplate ``mxnet`` code)
- the ``reporter`` at the end is used to keep track of the training history in the hyperparameter optimization
**Note**: The number of epochs and the hyperparameter space are reduced
here to make for a shorter experiment.
.. code:: python
@ag.args(n_units_1=ag.space.Int(lower=16, upper=128),
n_units_2=ag.space.Int(lower=16, upper=128),
dropout_1=ag.space.Real(lower=0, upper=.75),
dropout_2=ag.space.Real(lower=0, upper=.75),
learning_rate=ag.space.Real(lower=1e-6, upper=1, log=True),
batch_size=ag.space.Int(lower=8, upper=128),
scale_1=ag.space.Real(lower=0.001, upper=10, log=True),
scale_2=ag.space.Real(lower=0.001, upper=10, log=True),
epochs=9)
def run_mlp_openml(args, reporter, **kwargs):
# Time stamp for elapsed_time
ts_start = time.time()
# Unwrap hyperparameters
n_units_1 = args.n_units_1
n_units_2 = args.n_units_2
dropout_1 = args.dropout_1
dropout_2 = args.dropout_2
scale_1 = args.scale_1
scale_2 = args.scale_2
batch_size = args.batch_size
learning_rate = args.learning_rate
ctx = mx.cpu()
net = nn.Sequential()
with net.name_scope():
# Layer 1
net.add(nn.Dense(n_units_1, activation='relu',
weight_initializer=mx.initializer.Uniform(scale=scale_1)))
# Dropout
net.add(gluon.nn.Dropout(dropout_1))
# Layer 2
net.add(nn.Dense(n_units_2, activation='relu',
weight_initializer=mx.initializer.Uniform(scale=scale_2)))
# Dropout
net.add(gluon.nn.Dropout(dropout_2))
# Output
net.add(nn.Dense(n_classes))
net.initialize(ctx=ctx)
trainer = gluon.Trainer(net.collect_params(), 'adam',
{'learning_rate': learning_rate})
for epoch in range(args.epochs):
ts_epoch = time.time()
train_iter = mx.io.NDArrayIter(
data={'data': X_train},
label={'label': y_train},
batch_size=batch_size,
shuffle=True)
valid_iter = mx.io.NDArrayIter(
data={'data': X_valid},
label={'label': y_valid},
batch_size=batch_size,
shuffle=False)
metric = mx.metric.Accuracy()
loss = gluon.loss.SoftmaxCrossEntropyLoss()
for batch in train_iter:
data = batch.data[0].as_in_context(ctx)
label = batch.label[0].as_in_context(ctx)
with autograd.record():
output = net(data)
L = loss(output, label)
L.backward()
trainer.step(data.shape[0])
metric.update([label], [output])
name, train_acc = metric.get()
metric = mx.metric.Accuracy()
for batch in valid_iter:
data = batch.data[0].as_in_context(ctx)
label = batch.label[0].as_in_context(ctx)
output = net(data)
metric.update([label], [output])
name, val_acc = metric.get()
print('Epoch %d ; Time: %f ; Training: %s=%f ; Validation: %s=%f' % (
epoch + 1, time.time() - ts_start, name, train_acc, name, val_acc))
ts_now = time.time()
eval_time = ts_now - ts_epoch
elapsed_time = ts_now - ts_start
# The resource reported back (as 'epoch') is the number of epochs
# done, starting at 1
reporter(
epoch=epoch + 1,
objective=float(val_acc),
eval_time=eval_time,
time_step=ts_now,
elapsed_time=elapsed_time)
**Note**: The annotation ``epochs=9`` specifies the maximum number of
epochs for training. It becomes available as ``args.epochs``.
Importantly, it is also processed by ``HyperbandScheduler`` below in
order to set its ``max_t`` attribute.
**Recommendation**: Whenever writing training code to be passed as
``train_fn`` to a scheduler, if this training code reports a resource
(or time) attribute, the corresponding maximum resource value should be
included in ``train_fn.args``:
- If the resource attribute (``time_attr`` of scheduler) in
``train_fn`` is ``epoch``, make sure to include ``epochs=XYZ`` in the
annotation. This allows the scheduler to read ``max_t`` from
``train_fn.args.epochs``. This case corresponds to our example here.
- If the resource attribute is something other than ``epoch``, you can
  include the annotation ``max_t=XYZ`` instead, which allows the
  scheduler to read ``max_t`` from ``train_fn.args.max_t``.
Annotating the training function with the correct value for ``max_t``
simplifies scheduler creation (since ``max_t`` does not have to be
passed explicitly) and avoids inconsistencies between ``train_fn`` and
the scheduler.
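To make this concrete, here is a small sketch of both cases
(``train_fn_epoch_resource`` and ``train_fn_batch_resource`` are
hypothetical training functions, with the actual training and metric
computation elided):

.. code:: python

    # Case 1: the resource attribute reported by train_fn is 'epoch'.
    # Annotating with epochs=... lets the scheduler read max_t from
    # train_fn.args.epochs (this is what run_mlp_openml above does).
    @ag.args(learning_rate=ag.space.Real(lower=1e-6, upper=1, log=True),
             epochs=9)
    def train_fn_epoch_resource(args, reporter):
        for epoch in range(args.epochs):
            # ... train for one epoch, compute the validation metric ...
            reporter(epoch=epoch + 1, objective=0.0)  # placeholder objective

    # Case 2: the resource attribute is something else, say 'batch'
    # (the scheduler would then be created with time_attr='batch').
    # Annotating with max_t=... lets the scheduler read train_fn.args.max_t.
    @ag.args(learning_rate=ag.space.Real(lower=1e-6, upper=1, log=True),
             max_t=900)
    def train_fn_batch_resource(args, reporter):
        for batch in range(args.max_t):
            # ... train on one batch, compute the validation metric ...
            reporter(batch=batch + 1, objective=0.0)  # placeholder objective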
Running the Hyperparameter Optimization
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can use the following schedulers:
- FIFO (``fifo``)
- Hyperband (either the stopping (``hbs``) or promotion (``hbp``)
variant)
And the following searchers:
- Random search (``random``)
- Gaussian process based Bayesian optimization (``bayesopt``)
- SkOpt Bayesian optimization (``skopt``; only with FIFO scheduler)
Note that the method known as (asynchronous) Hyperband uses random
search. Combining Hyperband scheduling with the ``bayesopt`` searcher
uses a novel method called asynchronous BOHB.

Pick the combination you are interested in. The full experiment takes
around 120 seconds (see the ``time_out`` parameter), so running
everything with multiple repetitions can take a fair bit of time. In
real life, you will want to choose a larger ``time_out`` in order to
obtain good performance.
.. code:: python
SCHEDULER = "hbs"
SEARCHER = "bayesopt"
.. code:: python
def compute_error(df):
return 1.0 - df["objective"]
def compute_runtime(df, start_timestamp):
return df["time_step"] - start_timestamp
def process_training_history(task_dicts, start_timestamp,
runtime_fn=compute_runtime,
error_fn=compute_error):
task_dfs = []
for task_id in task_dicts:
task_df = pd.DataFrame(task_dicts[task_id])
task_df = task_df.assign(task_id=task_id,
runtime=runtime_fn(task_df, start_timestamp),
error=error_fn(task_df),
target_epoch=task_df["epoch"].iloc[-1])
task_dfs.append(task_df)
result = pd.concat(task_dfs, axis="index", ignore_index=True, sort=True)
# re-order by runtime
result = result.sort_values(by="runtime")
# calculate incumbent best -- the cumulative minimum of the error.
result = result.assign(best=result["error"].cummin())
return result
resources = dict(num_cpus=NUM_CPUS, num_gpus=0)
.. code:: python
search_options = {
'num_init_random': 2,
'debug_log': True}
if SCHEDULER == 'fifo':
myscheduler = ag.scheduler.FIFOScheduler(
run_mlp_openml,
resource=resources,
searcher=SEARCHER,
search_options=search_options,
time_out=120,
time_attr=RESOURCE_ATTR_NAME,
reward_attr=REWARD_ATTR_NAME)
else:
# This setup uses rung levels at 1, 3, 9 epochs. We just use a single
# bracket, so this is in fact successive halving (Hyperband would use
# more than 1 bracket).
# Also note that since we do not use the max_t argument of
# HyperbandScheduler, this value is obtained from train_fn.args.epochs.
sch_type = 'stopping' if SCHEDULER == 'hbs' else 'promotion'
myscheduler = ag.scheduler.HyperbandScheduler(
run_mlp_openml,
resource=resources,
searcher=SEARCHER,
search_options=search_options,
time_out=120,
time_attr=RESOURCE_ATTR_NAME,
reward_attr=REWARD_ATTR_NAME,
type=sch_type,
grace_period=1,
reduction_factor=3,
brackets=1)
# run tasks
myscheduler.run()
myscheduler.join_jobs()
results_df = process_training_history(
myscheduler.training_history.copy(),
start_timestamp=myscheduler._start_time)
.. parsed-literal::
:class: output
The meaning of 'time_out' has changed. Previously, jobs started before
'time_out' were allowed to continue until stopped by other means. Now,
we stop jobs once 'time_out' is passed (at the next metric reporting).
If you like to keep the old behaviour, use
'stop_jobs_after_time_out=False'
.. parsed-literal::
:class: output
Epoch 1 ; Time: 0.483459 ; Training: accuracy=0.260079 ; Validation: accuracy=0.531250
Epoch 2 ; Time: 0.902005 ; Training: accuracy=0.496365 ; Validation: accuracy=0.655247
Epoch 3 ; Time: 1.410622 ; Training: accuracy=0.559650 ; Validation: accuracy=0.694686
Epoch 4 ; Time: 1.844271 ; Training: accuracy=0.588896 ; Validation: accuracy=0.711063
Epoch 5 ; Time: 2.261347 ; Training: accuracy=0.609385 ; Validation: accuracy=0.726939
Epoch 6 ; Time: 2.678751 ; Training: accuracy=0.628139 ; Validation: accuracy=0.745321
Epoch 7 ; Time: 3.096179 ; Training: accuracy=0.641193 ; Validation: accuracy=0.750501
Epoch 8 ; Time: 3.514390 ; Training: accuracy=0.653751 ; Validation: accuracy=0.763202
Epoch 9 ; Time: 3.933117 ; Training: accuracy=0.665482 ; Validation: accuracy=0.766043
Epoch 1 ; Time: 0.352649 ; Training: accuracy=0.416214 ; Validation: accuracy=0.683117
Epoch 2 ; Time: 0.643649 ; Training: accuracy=0.598039 ; Validation: accuracy=0.754736
Epoch 3 ; Time: 1.019489 ; Training: accuracy=0.642363 ; Validation: accuracy=0.767198
Epoch 4 ; Time: 1.306369 ; Training: accuracy=0.668644 ; Validation: accuracy=0.795779
Epoch 5 ; Time: 1.597358 ; Training: accuracy=0.684462 ; Validation: accuracy=0.815387
Epoch 6 ; Time: 1.878297 ; Training: accuracy=0.696902 ; Validation: accuracy=0.815720
Epoch 7 ; Time: 2.171249 ; Training: accuracy=0.705718 ; Validation: accuracy=0.826354
Epoch 8 ; Time: 2.449944 ; Training: accuracy=0.717087 ; Validation: accuracy=0.836324
Epoch 9 ; Time: 2.739144 ; Training: accuracy=0.724337 ; Validation: accuracy=0.845297
Epoch 1 ; Time: 0.296594 ; Training: accuracy=0.060248 ; Validation: accuracy=0.126942
Epoch 1 ; Time: 0.894330 ; Training: accuracy=0.468579 ; Validation: accuracy=0.704046
Epoch 2 ; Time: 1.675783 ; Training: accuracy=0.569972 ; Validation: accuracy=0.733423
Epoch 3 ; Time: 2.472351 ; Training: accuracy=0.598077 ; Validation: accuracy=0.738963
Epoch 1 ; Time: 0.690242 ; Training: accuracy=0.285502 ; Validation: accuracy=0.478763
Epoch 1 ; Time: 0.581214 ; Training: accuracy=0.043967 ; Validation: accuracy=0.036364
Epoch 1 ; Time: 0.476710 ; Training: accuracy=0.242567 ; Validation: accuracy=0.546560
Epoch 1 ; Time: 1.710624 ; Training: accuracy=0.468434 ; Validation: accuracy=0.661008
Epoch 2 ; Time: 3.424747 ; Training: accuracy=0.652444 ; Validation: accuracy=0.733782
Epoch 3 ; Time: 5.203040 ; Training: accuracy=0.708368 ; Validation: accuracy=0.778487
Epoch 4 ; Time: 6.838795 ; Training: accuracy=0.740514 ; Validation: accuracy=0.800000
Epoch 5 ; Time: 8.628480 ; Training: accuracy=0.757084 ; Validation: accuracy=0.817479
Epoch 6 ; Time: 10.461468 ; Training: accuracy=0.781193 ; Validation: accuracy=0.836807
Epoch 7 ; Time: 12.120683 ; Training: accuracy=0.793952 ; Validation: accuracy=0.846387
Epoch 8 ; Time: 13.763824 ; Training: accuracy=0.805551 ; Validation: accuracy=0.852773
Epoch 9 ; Time: 15.387564 ; Training: accuracy=0.812179 ; Validation: accuracy=0.867395
Epoch 1 ; Time: 1.853784 ; Training: accuracy=0.039788 ; Validation: accuracy=0.035114
Epoch 1 ; Time: 0.361072 ; Training: accuracy=0.304481 ; Validation: accuracy=0.619288
Epoch 2 ; Time: 0.662357 ; Training: accuracy=0.459821 ; Validation: accuracy=0.682292
Epoch 3 ; Time: 0.962646 ; Training: accuracy=0.501323 ; Validation: accuracy=0.699429
Epoch 1 ; Time: 0.481104 ; Training: accuracy=0.652677 ; Validation: accuracy=0.818015
Epoch 2 ; Time: 0.907190 ; Training: accuracy=0.802132 ; Validation: accuracy=0.869318
Epoch 3 ; Time: 1.316419 ; Training: accuracy=0.841127 ; Validation: accuracy=0.898897
Epoch 4 ; Time: 1.742625 ; Training: accuracy=0.864673 ; Validation: accuracy=0.909258
Epoch 5 ; Time: 2.185773 ; Training: accuracy=0.879048 ; Validation: accuracy=0.906250
Epoch 6 ; Time: 2.603434 ; Training: accuracy=0.887393 ; Validation: accuracy=0.915107
Epoch 7 ; Time: 3.008912 ; Training: accuracy=0.894663 ; Validation: accuracy=0.925969
Epoch 8 ; Time: 3.413755 ; Training: accuracy=0.899455 ; Validation: accuracy=0.925301
Epoch 9 ; Time: 3.819936 ; Training: accuracy=0.902842 ; Validation: accuracy=0.925301
Epoch 1 ; Time: 2.348428 ; Training: accuracy=0.254892 ; Validation: accuracy=0.589562
Epoch 1 ; Time: 0.371031 ; Training: accuracy=0.633072 ; Validation: accuracy=0.818424
Epoch 2 ; Time: 0.667401 ; Training: accuracy=0.827876 ; Validation: accuracy=0.876289
Epoch 3 ; Time: 0.961280 ; Training: accuracy=0.867876 ; Validation: accuracy=0.902062
Epoch 4 ; Time: 1.254898 ; Training: accuracy=0.888990 ; Validation: accuracy=0.910708
Epoch 5 ; Time: 1.542356 ; Training: accuracy=0.902268 ; Validation: accuracy=0.920353
Epoch 6 ; Time: 1.863805 ; Training: accuracy=0.917031 ; Validation: accuracy=0.926671
Epoch 7 ; Time: 2.166657 ; Training: accuracy=0.920742 ; Validation: accuracy=0.927170
Epoch 8 ; Time: 2.457258 ; Training: accuracy=0.929237 ; Validation: accuracy=0.937812
Epoch 9 ; Time: 2.751077 ; Training: accuracy=0.936660 ; Validation: accuracy=0.934486
Epoch 1 ; Time: 0.517431 ; Training: accuracy=0.478773 ; Validation: accuracy=0.586700
Epoch 1 ; Time: 0.406796 ; Training: accuracy=0.475408 ; Validation: accuracy=0.733266
Epoch 2 ; Time: 0.915511 ; Training: accuracy=0.657864 ; Validation: accuracy=0.796519
Epoch 3 ; Time: 1.264665 ; Training: accuracy=0.697062 ; Validation: accuracy=0.821620
Epoch 4 ; Time: 1.602995 ; Training: accuracy=0.726110 ; Validation: accuracy=0.846553
Epoch 5 ; Time: 1.943093 ; Training: accuracy=0.743687 ; Validation: accuracy=0.859772
Epoch 6 ; Time: 2.296948 ; Training: accuracy=0.758624 ; Validation: accuracy=0.861111
Epoch 7 ; Time: 2.645051 ; Training: accuracy=0.764730 ; Validation: accuracy=0.872155
Epoch 8 ; Time: 2.994460 ; Training: accuracy=0.774963 ; Validation: accuracy=0.877845
Epoch 9 ; Time: 3.346669 ; Training: accuracy=0.776696 ; Validation: accuracy=0.886379
Epoch 1 ; Time: 0.647823 ; Training: accuracy=0.453545 ; Validation: accuracy=0.729177
Epoch 2 ; Time: 1.196013 ; Training: accuracy=0.601390 ; Validation: accuracy=0.781297
Epoch 3 ; Time: 1.760071 ; Training: accuracy=0.641019 ; Validation: accuracy=0.793531
Epoch 1 ; Time: 0.368349 ; Training: accuracy=0.649896 ; Validation: accuracy=0.782957
Epoch 2 ; Time: 0.660429 ; Training: accuracy=0.828512 ; Validation: accuracy=0.859816
Epoch 3 ; Time: 1.019315 ; Training: accuracy=0.874679 ; Validation: accuracy=0.868839
Epoch 4 ; Time: 1.308338 ; Training: accuracy=0.889349 ; Validation: accuracy=0.899415
Epoch 5 ; Time: 1.655436 ; Training: accuracy=0.902528 ; Validation: accuracy=0.906600
Epoch 6 ; Time: 1.941136 ; Training: accuracy=0.917779 ; Validation: accuracy=0.893233
Epoch 7 ; Time: 2.233518 ; Training: accuracy=0.923995 ; Validation: accuracy=0.911612
Epoch 8 ; Time: 2.529335 ; Training: accuracy=0.934604 ; Validation: accuracy=0.915121
Epoch 9 ; Time: 2.817885 ; Training: accuracy=0.933610 ; Validation: accuracy=0.912448
Epoch 1 ; Time: 0.415952 ; Training: accuracy=0.324007 ; Validation: accuracy=0.615000
Epoch 1 ; Time: 0.343071 ; Training: accuracy=0.700708 ; Validation: accuracy=0.833389
Epoch 2 ; Time: 0.629365 ; Training: accuracy=0.860352 ; Validation: accuracy=0.882737
Epoch 3 ; Time: 0.905989 ; Training: accuracy=0.898041 ; Validation: accuracy=0.893275
Epoch 4 ; Time: 1.184050 ; Training: accuracy=0.907587 ; Validation: accuracy=0.900301
Epoch 5 ; Time: 1.518087 ; Training: accuracy=0.928654 ; Validation: accuracy=0.915022
Epoch 6 ; Time: 1.798130 ; Training: accuracy=0.932439 ; Validation: accuracy=0.913851
Epoch 7 ; Time: 2.072767 ; Training: accuracy=0.940174 ; Validation: accuracy=0.921880
Epoch 8 ; Time: 2.351508 ; Training: accuracy=0.934579 ; Validation: accuracy=0.915022
Epoch 9 ; Time: 2.642320 ; Training: accuracy=0.939928 ; Validation: accuracy=0.911843
Epoch 1 ; Time: 0.590415 ; Training: accuracy=0.634058 ; Validation: accuracy=0.797089
Epoch 2 ; Time: 1.139171 ; Training: accuracy=0.824919 ; Validation: accuracy=0.836400
Epoch 3 ; Time: 1.680709 ; Training: accuracy=0.875485 ; Validation: accuracy=0.881398
Epoch 4 ; Time: 2.223529 ; Training: accuracy=0.902504 ; Validation: accuracy=0.887922
Epoch 5 ; Time: 2.782320 ; Training: accuracy=0.920763 ; Validation: accuracy=0.913684
Epoch 6 ; Time: 3.322632 ; Training: accuracy=0.930761 ; Validation: accuracy=0.917698
Epoch 7 ; Time: 3.859059 ; Training: accuracy=0.938115 ; Validation: accuracy=0.921546
Epoch 8 ; Time: 4.392839 ; Training: accuracy=0.948608 ; Validation: accuracy=0.922215
Epoch 9 ; Time: 4.932820 ; Training: accuracy=0.954474 ; Validation: accuracy=0.925895
Epoch 1 ; Time: 0.526809 ; Training: accuracy=0.661368 ; Validation: accuracy=0.823520
Epoch 2 ; Time: 1.004655 ; Training: accuracy=0.809737 ; Validation: accuracy=0.864670
Epoch 3 ; Time: 1.465803 ; Training: accuracy=0.838467 ; Validation: accuracy=0.879558
Epoch 4 ; Time: 1.920747 ; Training: accuracy=0.858917 ; Validation: accuracy=0.904316
Epoch 5 ; Time: 2.379512 ; Training: accuracy=0.870840 ; Validation: accuracy=0.904483
Epoch 6 ; Time: 2.828920 ; Training: accuracy=0.877215 ; Validation: accuracy=0.902141
Epoch 7 ; Time: 3.282423 ; Training: accuracy=0.890296 ; Validation: accuracy=0.911676
Epoch 8 ; Time: 3.726384 ; Training: accuracy=0.900563 ; Validation: accuracy=0.928237
Epoch 9 ; Time: 4.171594 ; Training: accuracy=0.894933 ; Validation: accuracy=0.930579
Epoch 1 ; Time: 0.276590 ; Training: accuracy=0.709938 ; Validation: accuracy=0.792333
Epoch 2 ; Time: 0.497653 ; Training: accuracy=0.840000 ; Validation: accuracy=0.834000
Epoch 3 ; Time: 0.725849 ; Training: accuracy=0.856742 ; Validation: accuracy=0.878333
Epoch 1 ; Time: 0.326535 ; Training: accuracy=0.506492 ; Validation: accuracy=0.717623
Epoch 1 ; Time: 0.466226 ; Training: accuracy=0.576301 ; Validation: accuracy=0.773613
Epoch 2 ; Time: 0.901360 ; Training: accuracy=0.759372 ; Validation: accuracy=0.836975
Epoch 3 ; Time: 1.303213 ; Training: accuracy=0.811808 ; Validation: accuracy=0.868571
Epoch 1 ; Time: 0.744630 ; Training: accuracy=0.574669 ; Validation: accuracy=0.760235
Epoch 2 ; Time: 1.449729 ; Training: accuracy=0.693709 ; Validation: accuracy=0.808221
Epoch 3 ; Time: 2.179653 ; Training: accuracy=0.717467 ; Validation: accuracy=0.818792
Epoch 1 ; Time: 0.287021 ; Training: accuracy=0.598026 ; Validation: accuracy=0.772440
Epoch 2 ; Time: 0.513405 ; Training: accuracy=0.815707 ; Validation: accuracy=0.836602
Epoch 3 ; Time: 0.767278 ; Training: accuracy=0.869901 ; Validation: accuracy=0.861370
Epoch 1 ; Time: 0.700768 ; Training: accuracy=0.656056 ; Validation: accuracy=0.801177
Epoch 2 ; Time: 1.351327 ; Training: accuracy=0.832823 ; Validation: accuracy=0.853827
Epoch 3 ; Time: 2.003524 ; Training: accuracy=0.877635 ; Validation: accuracy=0.880404
Epoch 4 ; Time: 2.676451 ; Training: accuracy=0.901364 ; Validation: accuracy=0.903448
Epoch 5 ; Time: 3.338692 ; Training: accuracy=0.921951 ; Validation: accuracy=0.913877
Epoch 6 ; Time: 3.984485 ; Training: accuracy=0.930632 ; Validation: accuracy=0.922624
Epoch 7 ; Time: 4.666032 ; Training: accuracy=0.940389 ; Validation: accuracy=0.927334
Epoch 8 ; Time: 5.342632 ; Training: accuracy=0.948739 ; Validation: accuracy=0.927166
Epoch 9 ; Time: 5.995365 ; Training: accuracy=0.955601 ; Validation: accuracy=0.936249
Epoch 1 ; Time: 0.356286 ; Training: accuracy=0.648801 ; Validation: accuracy=0.798723
Epoch 2 ; Time: 0.651466 ; Training: accuracy=0.841191 ; Validation: accuracy=0.859711
Epoch 3 ; Time: 0.938071 ; Training: accuracy=0.891811 ; Validation: accuracy=0.906418
Epoch 4 ; Time: 1.235611 ; Training: accuracy=0.908271 ; Validation: accuracy=0.909946
Epoch 5 ; Time: 1.562430 ; Training: accuracy=0.927543 ; Validation: accuracy=0.923891
Epoch 6 ; Time: 1.853286 ; Training: accuracy=0.935236 ; Validation: accuracy=0.921371
Epoch 7 ; Time: 2.144089 ; Training: accuracy=0.940612 ; Validation: accuracy=0.932964
Epoch 8 ; Time: 2.444236 ; Training: accuracy=0.948883 ; Validation: accuracy=0.931284
Epoch 9 ; Time: 2.731345 ; Training: accuracy=0.951117 ; Validation: accuracy=0.932964
Epoch 1 ; Time: 0.611850 ; Training: accuracy=0.655669 ; Validation: accuracy=0.814593
Epoch 2 ; Time: 0.952525 ; Training: accuracy=0.833375 ; Validation: accuracy=0.855239
Epoch 3 ; Time: 1.285310 ; Training: accuracy=0.886463 ; Validation: accuracy=0.895052
Epoch 4 ; Time: 1.634170 ; Training: accuracy=0.905400 ; Validation: accuracy=0.917375
Epoch 5 ; Time: 1.961615 ; Training: accuracy=0.925908 ; Validation: accuracy=0.922539
Epoch 6 ; Time: 2.293149 ; Training: accuracy=0.927313 ; Validation: accuracy=0.929035
Epoch 7 ; Time: 2.611567 ; Training: accuracy=0.937071 ; Validation: accuracy=0.929535
Epoch 8 ; Time: 2.929065 ; Training: accuracy=0.943356 ; Validation: accuracy=0.936032
Epoch 9 ; Time: 3.249067 ; Training: accuracy=0.948648 ; Validation: accuracy=0.934699
Epoch 1 ; Time: 3.613384 ; Training: accuracy=0.612401 ; Validation: accuracy=0.757907
Analysing the results
~~~~~~~~~~~~~~~~~~~~~
The training history is stored in ``results_df``; the main fields are
``runtime`` and ``best`` (the best validation error attained so far).

**Note**: You will get slightly different curves for different pairs of
scheduler/searcher; the ``time_out`` here is a bit too short to really
see the differences in a significant way (it would be better to set it
to >1000s). Generally speaking though, Hyperband stopping or promotion
combined with a model-based searcher will tend to significantly
outperform other combinations given enough time.
.. code:: python
results_df.head()
.. parsed-literal::
    :class: output

       bracket  elapsed_time  epoch     error  eval_time  objective   runtime  target_epoch  task_id  terminated     time_step      best
    0        0      0.485880      1  0.468750   0.480456   0.531250  0.889488             9        0         NaN  1.630447e+09  0.468750
    1        0      0.903594      2  0.344753   0.413867   0.655247  1.307202             9        0         NaN  1.630447e+09  0.344753
    2        0      1.412447      3  0.305314   0.506804   0.694686  1.816055             9        0         NaN  1.630447e+09  0.305314
    3        0      1.845836      4  0.288937   0.429318   0.711063  2.249444             9        0         NaN  1.630447e+09  0.288937
    4        0      2.262785      5  0.273061   0.414791   0.726939  2.666393             9        0         NaN  1.630447e+09  0.273061
.. code:: python
import matplotlib.pyplot as plt
plt.figure(figsize=(12, 8))
runtime = results_df['runtime'].values
objective = results_df['best'].values
plt.plot(runtime, objective, lw=2)
plt.xticks(fontsize=12)
plt.xlim(0, 120)
plt.ylim(0, 0.5)
plt.yticks(fontsize=12)
plt.xlabel("Runtime [s]", fontsize=14)
plt.ylabel("Objective", fontsize=14)
.. parsed-literal::
:class: output
Text(0, 0.5, 'Objective')
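Beyond the curve, you will usually want to look up the best
configuration found during the search. The scheduler keeps track of it;
a minimal sketch, assuming the scheduler's ``get_best_config`` and
``get_best_reward`` accessors:

.. code:: python

    # Best configuration and its reward (validation accuracy) found so far
    best_config = myscheduler.get_best_config()
    best_reward = myscheduler.get_best_reward()
    print('Best config: {}\nBest reward: {}'.format(best_config, best_reward))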
Diving Deeper
-------------
Now, you are ready to try HPO on your own machine learning models (if
you use PyTorch, have a look at :ref:`sec_customstorch`). While
AutoGluon comes with well-chosen defaults, it can pay off to tune it to
your specific needs. Here are a few tips which may come in useful.
Logging the Search Progress
~~~~~~~~~~~~~~~~~~~~~~~~~~~
First, it is a good idea in general to switch on ``debug_log``, which
outputs useful information about the search progress. This is already
done in the example above.
The outputs show which configurations are chosen, stopped, or promoted.
For BO and BOHB, a range of information is displayed for every
``get_config`` decision. This log output is very useful in order to
figure out what is going on during the search.
Configuring ``HyperbandScheduler``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The most important knobs to turn with ``HyperbandScheduler`` are
``max_t``, ``grace_period``, ``reduction_factor``, ``brackets``, and
``type``. The first three determine the rung levels at which stopping
or promotion decisions are made.
- The maximum resource level ``max_t`` (usually, the resource equates to
  epochs, so ``max_t`` is the maximum number of training epochs) is
  typically hardcoded in the ``train_fn`` passed to the scheduler
  (``run_mlp_openml`` in the example above). As already noted above, the
  value is best fixed in the ``ag.args`` decorator as ``epochs=XYZ``; it
  can then be accessed as ``args.epochs`` in the ``train_fn`` code. If
  this is done, you do not have to pass ``max_t`` when creating the
  scheduler.
- ``grace_period`` and ``reduction_factor`` determine the rung levels,
  which are ``grace_period``, ``grace_period * reduction_factor``,
  ``grace_period * (reduction_factor ** 2)``, and so on. All rung levels
  must be less than or equal to ``max_t``. It is recommended to make
  ``max_t`` equal to the largest rung level. For example, if
  ``grace_period = 1`` and ``reduction_factor = 3``, it is in general
  recommended to use ``max_t = 9``, ``max_t = 27``, or ``max_t = 81``.
  Choosing a ``max_t`` value "off the grid" works against the successive
  halving principle that the total resources spent in a rung should be
  roughly equal between rungs. If, in the example above, you set
  ``max_t = 10``, about a third of the configurations reaching 9 epochs
  would be allowed to proceed, but only for one more epoch.
- With ``reduction_factor``, you tune the extent to which successive
  halving filtering is applied. The larger this integer, the fewer
  configurations make it to higher numbers of epochs. Values of 2, 3, or
  4 are commonly used.
- ``grace_period`` should be set to the smallest resource (number of
  epochs) at which you expect any meaningful differentiation between
  configurations. While ``grace_period = 1`` should always be explored,
  it may be too low for any meaningful stopping decisions to be made at
  the first rung.
- ``brackets`` sets the maximum number of brackets in Hyperband (make
  sure to study the Hyperband paper or follow-ups for details). For
  ``brackets = 1``, you are running successive halving (a single
  bracket). Higher brackets have larger effective ``grace_period``
  values (so runs are not stopped until later), but are also chosen with
  lower probability. We recommend always considering successive halving
  (``brackets = 1``) in a comparison.
- Finally, with ``type`` (values ``stopping``, ``promotion``) you choose
  between different ways of extending successive halving scheduling to
  the asynchronous case. The method behind the default ``stopping`` is
  simpler and seems to perform well, but ``promotion`` is more careful
  about promoting configurations to higher resource levels, which can
  work better in some cases.
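Putting these knobs together, a promotion-based variant of the scheduler
used above (reusing ``run_mlp_openml``, ``resources``, and
``search_options``) might be set up as in the following sketch; the
values are for illustration, not a recommendation:

.. code:: python

    # Sketch: promotion-based asynchronous successive halving / Hyperband.
    # grace_period=1, reduction_factor=3 give rung levels 1, 3, 9; max_t=9 is
    # read from the epochs=9 annotation of run_mlp_openml. brackets=3 uses
    # three brackets, whose smallest rung levels are 1, 3, and 9 epochs.
    promotion_scheduler = ag.scheduler.HyperbandScheduler(
        run_mlp_openml,
        resource=resources,
        searcher='bayesopt',
        search_options=search_options,
        time_out=120,
        time_attr=RESOURCE_ATTR_NAME,
        reward_attr=REWARD_ATTR_NAME,
        type='promotion',
        grace_period=1,
        reduction_factor=3,
        brackets=3)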
Asynchronous BOHB
~~~~~~~~~~~~~~~~~
Finally, here are some ideas for tuning asynchronous BOHB, apart from
tuning its ``HyperbandScheduler`` component. You need to pass these
options in ``search_options``.
- We support a range of different surrogate models over the criterion
  functions across resource levels. All of them are jointly dependent
  Gaussian process models, meaning that data collected at all resource
  levels are modelled together. The surrogate model is selected by
  ``gp_resource_kernel``; values are ``matern52``,
  ``matern52-res-warp``, ``exp-decay-sum``, ``exp-decay-combined``,
  ``exp-decay-delta1``. These are variants of either a joint Matern 5/2
  kernel over configuration and resource, or the exponential decay
  model known from the freeze-thaw Bayesian optimization literature.
- Fitting a Gaussian process surrogate model to data incurs a cost
  which scales cubically with the number of datapoints. When applied to
  expensive deep learning workloads, even multi-fidelity asynchronous
  BOHB rarely accumulates more than 100 observations or so (across all
  rung levels and brackets), and the GP computations are subdominant.
  However, if you apply it to a cheaper ``train_fn`` and find yourself
  beyond 2000 total evaluations, the cost of GP fitting can become
  painful. In such a situation, you can explore the options
  ``opt_skip_period`` and ``opt_skip_num_max_resource`` (see the sketch
  after this list). The basic idea is as follows. By far the most
  expensive part of a ``get_config`` call (picking the next
  configuration) is the refitting of the GP model to past data (this
  entails re-optimizing hyperparameters of the surrogate model itself).
  These options allow you to skip this expensive step for most
  ``get_config`` calls, after some initial period. Check the docstrings
  for details about these options. If you find yourself in such a
  situation and gain experience with these skipping features, make sure
  to contact the AutoGluon developers -- we would love to learn about
  your use case.
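To make the option names above concrete, here is a sketch of how they
would be passed via ``search_options``. The specific values are
placeholders for illustration only; check the searcher docstrings for
defaults and exact semantics:

.. code:: python

    # Sketch: search_options for the 'bayesopt' searcher under Hyperband
    # scheduling. Values are placeholders for illustration only.
    search_options = {
        'debug_log': True,                      # log searcher decisions
        'num_init_random': 2,                   # random configs before the GP model is used
        'gp_resource_kernel': 'exp-decay-sum',  # joint surrogate model across resource levels
        'opt_skip_period': 3,                   # refit GP hyperparameters only every 3rd get_config
        # 'opt_skip_num_max_resource': ...      # see docstring; another way to skip refits
    }

    bohb_scheduler = ag.scheduler.HyperbandScheduler(
        run_mlp_openml,
        resource=resources,
        searcher='bayesopt',
        search_options=search_options,
        time_out=120,
        time_attr=RESOURCE_ATTR_NAME,
        reward_attr=REWARD_ATTR_NAME,
        type='stopping',
        grace_period=1,
        reduction_factor=3,
        brackets=1)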