Text Prediction - Customized Hyperparameter Search

This tutorial teaches you how to control the hyperparameter tuning process in TextPrediction by specifying:

  • A custom search space of candidate hyperparameter values to consider.

  • Which hyperparameter optimization algorithm should be used to actually search through this space.

import numpy as np
import warnings
warnings.filterwarnings('ignore')
np.random.seed(123)

Paraphrase Identification

We consider a Paraphrase Identification task for illustration. Given a pair of sentences, the goal is to predict whether or not one sentence is a restatement of the other (a binary classification task). Here we train models on the Microsoft Research Paraphrase Corpus dataset.

from autogluon.utils.tabular.utils.loaders import load_pd

train_data = load_pd.load('https://autogluon-text.s3-accelerate.amazonaws.com/glue/mrpc/train.parquet')
dev_data = load_pd.load('https://autogluon-text.s3-accelerate.amazonaws.com/glue/mrpc/dev.parquet')
train_data.head(10)
Loaded data from: https://autogluon-text.s3-accelerate.amazonaws.com/glue/mrpc/train.parquet | Columns = 3 / 3 | Rows = 3668 -> 3668
Loaded data from: https://autogluon-text.s3-accelerate.amazonaws.com/glue/mrpc/dev.parquet | Columns = 3 / 3 | Rows = 408 -> 408
sentence1 sentence2 label
0 Amrozi accused his brother , whom he called " ... Referring to him as only " the witness " , Amr... 1
1 Yucaipa owned Dominick 's before selling the c... Yucaipa bought Dominick 's in 1995 for $ 693 m... 0
2 They had published an advertisement on the Int... On June 10 , the ship 's owners had published ... 1
3 Around 0335 GMT , Tab shares were up 19 cents ... Tab shares jumped 20 cents , or 4.6 % , to set... 0
4 The stock rose $ 2.11 , or about 11 percent , ... PG & E Corp. shares jumped $ 1.63 or 8 percent... 1
5 Revenue in the first quarter of the year dropp... With the scandal hanging over Stewart 's compa... 1
6 The Nasdaq had a weekly gain of 17.27 , or 1.2... The tech-laced Nasdaq Composite .IXIC rallied ... 0
7 The DVD-CCA then appealed to the state Supreme... The DVD CCA appealed that decision to the U.S.... 1
8 That compared with $ 35.18 million , or 24 cen... Earnings were affected by a non-recurring $ 8 ... 0
9 Shares of Genentech , a much larger company wi... Shares of Xoma fell 16 percent in early trade ... 0
from autogluon_contrib_nlp.data.tokenizers import MosesTokenizer
tokenizer = MosesTokenizer('en')  # just used to display sentences
row_index = 2
print('Paraphrase example:')
print('Sentence1: ', tokenizer.decode(train_data['sentence1'][row_index].split()))
print('Sentence2: ', tokenizer.decode(train_data['sentence2'][row_index].split()))
print('Label: ', train_data['label'][row_index])

row_index = 3
print('\nNot Paraphrase example:')
print('Sentence1:', tokenizer.decode(train_data['sentence1'][row_index].split()))
print('Sentence2:', tokenizer.decode(train_data['sentence2'][row_index].split()))
print('Label:', train_data['label'][row_index])
Paraphrase example:
Sentence1:  They had published an advertisement on the Internet on June 10, offering the cargo for sale, he added.
Sentence2:  On June 10, the ship's owners had published an advertisement on the Internet, offering the explosives for sale.
Label:  1

Not Paraphrase example:
Sentence1: Around 0335 GMT, Tab shares were up 19 cents, or 4.4%, at A $4.56, having earlier set a record high of A $4.57.
Sentence2: Tab shares jumped 20 cents, or 4.6%, to set a record closing high at A $4.57.
Label: 0

Perform HPO over a Customized Search Space with Random Search

To control which hyperparameter values are considered during fit(), we specify the hyperparameters argument. Rather than specifying a fixed value for a hyperparameter, we can specify a space of values to search over via ag.space. We can also specify which HPO algorithm to use for the search via search_strategy (a simple random search is specified below). In this example, we search for good values of the following hyperparameters (a short sketch of the ag.space primitives follows this list):

  • warmup

  • learning rate

  • dropout before the first task-specific layer

  • layer-wise learning rate decay

  • number of task-specific layers
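
The search space below is assembled from three ag.space primitives. As a quick orientation (a minimal sketch using the same autogluon namespace as the rest of this tutorial), each one declares a set of candidate values rather than a single fixed value:

import autogluon as ag

num_layers = ag.space.Int(0, 3)                   # an integer-valued range of candidates
data_dropout = ag.space.Categorical(False, True)  # a discrete choice among the listed values
lr = ag.space.Real(1E-5, 1E-4)                    # a continuous interval for the searcher to sample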

import autogluon as ag
from autogluon import TextPrediction as task

hyperparameters = {
    'models': {
            'BertForTextPredictionBasic': {
                'search_space': {
                    'model.network.agg_net.num_layers': ag.space.Int(0, 3),
                    'model.network.agg_net.data_dropout': ag.space.Categorical(False, True),
                    'optimization.num_train_epochs': 4,
                    'optimization.warmup_portion': ag.space.Real(0.1, 0.2),
                    'optimization.layerwise_lr_decay': ag.space.Real(0.8, 1.0),
                    'optimization.lr': ag.space.Real(1E-5, 1E-4)
                }
            },
    },
    'hpo_params': {
        'scheduler': 'fifo',  # schedule training jobs in a sequential first-in first-out fashion during HPO
        'search_strategy': 'random'  # perform HPO via simple random search
    }
}

We can now call fit() with hyperparameter tuning over our custom search space. Below, num_trials controls the maximum number of different hyperparameter configurations for which AutoGluon will train models (5 models are trained under different hyperparameter configurations in this case). To achieve good performance in your applications, use larger values of num_trials; this may identify superior hyperparameter values but will require longer runtimes.

predictor_mrpc = task.fit(train_data,
                          label='label',
                          hyperparameters=hyperparameters,
                          num_trials=5,  # increase this to achieve good performance in your applications
                          time_limits=60 * 6,
                          ngpus_per_trial=1,
                          seed=123,
                          output_directory='./ag_mrpc_random_search')
NumPy-shape semantics has been activated in your code. This is required for creating and manipulating scalar and zero-size tensors, which were not supported in MXNet before, as in the official NumPy library. Please DO NOT manually deactivate this semantics while using mxnet.numpy and mxnet.numpy_extension modules.
2020-10-27 21:19:19,908 - root - INFO - All Logs will be saved to ./ag_mrpc_random_search/ag_text_prediction.log
2020-10-27 21:19:19,925 - root - INFO - Train Dataset:
2020-10-27 21:19:19,925 - root - INFO - Columns:

- Text(
   name="sentence1"
   #total/missing=2934/0
   length, min/avg/max=38/118.57/226
)
- Text(
   name="sentence2"
   #total/missing=2934/0
   length, min/avg/max=42/119.16/215
)
- Categorical(
   name="label"
   #total/missing=2934/0
   num_class (total/non_special)=2/2
   categories=[0, 1]
   freq=[964, 1970]
)


2020-10-27 21:19:19,926 - root - INFO - Tuning Dataset:
2020-10-27 21:19:19,926 - root - INFO - Columns:

- Text(
   name="sentence1"
   #total/missing=734/0
   length, min/avg/max=38/118.13/217
)
- Text(
   name="sentence2"
   #total/missing=734/0
   length, min/avg/max=42/117.21/208
)
- Categorical(
   name="label"
   #total/missing=734/0
   num_class (total/non_special)=2/2
   categories=[0, 1]
   freq=[230, 504]
)


2020-10-27 21:19:19,926 - root - INFO - Label columns=['label'], Feature columns=['sentence1', 'sentence2'], Problem types=['classification'], Label shapes=[2]
2020-10-27 21:19:19,927 - root - INFO - Eval Metric=acc, Stop Metric=acc, Log Metrics=['f1', 'mcc', 'auc', 'acc', 'nll']
100%|██████████| 368/368 [01:18<00:00,  4.71it/s]
 68%|██████▊   | 249/368 [00:53<00:25,  4.62it/s]
100%|██████████| 368/368 [01:19<00:00,  4.65it/s]
100%|██████████| 368/368 [01:18<00:00,  4.68it/s]
 95%|█████████▍| 349/368 [01:14<00:04,  4.66it/s]

We can again evaluate our model’s performance on separate test data.

dev_score = predictor_mrpc.evaluate(dev_data, metrics=['acc', 'f1'])
print('Best Config = {}'.format(predictor_mrpc.results['best_config']))
print('Total Time = {}s'.format(predictor_mrpc.results['total_time']))
print('Accuracy = {:.2f}%'.format(dev_score['acc'] * 100))
print('F1 = {:.2f}%'.format(dev_score['f1'] * 100))
Best Config = {'search_space▁model.network.agg_net.data_dropout▁choice': 0, 'search_space▁model.network.agg_net.num_layers': 2, 'search_space▁optimization.layerwise_lr_decay': 0.9, 'search_space▁optimization.lr': 5.5e-05, 'search_space▁optimization.warmup_portion': 0.15}
Total Time = 401.07370805740356s
Accuracy = 81.37%
F1 = 86.71%

We can also use the model to predict whether new sentence pairs are paraphrases of each other.

sentence1 = 'It is simple to solve NLP problems with AutoGluon.'
sentence2 = 'With AutoGluon, it is easy to solve NLP problems.'
sentence3 = 'AutoGluon gives you a very bad user experience for solving NLP problems.'
prediction1 = predictor_mrpc.predict({'sentence1': [sentence1], 'sentence2': [sentence2]})
prediction1_prob = predictor_mrpc.predict_proba({'sentence1': [sentence1], 'sentence2': [sentence2]})
print('A = "{}"'.format(sentence1))
print('B = "{}"'.format(sentence2))
print('Prediction = "{}"'.format(prediction1[0] == 1))
print('Prob = "{}"'.format(prediction1_prob[0]))
print('')
prediction2 = predictor_mrpc.predict({'sentence1': [sentence1], 'sentence2': [sentence3]})
prediction2_prob = predictor_mrpc.predict_proba({'sentence1': [sentence1], 'sentence2': [sentence3]})
print('A = "{}"'.format(sentence1))
print('B = "{}"'.format(sentence3))
print('Prediction = "{}"'.format(prediction2[0] == 1))
print('Prob = "{}"'.format(prediction2_prob[0]))
A = "It is simple to solve NLP problems with AutoGluon."
B = "With AutoGluon, it is easy to solve NLP problems."
Prediction = "True"
Prob = "[0.00210601 0.997894  ]"

A = "It is simple to solve NLP problems with AutoGluon."
B = "AutoGluon gives you a very bad user experience for solving NLP problems."
Prediction = "False"
Prob = "[0.5207623 0.4792377]"

Use Bayesian Optimization

Instead of random search, we can perform HPO via Bayesian Optimization. Here we specify skopt as the searcher, which uses a BayesOpt implementation from the scikit-optimize library.
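
For intuition about what the skopt searcher does under the hood, here is a standalone scikit-optimize sketch (assuming scikit-optimize is installed; the toy objective and bounds are illustrative only, and AutoGluon performs the equivalent wiring internally):

from skopt import gp_minimize
from skopt.space import Real

def objective(params):
    lr = params[0]
    # Hypothetical stand-in for validation error as a function of the learning rate.
    return (lr - 5e-5) ** 2

result = gp_minimize(objective,           # function to minimize
                     [Real(1e-5, 1e-4)],  # search space, analogous to ag.space.Real
                     n_calls=15,          # number of configurations to evaluate
                     random_state=123)
print('Best lr:', result.x[0], '| objective:', result.fun)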

hyperparameters['hpo_params'] = {
    'scheduler': 'fifo',
    'search_strategy': 'skopt'
}

predictor_mrpc_skopt = task.fit(train_data, label='label',
                                hyperparameters=hyperparameters,
                                time_limits=60 * 6,
                                num_trials=5,  # increase this to get good performance in your applications
                                ngpus_per_trial=1, seed=123,
                                output_directory='./ag_mrpc_custom_space_fifo_skopt')
2020-10-27 21:26:10,533 - root - INFO - All Logs will be saved to ./ag_mrpc_custom_space_fifo_skopt/ag_text_prediction.log
2020-10-27 21:26:10,551 - root - INFO - Train Dataset:
2020-10-27 21:26:10,551 - root - INFO - Columns:

- Text(
   name="sentence1"
   #total/missing=2934/0
   length, min/avg/max=38/118.41/226
)
- Text(
   name="sentence2"
   #total/missing=2934/0
   length, min/avg/max=42/118.65/215
)
- Categorical(
   name="label"
   #total/missing=2934/0
   num_class (total/non_special)=2/2
   categories=[0, 1]
   freq=[947, 1987]
)


2020-10-27 21:26:10,552 - root - INFO - Tuning Dataset:
2020-10-27 21:26:10,552 - root - INFO - Columns:

- Text(
   name="sentence1"
   #total/missing=734/0
   length, min/avg/max=38/118.77/205
)
- Text(
   name="sentence2"
   #total/missing=734/0
   length, min/avg/max=42/119.24/205
)
- Categorical(
   name="label"
   #total/missing=734/0
   num_class (total/non_special)=2/2
   categories=[0, 1]
   freq=[247, 487]
)


2020-10-27 21:26:10,553 - root - INFO - Label columns=['label'], Feature columns=['sentence1', 'sentence2'], Problem types=['classification'], Label shapes=[2]
2020-10-27 21:26:10,554 - root - INFO - Eval Metric=acc, Stop Metric=acc, Log Metrics=['f1', 'mcc', 'auc', 'acc', 'nll']
 95%|█████████▍| 349/368 [01:15<00:04,  4.62it/s]
100%|██████████| 368/368 [01:19<00:00,  4.65it/s]
100%|██████████| 368/368 [01:19<00:00,  4.60it/s]
100%|██████████| 368/368 [01:19<00:00,  4.60it/s]
100%|██████████| 368/368 [01:18<00:00,  4.67it/s]
dev_score = predictor_mrpc_skopt.evaluate(dev_data, metrics=['acc', 'f1'])
print('Best Config = {}'.format(predictor_mrpc_skopt.results['best_config']))
print('Total Time = {}s'.format(predictor_mrpc_skopt.results['total_time']))
print('Accuracy = {:.2f}%'.format(dev_score['acc'] * 100))
print('F1 = {:.2f}%'.format(dev_score['f1'] * 100))
Best Config = {'search_space▁model.network.agg_net.data_dropout▁choice': 0, 'search_space▁model.network.agg_net.num_layers': 2, 'search_space▁optimization.layerwise_lr_decay': 0.9, 'search_space▁optimization.lr': 5.5e-05, 'search_space▁optimization.warmup_portion': 0.15}
Total Time = 429.88790488243103s
Accuracy = 83.33%
F1 = 88.15%
predictions = predictor_mrpc_skopt.predict(dev_data)
prediction1 = predictor_mrpc_skopt.predict({'sentence1': [sentence1], 'sentence2': [sentence2]})
prediction1_prob = predictor_mrpc_skopt.predict_proba({'sentence1': [sentence1], 'sentence2': [sentence2]})
print('A = "{}"'.format(sentence1))
print('B = "{}"'.format(sentence2))
print('Prediction = "{}"'.format(prediction1[0] == 1))
print('Prob = "{}"'.format(prediction1_prob[0]))
print('')
prediction2 = predictor_mrpc_skopt.predict({'sentence1': [sentence1], 'sentence2': [sentence3]})
prediction2_prob = predictor_mrpc_skopt.predict_proba({'sentence1': [sentence1], 'sentence2': [sentence3]})
print('A = "{}"'.format(sentence1))
print('B = "{}"'.format(sentence3))
print('Prediction = "{}"'.format(prediction2[0] == 1))
print('Prob = "{}"'.format(prediction2_prob[0]))
A = "It is simple to solve NLP problems with AutoGluon."
B = "With AutoGluon, it is easy to solve NLP problems."
Prediction = "True"
Prob = "[0.00311425 0.9968857 ]"

A = "It is simple to solve NLP problems with AutoGluon."
B = "AutoGluon gives you a very bad user experience for solving NLP problems."
Prediction = "True"
Prob = "[0.32809526 0.67190474]"

Use Hyperband

Alternatively, we can use the Hyperband algorithm for HPO. Hyperband tries multiple hyperparameter configurations simultaneously and early-stops training under poor configurations, freeing compute resources to explore new configurations. It may identify good hyperparameter values more quickly than other search strategies.
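
The following toy sketch illustrates the successive-halving principle behind Hyperband (pure illustration with a hypothetical scoring function, not AutoGluon's internals): many configurations start with a small epoch budget, the top half survives each rung, and survivors receive a doubled budget.

import numpy as np

rng = np.random.RandomState(123)
configs = list(rng.uniform(1e-5, 1e-4, size=8))   # 8 hypothetical learning-rate candidates

def val_score(lr, epochs):
    # Hypothetical stand-in for validation accuracy: improves with more
    # epochs and peaks near lr = 5e-5.
    return np.log1p(epochs) - 1e8 * (lr - 5e-5) ** 2

budget = 5                                        # epochs granted in the first rung
while len(configs) > 1:
    scores = [val_score(lr, budget) for lr in configs]
    keep = np.argsort(scores)[len(configs) // 2:] # top half survives this rung
    configs = [configs[i] for i in keep]          # the rest are early-stopped
    budget *= 2                                   # survivors get a doubled epoch budget

print('Selected learning rate:', configs[0])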

hyperparameters['hpo_params'] = {
    'scheduler': 'hyperband',
    'search_strategy': 'random',
    'max_t': 40,  # Number of epochs per training run of one neural network
}
predictor_mrpc_hyperband = task.fit(train_data, label='label',
                                    hyperparameters=hyperparameters,
                                    time_limits=60 * 6, ngpus_per_trial=1, seed=123,
                                    output_directory='./ag_mrpc_custom_space_hyperband')
2020-10-27 21:33:38,992 - root - INFO - All Logs will be saved to ./ag_mrpc_custom_space_hyperband/ag_text_prediction.log
2020-10-27 21:33:39,010 - root - INFO - Train Dataset:
2020-10-27 21:33:39,011 - root - INFO - Columns:

- Text(
   name="sentence1"
   #total/missing=2934/0
   length, min/avg/max=38/118.31/226
)
- Text(
   name="sentence2"
   #total/missing=2934/0
   length, min/avg/max=42/118.48/215
)
- Categorical(
   name="label"
   #total/missing=2934/0
   num_class (total/non_special)=2/2
   categories=[0, 1]
   freq=[952, 1982]
)


2020-10-27 21:33:39,012 - root - INFO - Tuning Dataset:
2020-10-27 21:33:39,012 - root - INFO - Columns:

- Text(
   name="sentence1"
   #total/missing=734/0
   length, min/avg/max=38/119.15/217
)
- Text(
   name="sentence2"
   #total/missing=734/0
   length, min/avg/max=50/119.93/208
)
- Categorical(
   name="label"
   #total/missing=734/0
   num_class (total/non_special)=2/2
   categories=[0, 1]
   freq=[242, 492]
)


2020-10-27 21:33:39,013 - root - INFO - Label columns=['label'], Feature columns=['sentence1', 'sentence2'], Problem types=['classification'], Label shapes=[2]
2020-10-27 21:33:39,013 - root - INFO - Eval Metric=acc, Stop Metric=acc, Log Metrics=['f1', 'mcc', 'auc', 'acc', 'nll']
100%|██████████| 368/368 [01:19<00:00,  4.64it/s]
 30%|██▉       | 109/368 [00:24<00:59,  4.36it/s]
100%|█████████▉| 367/368 [01:19<00:00,  4.64it/s]
 30%|██▉       | 109/368 [00:25<00:59,  4.35it/s]
 84%|████████▍ | 309/368 [01:07<00:12,  4.59it/s]
 30%|██▉       | 109/368 [00:24<00:58,  4.44it/s]
 89%|████████▉ | 329/368 [01:10<00:08,  4.67it/s]
 30%|██▉       | 109/368 [00:24<00:58,  4.43it/s]
dev_score = predictor_mrpc_hyperband.evaluate(dev_data, metrics=['acc', 'f1'])
print('Best Config = {}'.format(predictor_mrpc_hyperband.results['best_config']))
print('Total Time = {}s'.format(predictor_mrpc_hyperband.results['total_time']))
print('Accuracy = {:.2f}%'.format(dev_score['acc'] * 100))
print('F1 = {:.2f}%'.format(dev_score['f1'] * 100))
Best Config = {'search_space▁model.network.agg_net.data_dropout▁choice': 1, 'search_space▁model.network.agg_net.num_layers': 2, 'search_space▁optimization.layerwise_lr_decay': 0.9412635221573601, 'search_space▁optimization.lr': 7.990957884249876e-05, 'search_space▁optimization.warmup_portion': 0.13031465898209632}
Total Time = 454.85684967041016s
Accuracy = 82.60%
F1 = 87.95%
predictions = predictor_mrpc_hyperband.predict(dev_data)
prediction1 = predictor_mrpc_hyperband.predict({'sentence1': [sentence1], 'sentence2': [sentence2]})
prediction1_prob = predictor_mrpc_hyperband.predict_proba({'sentence1': [sentence1], 'sentence2': [sentence2]})
print('A = "{}"'.format(sentence1))
print('B = "{}"'.format(sentence2))
print('Prediction = "{}"'.format(prediction1[0] == 1))
print('Prob = "{}"'.format(prediction1_prob[0]))
print('')
prediction2 = predictor_mrpc_hyperband.predict({'sentence1': [sentence1], 'sentence2': [sentence3]})
prediction2_prob = predictor_mrpc_hyperband.predict_proba({'sentence1': [sentence1], 'sentence2': [sentence3]})
print('A = "{}"'.format(sentence1))
print('B = "{}"'.format(sentence3))
print('Prediction = "{}"'.format(prediction2[0] == 1))
print('Prob = "{}"'.format(prediction2_prob[0]))
A = "It is simple to solve NLP problems with AutoGluon."
B = "With AutoGluon, it is easy to solve NLP problems."
Prediction = "True"
Prob = "[0.00894689 0.99105316]"

A = "It is simple to solve NLP problems with AutoGluon."
B = "AutoGluon gives you a very bad user experience for solving NLP problems."
Prediction = "False"
Prob = "[0.6003977  0.39960226]"
