AutoGluon Tabular - Essential Functionality¶
Via a simple fit() call, AutoGluon can produce highly accurate models that predict the values in one column of a data table based on the other columns' values. Use AutoGluon with tabular data for both classification and regression problems. This tutorial demonstrates how to use AutoGluon to produce a classification model that predicts whether a person's income exceeds $50,000.
TabularPredictor¶
To start, import AutoGluon’s TabularPredictor and TabularDataset classes:
from autogluon.tabular import TabularDataset, TabularPredictor
Load training data from a CSV file into an AutoGluon Dataset object. This object is essentially equivalent to a Pandas DataFrame and the same methods can be applied to both.
train_data = TabularDataset('https://autogluon.s3.amazonaws.com/datasets/Inc/train.csv')
subsample_size = 500 # subsample subset of data for faster demo, try setting this to much larger values
train_data = train_data.sample(n=subsample_size, random_state=0)
train_data.head()
age | workclass | fnlwgt | education | education-num | marital-status | occupation | relationship | race | sex | capital-gain | capital-loss | hours-per-week | native-country | class | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
6118 | 51 | Private | 39264 | Some-college | 10 | Married-civ-spouse | Exec-managerial | Wife | White | Female | 0 | 0 | 40 | United-States | >50K |
23204 | 58 | Private | 51662 | 10th | 6 | Married-civ-spouse | Other-service | Wife | White | Female | 0 | 0 | 8 | United-States | <=50K |
29590 | 40 | Private | 326310 | Some-college | 10 | Married-civ-spouse | Craft-repair | Husband | White | Male | 0 | 0 | 44 | United-States | <=50K |
18116 | 37 | Private | 222450 | HS-grad | 9 | Never-married | Sales | Not-in-family | White | Male | 0 | 2339 | 40 | El-Salvador | <=50K |
33964 | 62 | Private | 109190 | Bachelors | 13 | Married-civ-spouse | Exec-managerial | Husband | White | Male | 15024 | 0 | 40 | United-States | >50K |
Note that we loaded data from a CSV file stored in the cloud. You can also specify a local file-path instead if you have already downloaded the CSV file to your own machine (e.g., using wget).
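For example, a minimal sketch of loading a local copy instead (assuming you have already downloaded the file as train.csv into your working directory; the variable name is only illustrative):
from autogluon.tabular import TabularDataset

# Load a previously downloaded copy of the training data; behaves like a pandas DataFrame
train_data_local = TabularDataset('train.csv')
print(train_data_local.shape)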
Each row in the table train_data corresponds to a single training example. In this particular dataset, each row corresponds to an individual person, and the columns contain various characteristics reported during a census.
Let's first use these features to predict whether the person's income exceeds $50,000 or not, which is recorded in the class column of this table.
label = 'class'
print(f"Unique classes: {list(train_data[label].unique())}")
Unique classes: [' >50K', ' <=50K']
AutoGluon works with raw data, meaning you don’t need to perform any data preprocessing before fitting AutoGluon. We actively recommend that you avoid performing operations such as missing value imputation or one-hot-encoding, as AutoGluon has dedicated logic to handle these situations automatically. You can learn more about AutoGluon’s preprocessing in the Feature Engineering Tutorial.
Training¶
Now we initialize and fit AutoGluon’s TabularPredictor in one line of code:
predictor = TabularPredictor(label=label).fit(train_data)
No path specified. Models will be saved in: "AutogluonModels/ag-20240418_043955"
No presets specified! To achieve strong results with AutoGluon, it is recommended to use the available presets.
Recommended Presets (For more details refer to https://auto.gluon.ai/stable/tutorials/tabular/tabular-essentials.html#presets):
presets='best_quality' : Maximize accuracy. Default time_limit=3600.
presets='high_quality' : Strong accuracy with fast inference speed. Default time_limit=3600.
presets='good_quality' : Good accuracy with very fast inference speed. Default time_limit=3600.
presets='medium_quality' : Fast training time, ideal for initial prototyping.
Beginning AutoGluon training ...
AutoGluon will save models to "AutogluonModels/ag-20240418_043955"
=================== System Info ===================
AutoGluon Version: 1.1.0b20240418
Python Version: 3.10.12
Operating System: Linux
Platform Machine: x86_64
Platform Version: #1 SMP Tue Nov 30 00:17:50 UTC 2021
CPU Count: 8
Memory Avail: 28.85 GB / 30.96 GB (93.2%)
Disk Space Avail: 216.94 GB / 255.99 GB (84.7%)
===================================================
Train Data Rows: 500
Train Data Columns: 14
Label Column: class
AutoGluon infers your prediction problem is: 'binary' (because only two unique label-values observed).
2 unique label values: [' >50K', ' <=50K']
If 'binary' is not the correct problem_type, please manually specify the problem_type parameter during predictor init (You may specify problem_type as one of: ['binary', 'multiclass', 'regression'])
Problem Type: binary
Preprocessing data ...
Selected class <--> label mapping: class 1 = >50K, class 0 = <=50K
Note: For your binary classification, AutoGluon arbitrarily selected which label-value represents positive ( >50K) vs negative ( <=50K) class.
To explicitly set the positive_class, either rename classes to 1 and 0, or specify positive_class in Predictor init.
Using Feature Generators to preprocess the data ...
Fitting AutoMLPipelineFeatureGenerator...
Available Memory: 29545.41 MB
Train Data (Original) Memory Usage: 0.28 MB (0.0% of available memory)
Inferring data type of each feature based on column values. Set feature_metadata_in to manually specify special dtypes of the features.
Stage 1 Generators:
Fitting AsTypeFeatureGenerator...
Note: Converting 1 features to boolean dtype as they only contain 2 unique values.
Stage 2 Generators:
Fitting FillNaFeatureGenerator...
Stage 3 Generators:
Fitting IdentityFeatureGenerator...
Fitting CategoryFeatureGenerator...
Fitting CategoryMemoryMinimizeFeatureGenerator...
Stage 4 Generators:
Fitting DropUniqueFeatureGenerator...
Stage 5 Generators:
Fitting DropDuplicatesFeatureGenerator...
Types of features in original data (raw dtype, special dtypes):
('int', []) : 6 | ['age', 'fnlwgt', 'education-num', 'capital-gain', 'capital-loss', ...]
('object', []) : 8 | ['workclass', 'education', 'marital-status', 'occupation', 'relationship', ...]
Types of features in processed data (raw dtype, special dtypes):
('category', []) : 7 | ['workclass', 'education', 'marital-status', 'occupation', 'relationship', ...]
('int', []) : 6 | ['age', 'fnlwgt', 'education-num', 'capital-gain', 'capital-loss', ...]
('int', ['bool']) : 1 | ['sex']
0.1s = Fit runtime
14 features in original data used to generate 14 features in processed data.
Train Data (Processed) Memory Usage: 0.03 MB (0.0% of available memory)
Data preprocessing and feature engineering runtime = 0.1s ...
AutoGluon will gauge predictive performance using evaluation metric: 'accuracy'
To change this, specify the eval_metric parameter of Predictor()
Automatically generating train/validation split with holdout_frac=0.2, Train Rows: 400, Val Rows: 100
User-specified model hyperparameters to be fit:
{
'NN_TORCH': {},
'GBM': [{'extra_trees': True, 'ag_args': {'name_suffix': 'XT'}}, {}, 'GBMLarge'],
'CAT': {},
'XGB': {},
'FASTAI': {},
'RF': [{'criterion': 'gini', 'ag_args': {'name_suffix': 'Gini', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'entropy', 'ag_args': {'name_suffix': 'Entr', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'squared_error', 'ag_args': {'name_suffix': 'MSE', 'problem_types': ['regression', 'quantile']}}],
'XT': [{'criterion': 'gini', 'ag_args': {'name_suffix': 'Gini', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'entropy', 'ag_args': {'name_suffix': 'Entr', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'squared_error', 'ag_args': {'name_suffix': 'MSE', 'problem_types': ['regression', 'quantile']}}],
'KNN': [{'weights': 'uniform', 'ag_args': {'name_suffix': 'Unif'}}, {'weights': 'distance', 'ag_args': {'name_suffix': 'Dist'}}],
}
Fitting 13 L1 models ...
Fitting model: KNeighborsUnif ...
0.73 = Validation score (accuracy)
0.01s = Training runtime
0.02s = Validation runtime
Fitting model: KNeighborsDist ...
0.65 = Validation score (accuracy)
0.01s = Training runtime
0.01s = Validation runtime
Fitting model: LightGBMXT ...
0.83 = Validation score (accuracy)
0.28s = Training runtime
0.01s = Validation runtime
Fitting model: LightGBM ...
0.85 = Validation score (accuracy)
0.22s = Training runtime
0.01s = Validation runtime
Fitting model: RandomForestGini ...
0.84 = Validation score (accuracy)
0.71s = Training runtime
0.05s = Validation runtime
Fitting model: RandomForestEntr ...
0.83 = Validation score (accuracy)
0.6s = Training runtime
0.05s = Validation runtime
Fitting model: CatBoost ...
0.85 = Validation score (accuracy)
0.88s = Training runtime
0.01s = Validation runtime
Fitting model: ExtraTreesGini ...
0.82 = Validation score (accuracy)
0.6s = Training runtime
0.05s = Validation runtime
Fitting model: ExtraTreesEntr ...
0.81 = Validation score (accuracy)
0.6s = Training runtime
0.05s = Validation runtime
Fitting model: NeuralNetFastAI ...
0.82 = Validation score (accuracy)
3.72s = Training runtime
0.01s = Validation runtime
Fitting model: XGBoost ...
0.86 = Validation score (accuracy)
0.31s = Training runtime
0.01s = Validation runtime
Fitting model: NeuralNetTorch ...
0.83 = Validation score (accuracy)
1.88s = Training runtime
0.01s = Validation runtime
Fitting model: LightGBMLarge ...
0.83 = Validation score (accuracy)
0.43s = Training runtime
0.01s = Validation runtime
Fitting model: WeightedEnsemble_L2 ...
Ensemble Weights: {'XGBoost': 1.0}
0.86 = Validation score (accuracy)
0.14s = Training runtime
0.0s = Validation runtime
AutoGluon training complete, total runtime = 10.93s ... Best model: "WeightedEnsemble_L2"
TabularPredictor saved. To load, use: predictor = TabularPredictor.load("AutogluonModels/ag-20240418_043955")
That’s it! We now have a TabularPredictor that is able to make predictions on new data.
Prediction¶
Next, load separate test data to demonstrate how to make predictions on new examples at inference time:
test_data = TabularDataset('https://autogluon.s3.amazonaws.com/datasets/Inc/test.csv')
test_data.head()
Loaded data from: https://autogluon.s3.amazonaws.com/datasets/Inc/test.csv | Columns = 15 / 15 | Rows = 9769 -> 9769
age | workclass | fnlwgt | education | education-num | marital-status | occupation | relationship | race | sex | capital-gain | capital-loss | hours-per-week | native-country | class | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 31 | Private | 169085 | 11th | 7 | Married-civ-spouse | Sales | Wife | White | Female | 0 | 0 | 20 | United-States | <=50K |
1 | 17 | Self-emp-not-inc | 226203 | 12th | 8 | Never-married | Sales | Own-child | White | Male | 0 | 0 | 45 | United-States | <=50K |
2 | 47 | Private | 54260 | Assoc-voc | 11 | Married-civ-spouse | Exec-managerial | Husband | White | Male | 0 | 1887 | 60 | United-States | >50K |
3 | 21 | Private | 176262 | Some-college | 10 | Never-married | Exec-managerial | Own-child | White | Female | 0 | 0 | 30 | United-States | <=50K |
4 | 17 | Private | 241185 | 12th | 8 | Never-married | Prof-specialty | Own-child | White | Male | 0 | 0 | 20 | United-States | <=50K |
We can now use our trained models to make predictions on the new data:
y_pred = predictor.predict(test_data)
y_pred.head() # Predictions
0 <=50K
1 <=50K
2 >50K
3 <=50K
4 <=50K
Name: class, dtype: object
y_pred_proba = predictor.predict_proba(test_data)
y_pred_proba.head() # Prediction Probabilities
<=50K | >50K | |
---|---|---|
0 | 0.981126 | 0.018874 |
1 | 0.983599 | 0.016401 |
2 | 0.478133 | 0.521867 |
3 | 0.994751 | 0.005249 |
4 | 0.988539 | 0.011461 |
Evaluation¶
Next, we can evaluate the predictor on the (labeled) test data:
predictor.evaluate(test_data)
{'accuracy': 0.8409253761899887,
'balanced_accuracy': 0.7475663839529563,
'mcc': 0.5345297121913682,
'roc_auc': 0.884716037791454,
'f1': 0.6296472831267874,
'precision': 0.7034078807241747,
'recall': 0.5698878343399483}
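If you have already computed predictions, the same scores can be obtained without re-running inference via evaluate_predictions(). A minimal sketch reusing y_pred from above (auxiliary_metrics=True is assumed here to request the full set of metrics shown):
# Score previously computed predictions against the ground-truth labels
y_test = test_data[label]
perf = predictor.evaluate_predictions(y_true=y_test, y_pred=y_pred, auxiliary_metrics=True)
print(perf)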
We can also evaluate each model individually:
predictor.leaderboard(test_data)
model | score_test | score_val | eval_metric | pred_time_test | pred_time_val | fit_time | pred_time_test_marginal | pred_time_val_marginal | fit_time_marginal | stack_level | can_infer | fit_order | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | RandomForestGini | 0.842870 | 0.84 | accuracy | 0.110173 | 0.049699 | 0.711535 | 0.110173 | 0.049699 | 0.711535 | 1 | True | 5 |
1 | CatBoost | 0.842461 | 0.85 | accuracy | 0.008762 | 0.005948 | 0.875380 | 0.008762 | 0.005948 | 0.875380 | 1 | True | 7 |
2 | RandomForestEntr | 0.841130 | 0.83 | accuracy | 0.116793 | 0.051042 | 0.604749 | 0.116793 | 0.051042 | 0.604749 | 1 | True | 6 |
3 | XGBoost | 0.840925 | 0.86 | accuracy | 0.068368 | 0.008688 | 0.313240 | 0.068368 | 0.008688 | 0.313240 | 1 | True | 11 |
4 | WeightedEnsemble_L2 | 0.840925 | 0.86 | accuracy | 0.070169 | 0.009494 | 0.452828 | 0.001801 | 0.000806 | 0.139588 | 2 | True | 14 |
5 | LightGBM | 0.839799 | 0.85 | accuracy | 0.017533 | 0.005744 | 0.223651 | 0.017533 | 0.005744 | 0.223651 | 1 | True | 4 |
6 | LightGBMXT | 0.836421 | 0.83 | accuracy | 0.009999 | 0.006117 | 0.280527 | 0.009999 | 0.006117 | 0.280527 | 1 | True | 3 |
7 | ExtraTreesGini | 0.833862 | 0.82 | accuracy | 0.099410 | 0.049115 | 0.600566 | 0.099410 | 0.049115 | 0.600566 | 1 | True | 8 |
8 | ExtraTreesEntr | 0.833862 | 0.81 | accuracy | 0.101457 | 0.049958 | 0.599523 | 0.101457 | 0.049958 | 0.599523 | 1 | True | 9 |
9 | NeuralNetTorch | 0.833555 | 0.83 | accuracy | 0.050731 | 0.012670 | 1.883483 | 0.050731 | 0.012670 | 1.883483 | 1 | True | 12 |
10 | LightGBMLarge | 0.828949 | 0.83 | accuracy | 0.022204 | 0.005684 | 0.428710 | 0.022204 | 0.005684 | 0.428710 | 1 | True | 13 |
11 | NeuralNetFastAI | 0.818610 | 0.82 | accuracy | 0.175843 | 0.011916 | 3.722124 | 0.175843 | 0.011916 | 3.722124 | 1 | True | 10 |
12 | KNeighborsUnif | 0.725970 | 0.73 | accuracy | 0.026638 | 0.016018 | 0.007585 | 0.026638 | 0.016018 | 0.007585 | 1 | True | 1 |
13 | KNeighborsDist | 0.695158 | 0.65 | accuracy | 0.026713 | 0.014743 | 0.006443 | 0.026713 | 0.014743 | 0.006443 | 1 | True | 2 |
Loading a Trained Predictor¶
Finally, we can load the predictor in a new session (or new machine) by calling TabularPredictor.load() and specifying the location of the predictor artifact on disk.
predictor.path # The path on disk where the predictor is saved
'AutogluonModels/ag-20240418_043955'
# Load the predictor by specifying the path it is saved to on disk.
# You can control where it is saved to by setting the `path` parameter during init
predictor = TabularPredictor.load(predictor.path)
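A minimal sketch of controlling the save location up front via the path parameter; the directory name "my-income-predictor" is only an illustrative choice:
# Save the predictor artifact to a directory of our choosing
predictor_custom = TabularPredictor(label=label, path="my-income-predictor").fit(train_data)

# Later (possibly in a new session or on another machine with access to this directory):
loaded_predictor = TabularPredictor.load("my-income-predictor")
print(loaded_predictor.model_best)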
Now you’re ready to try AutoGluon on your own tabular datasets! As long as they’re stored in a popular format like CSV, you should be able to achieve strong predictive performance with just 2 lines of code:
from autogluon.tabular import TabularPredictor
predictor = TabularPredictor(label=<variable-name>).fit(train_data=<file-name>)
Note: This simple call to TabularPredictor.fit() is intended for your first prototype model. In a subsequent section, we'll demonstrate how to maximize predictive performance by additionally specifying the presets parameter to fit() and the eval_metric parameter to TabularPredictor().
Description of fit()¶
Here we discuss what happened during fit().
Since there are only two possible values of the class variable, this was a binary classification problem, for which an appropriate performance metric is accuracy. AutoGluon automatically infers this, as well as the type of each feature (i.e., which columns contain continuous numbers vs. discrete categories). AutoGluon can also automatically handle common issues like missing data and rescaling feature values.
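If the automatic inference were ever wrong for your data, both the problem type and the metric can be set explicitly at init. A minimal sketch (the values shown simply mirror what AutoGluon inferred here):
# Explicitly fix the problem type and evaluation metric instead of relying on inference
predictor_explicit = TabularPredictor(
    label='class',
    problem_type='binary',    # one of: 'binary', 'multiclass', 'regression'
    eval_metric='accuracy',
).fit(train_data)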
We did not specify separate validation data and so AutoGluon automatically chose a random training/validation split of the data. The data used for validation is separated from the training data and is used to determine the models and hyperparameter-values that produce the best results. Rather than just a single model, AutoGluon trains multiple models and ensembles them together to obtain superior predictive performance.
By default, AutoGluon tries to fit various types of models, including neural networks and tree ensembles. Each type of model has various hyperparameters, which the user would traditionally have to specify by hand. AutoGluon automates this process.
AutoGluon automatically and iteratively tests values for hyperparameters to produce the best performance on the validation data. This involves repeatedly training models under different hyperparameter settings and evaluating their performance. This process can be computationally intensive, so fit() parallelizes it across multiple threads using Ray. To control runtime, you can specify various arguments in fit(), such as time_limit, as demonstrated in the subsequent In-Depth Tutorial.
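For instance, a minimal sketch of capping total training time; the 120-second budget is arbitrary and only for illustration:
# Stop training after roughly two minutes, however far AutoGluon has gotten
predictor_quick = TabularPredictor(label=label).fit(train_data, time_limit=120)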
We can view what properties AutoGluon automatically inferred about our prediction task:
print("AutoGluon infers problem type is: ", predictor.problem_type)
print("AutoGluon identified the following types of features:")
print(predictor.feature_metadata)
AutoGluon infers problem type is: binary
AutoGluon identified the following types of features:
('category', []) : 7 | ['workclass', 'education', 'marital-status', 'occupation', 'relationship', ...]
('int', []) : 6 | ['age', 'fnlwgt', 'education-num', 'capital-gain', 'capital-loss', ...]
('int', ['bool']) : 1 | ['sex']
AutoGluon correctly recognized our prediction problem to be a binary classification task and decided that variables such as age should be represented as integers, whereas variables such as workclass should be represented as categorical objects. The feature_metadata attribute allows you to see the inferred data type of each predictive variable after preprocessing (this is its raw dtype; some features may also be associated with additional special dtypes if produced via feature engineering, e.g. numerical representations of a datetime/text column).
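A small sketch of inspecting this metadata programmatically; the get_features() call and its valid_raw_types filter are assumed parts of the FeatureMetadata API rather than something shown in this tutorial:
# List all feature names known to the predictor after preprocessing
print(predictor.feature_metadata.get_features())

# List only the features whose raw dtype was inferred as categorical
print(predictor.feature_metadata.get_features(valid_raw_types=['category']))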
To transform the data into AutoGluon’s internal representation, we can do the following:
test_data_transform = predictor.transform_features(test_data)
test_data_transform.head()
age | fnlwgt | education-num | sex | capital-gain | capital-loss | hours-per-week | workclass | education | marital-status | occupation | relationship | race | native-country | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 31 | 169085 | 7 | 0 | 0 | 0 | 20 | 3 | 1 | 1 | 10 | 5 | 4 | 14 |
1 | 17 | 226203 | 8 | 1 | 0 | 0 | 45 | 5 | 2 | 3 | 10 | 3 | 4 | 14 |
2 | 47 | 54260 | 11 | 1 | 0 | 1887 | 60 | 3 | 7 | 1 | 3 | 0 | 4 | 14 |
3 | 21 | 176262 | 10 | 0 | 0 | 0 | 30 | 3 | 13 | 3 | 3 | 3 | 4 | 14 |
4 | 17 | 241185 | 8 | 1 | 0 | 0 | 20 | 3 | 2 | 3 | 8 | 3 | 4 | 14 |
Notice how the data is purely numeric after pre-processing (although categorical features will still be treated as categorical downstream).
To better understand our trained predictor, we can estimate the overall importance of each feature via TabularPredictor.feature_importance():
predictor.feature_importance(test_data)
Computing feature importance via permutation shuffling for 14 features using 5000 rows with 5 shuffle sets...
5.27s = Expected runtime (1.05s per shuffle set)
2.29s = Actual runtime (Completed 5 of 5 shuffle sets)
importance | stddev | p_value | n | p99_high | p99_low | |
---|---|---|---|---|---|---|
marital-status | 0.05080 | 0.003792 | 3.698489e-06 | 5 | 0.058608 | 0.042992 |
capital-gain | 0.03852 | 0.002318 | 1.565361e-06 | 5 | 0.043292 | 0.033748 |
education-num | 0.02968 | 0.001346 | 5.063512e-07 | 5 | 0.032452 | 0.026908 |
age | 0.01500 | 0.002850 | 1.490440e-04 | 5 | 0.020867 | 0.009133 |
hours-per-week | 0.01172 | 0.003974 | 1.369430e-03 | 5 | 0.019902 | 0.003538 |
occupation | 0.00528 | 0.001803 | 1.406849e-03 | 5 | 0.008993 | 0.001567 |
relationship | 0.00472 | 0.001154 | 3.967984e-04 | 5 | 0.007096 | 0.002344 |
native-country | 0.00144 | 0.000654 | 3.959537e-03 | 5 | 0.002787 | 0.000093 |
capital-loss | 0.00128 | 0.000415 | 1.155921e-03 | 5 | 0.002134 | 0.000426 |
fnlwgt | 0.00108 | 0.002361 | 1.820562e-01 | 5 | 0.005940 | -0.003780 |
sex | 0.00096 | 0.001090 | 6.012167e-02 | 5 | 0.003204 | -0.001284 |
workclass | 0.00092 | 0.001635 | 1.383281e-01 | 5 | 0.004286 | -0.002446 |
education | 0.00080 | 0.001463 | 1.442554e-01 | 5 | 0.003812 | -0.002212 |
race | 0.00048 | 0.000559 | 6.352320e-02 | 5 | 0.001630 | -0.000670 |
The importance column is an estimate of how much the evaluation metric score would drop if the feature were removed from the data. A negative importance value means the results would likely improve if the model were re-fit with that feature removed.
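As a hedged sketch of acting on this information, one could drop the features with negative estimated importance and re-fit; whether this actually helps is data-dependent, and importance estimates based on only 500 training rows are noisy:
# feature_importance returns a DataFrame indexed by feature name with an 'importance' column
importance = predictor.feature_importance(test_data)
to_drop = importance[importance['importance'] < 0].index.tolist()
print(f"Dropping features with negative estimated importance: {to_drop}")

# Re-fit on the reduced feature set and compare against the original predictor
predictor_reduced = TabularPredictor(label=label).fit(train_data.drop(columns=to_drop))
print(predictor_reduced.evaluate(test_data.drop(columns=to_drop)))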
When we call predict(), AutoGluon automatically predicts with the model that displayed the best performance on validation data (i.e., the weighted ensemble).
predictor.model_best
'WeightedEnsemble_L2'
We can instead specify which model to use for predictions like this:
predictor.predict(test_data, model='LightGBM')
You can get the list of trained models via .leaderboard() or .model_names():
predictor.model_names()
['KNeighborsUnif',
'KNeighborsDist',
'LightGBMXT',
'LightGBM',
'RandomForestGini',
'RandomForestEntr',
'CatBoost',
'ExtraTreesGini',
'ExtraTreesEntr',
'NeuralNetFastAI',
'XGBoost',
'NeuralNetTorch',
'LightGBMLarge',
'WeightedEnsemble_L2']
The scores of predictive performance above were based on a default evaluation metric (accuracy for binary classification). Performance in certain applications may be measured by metrics other than the ones AutoGluon optimizes for by default. If you know the metric that counts in your application, you should specify it via the eval_metric argument, as demonstrated in the next section.
Presets¶
AutoGluon comes with a variety of presets that can be specified in the call to .fit via the presets argument. medium_quality is used by default to encourage initial prototyping, but for serious usage the other presets should be used instead.
Preset | Model Quality | Use Cases | Fit Time (Ideal) | Inference Time (Relative to medium_quality) | Disk Usage
---|---|---|---|---|---
best_quality | State-of-the-art (SOTA), much better than high_quality | When accuracy is what matters | 16x+ | 32x+ | 16x+
high_quality | Better than good_quality | When a very powerful, portable solution with fast inference is required: Large-scale batch inference | 16x+ | 4x | 2x
good_quality | Stronger than any other AutoML Framework | When a powerful, highly portable solution with very fast inference is required: Billion-scale batch inference, sub-100ms online-inference, edge-devices | 16x | 2x | 0.1x
medium_quality | Competitive with other top AutoML Frameworks | Initial prototyping, establishing a performance baseline | 1x | 1x | 1x
We recommend users start with medium_quality to get a sense of the problem and identify any data-related issues. If medium_quality is taking too long to train, consider subsampling the training data during this prototyping phase.
Once you are comfortable, next try best_quality. Make sure to specify at least 16x the time_limit value used for medium_quality. Once finished, you should have a very powerful solution that is often stronger than medium_quality.
Make sure to hold out test data that AutoGluon never sees during training, so you can verify that the models perform as expected.
Once you have evaluated both best_quality and medium_quality, check whether either satisfies your needs. If neither does, consider trying high_quality and/or good_quality.
If none of the presets satisfy requirements, refer to Predicting Columns in a Table - In Depth for more advanced AutoGluon options.
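Putting those recommendations together, a minimal sketch of the suggested workflow; the time limits and directory names are illustrative only:
# Step 1: quick prototype to sanity-check the data and establish a baseline
prototype = TabularPredictor(label=label, path="ag-prototype").fit(
    train_data, presets='medium_quality', time_limit=600,
)
print(prototype.leaderboard(test_data))

# Step 2: final run with best_quality and at least 16x the prototyping time budget
final = TabularPredictor(label=label, path="ag-final").fit(
    train_data, presets='best_quality', time_limit=600 * 16,
)
print(final.leaderboard(test_data))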
Maximizing predictive performance¶
Note: You should not call fit() with entirely default arguments if you are benchmarking AutoGluon-Tabular or hoping to maximize its accuracy!
To get the best predictive accuracy with AutoGluon, you should generally use it like this:
time_limit = 60 # for quick demonstration only, you should set this to longest time you are willing to wait (in seconds)
metric = 'roc_auc' # specify your evaluation metric here
predictor = TabularPredictor(label, eval_metric=metric).fit(train_data, time_limit=time_limit, presets='best_quality')
No path specified. Models will be saved in: "AutogluonModels/ag-20240418_044010"
Presets specified: ['best_quality']
Setting dynamic_stacking from 'auto' to True. Reason: Enable dynamic_stacking when use_bag_holdout is disabled. (use_bag_holdout=False)
Stack configuration (auto_stack=True): num_stack_levels=1, num_bag_folds=8, num_bag_sets=1
Dynamic stacking is enabled (dynamic_stacking=True). AutoGluon will try to determine whether the input data is affected by stacked overfitting and enable or disable stacking as a consequence.
Detecting stacked overfitting by sub-fitting AutoGluon on the input data. That is, copies of AutoGluon will be sub-fit on subset(s) of the data. Then, the holdout validation data is used to detect stacked overfitting.
Sub-fit(s) time limit is: 60 seconds.
Starting holdout-based sub-fit for dynamic stacking. Context path is: AutogluonModels/ag-20240418_044010/ds_sub_fit/sub_fit_ho.
Running the sub-fit in a ray process to avoid memory leakage.
Spend 21 seconds for the sub-fit(s) during dynamic stacking.
Time left for full fit of AutoGluon: 39 seconds.
Starting full fit now with num_stack_levels 1.
Beginning AutoGluon training ... Time limit = 39s
AutoGluon will save models to "AutogluonModels/ag-20240418_044010"
=================== System Info ===================
AutoGluon Version: 1.1.0b20240418
Python Version: 3.10.12
Operating System: Linux
Platform Machine: x86_64
Platform Version: #1 SMP Tue Nov 30 00:17:50 UTC 2021
CPU Count: 8
Memory Avail: 27.43 GB / 30.96 GB (88.6%)
Disk Space Avail: 216.92 GB / 255.99 GB (84.7%)
===================================================
Train Data Rows: 500
Train Data Columns: 14
Label Column: class
Problem Type: binary
Preprocessing data ...
Selected class <--> label mapping: class 1 = >50K, class 0 = <=50K
Note: For your binary classification, AutoGluon arbitrarily selected which label-value represents positive ( >50K) vs negative ( <=50K) class.
To explicitly set the positive_class, either rename classes to 1 and 0, or specify positive_class in Predictor init.
Using Feature Generators to preprocess the data ...
Fitting AutoMLPipelineFeatureGenerator...
Available Memory: 28084.78 MB
Train Data (Original) Memory Usage: 0.28 MB (0.0% of available memory)
Inferring data type of each feature based on column values. Set feature_metadata_in to manually specify special dtypes of the features.
Stage 1 Generators:
Fitting AsTypeFeatureGenerator...
Note: Converting 1 features to boolean dtype as they only contain 2 unique values.
Stage 2 Generators:
Fitting FillNaFeatureGenerator...
Stage 3 Generators:
Fitting IdentityFeatureGenerator...
Fitting CategoryFeatureGenerator...
Fitting CategoryMemoryMinimizeFeatureGenerator...
Stage 4 Generators:
Fitting DropUniqueFeatureGenerator...
Stage 5 Generators:
Fitting DropDuplicatesFeatureGenerator...
Types of features in original data (raw dtype, special dtypes):
('int', []) : 6 | ['age', 'fnlwgt', 'education-num', 'capital-gain', 'capital-loss', ...]
('object', []) : 8 | ['workclass', 'education', 'marital-status', 'occupation', 'relationship', ...]
Types of features in processed data (raw dtype, special dtypes):
('category', []) : 7 | ['workclass', 'education', 'marital-status', 'occupation', 'relationship', ...]
('int', []) : 6 | ['age', 'fnlwgt', 'education-num', 'capital-gain', 'capital-loss', ...]
('int', ['bool']) : 1 | ['sex']
0.1s = Fit runtime
14 features in original data used to generate 14 features in processed data.
Train Data (Processed) Memory Usage: 0.03 MB (0.0% of available memory)
Data preprocessing and feature engineering runtime = 0.15s ...
AutoGluon will gauge predictive performance using evaluation metric: 'roc_auc'
This metric expects predicted probabilities rather than predicted class labels, so you'll need to use predict_proba() instead of predict()
To change this, specify the eval_metric parameter of Predictor()
Large model count detected (112 configs) ... Only displaying the first 3 models of each family. To see all, set `verbosity=3`.
User-specified model hyperparameters to be fit:
{
'NN_TORCH': [{}, {'activation': 'elu', 'dropout_prob': 0.10077639529843717, 'hidden_size': 108, 'learning_rate': 0.002735937344002146, 'num_layers': 4, 'use_batchnorm': True, 'weight_decay': 1.356433327634438e-12, 'ag_args': {'name_suffix': '_r79', 'priority': -2}}, {'activation': 'elu', 'dropout_prob': 0.11897478034205347, 'hidden_size': 213, 'learning_rate': 0.0010474382260641949, 'num_layers': 4, 'use_batchnorm': False, 'weight_decay': 5.594471067786272e-10, 'ag_args': {'name_suffix': '_r22', 'priority': -7}}],
'GBM': [{'extra_trees': True, 'ag_args': {'name_suffix': 'XT'}}, {}, 'GBMLarge'],
'CAT': [{}, {'depth': 6, 'grow_policy': 'SymmetricTree', 'l2_leaf_reg': 2.1542798306067823, 'learning_rate': 0.06864209415792857, 'max_ctr_complexity': 4, 'one_hot_max_size': 10, 'ag_args': {'name_suffix': '_r177', 'priority': -1}}, {'depth': 8, 'grow_policy': 'Depthwise', 'l2_leaf_reg': 2.7997999596449104, 'learning_rate': 0.031375015734637225, 'max_ctr_complexity': 2, 'one_hot_max_size': 3, 'ag_args': {'name_suffix': '_r9', 'priority': -5}}],
'XGB': [{}, {'colsample_bytree': 0.6917311125174739, 'enable_categorical': False, 'learning_rate': 0.018063876087523967, 'max_depth': 10, 'min_child_weight': 0.6028633586934382, 'ag_args': {'name_suffix': '_r33', 'priority': -8}}, {'colsample_bytree': 0.6628423832084077, 'enable_categorical': False, 'learning_rate': 0.08775715546881824, 'max_depth': 5, 'min_child_weight': 0.6294123374222513, 'ag_args': {'name_suffix': '_r89', 'priority': -16}}],
'FASTAI': [{}, {'bs': 256, 'emb_drop': 0.5411770367537934, 'epochs': 43, 'layers': [800, 400], 'lr': 0.01519848858318159, 'ps': 0.23782946566604385, 'ag_args': {'name_suffix': '_r191', 'priority': -4}}, {'bs': 2048, 'emb_drop': 0.05070411322605811, 'epochs': 29, 'layers': [200, 100], 'lr': 0.08974235041576624, 'ps': 0.10393466140748028, 'ag_args': {'name_suffix': '_r102', 'priority': -11}}],
'RF': [{'criterion': 'gini', 'ag_args': {'name_suffix': 'Gini', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'entropy', 'ag_args': {'name_suffix': 'Entr', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'squared_error', 'ag_args': {'name_suffix': 'MSE', 'problem_types': ['regression', 'quantile']}}],
'XT': [{'criterion': 'gini', 'ag_args': {'name_suffix': 'Gini', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'entropy', 'ag_args': {'name_suffix': 'Entr', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'squared_error', 'ag_args': {'name_suffix': 'MSE', 'problem_types': ['regression', 'quantile']}}],
'KNN': [{'weights': 'uniform', 'ag_args': {'name_suffix': 'Unif'}}, {'weights': 'distance', 'ag_args': {'name_suffix': 'Dist'}}],
}
AutoGluon will fit 2 stack levels (L1 to L2) ...
Fitting 110 L1 models ...
Fitting model: KNeighborsUnif_BAG_L1 ... Training model for up to 25.89s of the 38.84s of remaining time.
0.5196 = Validation score (roc_auc)
0.01s = Training runtime
0.02s = Validation runtime
Fitting model: KNeighborsDist_BAG_L1 ... Training model for up to 25.86s of the 38.81s of remaining time.
0.537 = Validation score (roc_auc)
0.0s = Training runtime
0.01s = Validation runtime
Fitting model: LightGBMXT_BAG_L1 ... Training model for up to 25.83s of the 38.78s of remaining time.
Fitting 8 child models (S1F1 - S1F8) | Fitting with ParallelLocalFoldFittingStrategy (8 workers, per: cpus=1, gpus=0, memory=0.01%)
0.8912 = Validation score (roc_auc)
0.9s = Training runtime
0.06s = Validation runtime
Fitting model: LightGBM_BAG_L1 ... Training model for up to 22.35s of the 35.3s of remaining time.
Fitting 8 child models (S1F1 - S1F8) | Fitting with ParallelLocalFoldFittingStrategy (8 workers, per: cpus=1, gpus=0, memory=0.01%)
0.8799 = Validation score (roc_auc)
0.76s = Training runtime
0.05s = Validation runtime
Fitting model: RandomForestGini_BAG_L1 ... Training model for up to 18.88s of the 31.83s of remaining time.
0.8879 = Validation score (roc_auc)
0.82s = Training runtime
0.11s = Validation runtime
Fitting model: RandomForestEntr_BAG_L1 ... Training model for up to 17.93s of the 30.87s of remaining time.
0.8899 = Validation score (roc_auc)
0.62s = Training runtime
0.11s = Validation runtime
Fitting model: CatBoost_BAG_L1 ... Training model for up to 17.18s of the 30.13s of remaining time.
Fitting 8 child models (S1F1 - S1F8) | Fitting with ParallelLocalFoldFittingStrategy (8 workers, per: cpus=1, gpus=0, memory=0.02%)
0.8902 = Validation score (roc_auc)
5.79s = Training runtime
0.05s = Validation runtime
Fitting model: ExtraTreesGini_BAG_L1 ... Training model for up to 8.87s of the 21.81s of remaining time.
0.8958 = Validation score (roc_auc)
0.63s = Training runtime
0.11s = Validation runtime
Fitting model: ExtraTreesEntr_BAG_L1 ... Training model for up to 8.11s of the 21.06s of remaining time.
0.8904 = Validation score (roc_auc)
0.63s = Training runtime
0.11s = Validation runtime
Fitting model: NeuralNetFastAI_BAG_L1 ... Training model for up to 7.36s of the 20.3s of remaining time.
Fitting 8 child models (S1F1 - S1F8) | Fitting with ParallelLocalFoldFittingStrategy (8 workers, per: cpus=1, gpus=0, memory=0.00%)
0.8753 = Validation score (roc_auc)
5.16s = Training runtime
0.12s = Validation runtime
Fitting model: WeightedEnsemble_L2 ... Training model for up to 38.85s of the 12.45s of remaining time.
Ensemble Weights: {'ExtraTreesGini_BAG_L1': 0.455, 'NeuralNetFastAI_BAG_L1': 0.364, 'LightGBMXT_BAG_L1': 0.182}
0.9058 = Validation score (roc_auc)
0.1s = Training runtime
0.0s = Validation runtime
Fitting 108 L2 models ...
Fitting model: LightGBMXT_BAG_L2 ... Training model for up to 12.33s of the 12.27s of remaining time.
Fitting 8 child models (S1F1 - S1F8) | Fitting with ParallelLocalFoldFittingStrategy (8 workers, per: cpus=1, gpus=0, memory=0.02%)
0.8886 = Validation score (roc_auc)
1.01s = Training runtime
0.04s = Validation runtime
Fitting model: LightGBM_BAG_L2 ... Training model for up to 8.39s of the 8.32s of remaining time.
Fitting 8 child models (S1F1 - S1F8) | Fitting with ParallelLocalFoldFittingStrategy (8 workers, per: cpus=1, gpus=0, memory=0.02%)
0.8797 = Validation score (roc_auc)
1.04s = Training runtime
0.04s = Validation runtime
Fitting model: RandomForestGini_BAG_L2 ... Training model for up to 4.58s of the 4.52s of remaining time.
0.8772 = Validation score (roc_auc)
0.82s = Training runtime
0.1s = Validation runtime
Fitting model: RandomForestEntr_BAG_L2 ... Training model for up to 3.63s of the 3.57s of remaining time.
0.8762 = Validation score (roc_auc)
0.62s = Training runtime
0.1s = Validation runtime
Fitting model: CatBoost_BAG_L2 ... Training model for up to 2.89s of the 2.83s of remaining time.
Fitting 8 child models (S1F1 - S1F8) | Fitting with ParallelLocalFoldFittingStrategy (8 workers, per: cpus=1, gpus=0, memory=0.03%)
0.8868 = Validation score (roc_auc)
2.91s = Training runtime
0.06s = Validation runtime
Fitting model: WeightedEnsemble_L3 ... Training model for up to 38.85s of the -2.97s of remaining time.
Ensemble Weights: {'ExtraTreesGini_BAG_L1': 0.455, 'NeuralNetFastAI_BAG_L1': 0.364, 'LightGBMXT_BAG_L1': 0.182}
0.9058 = Validation score (roc_auc)
0.1s = Training runtime
0.0s = Validation runtime
AutoGluon training complete, total runtime = 42.09s ... Best model: "WeightedEnsemble_L2"
TabularPredictor saved. To load, use: predictor = TabularPredictor.load("AutogluonModels/ag-20240418_044010")
predictor.leaderboard(test_data)
model | score_test | score_val | eval_metric | pred_time_test | pred_time_val | fit_time | pred_time_test_marginal | pred_time_val_marginal | fit_time_marginal | stack_level | can_infer | fit_order | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | CatBoost_BAG_L1 | 0.902618 | 0.890228 | roc_auc | 0.051182 | 0.051126 | 5.785630 | 0.051182 | 0.051126 | 5.785630 | 1 | True | 7 |
1 | LightGBMXT_BAG_L1 | 0.900085 | 0.891223 | roc_auc | 0.267433 | 0.057236 | 0.904865 | 0.267433 | 0.057236 | 0.904865 | 1 | True | 3 |
2 | CatBoost_BAG_L2 | 0.898462 | 0.886758 | roc_auc | 2.217352 | 0.788942 | 18.224949 | 0.037324 | 0.058123 | 2.907994 | 2 | True | 16 |
3 | LightGBMXT_BAG_L2 | 0.898096 | 0.888564 | roc_auc | 2.300236 | 0.771727 | 16.329853 | 0.120209 | 0.040908 | 1.012898 | 2 | True | 12 |
4 | WeightedEnsemble_L2 | 0.896598 | 0.905794 | roc_auc | 1.590899 | 0.284566 | 6.793710 | 0.002284 | 0.000681 | 0.103608 | 2 | True | 11 |
5 | WeightedEnsemble_L3 | 0.896598 | 0.905794 | roc_auc | 1.591064 | 0.284394 | 6.792656 | 0.002449 | 0.000510 | 0.102554 | 3 | True | 17 |
6 | RandomForestEntr_BAG_L2 | 0.890116 | 0.876235 | roc_auc | 2.298398 | 0.831456 | 15.937495 | 0.118370 | 0.100637 | 0.620540 | 2 | True | 15 |
7 | LightGBM_BAG_L1 | 0.889478 | 0.879878 | roc_auc | 0.141366 | 0.047501 | 0.762506 | 0.141366 | 0.047501 | 0.762506 | 1 | True | 4 |
8 | RandomForestGini_BAG_L2 | 0.889191 | 0.877230 | roc_auc | 2.290424 | 0.834639 | 16.140029 | 0.110397 | 0.103820 | 0.823073 | 2 | True | 14 |
9 | RandomForestEntr_BAG_L1 | 0.886981 | 0.889863 | roc_auc | 0.116027 | 0.106585 | 0.619618 | 0.116027 | 0.106585 | 0.619618 | 1 | True | 6 |
10 | NeuralNetFastAI_BAG_L1 | 0.886637 | 0.875271 | roc_auc | 1.204801 | 0.120541 | 5.155263 | 1.204801 | 0.120541 | 5.155263 | 1 | True | 10 |
11 | RandomForestGini_BAG_L1 | 0.885163 | 0.887874 | roc_auc | 0.121287 | 0.105474 | 0.822843 | 0.121287 | 0.105474 | 0.822843 | 1 | True | 5 |
12 | LightGBM_BAG_L2 | 0.883703 | 0.879736 | roc_auc | 2.272192 | 0.770107 | 16.355905 | 0.092165 | 0.039288 | 1.038950 | 2 | True | 13 |
13 | ExtraTreesEntr_BAG_L1 | 0.880342 | 0.890401 | roc_auc | 0.100950 | 0.105789 | 0.626662 | 0.100950 | 0.105789 | 0.626662 | 1 | True | 9 |
14 | ExtraTreesGini_BAG_L1 | 0.879143 | 0.895789 | roc_auc | 0.116382 | 0.106107 | 0.629975 | 0.116382 | 0.106107 | 0.629975 | 1 | True | 8 |
15 | KNeighborsDist_BAG_L1 | 0.525998 | 0.536956 | roc_auc | 0.033036 | 0.013549 | 0.003619 | 0.033036 | 0.013549 | 0.003619 | 1 | True | 2 |
16 | KNeighborsUnif_BAG_L1 | 0.514970 | 0.519604 | roc_auc | 0.027565 | 0.016911 | 0.005974 | 0.027565 | 0.016911 | 0.005974 | 1 | True | 1 |
This command implements the following strategy to maximize accuracy:
- Specify the argument presets='best_quality', which allows AutoGluon to automatically construct powerful model ensembles based on stacking/bagging, and will greatly improve the resulting predictions if granted sufficient training time. The default value of presets is 'medium_quality', which produces less accurate models but facilitates faster prototyping. With presets, you can flexibly prioritize predictive accuracy vs. training/inference speed. For example, if you care less about predictive performance and want to quickly deploy a basic model, consider using presets=['good_quality', 'optimize_for_deployment'] (a sketch of this follows the list).
- Provide the parameter eval_metric to TabularPredictor() if you know what metric will be used to evaluate predictions in your application. Some other non-default metrics you might use include: 'f1' (for binary classification), 'roc_auc' (for binary classification), 'log_loss' (for classification), 'mean_absolute_error' (for regression), 'median_absolute_error' (for regression). You can also define your own custom metric function. For more information refer to Adding a custom metric to AutoGluon.
- Include all your data in train_data and do not provide tuning_data (AutoGluon will split the data more intelligently to fit its needs).
- Do not specify the hyperparameter_tune_kwargs argument (counterintuitively, hyperparameter tuning is not the best way to spend a limited training time budget, as model ensembling is often superior). We recommend you only use hyperparameter_tune_kwargs if your goal is to deploy a single model rather than an ensemble.
- Do not specify the hyperparameters argument (allow AutoGluon to adaptively select which models/hyperparameters to use).
- Set time_limit to the longest amount of time (in seconds) that you are willing to wait. AutoGluon's predictive performance improves the longer fit() is allowed to run.
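For completeness, a hedged sketch of the deployment-oriented alternative mentioned in the first point above, trading some accuracy for a smaller, faster artifact (the 60-second limit is only for demonstration):
# Lighter-weight alternative to best_quality when deployment size/speed matters more
predictor_deploy = TabularPredictor(label=label, eval_metric='roc_auc').fit(
    train_data,
    presets=['good_quality', 'optimize_for_deployment'],
    time_limit=60,
)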
Regression (predicting numeric table columns):¶
To demonstrate that fit() can also automatically handle regression tasks, we now try to predict the numeric age variable in the same table based on the other features:
age_column = 'age'
train_data[age_column].head()
6118 51
23204 58
29590 40
18116 37
33964 62
Name: age, dtype: int64
We again call fit(), imposing a time limit this time (in seconds), and also demonstrate a shorthand method to evaluate the resulting model on the test data (which contains labels):
predictor_age = TabularPredictor(label=age_column, path="agModels-predictAge").fit(train_data, time_limit=60)
No presets specified! To achieve strong results with AutoGluon, it is recommended to use the available presets.
Recommended Presets (For more details refer to https://auto.gluon.ai/stable/tutorials/tabular/tabular-essentials.html#presets):
presets='best_quality' : Maximize accuracy. Default time_limit=3600.
presets='high_quality' : Strong accuracy with fast inference speed. Default time_limit=3600.
presets='good_quality' : Good accuracy with very fast inference speed. Default time_limit=3600.
presets='medium_quality' : Fast training time, ideal for initial prototyping.
Beginning AutoGluon training ... Time limit = 60s
AutoGluon will save models to "agModels-predictAge"
=================== System Info ===================
AutoGluon Version: 1.1.0b20240418
Python Version: 3.10.12
Operating System: Linux
Platform Machine: x86_64
Platform Version: #1 SMP Tue Nov 30 00:17:50 UTC 2021
CPU Count: 8
Memory Avail: 27.41 GB / 30.96 GB (88.5%)
Disk Space Avail: 216.88 GB / 255.99 GB (84.7%)
===================================================
Train Data Rows: 500
Train Data Columns: 14
Label Column: age
AutoGluon infers your prediction problem is: 'regression' (because dtype of label-column == int and many unique label-values observed).
Label info (max, min, mean, stddev): (85, 17, 39.652, 13.52393)
If 'regression' is not the correct problem_type, please manually specify the problem_type parameter during predictor init (You may specify problem_type as one of: ['binary', 'multiclass', 'regression'])
Problem Type: regression
Preprocessing data ...
Using Feature Generators to preprocess the data ...
Fitting AutoMLPipelineFeatureGenerator...
Available Memory: 28066.59 MB
Train Data (Original) Memory Usage: 0.31 MB (0.0% of available memory)
Inferring data type of each feature based on column values. Set feature_metadata_in to manually specify special dtypes of the features.
Stage 1 Generators:
Fitting AsTypeFeatureGenerator...
Note: Converting 2 features to boolean dtype as they only contain 2 unique values.
Stage 2 Generators:
Fitting FillNaFeatureGenerator...
Stage 3 Generators:
Fitting IdentityFeatureGenerator...
Fitting CategoryFeatureGenerator...
Fitting CategoryMemoryMinimizeFeatureGenerator...
Stage 4 Generators:
Fitting DropUniqueFeatureGenerator...
Stage 5 Generators:
Fitting DropDuplicatesFeatureGenerator...
Types of features in original data (raw dtype, special dtypes):
('int', []) : 5 | ['fnlwgt', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
('object', []) : 9 | ['workclass', 'education', 'marital-status', 'occupation', 'relationship', ...]
Types of features in processed data (raw dtype, special dtypes):
('category', []) : 7 | ['workclass', 'education', 'marital-status', 'occupation', 'relationship', ...]
('int', []) : 5 | ['fnlwgt', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
('int', ['bool']) : 2 | ['sex', 'class']
0.1s = Fit runtime
14 features in original data used to generate 14 features in processed data.
Train Data (Processed) Memory Usage: 0.03 MB (0.0% of available memory)
Data preprocessing and feature engineering runtime = 0.1s ...
AutoGluon will gauge predictive performance using evaluation metric: 'root_mean_squared_error'
This metric's sign has been flipped to adhere to being higher_is_better. The metric score can be multiplied by -1 to get the metric value.
To change this, specify the eval_metric parameter of Predictor()
Automatically generating train/validation split with holdout_frac=0.2, Train Rows: 400, Val Rows: 100
User-specified model hyperparameters to be fit:
{
'NN_TORCH': {},
'GBM': [{'extra_trees': True, 'ag_args': {'name_suffix': 'XT'}}, {}, 'GBMLarge'],
'CAT': {},
'XGB': {},
'FASTAI': {},
'RF': [{'criterion': 'gini', 'ag_args': {'name_suffix': 'Gini', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'entropy', 'ag_args': {'name_suffix': 'Entr', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'squared_error', 'ag_args': {'name_suffix': 'MSE', 'problem_types': ['regression', 'quantile']}}],
'XT': [{'criterion': 'gini', 'ag_args': {'name_suffix': 'Gini', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'entropy', 'ag_args': {'name_suffix': 'Entr', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'squared_error', 'ag_args': {'name_suffix': 'MSE', 'problem_types': ['regression', 'quantile']}}],
'KNN': [{'weights': 'uniform', 'ag_args': {'name_suffix': 'Unif'}}, {'weights': 'distance', 'ag_args': {'name_suffix': 'Dist'}}],
}
Fitting 11 L1 models ...
Fitting model: KNeighborsUnif ... Training model for up to 59.9s of the 59.9s of remaining time.
-15.6869 = Validation score (-root_mean_squared_error)
0.0s = Training runtime
0.01s = Validation runtime
Fitting model: KNeighborsDist ... Training model for up to 59.88s of the 59.88s of remaining time.
-15.1801 = Validation score (-root_mean_squared_error)
0.0s = Training runtime
0.01s = Validation runtime
Fitting model: LightGBMXT ... Training model for up to 59.85s of the 59.85s of remaining time.
-11.7092 = Validation score (-root_mean_squared_error)
0.34s = Training runtime
0.01s = Validation runtime
Fitting model: LightGBM ... Training model for up to 59.5s of the 59.5s of remaining time.
-11.9295 = Validation score (-root_mean_squared_error)
0.28s = Training runtime
0.0s = Validation runtime
Fitting model: RandomForestMSE ... Training model for up to 59.21s of the 59.21s of remaining time.
-11.6624 = Validation score (-root_mean_squared_error)
0.52s = Training runtime
0.05s = Validation runtime
Fitting model: CatBoost ... Training model for up to 58.62s of the 58.62s of remaining time.
-11.7993 = Validation score (-root_mean_squared_error)
0.64s = Training runtime
0.01s = Validation runtime
Fitting model: ExtraTreesMSE ... Training model for up to 57.97s of the 57.97s of remaining time.
-11.3627 = Validation score (-root_mean_squared_error)
0.48s = Training runtime
0.05s = Validation runtime
Fitting model: NeuralNetFastAI ... Training model for up to 57.43s of the 57.43s of remaining time.
-12.0733 = Validation score (-root_mean_squared_error)
0.6s = Training runtime
0.01s = Validation runtime
Fitting model: XGBoost ... Training model for up to 56.81s of the 56.81s of remaining time.
-11.5274 = Validation score (-root_mean_squared_error)
0.32s = Training runtime
0.01s = Validation runtime
Fitting model: NeuralNetTorch ... Training model for up to 56.47s of the 56.47s of remaining time.
-11.9345 = Validation score (-root_mean_squared_error)
1.76s = Training runtime
0.01s = Validation runtime
Fitting model: LightGBMLarge ... Training model for up to 54.7s of the 54.7s of remaining time.
-12.3153 = Validation score (-root_mean_squared_error)
0.47s = Training runtime
0.0s = Validation runtime
Fitting model: WeightedEnsemble_L2 ... Training model for up to 59.9s of the 54.21s of remaining time.
Ensemble Weights: {'ExtraTreesMSE': 0.5, 'XGBoost': 0.273, 'NeuralNetTorch': 0.136, 'LightGBMXT': 0.091}
-11.2115 = Validation score (-root_mean_squared_error)
0.02s = Training runtime
0.0s = Validation runtime
AutoGluon training complete, total runtime = 5.83s ... Best model: "WeightedEnsemble_L2"
TabularPredictor saved. To load, use: predictor = TabularPredictor.load("agModels-predictAge")
predictor_age.evaluate(test_data)
{'root_mean_squared_error': -10.479844649129983,
'mean_squared_error': -109.82714386989834,
'mean_absolute_error': -8.18290968153125,
'r2': 0.41295259639637305,
'pearsonr': 0.6434155154747895,
'median_absolute_error': -6.78033447265625}
Note that we didn't need to tell AutoGluon that this is a regression problem; it automatically inferred this from the data and reported the appropriate performance metric (RMSE by default). To specify a particular evaluation metric other than the default, set the eval_metric parameter of TabularPredictor() and AutoGluon will tailor its models to optimize your metric (e.g. eval_metric = 'mean_absolute_error'). For evaluation metrics where higher values are worse (like RMSE), AutoGluon flips their sign and prints them as negative values during training (as it internally assumes higher values are better). You can even specify a custom metric by following the Custom Metric Tutorial.
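For example, a minimal sketch of optimizing mean absolute error instead of the default RMSE; the path name is illustrative:
# Train a second age predictor that optimizes MAE rather than RMSE
predictor_age_mae = TabularPredictor(
    label=age_column,
    eval_metric='mean_absolute_error',
    path="agModels-predictAge-mae",
).fit(train_data, time_limit=60)
print(predictor_age_mae.evaluate(test_data))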
We can call leaderboard to see the per-model performance:
predictor_age.leaderboard(test_data)
model | score_test | score_val | eval_metric | pred_time_test | pred_time_val | fit_time | pred_time_test_marginal | pred_time_val_marginal | fit_time_marginal | stack_level | can_infer | fit_order | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | WeightedEnsemble_L2 | -10.479845 | -11.211550 | root_mean_squared_error | 0.310587 | 0.074230 | 2.913798 | 0.002761 | 0.000564 | 0.020920 | 2 | True | 12 |
1 | ExtraTreesMSE | -10.655482 | -11.362738 | root_mean_squared_error | 0.118743 | 0.049224 | 0.478322 | 0.118743 | 0.049224 | 0.478322 | 1 | True | 7 |
2 | RandomForestMSE | -10.746175 | -11.662354 | root_mean_squared_error | 0.139265 | 0.049973 | 0.519627 | 0.139265 | 0.049973 | 0.519627 | 1 | True | 5 |
3 | CatBoost | -10.780312 | -11.799279 | root_mean_squared_error | 0.012123 | 0.005769 | 0.641221 | 0.012123 | 0.005769 | 0.641221 | 1 | True | 6 |
4 | LightGBMXT | -10.837373 | -11.709228 | root_mean_squared_error | 0.079220 | 0.005203 | 0.342456 | 0.079220 | 0.005203 | 0.342456 | 1 | True | 3 |
5 | XGBoost | -10.903558 | -11.527441 | root_mean_squared_error | 0.060690 | 0.007420 | 0.316976 | 0.060690 | 0.007420 | 0.316976 | 1 | True | 9 |
6 | LightGBM | -10.972156 | -11.929546 | root_mean_squared_error | 0.027025 | 0.004662 | 0.279831 | 0.027025 | 0.004662 | 0.279831 | 1 | True | 4 |
7 | NeuralNetTorch | -11.120472 | -11.934454 | root_mean_squared_error | 0.049174 | 0.011820 | 1.755125 | 0.049174 | 0.011820 | 1.755125 | 1 | True | 10 |
8 | NeuralNetFastAI | -11.225698 | -12.073282 | root_mean_squared_error | 0.142899 | 0.013047 | 0.595998 | 0.142899 | 0.013047 | 0.595998 | 1 | True | 8 |
9 | LightGBMLarge | -11.469922 | -12.315314 | root_mean_squared_error | 0.033023 | 0.004760 | 0.465217 | 0.033023 | 0.004760 | 0.465217 | 1 | True | 11 |
10 | KNeighborsUnif | -14.902058 | -15.686937 | root_mean_squared_error | 0.035761 | 0.014471 | 0.004472 | 0.035761 | 0.014471 | 0.004472 | 1 | True | 1 |
11 | KNeighborsDist | -15.771259 | -15.180149 | root_mean_squared_error | 0.039874 | 0.014455 | 0.004374 | 0.039874 | 0.014455 | 0.004374 | 1 | True | 2 |
Data Formats: AutoGluon can currently operate on data tables already loaded into Python as pandas DataFrames, or those stored in files of CSV or Parquet format. If your data lives in multiple tables, you will first need to join them into a single table whose rows correspond to statistically independent observations (datapoints) and whose columns correspond to different features (a.k.a. variables/covariates).
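As a hedged sketch of that join step, assuming two hypothetical tables people.csv and incomes.csv that share a person_id key (none of these names come from this tutorial):
import pandas as pd
from autogluon.tabular import TabularDataset, TabularPredictor

# Hypothetical tables: one row per person in each, linked by a shared key
people = pd.read_csv('people.csv')
incomes = pd.read_csv('incomes.csv')   # contains the label column 'class'

# Join into a single table whose rows are independent observations, then fit as usual
joined = TabularDataset(people.merge(incomes, on='person_id', how='inner'))
predictor_joined = TabularPredictor(label='class').fit(joined)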
Refer to the TabularPredictor documentation to see all of the available methods/options.
Advanced Usage¶
For more advanced usage examples of AutoGluon, refer to the In Depth Tutorial
If you are interested in deployment optimization, refer to the Deployment Optimization Tutorial.
For adding custom models to AutoGluon, refer to the Custom Model and Custom Model Advanced tutorials.