Predicting Columns in a Table - Deployment Optimization¶
This tutorial will cover how to perform the end-to-end AutoML process to create an optimized and deployable AutoGluon artifact for production usage.
This tutorial assumes you have already read Predicting Columns in a Table - Quick Start and Predicting Columns in a Table - In Depth.
Fitting a TabularPredictor¶
We will again use the AdultIncome dataset as in the previous tutorials and train a predictor
to predict whether the person’s income exceeds $50,000 or not, which is recorded in the class
column of this table.
from autogluon.tabular import TabularDataset, TabularPredictor
train_data = TabularDataset('https://autogluon.s3.amazonaws.com/datasets/Inc/train.csv')
label = 'class'
subsample_size = 500 # subsample subset of data for faster demo, try setting this to much larger values
train_data = train_data.sample(n=subsample_size, random_state=0)
train_data.head()
| | age | workclass | fnlwgt | education | education-num | marital-status | occupation | relationship | race | sex | capital-gain | capital-loss | hours-per-week | native-country | class |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
6118 | 51 | Private | 39264 | Some-college | 10 | Married-civ-spouse | Exec-managerial | Wife | White | Female | 0 | 0 | 40 | United-States | >50K |
23204 | 58 | Private | 51662 | 10th | 6 | Married-civ-spouse | Other-service | Wife | White | Female | 0 | 0 | 8 | United-States | <=50K |
29590 | 40 | Private | 326310 | Some-college | 10 | Married-civ-spouse | Craft-repair | Husband | White | Male | 0 | 0 | 44 | United-States | <=50K |
18116 | 37 | Private | 222450 | HS-grad | 9 | Never-married | Sales | Not-in-family | White | Male | 0 | 2339 | 40 | El-Salvador | <=50K |
33964 | 62 | Private | 109190 | Bachelors | 13 | Married-civ-spouse | Exec-managerial | Husband | White | Male | 15024 | 0 | 40 | United-States | >50K |
save_path = 'agModels-predictClass-deployment' # specifies folder to store trained models
predictor = TabularPredictor(label=label, path=save_path).fit(train_data)
Verbosity: 2 (Standard Logging)
=================== System Info ===================
AutoGluon Version: 1.2b20250107
Python Version: 3.11.9
Operating System: Linux
Platform Machine: x86_64
Platform Version: #1 SMP Tue Sep 24 10:00:37 UTC 2024
CPU Count: 8
Memory Avail: 28.79 GB / 30.95 GB (93.0%)
Disk Space Avail: 213.48 GB / 255.99 GB (83.4%)
===================================================
No presets specified! To achieve strong results with AutoGluon, it is recommended to use the available presets. Defaulting to `'medium'`...
Recommended Presets (For more details refer to https://auto.gluon.ai/stable/tutorials/tabular/tabular-essentials.html#presets):
presets='experimental' : New in v1.2: Pre-trained foundation model + parallel fits. The absolute best accuracy without consideration for inference speed. Does not support GPU.
presets='best' : Maximize accuracy. Recommended for most users. Use in competitions and benchmarks.
presets='high' : Strong accuracy with fast inference speed.
presets='good' : Good accuracy with very fast inference speed.
presets='medium' : Fast training time, ideal for initial prototyping.
Beginning AutoGluon training ...
AutoGluon will save models to "/home/ci/autogluon/docs/tutorials/tabular/advanced/agModels-predictClass-deployment"
Train Data Rows: 500
Train Data Columns: 14
Label Column: class
AutoGluon infers your prediction problem is: 'binary' (because only two unique label-values observed).
2 unique label values: [' >50K', ' <=50K']
If 'binary' is not the correct problem_type, please manually specify the problem_type parameter during Predictor init (You may specify problem_type as one of: ['binary', 'multiclass', 'regression', 'quantile'])
Problem Type: binary
Preprocessing data ...
Selected class <--> label mapping: class 1 = >50K, class 0 = <=50K
Note: For your binary classification, AutoGluon arbitrarily selected which label-value represents positive ( >50K) vs negative ( <=50K) class.
To explicitly set the positive_class, either rename classes to 1 and 0, or specify positive_class in Predictor init.
Using Feature Generators to preprocess the data ...
Fitting AutoMLPipelineFeatureGenerator...
Available Memory: 29480.36 MB
Train Data (Original) Memory Usage: 0.28 MB (0.0% of available memory)
Inferring data type of each feature based on column values. Set feature_metadata_in to manually specify special dtypes of the features.
Stage 1 Generators:
Fitting AsTypeFeatureGenerator...
Note: Converting 1 features to boolean dtype as they only contain 2 unique values.
Stage 2 Generators:
Fitting FillNaFeatureGenerator...
Stage 3 Generators:
Fitting IdentityFeatureGenerator...
Fitting CategoryFeatureGenerator...
Fitting CategoryMemoryMinimizeFeatureGenerator...
Stage 4 Generators:
Fitting DropUniqueFeatureGenerator...
Stage 5 Generators:
Fitting DropDuplicatesFeatureGenerator...
Types of features in original data (raw dtype, special dtypes):
('int', []) : 6 | ['age', 'fnlwgt', 'education-num', 'capital-gain', 'capital-loss', ...]
('object', []) : 8 | ['workclass', 'education', 'marital-status', 'occupation', 'relationship', ...]
Types of features in processed data (raw dtype, special dtypes):
('category', []) : 7 | ['workclass', 'education', 'marital-status', 'occupation', 'relationship', ...]
('int', []) : 6 | ['age', 'fnlwgt', 'education-num', 'capital-gain', 'capital-loss', ...]
('int', ['bool']) : 1 | ['sex']
0.1s = Fit runtime
14 features in original data used to generate 14 features in processed data.
Train Data (Processed) Memory Usage: 0.03 MB (0.0% of available memory)
Data preprocessing and feature engineering runtime = 0.08s ...
AutoGluon will gauge predictive performance using evaluation metric: 'accuracy'
To change this, specify the eval_metric parameter of Predictor()
Automatically generating train/validation split with holdout_frac=0.2, Train Rows: 400, Val Rows: 100
User-specified model hyperparameters to be fit:
{
'NN_TORCH': [{}],
'GBM': [{'extra_trees': True, 'ag_args': {'name_suffix': 'XT'}}, {}, {'learning_rate': 0.03, 'num_leaves': 128, 'feature_fraction': 0.9, 'min_data_in_leaf': 3, 'ag_args': {'name_suffix': 'Large', 'priority': 0, 'hyperparameter_tune_kwargs': None}}],
'CAT': [{}],
'XGB': [{}],
'FASTAI': [{}],
'RF': [{'criterion': 'gini', 'ag_args': {'name_suffix': 'Gini', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'entropy', 'ag_args': {'name_suffix': 'Entr', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'squared_error', 'ag_args': {'name_suffix': 'MSE', 'problem_types': ['regression', 'quantile']}}],
'XT': [{'criterion': 'gini', 'ag_args': {'name_suffix': 'Gini', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'entropy', 'ag_args': {'name_suffix': 'Entr', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'squared_error', 'ag_args': {'name_suffix': 'MSE', 'problem_types': ['regression', 'quantile']}}],
'KNN': [{'weights': 'uniform', 'ag_args': {'name_suffix': 'Unif'}}, {'weights': 'distance', 'ag_args': {'name_suffix': 'Dist'}}],
}
Fitting 13 L1 models, fit_strategy="sequential" ...
Fitting model: KNeighborsUnif ...
0.73 = Validation score (accuracy)
0.03s = Training runtime
0.01s = Validation runtime
Fitting model: KNeighborsDist ...
0.65 = Validation score (accuracy)
0.01s = Training runtime
0.01s = Validation runtime
Fitting model: LightGBMXT ...
0.83 = Validation score (accuracy)
0.26s = Training runtime
0.0s = Validation runtime
Fitting model: LightGBM ...
0.85 = Validation score (accuracy)
0.23s = Training runtime
0.0s = Validation runtime
Fitting model: RandomForestGini ...
0.84 = Validation score (accuracy)
0.64s = Training runtime
0.05s = Validation runtime
Fitting model: RandomForestEntr ...
0.83 = Validation score (accuracy)
0.54s = Training runtime
0.05s = Validation runtime
Fitting model: CatBoost ...
0.85 = Validation score (accuracy)
0.82s = Training runtime
0.0s = Validation runtime
Fitting model: ExtraTreesGini ...
0.82 = Validation score (accuracy)
0.55s = Training runtime
0.05s = Validation runtime
Fitting model: ExtraTreesEntr ...
0.81 = Validation score (accuracy)
0.56s = Training runtime
0.05s = Validation runtime
Fitting model: NeuralNetFastAI ...
0.84 = Validation score (accuracy)
2.71s = Training runtime
0.01s = Validation runtime
Fitting model: XGBoost ...
0.86 = Validation score (accuracy)
0.25s = Training runtime
0.01s = Validation runtime
Fitting model: NeuralNetTorch ...
0.83 = Validation score (accuracy)
2.18s = Training runtime
0.01s = Validation runtime
Fitting model: LightGBMLarge ...
0.83 = Validation score (accuracy)
0.5s = Training runtime
0.0s = Validation runtime
Fitting model: WeightedEnsemble_L2 ...
Ensemble Weights: {'XGBoost': 1.0}
0.86 = Validation score (accuracy)
0.08s = Training runtime
0.0s = Validation runtime
AutoGluon training complete, total runtime = 9.85s ... Best model: WeightedEnsemble_L2 | Estimated inference throughput: 14007.2 rows/s (100 batch size)
Disabling decision threshold calibration for metric `accuracy` due to having fewer than 10000 rows of validation data for calibration, to avoid overfitting (100 rows).
`accuracy` is generally not improved through threshold calibration. Force calibration via specifying `calibrate_decision_threshold=True`.
TabularPredictor saved. To load, use: predictor = TabularPredictor.load("/home/ci/autogluon/docs/tutorials/tabular/advanced/agModels-predictClass-deployment")
Next, load separate test data to demonstrate how to make predictions on new examples at inference time:
test_data = TabularDataset('https://autogluon.s3.amazonaws.com/datasets/Inc/test.csv')
y_test = test_data[label] # values to predict
test_data.head()
Loaded data from: https://autogluon.s3.amazonaws.com/datasets/Inc/test.csv | Columns = 15 / 15 | Rows = 9769 -> 9769
| | age | workclass | fnlwgt | education | education-num | marital-status | occupation | relationship | race | sex | capital-gain | capital-loss | hours-per-week | native-country | class |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 31 | Private | 169085 | 11th | 7 | Married-civ-spouse | Sales | Wife | White | Female | 0 | 0 | 20 | United-States | <=50K |
1 | 17 | Self-emp-not-inc | 226203 | 12th | 8 | Never-married | Sales | Own-child | White | Male | 0 | 0 | 45 | United-States | <=50K |
2 | 47 | Private | 54260 | Assoc-voc | 11 | Married-civ-spouse | Exec-managerial | Husband | White | Male | 0 | 1887 | 60 | United-States | >50K |
3 | 21 | Private | 176262 | Some-college | 10 | Never-married | Exec-managerial | Own-child | White | Female | 0 | 0 | 30 | United-States | <=50K |
4 | 17 | Private | 241185 | 12th | 8 | Never-married | Prof-specialty | Own-child | White | Male | 0 | 0 | 20 | United-States | <=50K |
We use our trained models to make predictions on the new data:
predictor = TabularPredictor.load(save_path) # unnecessary, just demonstrates how to load previously-trained predictor from file
y_pred = predictor.predict(test_data)
y_pred
0 <=50K
1 <=50K
2 >50K
3 <=50K
4 <=50K
...
9764 <=50K
9765 <=50K
9766 <=50K
9767 <=50K
9768 <=50K
Name: class, Length: 9769, dtype: object
We can use leaderboard to evaluate the performance of each individual trained model on our labeled test data:
predictor.leaderboard(test_data)
| | model | score_test | score_val | eval_metric | pred_time_test | pred_time_val | fit_time | pred_time_test_marginal | pred_time_val_marginal | fit_time_marginal | stack_level | can_infer | fit_order |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | RandomForestGini | 0.842870 | 0.84 | accuracy | 0.107112 | 0.047360 | 0.638633 | 0.107112 | 0.047360 | 0.638633 | 1 | True | 5 |
1 | CatBoost | 0.842461 | 0.85 | accuracy | 0.007071 | 0.003603 | 0.821584 | 0.007071 | 0.003603 | 0.821584 | 1 | True | 7 |
2 | RandomForestEntr | 0.841130 | 0.83 | accuracy | 0.113159 | 0.046974 | 0.542192 | 0.113159 | 0.046974 | 0.542192 | 1 | True | 6 |
3 | XGBoost | 0.840925 | 0.86 | accuracy | 0.060522 | 0.006424 | 0.251390 | 0.060522 | 0.006424 | 0.251390 | 1 | True | 11 |
4 | WeightedEnsemble_L2 | 0.840925 | 0.86 | accuracy | 0.061788 | 0.007139 | 0.328607 | 0.001266 | 0.000715 | 0.077218 | 2 | True | 14 |
5 | LightGBM | 0.839799 | 0.85 | accuracy | 0.015656 | 0.003564 | 0.227408 | 0.015656 | 0.003564 | 0.227408 | 1 | True | 4 |
6 | LightGBMXT | 0.836421 | 0.83 | accuracy | 0.008211 | 0.003681 | 0.264290 | 0.008211 | 0.003681 | 0.264290 | 1 | True | 3 |
7 | ExtraTreesGini | 0.833862 | 0.82 | accuracy | 0.086338 | 0.046719 | 0.553945 | 0.086338 | 0.046719 | 0.553945 | 1 | True | 8 |
8 | ExtraTreesEntr | 0.833862 | 0.81 | accuracy | 0.098859 | 0.046818 | 0.556661 | 0.098859 | 0.046818 | 0.556661 | 1 | True | 9 |
9 | NeuralNetTorch | 0.833657 | 0.83 | accuracy | 0.046598 | 0.009680 | 2.184598 | 0.046598 | 0.009680 | 2.184598 | 1 | True | 12 |
10 | NeuralNetFastAI | 0.828949 | 0.84 | accuracy | 0.136528 | 0.009449 | 2.711279 | 0.136528 | 0.009449 | 2.711279 | 1 | True | 10 |
11 | LightGBMLarge | 0.817074 | 0.83 | accuracy | 0.012224 | 0.003467 | 0.496160 | 0.012224 | 0.003467 | 0.496160 | 1 | True | 13 |
12 | KNeighborsUnif | 0.725970 | 0.73 | accuracy | 0.025513 | 0.014810 | 0.034552 | 0.025513 | 0.014810 | 0.034552 | 1 | True | 1 |
13 | KNeighborsDist | 0.695158 | 0.65 | accuracy | 0.025131 | 0.013529 | 0.010108 | 0.025131 | 0.013529 | 0.010108 | 1 | True | 2 |
Snapshot a Predictor with .clone()¶
Now that we have a working predictor artifact, we may want to alter it in a variety of ways to better suit our needs.
For example, we may want to delete certain models to reduce disk usage via .delete_models(), or train additional models on top of the ones we already have via .fit_extra().
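For illustration, such calls might look like the following (a hedged sketch, not executed in this tutorial, since these operations permanently modify the predictor we are about to snapshot; the hyperparameters and time_limit values are purely illustrative):
# Illustrative only: remove every model not required by the current best model to save disk space.
# predictor.delete_models(models_to_keep='best', dry_run=False)
# Illustrative only: train additional models on top of the already-fit ones.
# predictor.fit_extra(hyperparameters={'GBM': {}}, time_limit=60)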
While you can perform all of these operations directly on your predictor, you may want to be able to revert to a prior state of the predictor in case something goes wrong.
This is where predictor.clone() comes in.
predictor.clone() allows you to create a snapshot of the given predictor, cloning its artifacts to a new location.
You can then freely experiment with the predictor and always load the earlier snapshot if you want to undo your actions.
All you need to do to clone a predictor is specify a new directory path to clone to:
save_path_clone = save_path + '-clone'
# will return the path to the cloned predictor, identical to save_path_clone
path_clone = predictor.clone(path=save_path_clone)
Cloned TabularPredictor located in '/home/ci/autogluon/docs/tutorials/tabular/advanced/agModels-predictClass-deployment' to 'agModels-predictClass-deployment-clone'.
To load the cloned predictor: predictor_clone = TabularPredictor.load(path="agModels-predictClass-deployment-clone")
Note that this logic doubles disk usage, as it completely clones every predictor artifact on disk to make an exact replica.
Now we can load the cloned predictor:
predictor_clone = TabularPredictor.load(path=path_clone)
# You can alternatively load the cloned TabularPredictor at the time of cloning:
# predictor_clone = predictor.clone(path=save_path_clone, return_clone=True)
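Since the clone is an exact replica, it should occupy roughly the same amount of disk space as the original. If you want to confirm this, here is a quick, hedged sketch using the disk_usage() method demonstrated later in this tutorial:
# Both artifacts should report a similar size in bytes, since every file was copied.
print(predictor.disk_usage(), predictor_clone.disk_usage())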
We can see that the cloned predictor has the same leaderboard and functionality as the original:
y_pred_clone = predictor_clone.predict(test_data)
y_pred_clone
0 <=50K
1 <=50K
2 >50K
3 <=50K
4 <=50K
...
9764 <=50K
9765 <=50K
9766 <=50K
9767 <=50K
9768 <=50K
Name: class, Length: 9769, dtype: object
y_pred.equals(y_pred_clone)
True
predictor_clone.leaderboard(test_data)
| | model | score_test | score_val | eval_metric | pred_time_test | pred_time_val | fit_time | pred_time_test_marginal | pred_time_val_marginal | fit_time_marginal | stack_level | can_infer | fit_order |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | RandomForestGini | 0.842870 | 0.84 | accuracy | 0.107066 | 0.047360 | 0.638633 | 0.107066 | 0.047360 | 0.638633 | 1 | True | 5 |
1 | CatBoost | 0.842461 | 0.85 | accuracy | 0.007103 | 0.003603 | 0.821584 | 0.007103 | 0.003603 | 0.821584 | 1 | True | 7 |
2 | RandomForestEntr | 0.841130 | 0.83 | accuracy | 0.107647 | 0.046974 | 0.542192 | 0.107647 | 0.046974 | 0.542192 | 1 | True | 6 |
3 | XGBoost | 0.840925 | 0.86 | accuracy | 0.056551 | 0.006424 | 0.251390 | 0.056551 | 0.006424 | 0.251390 | 1 | True | 11 |
4 | WeightedEnsemble_L2 | 0.840925 | 0.86 | accuracy | 0.057803 | 0.007139 | 0.328607 | 0.001252 | 0.000715 | 0.077218 | 2 | True | 14 |
5 | LightGBM | 0.839799 | 0.85 | accuracy | 0.019953 | 0.003564 | 0.227408 | 0.019953 | 0.003564 | 0.227408 | 1 | True | 4 |
6 | LightGBMXT | 0.836421 | 0.83 | accuracy | 0.011012 | 0.003681 | 0.264290 | 0.011012 | 0.003681 | 0.264290 | 1 | True | 3 |
7 | ExtraTreesGini | 0.833862 | 0.82 | accuracy | 0.096639 | 0.046719 | 0.553945 | 0.096639 | 0.046719 | 0.553945 | 1 | True | 8 |
8 | ExtraTreesEntr | 0.833862 | 0.81 | accuracy | 0.097510 | 0.046818 | 0.556661 | 0.097510 | 0.046818 | 0.556661 | 1 | True | 9 |
9 | NeuralNetTorch | 0.833657 | 0.83 | accuracy | 0.058146 | 0.009680 | 2.184598 | 0.058146 | 0.009680 | 2.184598 | 1 | True | 12 |
10 | NeuralNetFastAI | 0.828949 | 0.84 | accuracy | 0.134532 | 0.009449 | 2.711279 | 0.134532 | 0.009449 | 2.711279 | 1 | True | 10 |
11 | LightGBMLarge | 0.817074 | 0.83 | accuracy | 0.012055 | 0.003467 | 0.496160 | 0.012055 | 0.003467 | 0.496160 | 1 | True | 13 |
12 | KNeighborsUnif | 0.725970 | 0.73 | accuracy | 0.025374 | 0.014810 | 0.034552 | 0.025374 | 0.014810 | 0.034552 | 1 | True | 1 |
13 | KNeighborsDist | 0.695158 | 0.65 | accuracy | 0.025348 | 0.013529 | 0.010108 | 0.025348 | 0.013529 | 0.010108 | 1 | True | 2 |
Now let’s perform some additional operations on the clone, such as calling refit_full:
predictor_clone.refit_full()
predictor_clone.leaderboard(test_data)
Refitting models via `predictor.refit_full` using all of the data (combined train and validation)...
Models trained in this way will have the suffix "_FULL" and have NaN validation score.
This process is not bound by time_limit, but should take less time than the original `predictor.fit` call.
To learn more, refer to the `.refit_full` method docstring which explains how "_FULL" models differ from normal models.
Fitting 1 L1 models, fit_strategy="sequential" ...
Fitting model: KNeighborsUnif_FULL ...
0.0s = Training runtime
Fitting 1 L1 models, fit_strategy="sequential" ...
Fitting model: KNeighborsDist_FULL ...
0.0s = Training runtime
Fitting 1 L1 models, fit_strategy="sequential" ...
Fitting model: LightGBMXT_FULL ...
0.19s = Training runtime
Fitting 1 L1 models, fit_strategy="sequential" ...
Fitting model: LightGBM_FULL ...
0.19s = Training runtime
Fitting 1 L1 models, fit_strategy="sequential" ...
Fitting model: RandomForestGini_FULL ...
0.56s = Training runtime
Fitting 1 L1 models, fit_strategy="sequential" ...
Fitting model: RandomForestEntr_FULL ...
0.52s = Training runtime
Fitting 1 L1 models, fit_strategy="sequential" ...
Fitting model: CatBoost_FULL ...
0.02s = Training runtime
Fitting 1 L1 models, fit_strategy="sequential" ...
Fitting model: ExtraTreesGini_FULL ...
0.51s = Training runtime
Fitting 1 L1 models, fit_strategy="sequential" ...
Fitting model: ExtraTreesEntr_FULL ...
0.52s = Training runtime
Fitting 1 L1 models, fit_strategy="sequential" ...
Fitting model: NeuralNetFastAI_FULL ...
No improvement since epoch 0: early stopping
0.34s = Training runtime
Fitting 1 L1 models, fit_strategy="sequential" ...
Fitting model: XGBoost_FULL ...
0.08s = Training runtime
Fitting 1 L1 models, fit_strategy="sequential" ...
Fitting model: NeuralNetTorch_FULL ...
0.63s = Training runtime
Fitting 1 L1 models, fit_strategy="sequential" ...
Fitting model: LightGBMLarge_FULL ...
0.22s = Training runtime
Fitting model: WeightedEnsemble_L2_FULL | Skipping fit via cloning parent ...
Ensemble Weights: {'XGBoost': 1.0}
0.08s = Training runtime
Updated best model to "WeightedEnsemble_L2_FULL" (Previously "WeightedEnsemble_L2"). AutoGluon will default to using "WeightedEnsemble_L2_FULL" for predict() and predict_proba().
Refit complete, total runtime = 4.16s ... Best model: "WeightedEnsemble_L2_FULL"
| | model | score_test | score_val | eval_metric | pred_time_test | pred_time_val | fit_time | pred_time_test_marginal | pred_time_val_marginal | fit_time_marginal | stack_level | can_infer | fit_order |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | CatBoost_FULL | 0.842870 | NaN | accuracy | 0.005849 | NaN | 0.024848 | 0.005849 | NaN | 0.024848 | 1 | True | 21 |
1 | RandomForestGini | 0.842870 | 0.84 | accuracy | 0.105947 | 0.047360 | 0.638633 | 0.105947 | 0.047360 | 0.638633 | 1 | True | 5 |
2 | CatBoost | 0.842461 | 0.85 | accuracy | 0.007189 | 0.003603 | 0.821584 | 0.007189 | 0.003603 | 0.821584 | 1 | True | 7 |
3 | RandomForestEntr | 0.841130 | 0.83 | accuracy | 0.111790 | 0.046974 | 0.542192 | 0.111790 | 0.046974 | 0.542192 | 1 | True | 6 |
4 | XGBoost | 0.840925 | 0.86 | accuracy | 0.063695 | 0.006424 | 0.251390 | 0.063695 | 0.006424 | 0.251390 | 1 | True | 11 |
5 | WeightedEnsemble_L2 | 0.840925 | 0.86 | accuracy | 0.065046 | 0.007139 | 0.328607 | 0.001350 | 0.000715 | 0.077218 | 2 | True | 14 |
6 | LightGBM_FULL | 0.840823 | NaN | accuracy | 0.024074 | NaN | 0.192750 | 0.024074 | NaN | 0.192750 | 1 | True | 18 |
7 | LightGBM | 0.839799 | 0.85 | accuracy | 0.015887 | 0.003564 | 0.227408 | 0.015887 | 0.003564 | 0.227408 | 1 | True | 4 |
8 | RandomForestGini_FULL | 0.839390 | NaN | accuracy | 0.112645 | NaN | 0.556750 | 0.112645 | NaN | 0.556750 | 1 | True | 19 |
9 | RandomForestEntr_FULL | 0.839185 | NaN | accuracy | 0.108242 | NaN | 0.519348 | 0.108242 | NaN | 0.519348 | 1 | True | 20 |
10 | LightGBMXT_FULL | 0.837957 | NaN | accuracy | 0.012052 | NaN | 0.185195 | 0.012052 | NaN | 0.185195 | 1 | True | 17 |
11 | LightGBMXT | 0.836421 | 0.83 | accuracy | 0.009035 | 0.003681 | 0.264290 | 0.009035 | 0.003681 | 0.264290 | 1 | True | 3 |
12 | ExtraTreesEntr_FULL | 0.835705 | NaN | accuracy | 0.099466 | NaN | 0.519290 | 0.099466 | NaN | 0.519290 | 1 | True | 23 |
13 | NeuralNetTorch_FULL | 0.835091 | NaN | accuracy | 0.049339 | NaN | 0.625952 | 0.049339 | NaN | 0.625952 | 1 | True | 26 |
14 | ExtraTreesGini | 0.833862 | 0.82 | accuracy | 0.096635 | 0.046719 | 0.553945 | 0.096635 | 0.046719 | 0.553945 | 1 | True | 8 |
15 | ExtraTreesEntr | 0.833862 | 0.81 | accuracy | 0.097135 | 0.046818 | 0.556661 | 0.097135 | 0.046818 | 0.556661 | 1 | True | 9 |
16 | NeuralNetTorch | 0.833657 | 0.83 | accuracy | 0.056488 | 0.009680 | 2.184598 | 0.056488 | 0.009680 | 2.184598 | 1 | True | 12 |
17 | XGBoost_FULL | 0.833453 | NaN | accuracy | 0.057516 | NaN | 0.084706 | 0.057516 | NaN | 0.084706 | 1 | True | 25 |
18 | WeightedEnsemble_L2_FULL | 0.833453 | NaN | accuracy | 0.058780 | NaN | 0.161923 | 0.001265 | NaN | 0.077218 | 2 | True | 28 |
19 | ExtraTreesGini_FULL | 0.833453 | NaN | accuracy | 0.097387 | NaN | 0.513472 | 0.097387 | NaN | 0.513472 | 1 | True | 22 |
20 | NeuralNetFastAI | 0.828949 | 0.84 | accuracy | 0.133458 | 0.009449 | 2.711279 | 0.133458 | 0.009449 | 2.711279 | 1 | True | 10 |
21 | LightGBMLarge | 0.817074 | 0.83 | accuracy | 0.012463 | 0.003467 | 0.496160 | 0.012463 | 0.003467 | 0.496160 | 1 | True | 13 |
22 | LightGBMLarge_FULL | 0.809704 | NaN | accuracy | 0.012398 | NaN | 0.219814 | 0.012398 | NaN | 0.219814 | 1 | True | 27 |
23 | NeuralNetFastAI_FULL | 0.768349 | NaN | accuracy | 0.132461 | NaN | 0.338044 | 0.132461 | NaN | 0.338044 | 1 | True | 24 |
24 | KNeighborsUnif | 0.725970 | 0.73 | accuracy | 0.026294 | 0.014810 | 0.034552 | 0.026294 | 0.014810 | 0.034552 | 1 | True | 1 |
25 | KNeighborsUnif_FULL | 0.725151 | NaN | accuracy | 0.027053 | NaN | 0.004205 | 0.027053 | NaN | 0.004205 | 1 | True | 15 |
26 | KNeighborsDist | 0.695158 | 0.65 | accuracy | 0.036280 | 0.013529 | 0.010108 | 0.036280 | 0.013529 | 0.010108 | 1 | True | 2 |
27 | KNeighborsDist_FULL | 0.685434 | NaN | accuracy | 0.037077 | NaN | 0.004356 | 0.037077 | NaN | 0.004356 | 1 | True | 16 |
We can see that we were able to fit additional models, but for whatever reason we may want to undo this operation.
Luckily, our original predictor is untouched!
predictor.leaderboard(test_data)
| | model | score_test | score_val | eval_metric | pred_time_test | pred_time_val | fit_time | pred_time_test_marginal | pred_time_val_marginal | fit_time_marginal | stack_level | can_infer | fit_order |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | RandomForestGini | 0.842870 | 0.84 | accuracy | 0.107079 | 0.047360 | 0.638633 | 0.107079 | 0.047360 | 0.638633 | 1 | True | 5 |
1 | CatBoost | 0.842461 | 0.85 | accuracy | 0.007092 | 0.003603 | 0.821584 | 0.007092 | 0.003603 | 0.821584 | 1 | True | 7 |
2 | RandomForestEntr | 0.841130 | 0.83 | accuracy | 0.103151 | 0.046974 | 0.542192 | 0.103151 | 0.046974 | 0.542192 | 1 | True | 6 |
3 | XGBoost | 0.840925 | 0.86 | accuracy | 0.061646 | 0.006424 | 0.251390 | 0.061646 | 0.006424 | 0.251390 | 1 | True | 11 |
4 | WeightedEnsemble_L2 | 0.840925 | 0.86 | accuracy | 0.062884 | 0.007139 | 0.328607 | 0.001237 | 0.000715 | 0.077218 | 2 | True | 14 |
5 | LightGBM | 0.839799 | 0.85 | accuracy | 0.019338 | 0.003564 | 0.227408 | 0.019338 | 0.003564 | 0.227408 | 1 | True | 4 |
6 | LightGBMXT | 0.836421 | 0.83 | accuracy | 0.011226 | 0.003681 | 0.264290 | 0.011226 | 0.003681 | 0.264290 | 1 | True | 3 |
7 | ExtraTreesEntr | 0.833862 | 0.81 | accuracy | 0.086501 | 0.046818 | 0.556661 | 0.086501 | 0.046818 | 0.556661 | 1 | True | 9 |
8 | ExtraTreesGini | 0.833862 | 0.82 | accuracy | 0.096114 | 0.046719 | 0.553945 | 0.096114 | 0.046719 | 0.553945 | 1 | True | 8 |
9 | NeuralNetTorch | 0.833657 | 0.83 | accuracy | 0.063789 | 0.009680 | 2.184598 | 0.063789 | 0.009680 | 2.184598 | 1 | True | 12 |
10 | NeuralNetFastAI | 0.828949 | 0.84 | accuracy | 0.134934 | 0.009449 | 2.711279 | 0.134934 | 0.009449 | 2.711279 | 1 | True | 10 |
11 | LightGBMLarge | 0.817074 | 0.83 | accuracy | 0.013785 | 0.003467 | 0.496160 | 0.013785 | 0.003467 | 0.496160 | 1 | True | 13 |
12 | KNeighborsUnif | 0.725970 | 0.73 | accuracy | 0.032609 | 0.014810 | 0.034552 | 0.032609 | 0.014810 | 0.034552 | 1 | True | 1 |
13 | KNeighborsDist | 0.695158 | 0.65 | accuracy | 0.033633 | 0.013529 | 0.010108 | 0.033633 | 0.013529 | 0.010108 | 1 | True | 2 |
We can simply clone a new predictor from our original, and we will no longer be impacted by the call to refit_full on the prior clone.
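For example, a fresh snapshot unaffected by the prior clone's refit_full call could be taken as follows (a hedged sketch; the '-clone-v2' path suffix is just an illustrative name):
# Hedged sketch: snapshot the untouched original again into a new directory.
# predictor_clone_v2 = predictor.clone(path=save_path + '-clone-v2', return_clone=True)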
Snapshot a deployment optimized Predictor via .clone_for_deployment()¶
Instead of cloning an exact copy, we can clone a copy that contains only the minimal set of artifacts needed for prediction.
Note that this optimized clone will have very limited functionality outside of calling predict and predict_proba. For example, it will be unable to train more models.
save_path_clone_opt = save_path + '-clone-opt'
# will return the path to the cloned predictor, identical to save_path_clone_opt
path_clone_opt = predictor.clone_for_deployment(path=save_path_clone_opt)
Cloned TabularPredictor located in '/home/ci/autogluon/docs/tutorials/tabular/advanced/agModels-predictClass-deployment' to 'agModels-predictClass-deployment-clone-opt'.
To load the cloned predictor: predictor_clone = TabularPredictor.load(path="agModels-predictClass-deployment-clone-opt")
Clone: Keeping minimum set of models required to predict with best model 'WeightedEnsemble_L2'...
Deleting model KNeighborsUnif. All files under /home/ci/autogluon/docs/tutorials/tabular/advanced/agModels-predictClass-deployment-clone-opt/models/KNeighborsUnif will be removed.
Deleting model KNeighborsDist. All files under /home/ci/autogluon/docs/tutorials/tabular/advanced/agModels-predictClass-deployment-clone-opt/models/KNeighborsDist will be removed.
Deleting model LightGBMXT. All files under /home/ci/autogluon/docs/tutorials/tabular/advanced/agModels-predictClass-deployment-clone-opt/models/LightGBMXT will be removed.
Deleting model LightGBM. All files under /home/ci/autogluon/docs/tutorials/tabular/advanced/agModels-predictClass-deployment-clone-opt/models/LightGBM will be removed.
Deleting model RandomForestGini. All files under /home/ci/autogluon/docs/tutorials/tabular/advanced/agModels-predictClass-deployment-clone-opt/models/RandomForestGini will be removed.
Deleting model RandomForestEntr. All files under /home/ci/autogluon/docs/tutorials/tabular/advanced/agModels-predictClass-deployment-clone-opt/models/RandomForestEntr will be removed.
Deleting model CatBoost. All files under /home/ci/autogluon/docs/tutorials/tabular/advanced/agModels-predictClass-deployment-clone-opt/models/CatBoost will be removed.
Deleting model ExtraTreesGini. All files under /home/ci/autogluon/docs/tutorials/tabular/advanced/agModels-predictClass-deployment-clone-opt/models/ExtraTreesGini will be removed.
Deleting model ExtraTreesEntr. All files under /home/ci/autogluon/docs/tutorials/tabular/advanced/agModels-predictClass-deployment-clone-opt/models/ExtraTreesEntr will be removed.
Deleting model NeuralNetFastAI. All files under /home/ci/autogluon/docs/tutorials/tabular/advanced/agModels-predictClass-deployment-clone-opt/models/NeuralNetFastAI will be removed.
Deleting model NeuralNetTorch. All files under /home/ci/autogluon/docs/tutorials/tabular/advanced/agModels-predictClass-deployment-clone-opt/models/NeuralNetTorch will be removed.
Deleting model LightGBMLarge. All files under /home/ci/autogluon/docs/tutorials/tabular/advanced/agModels-predictClass-deployment-clone-opt/models/LightGBMLarge will be removed.
Clone: Removing artifacts unnecessary for prediction. NOTE: Clone can no longer fit new models, and most functionality except for predict and predict_proba will no longer work
predictor_clone_opt = TabularPredictor.load(path=path_clone_opt)
To avoid loading the models from disk on every prediction call, we can persist them in memory:
predictor_clone_opt.persist()
Persisting 2 models in memory. Models will require 0.0% of memory.
['WeightedEnsemble_L2', 'XGBoost']
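If you later want to free that memory (for example, between serving jobs), the persisted models can be unloaded again; a minimal sketch, assuming the unpersist() counterpart available in recent AutoGluon releases:
# Hedged sketch: release the persisted models from memory when done serving.
# predictor_clone_opt.unpersist()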
We can see that the optimized clone still makes the same predictions:
y_pred_clone_opt = predictor_clone_opt.predict(test_data)
y_pred_clone_opt
0 <=50K
1 <=50K
2 >50K
3 <=50K
4 <=50K
...
9764 <=50K
9765 <=50K
9766 <=50K
9767 <=50K
9768 <=50K
Name: class, Length: 9769, dtype: object
y_pred.equals(y_pred_clone_opt)
True
predictor_clone_opt.leaderboard(test_data)
| | model | score_test | score_val | eval_metric | pred_time_test | pred_time_val | fit_time | pred_time_test_marginal | pred_time_val_marginal | fit_time_marginal | stack_level | can_infer | fit_order |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | XGBoost | 0.840925 | 0.86 | accuracy | 0.026069 | 0.006424 | 0.251390 | 0.026069 | 0.006424 | 0.251390 | 1 | True | 1 |
1 | WeightedEnsemble_L2 | 0.840925 | 0.86 | accuracy | 0.026760 | 0.007139 | 0.328607 | 0.000691 | 0.000715 | 0.077218 | 2 | True | 2 |
We can check the disk usage of the optimized clone compared to the original:
size_original = predictor.disk_usage()
size_opt = predictor_clone_opt.disk_usage()
print(f'Size Original: {size_original} bytes')
print(f'Size Optimized: {size_opt} bytes')
print(f'Optimized predictor achieved a {round((1 - (size_opt/size_original)) * 100, 1)}% reduction in disk usage.')
Size Original: 18442881 bytes
Size Optimized: 561464 bytes
Optimized predictor achieved a 97.0% reduction in disk usage.
We can also investigate the difference in the files that exist in the original and optimized predictor.
Original:
predictor.disk_usage_per_file()
/models/ExtraTreesGini/model.pkl 5065861
/models/ExtraTreesEntr/model.pkl 5024091
/models/RandomForestGini/model.pkl 3408836
/models/RandomForestEntr/model.pkl 3267235
/models/XGBoost/xgb.ubj 524230
/models/LightGBMLarge/model.pkl 310318
/models/NeuralNetTorch/model.pkl 253878
/models/NeuralNetFastAI/model-internals.pkl 169727
/models/LightGBM/model.pkl 146394
/models/CatBoost/model.pkl 52184
/models/LightGBMXT/model.pkl 42427
/models/KNeighborsDist/model.pkl 40129
/models/KNeighborsUnif/model.pkl 40128
/utils/data/X.pkl 27583
/learner.pkl 10328
/metadata.json 9124
/utils/data/X_val.pkl 8349
/models/WeightedEnsemble_L2/model.pkl 7783
/utils/data/y.pkl 7461
/models/XGBoost/model.pkl 6114
/models/trainer.pkl 5553
/models/NeuralNetFastAI/model.pkl 2660
/utils/data/y_val.pkl 2354
/models/WeightedEnsemble_L2/utils/model_template.pkl 1226
/predictor.pkl 982
/models/WeightedEnsemble_L2/utils/oof.pkl 764
/utils/attr/LightGBM/y_pred_proba_val.pkl 550
/utils/attr/LightGBMLarge/y_pred_proba_val.pkl 550
/utils/attr/ExtraTreesEntr/y_pred_proba_val.pkl 550
/utils/attr/NeuralNetFastAI/y_pred_proba_val.pkl 550
/utils/attr/XGBoost/y_pred_proba_val.pkl 550
/utils/attr/NeuralNetTorch/y_pred_proba_val.pkl 550
/utils/attr/KNeighborsUnif/y_pred_proba_val.pkl 550
/utils/attr/LightGBMXT/y_pred_proba_val.pkl 550
/utils/attr/KNeighborsDist/y_pred_proba_val.pkl 550
/utils/attr/RandomForestGini/y_pred_proba_val.pkl 550
/utils/attr/CatBoost/y_pred_proba_val.pkl 550
/utils/attr/RandomForestEntr/y_pred_proba_val.pkl 550
/utils/attr/ExtraTreesGini/y_pred_proba_val.pkl 550
/version.txt 12
Name: size, dtype: int64
Optimized:
predictor_clone_opt.disk_usage_per_file()
/models/XGBoost/xgb.ubj 524230
/learner.pkl 10328
/metadata.json 9124
/models/WeightedEnsemble_L2/model.pkl 7842
/models/XGBoost/model.pkl 6135
/models/trainer.pkl 2811
/predictor.pkl 982
/version.txt 12
Name: size, dtype: int64
Compile models for maximized inference speed¶
To further improve inference efficiency, we can call .compile() to automatically convert sklearn function calls into their ONNX equivalents.
Note that this is currently an experimental feature, which only improves RandomForest and TabularNeuralNetwork models.
The compilation and inference speed acceleration require installation of the skl2onnx and onnxruntime packages.
To install supported versions of these packages automatically, we can call pip install autogluon.tabular[skl2onnx] on top of an existing AutoGluon installation, or pip install autogluon.tabular[all,skl2onnx] on a new AutoGluon installation.
It is important to make sure the predictor is cloned first, because once the models are compiled, the predictor will no longer support fitting.
predictor_clone_opt.compile()
Compiling 2 Models ...
Skipping compilation for WeightedEnsemble_L2 ... (No config specified)
Skipping compilation for XGBoost ... (No config specified)
Finished compiling models, total runtime = 0s.
With the compiled predictor, the prediction results might not be exactly the same but should be very close.
y_pred_compile_opt = predictor_clone_opt.predict(test_data)
y_pred_compile_opt
0 <=50K
1 <=50K
2 >50K
3 <=50K
4 <=50K
...
9764 <=50K
9765 <=50K
9766 <=50K
9767 <=50K
9768 <=50K
Name: class, Length: 9769, dtype: object
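As a sanity check, we can verify that the compiled predictor's outputs match the earlier results; a hedged sketch comparing class predictions and predicted probabilities against the original (uncompiled) predictor:
import numpy as np

# Class predictions are typically identical; probabilities should differ at most slightly.
print(y_pred.equals(y_pred_compile_opt))
proba_compiled = predictor_clone_opt.predict_proba(test_data)   # compiled, optimized clone
proba_original = predictor.predict_proba(test_data)             # original predictor
print(np.abs(proba_compiled.values - proba_original.values).max())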
Now all that is left is to upload the optimized predictor to a centralized storage location such as S3. To use this predictor in a new machine / system, simply download the artifact to local disk and load the predictor. Ensure that when loading a predictor you use the same Python version and AutoGluon version used during training to avoid instability.
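For example, uploading the optimized artifact to S3 and loading it on another machine might look roughly like this (a hedged sketch; the bucket name and key prefix are placeholders, and boto3 must be installed and configured with credentials):
import os
import boto3

bucket = 'my-example-bucket'                           # placeholder bucket name
prefix = 'agModels-predictClass-deployment-clone-opt'  # placeholder key prefix in S3

s3 = boto3.client('s3')
# Upload every file in the optimized predictor directory, preserving relative paths.
for root, _, files in os.walk(path_clone_opt):
    for file in files:
        local_path = os.path.join(root, file)
        key = f'{prefix}/{os.path.relpath(local_path, path_clone_opt)}'
        s3.upload_file(local_path, bucket, key)

# On the deployment machine (same Python and AutoGluon versions), after downloading
# the directory from S3 back to local disk:
# predictor_deployed = TabularPredictor.load('agModels-predictClass-deployment-clone-opt')
# predictor_deployed.persist()  # optional: keep models in memory for low-latency serving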