{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "487344de", "metadata": {}, "source": [ "# AutoGluon Tabular - In Depth\n", "\n", "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/autogluon/autogluon/blob/stable/docs/tutorials/tabular/tabular-indepth.ipynb)\n", "[![Open In SageMaker Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/autogluon/autogluon/blob/stable/docs/tutorials/tabular/tabular-indepth.ipynb)\n", "\n", "\n", "**Tip**: If you are new to AutoGluon, review [Predicting Columns in a Table - Quick Start](tabular-quick-start.ipynb) to learn the basics of the AutoGluon API. To learn how to add your own custom models to the set that AutoGluon trains, tunes, and ensembles, review [Adding a custom model to AutoGluon](advanced/tabular-custom-model.ipynb).\n", "\n", "This tutorial describes how you can exert greater control when using AutoGluon's `fit()` or `predict()`. Recall that to maximize predictive performance, you should first try `TabularPredictor()` and `fit()` with all default arguments. Then, consider non-default arguments for `TabularPredictor(eval_metric=...)`, and `fit(presets=...)`. Later, you can experiment with other arguments to fit() covered in this in-depth tutorial like `hyperparameter_tune_kwargs`, `hyperparameters`, `num_stack_levels`, `num_bag_folds`, `num_bag_sets`, etc.\n", "\n", "Using the same census data table as in the [Predicting Columns in a Table - Quick Start](tabular-quick-start.ipynb) tutorial, we'll now predict the `occupation` of an individual - a multiclass classification problem. Start by importing AutoGluon's TabularPredictor and TabularDataset, and loading the data." ] }, { "cell_type": "code", "execution_count": null, "id": "aa00faab-252f-44c9-b8f7-57131aa8251c", "metadata": { "tags": [ "remove-cell" ] }, "outputs": [], "source": [ "!pip install autogluon.tabular[all]\n" ] }, { "cell_type": "code", "execution_count": null, "id": "fae7a5f3", "metadata": {}, "outputs": [], "source": [ "from autogluon.tabular import TabularDataset, TabularPredictor\n", "\n", "import numpy as np\n", "\n", "train_data = TabularDataset('https://autogluon.s3.amazonaws.com/datasets/Inc/train.csv')\n", "subsample_size = 1000 # subsample subset of data for faster demo, try setting this to much larger values\n", "train_data = train_data.sample(n=subsample_size, random_state=0)\n", "print(train_data.head())\n", "\n", "label = 'occupation'\n", "print(\"Summary of occupation column: \\n\", train_data['occupation'].describe())\n", "\n", "test_data = TabularDataset('https://autogluon.s3.amazonaws.com/datasets/Inc/test.csv')\n", "y_test = test_data[label]\n", "test_data_nolabel = test_data.drop(columns=[label]) # delete label column\n", "\n", "metric = 'accuracy' # we specify eval-metric just for demo (unnecessary as it's the default)" ] }, { "cell_type": "markdown", "source": [ "## Specifying hyperparameters and tuning them\n", "\n", "**Note: We don't recommend doing hyperparameter-tuning with AutoGluon in most cases**. AutoGluon achieves its best performance without hyperparameter tuning and simply specifying `presets=\"best_quality\"`.\n", "\n", "We first demonstrate hyperparameter-tuning and how you can provide your own validation dataset that AutoGluon internally relies on to: tune hyperparameters, early-stop iterative training, and construct model ensembles. 
One reason you may specify validation data is when future test data will stem from a different distribution than training data (and your specified validation data is more representative of the future data that will likely be encountered).\n", "\n", " If you don't have a strong reason to provide your own validation dataset, we recommend you omit the `tuning_data` argument. This lets AutoGluon automatically select validation data from your provided training set (it uses smart strategies such as stratified sampling). For greater control, you can specify the `holdout_frac` argument to tell AutoGluon what fraction of the provided training data to hold out for validation.\n", "\n", "**Caution:** Since AutoGluon tunes internal knobs based on this validation data, performance estimates reported on this data may be over-optimistic. For unbiased performance estimates, you should always call `predict()` on a separate dataset (that was never passed to `fit()`), as we did in the previous **Quick-Start** tutorial. We also emphasize that most options specified in this tutorial are chosen to minimize runtime for the purposes of demonstration and you should select more reasonable values in order to obtain high-quality models.\n", "\n", "`fit()` trains neural networks and various types of tree ensembles by default. You can specify various hyperparameter values for each type of model. For each hyperparameter, you can either specify a single fixed value, or a search space of values to consider during hyperparameter optimization. Hyperparameters which you do not specify are left at default settings chosen automatically by AutoGluon, which may be fixed values or search spaces.\n", "\n", "Refer to the [Search Space documentation](../../api/autogluon.common.space.rst) to learn more about AutoGluon search space." 
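, "\n", "\n", "If you do choose to supply your own validation data as discussed above, a minimal sketch looks like the following (the `my_train_data`/`my_val_data` names are hypothetical - any held-out DataFrame with the same columns as `train_data` works):\n", "\n", "```\n", "from sklearn.model_selection import train_test_split\n", "\n", "# Hold out 20% of the rows to serve as validation data (hypothetical split)\n", "my_train_data, my_val_data = train_test_split(train_data, test_size=0.2, random_state=0)\n", "\n", "predictor_custom_val = TabularPredictor(label=label, eval_metric=metric).fit(\n", "    my_train_data,\n", "    tuning_data=my_val_data,  # used for validation instead of an internal split\n", ")\n", "\n", "# Or let AutoGluon split the data itself, but control the held-out fraction:\n", "# predictor_holdout = TabularPredictor(label=label, eval_metric=metric).fit(train_data, holdout_frac=0.2)\n", "```"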
], "metadata": { "collapsed": false }, "id": "98733672" }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "from autogluon.common import space\n", "\n", "nn_options = { # specifies non-default hyperparameter values for neural network models\n", " 'num_epochs': 10, # number of training epochs (controls training time of NN models)\n", " 'learning_rate': space.Real(1e-4, 1e-2, default=5e-4, log=True), # learning rate used in training (real-valued hyperparameter searched on log-scale)\n", " 'activation': space.Categorical('relu', 'softrelu', 'tanh'), # activation function used in NN (categorical hyperparameter, default = first entry)\n", " 'dropout_prob': space.Real(0.0, 0.5, default=0.1), # dropout probability (real-valued hyperparameter)\n", "}\n", "\n", "gbm_options = { # specifies non-default hyperparameter values for lightGBM gradient boosted trees\n", " 'num_boost_round': 100, # number of boosting rounds (controls training time of GBM models)\n", " 'num_leaves': space.Int(lower=26, upper=66, default=36), # number of leaves in trees (integer hyperparameter)\n", "}\n", "\n", "hyperparameters = { # hyperparameters of each model type\n", " 'GBM': gbm_options,\n", " 'NN_TORCH': nn_options, # NOTE: comment this line out if you get errors on Mac OSX\n", " } # When these keys are missing from hyperparameters dict, no models of that type are trained\n", "\n", "time_limit = 2*60 # train various models for ~2 min\n", "num_trials = 5 # try at most 5 different hyperparameter configurations for each type of model\n", "search_strategy = 'auto' # to tune hyperparameters using random search routine with a local scheduler\n", "\n", "hyperparameter_tune_kwargs = { # HPO is not performed unless hyperparameter_tune_kwargs is specified\n", " 'num_trials': num_trials,\n", " 'scheduler' : 'local',\n", " 'searcher': search_strategy,\n", "} # Refer to TabularPredictor.fit docstring for all valid values\n", "\n", "predictor = TabularPredictor(label=label, eval_metric=metric).fit(\n", " train_data,\n", " time_limit=time_limit,\n", " hyperparameters=hyperparameters,\n", " hyperparameter_tune_kwargs=hyperparameter_tune_kwargs,\n", ")" ], "metadata": { "collapsed": false }, "id": "87f28cf4" }, { "cell_type": "markdown", "source": [ "We again demonstrate how to use the trained models to predict on the test data." ], "metadata": { "collapsed": false }, "id": "816e4beb" }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "y_pred = predictor.predict(test_data_nolabel)\n", "print(\"Predictions: \", list(y_pred)[:5])\n", "perf = predictor.evaluate(test_data, auxiliary_metrics=False)" ], "metadata": { "collapsed": false }, "id": "3bf2965a" }, { "cell_type": "markdown", "source": [ "Use the following to view a summary of what happened during `fit()`. Now this command will show details of the hyperparameter-tuning process for each type of model:" ], "metadata": { "collapsed": false }, "id": "5c2f4648" }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "results = predictor.fit_summary()" ], "metadata": { "collapsed": false }, "id": "1bfc4fe3" }, { "cell_type": "markdown", "source": [ "In the above example, the predictive performance may be poor because we specified very little training to ensure quick runtimes. You can call `fit()` multiple times while modifying the above settings to better understand how these choices affect performance outcomes. 
For example, you can comment out the `train_data.sample` command or increase `subsample_size` to train using a larger dataset, increase the `num_epochs` and `num_boost_round` hyperparameters, and increase the `time_limit` (which you should do for all code in these tutorials). To see more detailed output during the execution of `fit()`, you can also pass in the argument: `verbosity = 3`." ], "metadata": { "collapsed": false }, "id": "1d06b7ab" }, { "cell_type": "markdown", "source": [ "## Model ensembling with stacking/bagging\n", "\n", "Beyond hyperparameter-tuning with a correctly-specified evaluation metric, two other methods to boost predictive performance are [bagging and stack-ensembling](https://arxiv.org/abs/2003.06505). You'll often see performance improve if you specify `num_bag_folds` = 5-10, `num_stack_levels` = 1 in the call to `fit()`, but this will increase training times and memory/disk usage." ], "metadata": { "collapsed": false }, "id": "cc894bfde6cbc5f1" }, { "cell_type": "code", "execution_count": null, "id": "d821c4af", "metadata": {}, "outputs": [], "source": [ "label = 'class' # Now let's predict the \"class\" column (binary classification)\n", "test_data_nolabel = test_data.drop(columns=[label])\n", "y_test = test_data[label]\n", "save_path = 'agModels-predictClass' # folder to store trained models\n", "\n", "predictor = TabularPredictor(label=label, eval_metric=metric).fit(train_data,\n", " num_bag_folds=5, num_bag_sets=1, num_stack_levels=1,\n", " hyperparameters = {'NN_TORCH': {'num_epochs': 2}, 'GBM': {'num_boost_round': 20}}, # last argument is just for quick demo here, omit it in real applications\n", ")" ] }, { "cell_type": "markdown", "id": "38f61e8d", "metadata": {}, "source": "You should not provide `tuning_data` when stacking/bagging, and instead provide all your available data as `train_data` (which AutoGluon will split in more intelligent ways). `num_bag_sets` controls how many times the k-fold bagging process is repeated to further reduce variance (increasing this may further boost accuracy but will substantially increase training times, inference latency, and memory/disk usage). 
Rather than manually searching for good bagging/stacking values yourself, AutoGluon will automatically select good values for you if you specify `auto_stack` instead (which is used in the `best_quality` preset):" }, { "cell_type": "code", "execution_count": null, "id": "e1cf666c", "metadata": {}, "outputs": [], "source": [ "# Lets also specify the \"balanced_accuracy\" metric\n", "predictor = TabularPredictor(label=label, eval_metric='balanced_accuracy', path=save_path).fit(\n", " train_data, auto_stack=True,\n", " calibrate_decision_threshold=False, # Disabling for demonstration in next section\n", " hyperparameters={'FASTAI': {'num_epochs': 10}, 'GBM': {'num_boost_round': 200}} # last 2 arguments are for quick demo, omit them in real applications\n", ")\n", "predictor.leaderboard(test_data)" ] }, { "cell_type": "markdown", "source": [ "Often stacking/bagging will produce superior accuracy than hyperparameter-tuning, but you may try combining both techniques (note: specifying `presets='best_quality'` in `fit()` simply sets `auto_stack=True`).\n", "\n", "## Decision Threshold Calibration\n", "\n", "Major metric score improvements can be achieved in binary classification for metrics such as `\"f1\"` and `\"balanced_accuracy\"` by adjusting the prediction decision threshold via `calibrate_decision_threshold` to a value other than the default 0.5.\n", "\n", "Below is an example of the `\"balanced_accuracy\"` score achieved on the test data with and without calibrating the decision threshold:" ], "metadata": { "collapsed": false }, "id": "209aed94e755e675" }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "print(f'Prior to calibration (predictor.decision_threshold={predictor.decision_threshold}):')\n", "scores = predictor.evaluate(test_data)\n", "\n", "calibrated_decision_threshold = predictor.calibrate_decision_threshold()\n", "predictor.set_decision_threshold(calibrated_decision_threshold)\n", "\n", "print(f'After calibration (predictor.decision_threshold={predictor.decision_threshold}):')\n", "scores_calibrated = predictor.evaluate(test_data)" ], "metadata": { "collapsed": false }, "id": "10e6f148501e94c4" }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "for metric_name in scores:\n", " metric_score = scores[metric_name]\n", " metric_score_calibrated = scores_calibrated[metric_name]\n", " decision_threshold = predictor.decision_threshold\n", " print(f'decision_threshold={decision_threshold:.3f}\\t| metric=\"{metric_name}\"'\n", " f'\\n\\ttest_score uncalibrated: {metric_score:.4f}'\n", " f'\\n\\ttest_score calibrated: {metric_score_calibrated:.4f}'\n", " f'\\n\\ttest_score delta: {metric_score_calibrated-metric_score:.4f}')" ], "metadata": { "collapsed": false }, "id": "f18f7817111c6477" }, { "cell_type": "markdown", "source": [ "Notice that calibrating for \"balanced_accuracy\" majorly improved the \"balanced_accuracy\" metric score, but it harmed the \"accuracy\" score. 
Threshold calibration will often result in a tradeoff between performance on different metrics, and the user should keep this in mind.\n", "\n", "Instead of calibrating for \"balanced_accuracy\" specifically, we can calibrate for any metric if we want to maximize the score of that metric:" ], "metadata": { "collapsed": false }, "id": "9b689133a34fe3d9" }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "predictor.set_decision_threshold(0.5) # Reset decision threshold\n", "for metric_name in ['f1', 'balanced_accuracy', 'mcc']:\n", " metric_score = predictor.evaluate(test_data, silent=True)[metric_name]\n", " calibrated_decision_threshold = predictor.calibrate_decision_threshold(metric=metric_name, verbose=False)\n", " metric_score_calibrated = predictor.evaluate(\n", " test_data, decision_threshold=calibrated_decision_threshold, silent=True\n", " )[metric_name]\n", " print(f'decision_threshold={calibrated_decision_threshold:.3f}\\t| metric=\"{metric_name}\"'\n", " f'\\n\\ttest_score uncalibrated: {metric_score:.4f}'\n", " f'\\n\\ttest_score calibrated: {metric_score_calibrated:.4f}'\n", " f'\\n\\ttest_score delta: {metric_score_calibrated-metric_score:.4f}')" ], "metadata": { "collapsed": false }, "id": "251a1bf30667c186" }, { "cell_type": "markdown", "source": [ "Instead of calibrating the decision threshold post-fit, you can have it automatically occur during the fit call by specifying the fit parameter `predictor.fit(..., calibrate_decision_threshold=True)`.\n", "\n", "Luckily, AutoGluon will automatically apply decision threshold calibration when beneficial, as the default value is `calibrate_decision_threshold=\"auto\"`. We recommend keeping this value as the default in most cases.\n", "\n", "Additional usage examples are below:" ], "metadata": { "collapsed": false }, "id": "b6bbe09b403c54b9" }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "# Will use the decision_threshold specified in `predictor.decision_threshold`, can be set via `predictor.set_decision_threshold`\n", "# y_pred = predictor.predict(test_data)\n", "# y_pred_08 = predictor.predict(test_data, decision_threshold=0.8) # Specify a specific threshold to use only for this predict\n", "\n", "# y_pred_proba = predictor.predict_proba(test_data)\n", "# y_pred = predictor.predict_from_proba(y_pred_proba) # Identical output to calling .predict(test_data)\n", "# y_pred_08 = predictor.predict_from_proba(y_pred_proba, decision_threshold=0.8) # Identical output to calling .predict(test_data, decision_threshold=0.8)" ], "metadata": { "collapsed": false }, "id": "6e24e98242a6398a" }, { "cell_type": "markdown", "id": "be2b3534", "metadata": {}, "source": [ "## Prediction options (inference)\n", "\n", "Even if you've started a new Python session since last calling `fit()`, you can still load a previously trained predictor from disk:" ] }, { "cell_type": "code", "execution_count": null, "id": "67cc8c19", "metadata": {}, "outputs": [], "source": [ "predictor = TabularPredictor.load(save_path) # `predictor.path` is another way to get the relative path needed to later load predictor." ] }, { "cell_type": "markdown", "id": "13759fc5", "metadata": {}, "source": [ "Above `save_path` is the same folder previously passed to `TabularPredictor`, in which all the trained models have been saved. You can easily train models on one machine and deploy them on another. 
Simply copy the `save_path` folder to the new machine and specify its new path in `TabularPredictor.load()`.\n", "\n", "To find out the required feature columns to make predictions, call `predictor.features()`:" ] }, { "cell_type": "code", "execution_count": null, "id": "dc1d292a", "metadata": {}, "outputs": [], "source": [ "predictor.features()" ] }, { "cell_type": "markdown", "id": "91285570", "metadata": {}, "source": [ "We can make a prediction on an individual example rather than a full dataset:" ] }, { "cell_type": "code", "execution_count": null, "id": "2b5df1ea", "metadata": {}, "outputs": [], "source": [ "datapoint = test_data_nolabel.iloc[[0]] # Note: .iloc[0] won't work because it returns pandas Series instead of DataFrame\n", "print(datapoint)\n", "predictor.predict(datapoint)" ] }, { "cell_type": "markdown", "id": "7f5e5d4c", "metadata": {}, "source": [ "To output predicted class probabilities instead of predicted classes, you can use:" ] }, { "cell_type": "code", "execution_count": null, "id": "a9c88edf", "metadata": {}, "outputs": [], "source": [ "predictor.predict_proba(datapoint) # returns a DataFrame that shows which probability corresponds to which class" ] }, { "cell_type": "markdown", "id": "e118c312", "metadata": {}, "source": [ "By default, `predict()` and `predict_proba()` will utilize the model that AutoGluon thinks is most accurate, which is usually an ensemble of many individual models. Here's how to see which model this is:" ] }, { "cell_type": "code", "execution_count": null, "id": "357da7e2", "metadata": {}, "outputs": [], "source": [ "predictor.model_best" ] }, { "cell_type": "markdown", "id": "06f47f3a", "metadata": {}, "source": [ "We can instead specify a particular model to use for predictions (e.g. to reduce inference latency). Note that a 'model' in AutoGluon may refer to, for example, a single Neural Network, a bagged ensemble of many Neural Network copies trained on different training/validation splits, a weighted ensemble that aggregates the predictions of many other models, or a stacker model that operates on predictions output by other models. This is akin to viewing a Random Forest as one 'model' when it is in fact an ensemble of many decision trees.\n", "\n", "\n", "Before deciding which model to use, let's evaluate all of the models AutoGluon has previously trained on our test data:" ] }, { "cell_type": "code", "execution_count": null, "id": "d5f02254", "metadata": {}, "outputs": [], "source": [ "predictor.leaderboard(test_data)" ] }, { "cell_type": "markdown", "id": "11084a6a", "metadata": {}, "source": [ "The leaderboard shows each model's predictive performance on the test data (`score_test`) and validation data (`score_val`), as well as the time required to: produce predictions for the test data (`pred_time_test`), produce predictions on the validation data (`pred_time_val`), and train only this model (`fit_time`). 
Below, we show that a leaderboard can be produced without new data (just uses the data previously reserved for validation inside `fit`) and can display extra information about each model:" ] }, { "cell_type": "code", "execution_count": null, "id": "2cd4b79f", "metadata": {}, "outputs": [], "source": [ "predictor.leaderboard(extra_info=True)" ] }, { "cell_type": "markdown", "id": "13dff991", "metadata": {}, "source": [ "The expanded leaderboard shows properties like how many features are used by each model (`num_features`), which other models are ancestors whose predictions are required inputs for each model (`ancestors`), and how much memory each model and all its ancestors would occupy if simultaneously persisted (`memory_size_w_ancestors`). See the [leaderboard documentation](../../api/autogluon.tabular.TabularPredictor.leaderboard.rst) for full details.\n", "\n", "To show scores for other metrics, you can specify the `extra_metrics` argument when passing in `test_data`:" ] }, { "cell_type": "code", "execution_count": null, "id": "dc39b3b1", "metadata": {}, "outputs": [], "source": [ "predictor.leaderboard(test_data, extra_metrics=['accuracy', 'balanced_accuracy', 'log_loss'])" ] }, { "cell_type": "markdown", "id": "b01083ae", "metadata": {}, "source": [ "Notice that `log_loss` scores are negative.\n", "This is because metrics in AutoGluon are always shown in `higher_is_better` form.\n", "This means that metrics such as `log_loss` and `root_mean_squared_error` will have their signs FLIPPED, and values will be negative.\n", "This is necessary to avoid the user needing to know the metric to understand if higher is better when looking at leaderboard.\n", "\n", "One additional caveat: It is possible that `log_loss` values can be `-inf` when computed via `extra_metrics`.\n", "This is because the models were not optimized with `log_loss` in mind during training and\n", "may have prediction probabilities giving a class `0` (particularly common with K-Nearest-Neighbors models).\n", "Because `log_loss` gives infinite error when the correct class was given `0` probability, this results in a score of `-inf`.\n", "It is therefore recommended that `log_loss` should not be used as a secondary metric to determine model quality.\n", "Either use `log_loss` as the `eval_metric` or avoid it altogether.\n", "\n", "Here's how to specify a particular model to use for prediction instead of AutoGluon's default model-choice:" ] }, { "cell_type": "code", "execution_count": null, "id": "1f938d89", "metadata": {}, "outputs": [], "source": [ "i = 0 # index of model to use\n", "model_to_use = predictor.model_names()[i]\n", "model_pred = predictor.predict(datapoint, model=model_to_use)\n", "print(\"Prediction from %s model: %s\" % (model_to_use, model_pred.iloc[0]))" ] }, { "cell_type": "markdown", "id": "775cb4a5", "metadata": {}, "source": [ "We can easily access various information about the trained predictor or a particular model:" ] }, { "cell_type": "code", "execution_count": null, "id": "5cd13fed", "metadata": {}, "outputs": [], "source": [ "all_models = predictor.model_names()\n", "model_to_use = all_models[i]\n", "specific_model = predictor._trainer.load_model(model_to_use)\n", "\n", "# Objects defined below are dicts of various information (not printed here as they are quite large):\n", "model_info = specific_model.get_info()\n", "predictor_information = predictor.info()" ] }, { "cell_type": "markdown", "id": "640a8c38", "metadata": {}, "source": [ "The `predictor` also remembers what metric predictions 
should be evaluated with, which can be done with ground truth labels as follows:" ] }, { "cell_type": "code", "execution_count": null, "id": "39b53a65", "metadata": {}, "outputs": [], "source": [ "y_pred_proba = predictor.predict_proba(test_data_nolabel)\n", "perf = predictor.evaluate_predictions(y_true=y_test, y_pred=y_pred_proba)" ] }, { "cell_type": "markdown", "id": "f2932b36", "metadata": {}, "source": [ "Since the label column remains in the `test_data` DataFrame, we can instead use the shorthand:" ] }, { "cell_type": "code", "execution_count": null, "id": "5494aae6", "metadata": {}, "outputs": [], "source": [ "perf = predictor.evaluate(test_data)" ] }, { "cell_type": "markdown", "id": "02223a4e", "metadata": {}, "source": [ "## Interpretability (feature importance)\n", "\n", "To better understand our trained predictor, we can estimate the overall importance of each feature:" ] }, { "cell_type": "code", "execution_count": null, "id": "82ffd4bd", "metadata": {}, "outputs": [], "source": [ "predictor.feature_importance(test_data)" ] }, { "cell_type": "markdown", "id": "954655e5", "metadata": {}, "source": [ "Computed via [permutation-shuffling](https://explained.ai/rf-importance/), these feature importance scores quantify the drop in predictive performance (of the already trained predictor) when one column's values are randomly shuffled across rows. The top features in this list contribute most to AutoGluon's accuracy (for predicting the `class` label of each individual). Features with non-positive importance score hardly contribute to the predictor's accuracy, or may even be actively harmful to include in the data (consider removing these features from your data and calling `fit` again). These scores facilitate interpretability of the predictor's global behavior (which features it relies on for *all* predictions).\n", "To get [local explanations](https://christophm.github.io/interpretable-ml-book/taxonomy-of-interpretability-methods.html) regarding which features influence a *particular* prediction, check out the [example notebooks](https://github.com/autogluon/autogluon/tree/master/examples/tabular/interpret) for explaining particular AutoGluon predictions using [Shapley values](https://github.com/slundberg/shap/).\n", "\n", "Before making a judgement on whether AutoGluon is more or less interpretable than another solution, we recommend reading [The Mythos of Model Interpretability](https://dl.acm.org/doi/pdf/10.1145/3236386.3241340) by Zachary Lipton, which covers why often-claimed interpretable models such as trees and linear models are rarely meaningfully more interpretable than more advanced models.\n", "\n", "## Accelerating inference\n", "\n", "We describe multiple ways to reduce the time it takes for AutoGluon to produce predictions.\n", "\n", "Before providing code examples, it is important to understand that\n", "there are several ways to accelerate inference in AutoGluon. 
The table below lists the options in order of priority.\n", "\n", "| Optimization | Inference Speedup | Cost | Notes |\n", "|:----------------------------------|:------------------------------------------------------------------------------------------------------|:----------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n", "| refit_full | At least 8x+, up to 160x (requires bagging) | -Quality, +FitTime | Only provides speedup with bagging enabled. |\n", "| persist | Up to 10x in online-inference | ++MemoryUsage | If memory is not sufficient to persist model, speedup is not gained. Speedup is most effective in online-inference and is not relevant in batch inference. |\n", "| infer_limit | Configurable, ~up to 50x | -Quality (Relative to speedup) | If bagging is enabled, always use refit_full if using infer_limit. |\n", "| distill | ~Equals combined speedup of refit_full and infer_limit set to extreme values | --Quality, ++FitTime | Not compatible with refit_full and infer_limit. |\n", "| feature pruning | Typically at most 1.5x. More if willing to lower quality significantly. | -Quality?, ++FitTime | Dependent on the existence of unimportant features in data. Call `predictor.feature_importance(test_data)` to gauge which features could be removed. |\n", "| use faster hardware | Usually at most 3x. Depends on hardware (ignoring GPU). | +Hardware | As an example, an EC2 c6i.2xlarge is ~1.6x faster than an m5.2xlarge for a similar price. Laptops in particular might be slow compared to cloud instances. |\n", "| manual hyperparameters adjustment | Usually at most 2x assuming infer_limit is already specified. | ---Quality?, +++UserMLExpertise | Can be very complicated and is not recommended. Potential ways to get speedups this way is to reduce the number of trees in LightGBM, XGBoost, CatBoost, RandomForest, and ExtraTrees. |\n", "| manual data preprocessing | Usually at most 1.2x assuming all other optimizations are specified and setting is online-inference. | ++++UserMLExpertise, ++++UserCode | Only relevant for online-inference. This is not recommended as AutoGluon's default preprocessing is highly optimized. |\n", "\n", "If bagging is enabled (num_bag_folds>0 or num_stack_levels>0 or using 'best_quality' preset), the order of inference optimizations should be: \n", "1. refit_full \n", "2. persist \n", "3. infer_limit \n", "\n", "If bagging is not enabled (num_bag_folds=0, num_stack_levels=0), the order of inference optimizations should be: \n", "1. persist \n", "2. infer_limit \n", "\n", "If following these recommendations does not lead to a sufficiently fast model, you may consider the more advanced options in the table.\n", "\n", "### Keeping models in memory\n", "\n", "By default, AutoGluon loads models into memory one at a time and only when they are needed for prediction. This strategy is robust for large stacked/bagged ensembles, but leads to slower prediction times. If you plan to repeatedly make predictions (e.g. 
on new datapoints one at a time rather than one large test dataset), you can first specify that all models required for inference should be loaded into memory as follows:" ] }, { "cell_type": "code", "execution_count": null, "id": "e2eced22", "metadata": {}, "outputs": [], "source": [ "predictor.persist()\n", "\n", "num_test = 20\n", "preds = np.array(['']*num_test, dtype='object')\n", "for i in range(num_test):\n", " datapoint = test_data_nolabel.iloc[[i]]\n", " pred_numpy = predictor.predict(datapoint, as_pandas=False)\n", " preds[i] = pred_numpy[0]\n", "\n", "perf = predictor.evaluate_predictions(y_test[:num_test], preds, auxiliary_metrics=True)\n", "print(\"Predictions: \", preds)\n", "\n", "predictor.unpersist() # free memory by clearing models, future predict() calls will load models from disk" ] }, { "cell_type": "markdown", "id": "ec079f09", "metadata": {}, "source": [ "You can alternatively specify a particular model to persist via the `models` argument of `persist()`, or simply set `models='all'` to simultaneously load every single model that was trained during `fit`.\n", "\n", "### Inference speed as a fit constraint\n", "\n", "If you know your latency constraint prior to fitting the predictor, you can specify it explicitly as a fit argument.\n", "AutoGluon will then automatically train models in a fashion that attempts to satisfy the constraint.\n", "\n", "This constraint has two components: `infer_limit` and `infer_limit_batch_size`: \n", "- `infer_limit` is the time in seconds to predict 1 row of data.\n", "For example, `infer_limit=0.05` means 50 ms per row of data,\n", "or 20 rows / second throughput. \n", "- `infer_limit_batch_size` is the amount of rows passed at once to predict when calculating per-row speed.\n", "This is very important because `infer_limit_batch_size=1` (online-inference) is highly suboptimal as\n", "various operations have a fixed cost overhead regardless of data size. If you can pass your test data in bulk,\n", "you should specify `infer_limit_batch_size=10000`." 
] }, { "cell_type": "code", "execution_count": null, "id": "c845d3cd", "metadata": {}, "outputs": [], "source": [ "# At most 0.05 ms per row (20000 rows per second throughput)\n", "infer_limit = 0.00005\n", "# adhere to infer_limit with batches of size 10000 (batch-inference, easier to satisfy infer_limit)\n", "infer_limit_batch_size = 10000\n", "# adhere to infer_limit with batches of size 1 (online-inference, much harder to satisfy infer_limit)\n", "# infer_limit_batch_size = 1 # Note that infer_limit<0.02 when infer_limit_batch_size=1 can be difficult to satisfy.\n", "predictor_infer_limit = TabularPredictor(label=label, eval_metric=metric).fit(\n", " train_data=train_data,\n", " time_limit=30,\n", " infer_limit=infer_limit,\n", " infer_limit_batch_size=infer_limit_batch_size,\n", ")\n", "\n", "# NOTE: If bagging was enabled, it is important to call refit_full at this stage.\n", "# infer_limit assumes that the user will call refit_full after fit.\n", "# predictor_infer_limit.refit_full()\n", "\n", "# NOTE: To align with inference speed calculated during fit, models must be persisted.\n", "predictor_infer_limit.persist()\n", "# Below is an optimized version that only persists the minimum required models for prediction.\n", "# predictor_infer_limit.persist('best')\n", "\n", "predictor_infer_limit.leaderboard()" ] }, { "cell_type": "markdown", "id": "a69ab0da", "metadata": {}, "source": [ "Now we can test the inference speed of the final model and check if it satisfies the inference constraints." ] }, { "cell_type": "code", "execution_count": null, "id": "0a668eac", "metadata": {}, "outputs": [], "source": [ "test_data_batch = test_data.sample(infer_limit_batch_size, replace=True, ignore_index=True)\n", "\n", "import time\n", "time_start = time.time()\n", "predictor_infer_limit.predict(test_data_batch)\n", "time_end = time.time()\n", "\n", "infer_time_per_row = (time_end - time_start) / len(test_data_batch)\n", "rows_per_second = 1 / infer_time_per_row\n", "infer_time_per_row_ratio = infer_time_per_row / infer_limit\n", "is_constraint_satisfied = infer_time_per_row_ratio <= 1\n", "\n", "print(f'Model is able to predict {round(rows_per_second, 1)} rows per second. (User-specified Throughput = {1 / infer_limit})')\n", "print(f'Model uses {round(infer_time_per_row_ratio * 100, 1)}% of infer_limit time per row.')\n", "print(f'Model satisfies inference constraint: {is_constraint_satisfied}')" ] }, { "cell_type": "markdown", "id": "9d988fa3", "metadata": {}, "source": [ "### Using smaller ensemble or faster model for prediction\n", "\n", "Without having to retrain any models, one can construct alternative ensembles that aggregate individual models' predictions with different weighting schemes. These ensembles become smaller (and hence faster for prediction) if they assign nonzero weight to less models. You can produce a wide variety of ensembles with different accuracy-speed tradeoffs like this:" ] }, { "cell_type": "code", "execution_count": null, "id": "1dcda6fd", "metadata": {}, "outputs": [], "source": [ "additional_ensembles = predictor.fit_weighted_ensemble(expand_pareto_frontier=True)\n", "print(\"Alternative ensembles you can use for prediction:\", additional_ensembles)\n", "\n", "predictor.leaderboard(only_pareto_frontier=True)" ] }, { "cell_type": "markdown", "id": "edd8fe52", "metadata": {}, "source": [ "The resulting leaderboard will contain the most accurate model for a given inference-latency. 
You can select whichever model exhibits acceptable latency from the leaderboard and use it for prediction." ] }, { "cell_type": "code", "execution_count": null, "id": "a757a79b", "metadata": {}, "outputs": [], "source": [ "model_for_prediction = additional_ensembles[0]\n", "predictions = predictor.predict(test_data, model=model_for_prediction)\n", "predictor.delete_models(models_to_delete=additional_ensembles, dry_run=False) # delete these extra models so they don't affect rest of tutorial" ] }, { "cell_type": "markdown", "id": "9158cc13", "metadata": {}, "source": [ "### Collapsing bagged ensembles via refit_full\n", "\n", "For an ensemble predictor trained with bagging (as done above), recall there are ~10 bagged copies of each individual model trained on different train/validation folds. We can collapse this bag of ~10 models into a single model that's fit to the full dataset, which can greatly reduce its memory/latency requirements (but may also reduce accuracy). Below we refit such a model for each original model but you can alternatively do this for just a particular model by specifying the `model` argument of `refit_full()`." ] }, { "cell_type": "code", "execution_count": null, "id": "fd8ea890", "metadata": {}, "outputs": [], "source": [ "refit_model_map = predictor.refit_full()\n", "print(\"Name of each refit-full model corresponding to a previous bagged ensemble:\")\n", "print(refit_model_map)\n", "predictor.leaderboard(test_data)" ] }, { "cell_type": "markdown", "id": "ee8a611c", "metadata": {}, "source": [ "This adds the refit-full models to the leaderboard and we can opt to use any of them for prediction just like any other model. Note `pred_time_test` and `pred_time_val` list the time taken to produce predictions with each model (in seconds) on the test/validation data. Since the refit-full models were trained using all of the data, there is no internal validation score (`score_val`) available for them. You can also call `refit_full()` with non-bagged models to refit the same models to your full dataset (there won't be memory/latency gains in this case but test accuracy may improve).\n", "\n", "### Model distillation\n", "\n", "While computationally-favorable, single individual models will usually have lower accuracy than weighted/stacked/bagged ensembles. [Model Distillation](https://arxiv.org/abs/2006.14284) offers one way to retain the computational benefits of a single model, while enjoying some of the accuracy-boost that comes with ensembling. The idea is to train the individual model (which we can call the student) to mimic the predictions of the full stack ensemble (the teacher). Like `refit_full()`, the `distill()` function will produce additional models we can opt to use for prediction." 
] }, { "cell_type": "code", "execution_count": null, "id": "13d8854f", "metadata": {}, "outputs": [], "source": [ "student_models = predictor.distill(time_limit=30) # specify much longer time limit in real applications\n", "print(student_models)\n", "preds_student = predictor.predict(test_data_nolabel, model=student_models[0])\n", "print(f\"predictions from {student_models[0]}:\", list(preds_student)[:5])\n", "predictor.leaderboard(test_data)" ] }, { "cell_type": "markdown", "id": "ba4a36ab", "metadata": {}, "source": [ "### Faster presets or hyperparameters\n", "\n", "Instead of trying to speed up a cumbersome trained model at prediction time, if you know inference latency or memory will be an issue at the outset, then you can adjust the training process accordingly to ensure `fit()` does not produce unwieldy models.\n", "\n", "One option is to specify more lightweight `presets`:" ] }, { "cell_type": "code", "execution_count": null, "id": "3bd5e5e4", "metadata": {}, "outputs": [], "source": [ "presets = ['good_quality', 'optimize_for_deployment']\n", "predictor_light = TabularPredictor(label=label, eval_metric=metric).fit(train_data, presets=presets, time_limit=30)" ] }, { "cell_type": "markdown", "id": "b8d97180", "metadata": {}, "source": [ "Another option is to specify more lightweight hyperparameters:" ] }, { "cell_type": "code", "execution_count": null, "id": "3e808d32", "metadata": {}, "outputs": [], "source": [ "predictor_light = TabularPredictor(label=label, eval_metric=metric).fit(train_data, hyperparameters='very_light', time_limit=30)" ] }, { "cell_type": "markdown", "id": "d49c182b", "metadata": {}, "source": [ "Here you can set `hyperparameters` to either 'light', 'very_light', or 'toy' to obtain progressively smaller (but less accurate) models and predictors. Advanced users may instead try manually specifying particular models' hyperparameters in order to make them faster/smaller.\n", "\n", "Finally, you may also exclude specific unwieldy models from being trained at all. Below we exclude models that tend to be slower (K Nearest Neighbors, Neural Networks):" ] }, { "cell_type": "code", "execution_count": null, "id": "ae2298e4", "metadata": {}, "outputs": [], "source": [ "excluded_model_types = ['KNN', 'NN_TORCH']\n", "predictor_light = TabularPredictor(label=label, eval_metric=metric).fit(train_data, excluded_model_types=excluded_model_types, time_limit=30)" ] }, { "cell_type": "markdown", "id": "7cd7ac48", "metadata": {}, "source": [ "### (Advanced) Cache preprocessed data\n", "\n", "If you are repeatedly predicting on the same data you can cache the preprocessed version of the data and\n", "directly send the preprocessed data to `predictor.predict` for faster inference:" ] }, { "cell_type": "markdown", "id": "4aa0c796", "metadata": {}, "source": [ "```\n", "test_data_preprocessed = predictor.transform_features(test_data)\n", "\n", "# The following call will be faster than a normal predict call because we are skipping the preprocessing stage.\n", "predictions = predictor.predict(test_data_preprocessed, transform_features=False)\n", "```\n" ] }, { "cell_type": "markdown", "id": "782040dc", "metadata": {}, "source": [ "Note that this is only useful in situations where you are repeatedly predicting on the same data.\n", "If this significantly speeds up your use-case, consider whether your current approach makes sense\n", "or if a cache on the predictions is a better solution. 
\n", "\n", "### (Advanced) Disable preprocessing\n", "\n", "If you would rather do data preprocessing outside of TabularPredictor,\n", "you can disable TabularPredictor's preprocessing entirely via:" ] }, { "cell_type": "markdown", "id": "15ca510b", "metadata": {}, "source": [ "```\n", "predictor.fit(..., feature_generator=None, feature_metadata=YOUR_CUSTOM_FEATURE_METADATA)\n", "```\n" ] }, { "cell_type": "markdown", "id": "6d32cbe6", "metadata": {}, "source": [ "Be warned that this removes ALL guardrails on data sanitization.\n", "It is very likely that you will run into errors doing this unless you are very familiar with AutoGluon.\n", "\n", "One instance where this can be helpful is if you have many problems\n", "that re-use the exact same data with the exact same features. If you had 30 tasks that re-use the same features,\n", "you could fit a `autogluon.features` feature generator once on the data, and then when you need to\n", "predict on the 30 tasks, preprocess the data only once and then send the preprocessed data to all 30 predictors.\n", "\n", "\n", "## If you encounter memory issues\n", "\n", "To reduce memory usage during training, you may try each of the following strategies individually or combinations of them (these may harm accuracy):\n", "\n", "- In `fit()`, set `excluded_model_types = ['KNN', 'XT' ,'RF']` (or some subset of these models).\n", "- Try different `presets` in `fit()`.\n", "- In `fit()`, set `hyperparameters = 'light'` or `hyperparameters = 'very_light'`.\n", "- Text fields in your table require substantial memory for N-gram featurization. To mitigate this in `fit()`, you can either: (1) add `'ignore_text'` to your `presets` list (to ignore text features), or (2) specify the argument:" ] }, { "cell_type": "markdown", "id": "d30ebcf6", "metadata": {}, "source": [ "```\n", "from sklearn.feature_extraction.text import CountVectorizer\n", "from autogluon.features.generators import AutoMLPipelineFeatureGenerator\n", "feature_generator = AutoMLPipelineFeatureGenerator(vectorizer=CountVectorizer(min_df=30, ngram_range=(1, 3), max_features=MAX_NGRAM, dtype=np.uint8))\n", "```\n" ] }, { "cell_type": "markdown", "id": "6f1b86fb", "metadata": {}, "source": [ "for example using `MAX_NGRAM = 1000` (try various values under 10000 to reduce the number of N-gram features used to represent each text field)\n", "\n", "In addition to reducing memory usage, many of the above strategies can also be used to reduce training times.\n", "\n", "To reduce memory usage during inference:\n", "\n", "- If trying to produce predictions for a large test dataset, break the test data into smaller chunks as demonstrated in [FAQ](tabular-faq.ipynb).\n", "\n", "- If models have been previously persisted in memory but inference-speed is not a major concern, call `predictor.unpersist()`.\n", "\n", "- If models have been previously persisted in memory, bagging was used in `fit()`, and inference-speed is a concern: call `predictor.refit_full()` and use one of the refit-full models for prediction (ensure this is the only model persisted in memory).\n", "\n", "\n", "\n", "## If you encounter disk space issues\n", "\n", "To reduce disk usage, you may try each of the following strategies individually or combinations of them:\n", "\n", "- Make sure to delete all `predictor.path` folders from previous `fit()` runs! These can eat up your free space if you call `fit()` many times. 
If you didn't specify `path`, AutoGluon still automatically saved its models to a folder called: \"AutogluonModels/ag-[TIMESTAMP]\", where TIMESTAMP records when `fit()` was called, so make sure to also delete these folders if you run low on free space.\n", "\n", "- Call `predictor.save_space()` to delete auxiliary files produced during `fit()`.\n", "\n", "- Call `predictor.delete_models(models_to_keep='best', dry_run=False)` if you only intend to use this predictor for inference going forward (will delete files required for non-prediction-related functionality like `fit_summary`).\n", "\n", "- In `fit()`, you can add `'optimize_for_deployment'` to the `presets` list, which will automatically invoke the previous two strategies after training.\n", "\n", "- Most of the above strategies to reduce memory usage will also reduce disk usage (but may harm accuracy).\n", "\n", "\n", "## References\n", "\n", "The following paper describes how AutoGluon internally operates on tabular data:\n", "\n", "Erickson et al. [AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data](https://arxiv.org/abs/2003.06505). *Arxiv*, 2020.\n", "\n", "## Next Steps\n", "\n", "If you are interested in deployment optimization, refer to the [Predicting Columns in a Table - Deployment Optimization](advanced/tabular-deployment.ipynb) tutorial." ] } ], "metadata": { "language_info": { "name": "python" } }, "nbformat": 4, "nbformat_minor": 5 }